Resume

Professional profile for AI agents, trustworthy AI, and security-aware technical work.

Jason Chia-Sheng Lin is a doctoral researcher at National Yang Ming Chiao Tung University whose work connects AI agents, trustworthy AI, speech intelligence, cybersecurity, and deployment-ready system design. This page is designed for hiring managers, technical leaders, collaborators, and organizers who want a concise, professional overview.

Current role: Doctoral researcher at NYCU
Base: Artificial Intelligence in Medical Imaging / Signal Analysis Lab
Primary focus: AI agents, speech intelligence, and cybersecurity
Recent public signals: 1 official session + 2 English conference papers

Professional Summary

Jason brings together medical AI lab research, system building, and investigation-informed thinking to design AI agents and IDE-like agent systems that stay useful when evidence, regulation, and deployment constraints matter.

Before doctoral research, Jason worked in cybercrime investigation. That background continues to shape a practical approach to evidence, adversarial behavior, traceability, and the gap between a model that looks good in isolation and a system that remains trustworthy in real-world use.

Current work spans AI agents, speech and language pipelines, trustworthy AI evaluation, medical AI cybersecurity, and deployment-minded system design in environments where reviewability and operational constraints matter.

What Jason Brings to a Team

Agent and systems thinking

Works across models, tools, orchestration, runtime assumptions, and operational constraints rather than treating AI work as an isolated modeling problem.

Security-aware technical judgment

Brings threat modeling, privacy, leakage risk, and deployment realism into system design for environments where failure has real cost.

Clear communication for mixed audiences

Turns technical work into talks, case studies, and structured writing that hiring managers, researchers, and technical collaborators can inspect quickly.

Professional Experience

Current

Doctoral Researcher, NYCU Artificial Intelligence in Medical Imaging / Signal Analysis Lab

Researching trustworthy AI systems, AI agents, medical cybersecurity, speech intelligence, grounded LLM workflows, and security-aware evaluation for real-world deployment.

Previous

Cybercrime Investigation

Worked on digital evidence, online fraud analysis, OSINT, and operational reasoning in high-stakes investigative settings.

Cross-Disciplinary

Investigation-Informed Systems Thinking

Bringing evidence awareness, adversarial thinking, and operational discipline into the way AI agents and systems are designed and evaluated.

Ongoing

Research and Technical Communication

Developing research case studies, technical writing, and speaking material around trustworthy AI, agent systems, speech systems, and deployment risk.

Current Areas of Work

Trustworthy AI agents and systems for operational deployment

IDE-like AI agent systems for research, coding, and analyst workflows

ASR + LLM + RAG pipelines for speech intelligence and evidence-aware analysis

Security, privacy, and evaluation for agentic systems used in high-stakes settings

Human review, traceability, and decision support in analyst-facing workflows

Selected Work

Representative projects that show applied systems thinking.

These case studies show how Jason frames problems, builds systems, and explains technical choices in ways that are inspectable by both technical and cross-functional readers.

Cybersecurity · 2026 · Active Study

Federated Learning Leakage Study

A research case study on federated learning privacy leakage, gradient inversion risk, and defense trade-offs for sensitive collaborative training.

PyTorch · Federated Learning · Privacy · Gradient Leakage · Secure Aggregation
Fraud Analysis · 2026 · Active Research

Fraud Conversation Analysis with RAG

A research-led case study on retrieval-augmented fraud conversation analysis, designed to keep LLM outputs grounded in transcript evidence for high-stakes review.

Python · RAG · LLM Pipelines · Transcript Analysis · Evidence Grounding
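The evidence-grounding idea behind this case study can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: every name here is invented, and simple keyword overlap stands in for whatever retriever the real pipeline uses. The point it shows is that each retrieved snippet keeps its transcript location, so the LLM prompt can require citations back to specific turns.

```python
# Hypothetical sketch: evidence-grounded retrieval over a fraud-call
# transcript, where every snippet carries its turn number so LLM
# output can cite the exact evidence it relied on.
from dataclasses import dataclass


@dataclass
class Snippet:
    turn: int      # index of the transcript turn
    speaker: str
    text: str


def retrieve(transcript: list[Snippet], query_terms: set[str], k: int = 3) -> list[Snippet]:
    """Rank turns by keyword overlap; a stand-in for a real vector retriever."""
    scored = [(len(query_terms & set(s.text.lower().split())), s) for s in transcript]
    scored = [pair for pair in scored if pair[0] > 0]   # drop turns with no overlap
    scored.sort(key=lambda pair: -pair[0])
    return [s for _, s in scored[:k]]


def grounded_prompt(snippets: list[Snippet], question: str) -> str:
    """Build a prompt that forces the model to cite turn numbers."""
    evidence = "\n".join(f"[turn {s.turn}] {s.speaker}: {s.text}" for s in snippets)
    return (
        "Answer using ONLY the evidence below and cite turn numbers.\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}"
    )
```

Keeping the turn index attached from retrieval through prompting is what makes the final output reviewable: a human checker can jump straight from a claimed finding to the cited transcript turn.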
AI Systems · 2026 · Active Research

Speech Evidence Intelligence Pipeline

An evidence-aware speech intelligence pipeline using ASR, retrieval, and LLM extraction to turn long-form conversational audio into structured, reviewable outputs.

Python · ASR · Whisper · RAG · LLM Pipelines
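The staged shape of this pipeline (ASR, then retrieval, then LLM extraction, ending in structured output) can be sketched as plain composable callables. This is an illustrative skeleton only: the stage names and stubs below are invented for the example, with trivial stand-ins where the real system would plug in Whisper, a retriever, and an LLM extractor.

```python
# Hypothetical sketch of the pipeline shape: ASR -> retrieval ->
# extraction, with each stage a swappable callable and the result
# structured for human review.
def run_pipeline(segments, asr, retrieve, extract, query):
    transcript = [asr(seg) for seg in segments]   # speech -> text
    evidence = retrieve(transcript, query)        # text -> relevant lines
    findings = extract(evidence, query)           # lines -> structured output
    return {"transcript": transcript, "evidence": evidence, "findings": findings}


# Trivial stubs standing in for Whisper, a retriever, and an LLM extractor.
def fake_asr(segment: bytes) -> str:
    return segment.decode("utf-8")


def keyword_retrieve(transcript: list[str], query: str) -> list[str]:
    return [line for line in transcript if query in line]


def extract_findings(evidence: list[str], query: str) -> list[dict]:
    return [{"query": query, "quote": line} for line in evidence]
```

Returning the intermediate transcript and evidence alongside the findings, rather than only the final answer, is what keeps long-form audio analysis reviewable rather than opaque.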

Speaking and Publication Signals

CYBERSEC 2026

Cybersecurity in Practice for AI Software Medical Devices: From U.S. FDA Section 524B Requirements to End-to-End Implementation of Threat Modeling and Patch SLAs

Breakout session on cybersecurity practice for AI software medical devices, using FDA 524B to connect threat modeling, SBOM, Zero Trust design, and auditable risk governance in heavily regulated environments.

CISC 2025 · Conference Paper

Evolution and Defense Challenges of Ransomware-as-a-Service in the AI Era: A Technical and Strategic Analysis Using Medusa and CrazyHunter as a Case Study

English conference paper examining how AI-era RaaS operations evolve through BYOVD, LOTL, covert C2, and adaptive tradecraft, then mapping those threats to a ZTAID-grounded zero-trust defense strategy.

CISC 2025 · Conference Paper

Integration of Threat Pulse Modeling into the ZTAID Zero Trust Maturity Assessment Model: An Analytical Framework

English conference paper proposing Threat Pulse Modeling (TPM) as a way to translate live cyber threat intelligence into ZTAID maturity signals for continuous zero-trust assessment.

Writing and Technical Communication

Essay

From Flat UI to Spatial Interface

Liquid Glass, visionOS, and AI point toward a future where the operating system is redefined less by the kernel than by a new operating surface for search, orchestration, and cross-app work.

Essay

Minimal Disclosure for Fraud Intelligence: Cross-Node Pattern Formation in High-Stakes AI

A research-oriented essay on cross-node fraud intelligence, minimal disclosure, and trustworthy AI design for high-stakes pattern formation under fragmented evidence.

Professional Signals

  • Interdisciplinary profile spanning doctoral research, agent and system building, and investigation-informed reasoning.
  • Comfortable in research, engineering-adjacent, and technically cross-functional conversations.
  • Public work includes an official CYBERSEC 2026 session and two English CISC 2025 conference papers.

Methods and Technical Toolkit

AI / Agent Systems

PyTorch · Transformers · AI Agents · LLM Pipelines · RAG Systems

Speech / Language

ASR · Speech Intelligence · Transcript Processing · Evidence Extraction · Conversation Analysis

Security / Operations

Cybersecurity · Digital Forensics · OSINT · Fraud Analysis · Federated Learning Security

Research / Evaluation

Experiment Design · Evaluation Frameworks · Reproducible Workflows · Python · GitHub Actions

Open to the Following Conversations

The strongest fit is with teams, labs, or organizers working near trustworthy AI, speech and language systems, deployment-sensitive workflows, or cybersecurity-minded system design.

Research collaboration and interdisciplinary lab conversations

Speaking invitations for AI agents, speech systems, or cybersecurity

Hiring, technical peer exchange, and IDE-like agent system design

Contact

Email is the best route for hiring conversations, research collaboration, speaking invitations, and technically specific discussion.