Jason Chia-Sheng Lin is a doctoral researcher at National Yang Ming Chiao Tung University whose work connects AI agents, trustworthy AI, speech intelligence, cybersecurity, and deployment-ready system design. This page is designed for hiring managers, technical leaders, collaborators, and organizers who want a concise, professional overview.
He brings together medical AI lab research, system building, and investigation-informed thinking to design AI agents and IDE-like agent systems that stay useful when evidence, regulation, and deployment constraints matter.
Before doctoral research, Jason worked in cybercrime investigation. That background continues to shape a practical approach to evidence, adversarial behavior, traceability, and the gap between a model that looks good in isolation and a system that remains trustworthy in real-world use.
Current work spans AI agents, speech and language pipelines, trustworthy AI evaluation, medical AI cybersecurity, and deployment-minded system design in environments where reviewability and operational constraints matter.
Works across models, tools, orchestration, runtime assumptions, and operational constraints rather than treating AI work as an isolated modeling problem.
Brings threat modeling, privacy, leakage risk, and deployment realism into system design for environments where failure has real cost.
Turns technical work into talks, case studies, and structured writing that hiring managers, researchers, and technical collaborators can inspect quickly.
Current
Researching trustworthy AI systems, AI agents, medical cybersecurity, speech intelligence, grounded LLM workflows, and security-aware evaluation for real-world deployment.
Previous
Worked on digital evidence, online fraud analysis, OSINT, and operational reasoning in high-stakes investigative settings.
Cross-Disciplinary
Bringing evidence awareness, adversarial thinking, and operational discipline into the way AI agents and systems are designed and evaluated.
Ongoing
Developing research case studies, technical writing, and speaking material around trustworthy AI, agent systems, speech systems, and deployment risk.
Trustworthy AI agents and systems for operational deployment
IDE-like AI agent systems for research, coding, and analyst workflows
ASR + LLM + RAG pipelines for speech intelligence and evidence-aware analysis
Security, privacy, and evaluation for agentic systems used in high-stakes settings
Human review, traceability, and decision support in analyst-facing workflows
These case studies show how Jason frames problems, builds systems, and explains technical choices in ways that are inspectable by both technical and cross-functional readers.
A research case study on federated learning privacy leakage, gradient inversion risk, and defense trade-offs for sensitive collaborative training.
A research-led case study on retrieval-augmented fraud conversation analysis, designed to keep LLM outputs grounded in transcript evidence for high-stakes review.
An evidence-aware speech intelligence pipeline using ASR, retrieval, and LLM extraction to turn long-form conversational audio into structured, reviewable outputs.
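The evidence-aware pipeline described above can be sketched at a high level: ASR segments feed a retrieval step, and the retrieved segments feed an extraction step whose output keeps pointers back to the source audio. The sketch below is illustrative only, assuming hypothetical names throughout; the term-overlap retrieval and template extraction are minimal stand-ins for the real ASR, embedding-retrieval, and LLM components.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    start_s: float  # segment start time in seconds (from ASR timestamps)
    text: str       # transcript text (in practice, ASR output)


def retrieve(segments, query_terms, k=2):
    """Rank segments by naive term overlap (stand-in for embedding retrieval)."""
    scored = [(sum(t in s.text.lower() for t in query_terms), s) for s in segments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:k] if score > 0]


def extract(segments):
    """Build a structured, reviewable record with evidence pointers
    (stand-in for LLM extraction)."""
    return {
        "claims": [s.text for s in segments],
        "evidence": [{"start_s": s.start_s, "quote": s.text} for s in segments],
    }


# Illustrative transcript; in practice this comes from an ASR system.
transcript = [
    Segment(0.0, "Caller asks for an urgent wire transfer."),
    Segment(12.5, "Caller claims to be from the bank's fraud team."),
    Segment(30.0, "Weather small talk."),
]

hits = retrieve(transcript, ["transfer", "fraud"])
record = extract(hits)
```

The design point the sketch preserves is that every extracted claim carries a timestamped quote, so a human reviewer can trace each output back to the audio rather than trusting the model's summary.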
CYBERSEC 2026
Breakout session on cybersecurity practice for AI software medical devices, using FDA 524B to connect threat modeling, SBOM, Zero Trust design, and auditable risk governance in heavily regulated environments.
CISC 2025 · Conference Paper
English conference paper examining how AI-era ransomware-as-a-service (RaaS) operations evolve through bring-your-own-vulnerable-driver (BYOVD) attacks, living-off-the-land (LOTL) techniques, covert command-and-control (C2), and adaptive tradecraft, then mapping those threats to a ZTAID-grounded zero-trust defense strategy.
CISC 2025 · Conference Paper
English conference paper proposing Threat Pulse Modeling (TPM) as a way to translate live cyber threat intelligence into ZTAID maturity signals for continuous zero-trust assessment.
essay
Liquid Glass, visionOS, and AI point toward a future where the operating system is redefined less by the kernel than by a new operating surface for search, orchestration, and cross-app work.
essay
A research-oriented essay on cross-node fraud intelligence, minimal disclosure, and trustworthy AI design for high-stakes pattern formation under fragmented evidence.
Professional signals
The strongest fit is with teams, labs, or organizers working in or adjacent to trustworthy AI, speech and language systems, deployment-sensitive workflows, or cybersecurity-minded system design.
Research collaboration and interdisciplinary lab conversations
Speaking invitations for AI agents, speech systems, or cybersecurity
Hiring, technical peer exchange, and IDE-like agent system design
Contact
Email is the best route for hiring conversations, research collaboration, speaking invitations, and technically specific discussion.