Research

Research at the intersection of AI agents, security, and trustworthy deployment.

My work asks how AI agents and decision-support systems can remain useful, inspectable, and dependable when they are deployed in environments shaped by uncertainty, evidence requirements, and real operational cost.

What is active now? Start with current work and next steps. What signals depth? See the recent conference papers and recurring themes. Want implementation evidence? Follow the agenda into the case studies. Considering collaboration? Reach out through the direct contact path.

Active work and current directions

Ongoing work is organized around a small number of durable directions in agents, speech pipelines, and deployment rather than a long list of disconnected experiments.

Active Research

Evidence-Aware Speech Intelligence Pipelines

Developing systems that move from raw conversational audio to structured, reviewable outputs while preserving traceability across retrieval, agent steps, and generated conclusions.

Current next step: Refining evaluation slices for transcript quality, retrieval behavior, agent steps, and reviewer trust.

Speech Intelligence · ASR · Traceability
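
The traceability goal above can be sketched as a provenance record per pipeline step, so a reviewer can walk any generated conclusion back to the raw audio. This is a minimal hypothetical sketch, not the actual system; every class name and stage label is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    """One pipeline step, with enough provenance to audit it later."""
    stage: str                     # e.g. "asr", "retrieval", "agent", "conclusion"
    inputs: list                   # artifact ids this step consumed
    output_id: str                 # artifact id this step produced
    metadata: dict = field(default_factory=dict)

class Trace:
    """Append-only log of steps; supports backward lineage queries."""
    def __init__(self):
        self.steps = []

    def record(self, step):
        self.steps.append(step)

    def lineage(self, artifact_id):
        """Every upstream artifact id that contributed to `artifact_id`."""
        produced_by = {s.output_id: s for s in self.steps}
        seen, frontier = set(), [artifact_id]
        while frontier:
            step = produced_by.get(frontier.pop())
            if step is None:
                continue                     # a raw input, e.g. the original audio
            for parent in step.inputs:
                if parent not in seen:
                    seen.add(parent)
                    frontier.append(parent)
        return seen
```

A conclusion recorded as depending on a transcript and a retrieved passage then traces back to the original audio file, which is the property the evaluation slices above are meant to protect.
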
Active Research

Grounded Fraud Conversation Analysis

Studying how RAG-based and agentic workflows can support fraud-related conversation analysis without relying on unsupported language-model reasoning.

Current next step: Extending retrieval, tool-use, and answer-grounding evaluation for analyst-facing use.

Fraud Analysis · RAG · LLM Systems
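
One way to make "answer grounding" concrete is a simple overlap check: an answer sentence counts as grounded only if enough of its content words appear in some retrieved passage. This token-overlap heuristic is a deliberately crude sketch (real grounding evaluation would add entailment or citation checks); the function name and threshold are hypothetical.

```python
def grounding_score(answer_sentences, evidence_passages, threshold=0.5):
    """Fraction of answer sentences supported by at least one retrieved passage."""
    def content_words(text):
        # crude content-word filter: lowercase, strip punctuation, drop short words
        return {w.lower().strip(".,!?") for w in text.split() if len(w) > 3}

    grounded = 0
    for sentence in answer_sentences:
        words = content_words(sentence)
        if not words:
            continue
        # best overlap ratio against any single passage
        best = max((len(words & content_words(p)) / len(words)
                    for p in evidence_passages), default=0.0)
        if best >= threshold:
            grounded += 1
    return grounded / max(len(answer_sentences), 1)
```

An unsupported claim mixed into an otherwise grounded answer pulls the score down, which is the failure mode analyst-facing tools need to surface.
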
Ongoing Study

Leakage and Privacy Risk in Federated Learning

Exploring how collaborative training setups behave under realistic leakage and privacy assumptions in sensitive AI settings.

Current next step: Comparing attack and defense trade-offs across threat models and deployment assumptions.

Federated Learning · Privacy · Security
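
A toy example of why unprotected collaborative training can leak: for a one-layer linear model with squared loss, a client's gradient is a scalar multiple of its input, so a server that sees the raw update recovers the input's direction exactly. Adding Gaussian noise (a simplified stand-in for a DP-style defense) degrades that reconstruction, which is the attack/defense trade-off being compared. All function names here are hypothetical sketches.

```python
import random

def local_gradient(w, x, y):
    # squared-error loss for a linear model: grad = 2 * (w.x - y) * x,
    # i.e. a scalar multiple of the private input x
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2 * err * xi for xi in x]

def noisy_update(grad, sigma):
    # Gaussian noise as a simple defense; sigma trades privacy against utility
    return [g + random.gauss(0, sigma) for g in grad]

def reconstruction_error(grad, x):
    # compare directions (the gradient leaks x only up to scale and sign)
    def unit(v):
        s = sum(vi * vi for vi in v) ** 0.5 or 1.0
        return [vi / s for vi in v]
    g, t = unit(grad), unit(x)
    same = sum((a - b) ** 2 for a, b in zip(g, t)) ** 0.5
    flip = sum((a + b) ** 2 for a, b in zip(g, t)) ** 0.5
    return min(same, flip)
```

Raising `sigma` pushes reconstruction error up (more privacy) while making the aggregated gradient noisier (less utility).
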

Recent conference papers

Recent English conference papers connect the research agenda with practical deployment, threat modeling, and measurable defense strategy.

Evolution and Defense Challenges of Ransomware-as-a-Service in the AI Era

Presented in English at CISC 2025, this paper analyzes Medusa and CrazyHunter as case studies for AI-era ransomware evolution and connects their tradecraft to a ZTAID-grounded zero-trust defense framework for real operational environments.

Conference: Cryptology and Information Security Conference 2025 (CISC 2025)

Schedule: May 28-29, 2025

Venue: Feng Chia University

Format: Conference Paper · English

RaaS · Zero Trust · ZTAID · Threat Modeling · SOAR

Integration of Threat Pulse Modeling into the ZTAID Zero Trust Maturity Assessment Model

Presented in English at CISC 2025, this paper proposes Threat Pulse Modeling as a way to convert live cyber threat intelligence into pillar-level ZTAID maturity signals for continuous zero-trust assessment and faster operational response.

Conference: Cryptology and Information Security Conference 2025 (CISC 2025)

Schedule: May 28-29, 2025

Venue: Feng Chia University

Format: Conference Paper · English

Threat Intelligence · Threat Pulse Modeling · Zero Trust · ZTAID · Forecasting
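
The paper's actual model is not reproduced here, but the general "pulse" idea (recent threat-intelligence events should dominate a per-pillar signal rather than accumulating forever) can be illustrated with a plain exponential-decay aggregation. The pillar names, event format, and half-life below are all hypothetical.

```python
import math

# Hypothetical pillar names for illustration; ZTAID defines its own pillar set.
PILLARS = ["identity", "device", "network", "application", "data"]

def pulse_signal(events, now, half_life_hours=24.0):
    """Aggregate (pillar, severity, timestamp) events into per-pillar signals.

    Each event's contribution halves every `half_life_hours`, so the signal
    "pulses" with recent activity instead of growing without bound.
    """
    decay = math.log(2) / (half_life_hours * 3600.0)
    signal = {p: 0.0 for p in PILLARS}
    for pillar, severity, timestamp in events:
        signal[pillar] += severity * math.exp(-decay * (now - timestamp))
    return signal
```

A downstream maturity assessment could then threshold or trend these per-pillar signals for continuous zero-trust scoring.
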

Trustworthy AI and Agent Systems

Designing AI agents and systems where reliability, evaluation, human review, and traceability are part of the architecture rather than afterthoughts.

Reliability · Evaluation · Human Review

Speech, Language, and Agent Workflows

Building ASR + LLM + RAG pipelines and agent workflows for conversational analysis, structured extraction, and evidence-aware reasoning over long-form audio and transcripts.

ASR · Agents · RAG

Security and High-Stakes Deployment

Studying privacy, leakage, adversarial risk, and governance constraints that shape AI systems used in regulated or security-sensitive environments.

Security · Privacy · Deployment

Questions I care about

  • How can AI agents support human decision-making in high-stakes environments without weakening the chain of evidence?
  • What makes an AI agent or system trustworthy beyond fluent output, benchmark scores, or tool use?
  • How should speech pipelines and IDE-like assistants be designed for real workflows rather than idealized demos?
  • How can deployment constraints, governance, and security be built into AI system design from the start?

Working style

I tend to approach AI as a full-system and agent-workflow problem rather than a single-model problem. That means thinking about data quality, tools, retrieval, evaluation, security assumptions, failure analysis, and human review as connected parts of the same design task.

The common thread across the portfolio is simple: build agents and systems that are capable, inspectable, evidence-aware, and realistic about deployment conditions.

AI Agents · Trustworthy AI · Speech Intelligence · Cybersecurity · High-Stakes Deployment

Future directions

  • Trustworthy speech and agent systems for analyst-facing, evidence-sensitive workflows.
  • IDE-like agent systems for research, development, and human-in-the-loop review.
  • Evaluation frameworks for AI deployment in regulated, security-critical, or operationally complex environments.

Collaboration and research fit

I am especially interested in collaborations that value technical depth, careful evaluation, and the realities of deploying AI agents and systems where reliability, reviewability, and governance matter.

Good collaboration fits include research groups, interdisciplinary labs, and technical teams that want to turn strong ideas into inspectable agents, evaluable prototypes, or conference-ready case studies.