Official session CYBERSEC 2026
Cybersecurity in Practice for AI Software Medical Devices: From the US FDA Section 524B Requirements to Full Implementation of Threat Modeling and Patch SLAs
Breakout session on cybersecurity practice for AI software medical devices, using FDA 524B to connect threat modeling, SBOM, Zero Trust design, and auditable risk governance in heavily regulated environments.
May 6, 2026 · 16:15-16:45 · Medical Cybersecurity Forum
Conference paper CISC 2025
Evolution and Defense Challenges of Ransomware-as-a-Service in the AI Era: A Technical and Strategic Analysis Using Medusa and CrazyHunter as Case Studies
English conference paper examining how AI-era RaaS operations evolve through BYOVD, LOTL, covert C2, and adaptive tradecraft, then mapping those threats to a ZTAID-grounded zero-trust defense strategy.
May 28-29, 2025 · Cryptology and Information Security Conference
Conference paper CISC 2025
Integration of Threat Pulse Modeling into the ZTAID Zero Trust Maturity Assessment Model: An Analytical Framework
English conference paper proposing Threat Pulse Modeling (TPM) as a way to translate live cyber threat intelligence into ZTAID maturity signals for continuous zero-trust assessment.
May 28-29, 2025 · Cryptology and Information Security Conference
Trustworthy AI Beyond Benchmark Performance
How to think about reliability, evidence, human review, and system behavior when AI is used in environments where mistakes carry real cost.
Research groups, labs, interdisciplinary audiences
AI-Era Ransomware and Zero-Trust Defense
How modern RaaS campaigns combine automation, BYOVD, LOTL, and covert C2 techniques, and how ZTAID-aligned zero-trust strategy can structure practical detection, containment, and recovery.
Cybersecurity conferences, blue teams, graduate seminars
Threat Pulse Modeling and Continuous Assessment
How cyber threat intelligence can be translated into pulse events, ZTAID pillar scores, and measurable maturity signals to support faster defensive adaptation.
Cybersecurity researchers, zero-trust programs, graduate seminars
ASR + LLM + RAG for Operational Workflows
Design patterns for speech and language pipelines that move from raw transcripts to grounded, inspectable outputs in analyst-facing settings.
NLP teams, speech researchers, applied AI practitioners
AI Agents and IDE-Like Assistant Systems
How to design tool-using agents and coding-assistant workflows that stay inspectable, grounded, and useful for real work.
AI product teams, research labs, engineering groups
Security-Minded AI System Design
Why privacy, leakage risk, adversarial thinking, and deployment assumptions should be treated as core system questions rather than compliance afterthoughts.
Security teams, engineering groups, policy-adjacent stakeholders