CYBERSEC 2026 Official Session

01 · Title Hero

Designing Cybersecurity for AI in Regulated Environments

Lessons from FDA Section 524B

Why AI Security Is a System Problem, Not a Model Problem

Chia-Sheng (Jason) Lin · National Yang Ming Chiao Tung University

Narrative
20 anchored chapters
Regulatory anchor
FDA Section 524B
Core shift
From model security to full-stack trust

02 · Opening Contrast

If an AI recommends the wrong movie, it is fine.

The threshold for harm changes completely when AI participates in clinical judgment.

Ordinary software miss

If an AI recommends the wrong movie, it is fine.

Clinical miss

But what if that AI is outlining a tumor?

Why medical AI is different

That is not a bug.

That is a medical incident.

03 · Real-World Incident

A Real-World Incident in Taiwan

MacKay Memorial Hospital ransomware attack

Public hospital incidents remind us that the clinical system, not only the model artifact, determines safety.

Location

Taipei

Disruption

Medical IT infrastructure disrupted

System lesson

This was not an attack on the model.

It was an attack on the system.

Reported impact

500+ computers affected

04 · Untouched Model Paradox

The Untouched Model Paradox

Integrity checks on weights and checkpoints do not protect the workflow around them.

Model state

The model can remain untouched.

Clinical consequence

The system can still be compromised.

Model integrity does not guarantee clinical trust. The attack surface includes infrastructure, dependencies, orchestration, identity, and the human-facing output path.
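
A minimal sketch of that boundary, assuming a pinned SHA-256 digest verified at load time; the file name and contents are stand-ins. The check can pass while everything around the checkpoint is compromised.

import hashlib

def sha256(path: str) -> str:
    # Stream the file so large checkpoints do not have to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in checkpoint; in production the digest is pinned at release time.
with open("model.ckpt", "wb") as f:
    f.write(b"stand-in weights")
pinned = sha256("model.ckpt")

assert sha256("model.ckpt") == pinned
print("checkpoint intact")  # says nothing about preprocessing, orchestration,
                            # or the clinician-facing output path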

05 · FDA Section 524B

FDA Section 524B

In healthcare, AI is not just software. It is clinical risk.

In regulated medical contexts, cybersecurity obligations sit alongside safety and effectiveness.

Requirement

SBOM

Requirement

Postmarket vulnerability monitoring

Requirement

Patching / update capability

Regulatory message

AI security is no longer a best practice. It is a legal requirement.
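
A minimal sketch of what a machine-readable SBOM fragment can look like, loosely following the CycloneDX layout; the component names and versions are illustrative, not a compliance template.

import json

# Illustrative components only; a real 524B submission would enumerate the
# full software and model dependency tree.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "onnxruntime", "version": "1.17.0",
         "purl": "pkg:pypi/onnxruntime@1.17.0"},
        {"type": "machine-learning-model", "name": "tumor-segmentation",
         "version": "2.3.1"},
    ],
}
print(json.dumps(sbom, indent=2))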

06 · Full Stack Paradigm

Defending the full stack: a new architectural paradigm

Risk moves across layers, so defense and accountability have to move with it.

Layer

Governance

Layer

Model

Layer

Runtime

Layer

Kernel

Layer

Hardware

AI security is not only an engineering problem. It is also a governance problem, because trust depends on who can change the system, how evidence is collected, and whether operational controls survive deployment pressure.

07 · Hardware and Kernel Fragility

The foundation of fragility: hardware and kernel exploits

The lower layers define whether everything above them can still be trusted.

Foundation

Hardware

Foundation

Kernel

Trust impact

Flip a single bit. Collapse trust.

Hardware · Kernel · Bit-flip attacks · Privilege escalation · Container escape

The most fragile layer is not always the model. A trusted model running on an untrusted foundation is still an untrusted clinical system.
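
A minimal sketch of why one bit matters, simulating a single-bit flip in a float32 weight in pure Python. This models the arithmetic only, not a real Rowhammer-style exploit.

import struct

def flip_bit(value: float, bit: int) -> float:
    # Reinterpret the float32 as raw bits, flip one bit, reinterpret back.
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return flipped

weight = 0.5
for bit in (30, 23, 0):  # exponent MSB, exponent LSB, mantissa LSB
    print(f"bit {bit:2d}: {weight} -> {flip_bit(weight, bit)}")
# bit 30: 0.5 -> ~1.7e38    (exponent bit: the weight explodes)
# bit 23: 0.5 -> 1.0        (exponent LSB: the weight doubles)
# bit  0: 0.5 -> 0.50000006 (mantissa LSB: nearly invisible)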

08 · Protocol Frontier

The Protocol Frontier: exploiting frameworks and agents

Modern AI applications inherit the risks of orchestration, integration, and tool use.

Layer

Model

Layer

Agent

Layer

Tools

Layer

Connectors

Layer

Clinical systems

As AI systems gain tools, memory, and orchestration, the attack surface expands beyond the model. Security now has to account for workflows, service boundaries, and the connectors that bridge AI to clinical systems.
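
One way to bound that surface is default-deny at the tool boundary. A minimal sketch, with hypothetical tool names and a deliberately simple policy schema:

# Unknown tools are refused outright; known tools get argument limits.
ALLOWED_TOOLS = {
    "lookup_guideline": {"max_args": 1},
    "fetch_deidentified_stats": {"max_args": 2},
}

def authorize_tool_call(name: str, args: list) -> bool:
    policy = ALLOWED_TOOLS.get(name)
    if policy is None:
        return False  # default-deny: the agent cannot invent new reach
    return len(args) <= policy["max_args"]

print(authorize_tool_call("lookup_guideline", ["sepsis"]))  # True
print(authorize_tool_call("write_to_ehr", ["order"]))       # False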

09 · Local Privilege Escalation

Anatomy of a local privilege escalation in Medical AI

A weak internal foothold can still distort the output path seen by clinicians.

1

Low-privilege foothold

2

Shared memory / queue access

3

Inference path tampering

4

Clinician-facing output impact

Safety note

Conceptual attack anatomy only.
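
In that spirit, a defensive-only sketch: flag inference IPC endpoints that any local user could write to. The paths are hypothetical; a real deployment would audit its own.

import os
import stat

# Hypothetical IPC locations on the inference path.
PATHS = ["/dev/shm/inference_queue", "/var/run/model.sock"]

def world_writable(path: str) -> bool:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    return bool(mode & stat.S_IWOTH)

for p in PATHS:
    if world_writable(p):
        print(f"WARN: {p} is writable by any local user")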

10 · On-Premise Myth

If data never leaves the hospital, is it really safe?

Local hosting narrows some exposure, but it does not remove trust and access problems.

Hospital perimeter

On-premise does not equal trusted.

Inside the hospital does not equal secure.

Identity

Lateral movement

Insider risk

Misconfiguration

11 · Federated Learning

What is Federated Learning?

Federated learning changes where data lives, not whether the overall system needs threat modeling.

Local site

Hospital A

Local training stays inside the institution.

Local site

Hospital B

Local training stays inside the institution.

Local site

Hospital C

Local training stays inside the institution.

Coordinator flow

Raw patient data stays in each hospital. Only model updates move between hospitals and the coordinator.
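
A minimal federated-averaging sketch of that coordinator flow, assuming each site contributes a weight update and an example count; the numbers are illustrative.

import numpy as np

def fedavg(updates, sizes):
    # Weighted average of per-site updates; raw patient data never moves.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

site_updates = [np.array([0.1, 0.2]),   # Hospital A
                np.array([0.3, 0.0]),   # Hospital B
                np.array([0.2, 0.4])]   # Hospital C
site_sizes = [1000, 500, 250]
print(fedavg(site_updates, site_sizes))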

12 · Leakage Paradox

Data may stay, but information can still leak

Information can leak through updates, gradients, or behavior even when raw records never move.

Distribution truth

Data stays local.

Security truth

Risk can still exist.

Gradients · Updates · Reconstruction risk · Membership inference
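
A toy sketch of one such leak: membership inference via a loss threshold. The loss distributions are invented stand-ins, chosen only to show why overfit models separate members from non-members.

import numpy as np

rng = np.random.default_rng(0)
# Members of the training set tend to have lower loss than outsiders.
member_loss = rng.normal(0.2, 0.1, 1000)
nonmember_loss = rng.normal(0.8, 0.3, 1000)

threshold = 0.5  # attacker guesses "member" when loss falls below this
tpr = (member_loss < threshold).mean()
fpr = (nonmember_loss < threshold).mean()
print(f"membership inference: TPR={tpr:.2f}, FPR={fpr:.2f}")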

13 · Trust, Not Just Accuracy

The real problem was not accuracy. It was trust.

Accuracy scores alone do not restore confidence if the surrounding chain cannot be explained or controlled.

Trust chain

Data

Trust chain

Runtime

Trust chain

Output

Trust chain

Clinician

14 · LeakPro

LeakPro: test the model before you share it

Leakage testing should happen before a model crosses organizational boundaries.

Before sharing

Model ready for release?

Validation gate

Ask whether it leaks, not only whether it is accurate.

Privacy risk · Model sharing · Pre-release testing · Evidence

Pre-release leakage assessment provides evidence for governance, model sharing, and internal review. It turns privacy risk into something testable instead of assumed.
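
A minimal sketch of such a gate, scoring attack advantage (TPR minus FPR) against a fixed budget. This is a generic pattern, not LeakPro's actual interface, and the budget is illustrative.

def release_gate(attack_tpr: float, attack_fpr: float,
                 max_advantage: float = 0.1) -> bool:
    # Advantage of 0 means the attacker does no better than guessing.
    return (attack_tpr - attack_fpr) <= max_advantage

# Rates like those from the chapter 12 sketch: block the release.
print(release_gate(attack_tpr=0.99, attack_fpr=0.16))  # False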

15 · Future Question

The Future Question for Medical AI

Medical AI is governed across time, not just at launch.

Stage 1

Build

Stage 2

Validate

Stage 3

Deploy

Stage 4

Monitor

Stage 5

Update

Stage 6

Revalidate

How do we govern systems that keep changing?

16 · SaMD DNA and Threat Modeling

Embedding security into the SaMD DNA

Threat modeling connects abstract architecture to concrete design and review decisions.

Threat model frame

Architecture becomes control points.

STRIDE

Trust boundary

Assets

Data flow

Control points
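
A minimal sketch of that mapping as data, pairing hypothetical data flows with a STRIDE checklist; a real threat model carries far more context per flow.

STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

# Illustrative flows; boundary-crossing flows get the full checklist.
FLOWS = [
    {"flow": "PACS -> inference enclave", "crosses_boundary": True},
    {"flow": "enclave -> clinician viewer", "crosses_boundary": True},
    {"flow": "enclave-internal cache", "crosses_boundary": False},
]

for f in FLOWS:
    threats = STRIDE if f["crosses_boundary"] else ["Tampering",
                                                    "Denial of service"]
    print(f["flow"], "->", threats)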

17 · Default-Deny Clinical Environment

The 'Default-Deny' clinical environment

A clinical environment needs explicit trust boundaries, with only explicitly permitted paths between zones.

Zone

Clinician workstation

Zone

AI inference enclave

Zone

Hospital core systems

Zone

Vendor access

Policy

Allowed paths only

Inside the hospital is not the same as trusted. A clinical architecture should restrict movement by default, make exceptions explicit, and treat vendor or integration access as bounded paths rather than ambient trust.
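
A minimal sketch of default-deny as code; the zone names are hypothetical, and anything not explicitly allowed is refused.

# Only these directed paths exist; every other pair is denied.
ALLOWED_PATHS = {
    ("clinician_workstation", "ai_inference_enclave"),
    ("ai_inference_enclave", "hospital_core"),
}

def permitted(src: str, dst: str) -> bool:
    return (src, dst) in ALLOWED_PATHS  # default-deny

print(permitted("clinician_workstation", "ai_inference_enclave"))  # True
print(permitted("vendor_access", "hospital_core"))                 # False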

18 · Patch Governance Tradeoff

Balancing clinical stability with cybersecurity responsiveness

Security response has to respect both exploit urgency and validation reality.

Exploitability

Clinical impact

Validation burden

Rollback readiness

Decision principle

The right patch decision is not the fastest one. It is the most controllable one.
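
A minimal sketch of that principle as a decision rule, scoring the four factors above on a 0-1 scale; the thresholds are illustrative, not a validated policy.

def patch_decision(exploitability, clinical_impact,
                   validation_burden, rollback_ready):
    # High exploitability and impact create pressure to act; rollback
    # readiness and a light validation burden create control to act safely.
    pressure = exploitability * clinical_impact
    control = rollback_ready * (1 - validation_burden)
    if pressure > 0.5 and control > 0.5:
        return "patch now"
    if pressure > 0.5:
        return "compensating controls, then staged patch"
    return "scheduled patch window"

print(patch_decision(0.9, 0.8, 0.3, 0.9))  # patch now
print(patch_decision(0.9, 0.8, 0.9, 0.1))  # compensating controls first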

19 · Patient Safety

Patient safety supersedes cybersecurity speed.

Clinical systems are allowed to move carefully when patient harm is on the line.

Patient safety supersedes cybersecurity speed.

In clinical environments, patching must be fast enough to close the exposure, yet safe enough to trust.

20 · Closing Thesis

AI security is not about protecting models. It is about protecting the entire computing stack.

The endpoint of this talk is a shift from model-centric security to system-centric trust.

AI security is not about protecting models. It is about protecting the entire computing stack.

From model security to full-stack trust