Thought Leaders
Where AI Security Standards Stop — and Runtime Protection Must Begin

Amid all the talk about the security risks of AI, one issue is routinely overlooked: AI systems can only function by exposing their most valuable assets, their models and data.
Unlike traditional software, AI doesn’t simply execute predefined logic. It continuously blends proprietary models with sensitive inputs to generate outputs, often on infrastructure that was not designed to protect computation.
This is where traditional security falls short. Encryption is effective while data is stored or transmitted across a network, but not while data is being processed. For AI in particular, the danger arises when a model is deployed: its parameters are loaded into memory, initialized, and exercised at scale, which is exactly where encryption stops, exposing them to potential unauthorized access. During inference, sensitive data flows through that same exposed space. The result is a highly vulnerable risk surface: AI systems that appear secure on paper but are unprotected in their most critical moments.
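To make the gap concrete, here is a minimal sketch, with a toy XOR cipher standing in for real at-rest encryption such as AES: the model is protected on disk, but must be decrypted into plaintext process memory before it can serve inference.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real at-rest encryption; illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A fixed key, purely for reproducibility of this sketch.
key = bytes(range(1, 17))

# "Model parameters" encrypted on disk: protected at rest.
plaintext_weights = b"serialized model parameters"
encrypted_at_rest = xor_cipher(plaintext_weights, key)

# To run inference, a conventional stack must decrypt into process memory.
# From this point on, the plaintext weights are exposed to the host.
weights_in_memory = xor_cipher(encrypted_at_rest, key)
assert weights_in_memory == plaintext_weights  # encryption's protection ends here
```

The decryption step is not optional in a conventional stack; that is precisely the moment the rest of this article is concerned with.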
Standards bodies like the National Institute of Standards and Technology (NIST), the European Union Agency for Cybersecurity (formerly known as the European Network and Information Security Agency, or ENISA), and the Open Worldwide Application Security Project (OWASP) have begun to chart this territory. They describe the risks, name the vulnerabilities, and outline governance principles. But they stop short of prescribing how to protect models as intellectual property and data as confidential assets once execution begins. Closing that gap requires rethinking AI security, not as a compliance exercise but as a problem of computation itself. This is where encryption-in-use, which keeps data protected while it is being processed, plays a role.
The Blind Spot in Modern AI Security
Most AI security conversations still orbit familiar ground: training data governance, access controls, API monitoring, and responsible user policies. These are necessary. However, none of them addresses what happens after deployment, when a model leaves the repository and becomes a living system.
Once deployed, a model’s parameters are no longer abstract artifacts. They are live, memory-resident assets, continuously accessed during inference and often shared across multiple tenants or customers through common AI services. This exposure begins before any inference request is made; inference then compounds the risk by introducing sensitive inputs and externally observable behavior.
Treating model protection as a pre-deployment concern and inference security as a runtime concern misses the point. In real systems, these risks overlap. Models and data are exposed across initialization, execution, and output. Security that begins and ends with storage controls fails to address these exposures.
What NIST Gets Right — and Where It Stops
The NIST AI Risk Management Framework has become a cornerstone for organizations trying to manage AI risk. Its structure—govern, map, measure, manage—offers a disciplined way to think about accountability, context, impact, and mitigation across the AI lifecycle.
What NIST does particularly well is frame AI risk as systemic rather than accidental. AI failures are rarely single-point events; they emerge from interactions between models, data, people, and infrastructure. That framing is essential.
Where the framework falls short is that it does not prescribe how high-value AI assets are protected once systems are live. Model parameters are implicitly treated as design-time artifacts rather than runtime assets, and execution environments are assumed to be trustworthy enough.
In practice, model parameters are often the most valuable intellectual property an organization owns. They are loaded into memory, copied across nodes, cached, and reused. If AI risk management fails to account for the confidentiality of models during deployment and execution, a critical asset remains outside the risk boundary, like a sitting duck.
ENISA and the Reality of AI-Specific Threats
ENISA’s work on AI cybersecurity pushes the conversation further. Its multilayer framework distinguishes between traditional infrastructure security and AI-specific risks, acknowledging that AI systems behave differently—and fail differently—than conventional software.
Why is this important? AI introduces threats that don’t fit neatly into existing controls: model extraction, parameter leakage, co-tenancy exposure, and tampering during execution. These risks don’t require exotic attackers. They arise naturally when high-value models run in shared or externally managed environments.
ENISA’s framework implicitly recognizes that securing AI means securing behavior, not just code. But like most standards, it focuses on what should be considered, not how protections are technically enforced once models are running.
OWASP and the Cost of Observable Intelligence
OWASP’s Top 10 for Large Language Model applications offers a more concrete view of how AI systems break in the real world. Prompt injection, sensitive information disclosure, embedding leakage, excessive output transparency—these aren’t theoretical concerns. They are the byproducts of deploying powerful models without constraining what they reveal.
While these issues are often framed as application-layer problems, their consequences are deeper. Repeated exposure of model behavior can lead to effective cloning; poorly isolated embeddings can reveal structure; and inference abuse becomes a pathway to model replication.
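As an illustration of how query access alone can replicate a model, consider this hypothetical sketch: a "black-box" linear model is exposed only through a query API, yet an attacker recovers its exact weights with a handful of probes. Real extraction attacks against neural networks are far more involved, but the principle is the same.

```python
# Hypothetical "black-box" model: a hidden linear function behind a query API.
HIDDEN_WEIGHTS = [2.0, -1.0, 0.5]

def query(x):
    """The only interface the attacker has: submit inputs, observe outputs."""
    return sum(w * xi for w, xi in zip(HIDDEN_WEIGHTS, x))

# Attacker: probe with unit vectors; each response leaks one weight exactly.
dim = len(HIDDEN_WEIGHTS)
probes = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
stolen_weights = [query(p) for p in probes]

assert stolen_weights == HIDDEN_WEIGHTS  # the model is cloned via its own API
```

Nothing here required breaching storage or stealing a file; observable behavior was enough.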
OWASP’s taxonomy makes one thing clear: protecting AI is not just about stopping bad inputs. It’s about limiting what models expose—internally and externally—once they are operational.
A Shared Conclusion, an Unfinished Job
Across NIST, ENISA, and OWASP, there is broad agreement on the fundamentals:
- AI risk spans the lifecycle
- AI systems introduce new threat categories
- Models and data are high-value assets
- Runtime exposure is unavoidable
What these frameworks lack, however, is a mechanism for enforcing confidentiality once models are deployed and computation begins. That omission is not a flaw in itself: standards define intent and scope, and implementation is typically left to the system designer.
But they leave a critical gap—one that grows wider as AI systems scale.
Encryption-in-Use Changes the Equation
Encryption-in-use shifts the security model. Instead of assuming that data and models must be exposed to be useful, it treats computation as something that can be protected.
In practical terms, this means:
- Models remain encrypted during deployment, initialization, and execution
- Inputs are never visible in plaintext to the execution environment
- Intermediate states cannot be inspected or modified
- Infrastructure no longer needs to be implicitly trusted
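What "computing on protected data" can look like is easiest to see with a toy additively homomorphic scheme, a one-time pad over modular arithmetic. Production systems rely on fully homomorphic encryption libraries or hardware-based trusted execution environments instead, but the principle in this sketch is the same: the server computes a correct result without ever seeing a plaintext input.

```python
import random

P = 2**61 - 1  # modulus for a toy additively homomorphic one-time-pad scheme

def encrypt(value: int, pad: int) -> int:
    return (value + pad) % P

def decrypt(cipher: int, pad: int) -> int:
    return (cipher - pad) % P

# Client side: encrypt inputs before handing them to the server.
inputs = [3, 5, 7]
pads = [random.randrange(P) for _ in inputs]
ciphertexts = [encrypt(v, p) for v, p in zip(inputs, pads)]

# Server side: sum the ciphertexts directly; no plaintext ever appears here.
cipher_sum = sum(ciphertexts) % P

# Client side: strip the combined pad to recover the true result.
result = decrypt(cipher_sum, sum(pads) % P)
assert result == sum(inputs)
```

The toy scheme supports only addition and requires the client to track its pads; real encryption-in-use systems remove those limits, but the trust model is identical: the infrastructure computes without ever holding the secrets.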
This doesn’t replace governance frameworks or application-layer controls; it operationalizes them. It turns risk principles into enforceable guarantees at the moments when AI systems are most vulnerable.
In other words, encryption-in-use is the missing layer between AI policy and AI reality.
When Governance Ends and Execution Begins: Securing AI Computation
AI security breaks down at runtime. Once deployed, AI models and sensitive data must be exposed in memory to function, creating a risk surface that traditional controls—encryption at rest, encryption in transit, and governance frameworks—were never designed to protect.
Standards bodies such as NIST, ENISA, and OWASP have made critical progress in defining AI risk, accountability, and misuse. But their guidance largely treats models as design-time artifacts and assumes execution environments can be trusted. In practice, model parameters and sensitive inputs are continuously accessed, reused, and often processed in shared or externally managed environments.
Closing this gap requires rethinking AI security not as a compliance exercise, but as a problem of protecting computation itself—when models are live, data is in use, and exposure is unavoidable. Encryption-in-use offers a viable way to keep AI models and sensitive inputs secure across every stage of the AI lifecycle.