Podcast Episode
The standard establishes clear responsibilities for three technical roles. Developers are accountable for building secure models, system operators for deploying them safely, and data custodians for managing the information pipelines that feed AI systems. This division of responsibility ensures that security considerations are embedded at every stage, rather than treated as an afterthought.
ETSI EN 304 223 applies to AI systems incorporating deep neural networks, including generative AI intended for real-world deployments. The standard explicitly excludes systems used strictly for academic research purposes.
Indirect prompt injection represents another sophisticated attack vector. Adversaries hide malicious instructions within web pages or documents that AI systems consume during operation. When the AI processes this content, it unknowingly follows the attacker's commands, bypassing security protocols without detection.
Model obfuscation attacks exploit the complexity of AI architectures to hide malicious behaviour within the model itself. These threats have emerged as particular concerns as organisations deploy AI in sensitive domains including finance, healthcare, and critical infrastructure.
Memory poisoning poses an especially persistent threat for autonomous AI agents. Unlike standard prompt injections that end when an interaction closes, memory poisoning implants malicious information into an agent's long-term storage, where it persists across sessions and continues influencing behaviour over time.
Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence, emphasised the standard's significance: "ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems. At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated."
The standard's publication comes at a critical moment for AI security. Industry forecasts for 2026 identify prompt injection and data poisoning as the "new zero-day" threats, with autonomous agents introducing unprecedented insider threat risks. Unlike traditional vulnerabilities that require exploiting code bugs or hacking servers, AI poisoning attacks simply require tampering with the data supply chain.
A conformity assessment standard, TS 104 216, is currently in development to provide methods for assessing compliance with EN 304 223. This will give organisations concrete frameworks for demonstrating that their AI systems meet the baseline security requirements.
The standard's influence is expected to extend beyond Europe. When rigorous cybersecurity requirements are established in major markets, other regions typically align their practices accordingly. As AI adoption accelerates globally, particularly in critical infrastructure and sensitive applications, the need for internationally recognised security baselines becomes increasingly urgent.
Organisations are facing the realisation that defending AI systems requires protecting both the training pipeline and the model's runtime environment. Security cannot be bolted on after deployment; it must be integrated from the start of AI development.
As AI systems become embedded in finance, healthcare, autonomous vehicles, and critical infrastructure, the consequences of compromised models grow exponentially. The ETSI standard provides the foundation for building trustworthy AI systems at a time when adoption is accelerating faster than security measures have been able to keep pace.
The publication of ETSI EN 304 223 signals a turning point where AI security moves from theoretical concern to practical necessity, backed by internationally recognised standards and clear compliance frameworks.
Europe Releases First Global Cybersecurity Standard for Artificial Intelligence
January 16, 2026
Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.
The European Telecommunications Standards Institute has published the first globally applicable cybersecurity standard specifically designed for artificial intelligence systems, marking a significant milestone in AI governance. Released on January 15, 2026, ETSI EN 304 223 establishes baseline requirements to protect AI systems from emerging threats that traditional cybersecurity measures fail to address, including data poisoning, indirect prompt injection, and model obfuscation.
A Lifecycle Framework for AI Security
The new standard provides a comprehensive framework for securing AI models and systems across their entire operational lifecycle. It defines 13 core principles and 72 trackable provisions organised across five distinct phases: secure design, secure development, secure deployment, secure maintenance, and secure end of life.The standard establishes clear responsibilities for three technical roles. Developers are accountable for building secure models, system operators for deploying them safely, and data custodians for managing the information pipelines that feed AI systems. This division of responsibility ensures that security considerations are embedded at every stage, rather than treated as an afterthought.
ETSI EN 304 223 applies to AI systems incorporating deep neural networks, including generative AI intended for real-world deployments. The standard explicitly excludes systems used strictly for academic research purposes.
Addressing AI-Specific Threats
Unlike traditional software, AI systems face unique vulnerabilities that arise from their reliance on massive data pipelines and complex model architectures. Data poisoning, one of the most insidious threats, involves corrupting the training data used to develop AI models. Research has demonstrated that healthcare diagnostic models can be compromised with as few as 100 to 500 poisoned samples, potentially swaying clinical decisions with dangerous consequences.Indirect prompt injection represents another sophisticated attack vector. Adversaries hide malicious instructions within web pages or documents that AI systems consume during operation. When the AI processes this content, it unknowingly follows the attacker's commands, bypassing security protocols without detection.
Model obfuscation attacks exploit the complexity of AI architectures to hide malicious behaviour within the model itself. These threats have emerged as particular concerns as organisations deploy AI in sensitive domains including finance, healthcare, and critical infrastructure.
Memory poisoning poses an especially persistent threat for autonomous AI agents. Unlike standard prompt injections that end when an interaction closes, memory poisoning implants malicious information into an agent's long-term storage, where it persists across sessions and continues influencing behaviour over time.
International Authority and Regulatory Context
The new standard carries stronger international authority than its predecessor, ETSI TS 104 223, having undergone formal review and approval by multiple national standards organisations. This positions ETSI EN 304 223 alongside other AI governance frameworks such as ISO/IEC 42001 and complements the EU AI Act, which becomes fully applicable in August 2026.Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence, emphasised the standard's significance: "ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems. At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated."
The standard's publication comes at a critical moment for AI security. Industry forecasts for 2026 identify prompt injection and data poisoning as the "new zero-day" threats, with autonomous agents introducing unprecedented insider threat risks. Unlike traditional vulnerabilities that require exploiting code bugs or hacking servers, AI poisoning attacks simply require tampering with the data supply chain.
Future Guidance and Compliance
ETSI announced that an upcoming Technical Report, ETSI TR 104 159, will apply these security principles specifically to generative AI systems. That document is expected to address deepfakes, misinformation and disinformation, confidentiality risks, and copyright concerns.A conformity assessment standard, TS 104 216, is currently in development to provide methods for assessing compliance with EN 304 223. This will give organisations concrete frameworks for demonstrating that their AI systems meet the baseline security requirements.
The standard's influence is expected to extend beyond Europe. When rigorous cybersecurity requirements are established in major markets, other regions typically align their practices accordingly. As AI adoption accelerates globally, particularly in critical infrastructure and sensitive applications, the need for internationally recognised security baselines becomes increasingly urgent.
The Broader Security Landscape
The release of ETSI EN 304 223 reflects a broader recognition that AI security must emerge as a formal discipline in 2026. The mainstreaming of agentic AI, autonomous systems that make decisions and act with minimal human oversight, has expanded the attack surface dramatically. Research identified 43 different agent framework components with embedded vulnerabilities introduced via supply chain compromise.Organisations are facing the realisation that defending AI systems requires protecting both the training pipeline and the model's runtime environment. Security cannot be bolted on after deployment; it must be integrated from the start of AI development.
As AI systems become embedded in finance, healthcare, autonomous vehicles, and critical infrastructure, the consequences of compromised models grow exponentially. The ETSI standard provides the foundation for building trustworthy AI systems at a time when adoption is accelerating faster than security measures have been able to keep pace.
The publication of ETSI EN 304 223 signals a turning point where AI security moves from theoretical concern to practical necessity, backed by internationally recognised standards and clear compliance frameworks.
Published January 16, 2026 at 10:18pm