Six Agencies. Five Governments. One Message Your AI Agents Can't Ignore.
On April 30, 2026, six of the world's most respected cybersecurity agencies published joint guidance on securing agentic AI. Yes, six agencies from five governments: the US sent two, because apparently one wasn't enough to make the point. CISA, the NSA, the UK's National Cyber Security Centre, the Australian Signals Directorate's Australian Cyber Security Centre, the Canadian Centre for Cyber Security, and New Zealand's NCSC, collectively the Five Eyes intelligence alliance, spoke in one voice. Their findings align closely with OPAQUE's own 2026 AI Data Leak Report, which mapped 46 exposure vectors across eight categories and three trust boundaries. That report was developed with Anthropic, ServiceNow, and Accenture, and validated with NVIDIA, Intel, AMD, and Azure.
This is not a whitepaper from a vendor. This is not an analyst report. This is five governments telling enterprise AI teams exactly what they need to do to secure their AI agents and naming the gaps most organizations aren't closing today.
Here's what they said. And here's exactly where OPAQUE closes each gap.
What Five Governments Just Mandated
The guidance is blunt. AI agents are already running inside critical infrastructure, financial services, healthcare, and enterprise operations. Most organizations have given them far more access than anyone can safely watch — and the governance frameworks designed for human actors don't translate effectively to autonomous AI agents.
The authoring agencies were specific about what organizations need to do:
- "Require agents to perform cryptographic attestation where agents must prove they are running expected and unmodified code."
- "Authenticate agents with fresh cryptographic proofs before every privileged call."
- "Agentic AI systems should produce comprehensive artifacts documenting the agent's actions and decision-making process."
- "Autonomous actions by agentic AI systems introduce new risks, requiring updated governance policies and continuous runtime authentication with centralised policy decision points for each action."
Before. During. After. Every action.
That's not a requirement most organizations can satisfy today. Because most AI governance tools check policy at deploy time and walk away. The agent runs. Nobody knows what it actually did.
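To make the "fresh cryptographic proofs before every privileged call" requirement concrete, here is a minimal challenge-response sketch. It assumes a hypothetical policy decision point that issues a one-time nonce per call; every name in it is illustrative, not drawn from the guidance or from OPAQUE's product.

```python
# Minimal challenge-response sketch: a policy decision point (PDP) demands a
# fresh signature over a one-time nonce before each privileged call.
# Hypothetical illustration only -- not OPAQUE's API or the agencies' protocol.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

class PolicyDecisionPoint:
    def __init__(self):
        self.registry = {}      # agent_id -> registered public key
        self.pending = {}       # agent_id -> outstanding nonce

    def register(self, agent_id, public_key):
        self.registry[agent_id] = public_key

    def challenge(self, agent_id):
        # A fresh random nonce per call means an old proof can never be replayed.
        nonce = os.urandom(32)
        self.pending[agent_id] = nonce
        return nonce

    def authorize(self, agent_id, signature):
        nonce = self.pending.pop(agent_id, None)
        key = self.registry.get(agent_id)
        if nonce is None or key is None:
            return False
        try:
            key.verify(signature, nonce)   # raises if the proof is invalid
            return True
        except InvalidSignature:
            return False

# Usage: the agent proves possession of its key before every privileged call.
agent_key = ed25519.Ed25519PrivateKey.generate()
pdp = PolicyDecisionPoint()
pdp.register("billing-agent", agent_key.public_key())

nonce = pdp.challenge("billing-agent")
proof = agent_key.sign(nonce)
assert pdp.authorize("billing-agent", proof)        # fresh proof: allowed
assert not pdp.authorize("billing-agent", proof)    # replayed proof: denied
```

The point of the nonce is "before every privileged call": a proof minted for one call is worthless for the next one.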
The Four Gaps Five Governments Just Named — And How OPAQUE Closes Each One
1. The Agent Identity Gap
The guidance calls out identity spoofing and agent impersonation as a primary risk. When agents authenticate using static keys or shared tokens, a malicious actor operating under a trusted agent identity can invoke sensitive operations while bypassing behavioral guardrails entirely. The audit log looks clean. The breach goes undetected.
Where OPAQUE fits: OPAQUE cryptographically verifies every agent's identity before execution using hardware attestation rooted in NVIDIA, Intel, and AMD silicon. A spoofed agent cannot produce a valid hardware attestation report. The connection never happens. Identity isn't a software promise — it's a hardware proof.
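A hedged sketch of what that verification step looks like: an attestation report binds a measurement of the running code to a signature rooted in the vendor's hardware key, and the verifier accepts only when both check out. The Ed25519 key below stands in for a vendor certificate chain, and every identifier is illustrative rather than OPAQUE's actual interface.

```python
# Sketch of attestation verification: trust is granted only if the report is
# signed by a key chaining to the silicon vendor AND the code measurement
# matches the hash of the code we expected to run. Illustrative only -- real
# NVIDIA/Intel/AMD reports use vendor-specific formats and cert chains.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-agent-build-v1.4").digest()

def verify_attestation(vendor_root_key, report, signature):
    """Accept the agent only on a valid vendor signature plus expected code hash."""
    try:
        vendor_root_key.verify(signature, report)  # stand-in for cert-chain check
    except InvalidSignature:
        return False                               # spoofed or tampered report
    return report == EXPECTED_MEASUREMENT          # unmodified, expected code

# A spoofed agent can copy the measurement bytes, but not the vendor signature.
hardware_key = ed25519.Ed25519PrivateKey.generate()   # models the silicon root
genuine_report = EXPECTED_MEASUREMENT
assert verify_attestation(hardware_key.public_key(),
                          genuine_report,
                          hardware_key.sign(genuine_report))

attacker_key = ed25519.Ed25519PrivateKey.generate()
assert not verify_attestation(hardware_key.public_key(),
                              genuine_report,
                              attacker_key.sign(genuine_report))
```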
2. The Policy Gap
The guidance identifies a critical failure in how most organizations manage agent governance: static role or permission checks evaluated once at system startup rather than at each invocation. Policies exist on paper. They don't fire at runtime.
Where OPAQUE fits: OPAQUE cryptographically binds your governance policies to the workload at the moment of execution. Not documented. Not hoped for. Enforced by hardware. If an agent tries to reach an endpoint it shouldn't, or access data it isn't authorized to touch, execution stops. The policy doesn't describe what should happen — it enforces what does.
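In sketch form, runtime enforcement means the policy is evaluated on every invocation with deny-by-default semantics. The agent IDs, endpoints, and data classes below are hypothetical; this shows the shape of the check, not OPAQUE's policy engine.

```python
# Sketch of per-invocation policy enforcement: the policy is consulted on
# EVERY call, and anything not explicitly allowed is denied. Hypothetical
# names throughout.
POLICY = {
    "claims-agent": {
        "endpoints":    {"https://api.internal/claims"},
        "data_classes": {"claims", "policy-docs"},
    },
}

class PolicyViolation(Exception):
    pass

def enforce(agent_id, endpoint, data_class):
    rules = POLICY.get(agent_id)
    # Deny-by-default: unknown agents, endpoints, or data classes all stop here.
    if rules is None:
        raise PolicyViolation(f"{agent_id}: no policy bound to workload")
    if endpoint not in rules["endpoints"]:
        raise PolicyViolation(f"{agent_id}: endpoint {endpoint} not permitted")
    if data_class not in rules["data_classes"]:
        raise PolicyViolation(f"{agent_id}: data class {data_class} not permitted")

enforce("claims-agent", "https://api.internal/claims", "claims")  # passes
try:
    enforce("claims-agent", "https://exfil.example.com", "claims")
except PolicyViolation as err:
    print("blocked:", err)   # execution stops instead of proceeding
```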
3. The Proof Gap
The guidance is explicit that agentic AI systems must produce comprehensive artifacts documenting every action and decision. But most logs are self-reported and mutable. An auditor asking for proof of what happened gets a document you wrote yourself.
Where OPAQUE fits: After every execution, OPAQUE automatically produces an Attested Evidence Pack — a hardware-signed cryptographic receipt that captures what ran, under what policy, on what data, invoking which tools, with what results. Signed by the hardware itself. Verifiable independently by any third party — your compliance team, your auditor, your regulator — without requiring them to trust OPAQUE or your cloud provider. This is not a log you wrote. It is proof the hardware generated.
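As a rough sketch under stated assumptions, a receipt like this can be modeled as a canonically serialized record signed by a key the operator never holds. The schema and field names below are invented for illustration; only the verification pattern, sign once and let anyone verify, reflects the mechanism described above.

```python
# Sketch of a hardware-signed execution receipt: the record of what ran is
# serialized canonically and signed by a key the application never holds, so
# any third party can verify it without trusting the operator. Field names
# are illustrative, not the Attested Evidence Pack's actual schema.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_receipt(signing_key, record):
    payload = json.dumps(record, sort_keys=True).encode()  # canonical form
    return payload, signing_key.sign(payload)

def verify_receipt(public_key, payload, signature):
    try:
        public_key.verify(signature, payload)
        return json.loads(payload)     # contents are authentic as signed
    except InvalidSignature:
        return None                    # tampered or forged receipt

hardware_key = ed25519.Ed25519PrivateKey.generate()  # stands in for the TEE key
payload, sig = sign_receipt(hardware_key, {
    "workload": "claims-agent@v1.4",
    "policy":   "phi-handling-v7",
    "data":     ["claims", "policy-docs"],
    "tools":    ["lookup_policy", "draft_response"],
    "result":   "completed",
})

# An auditor re-verifies independently, with no trust in the operator required.
assert verify_receipt(hardware_key.public_key(), payload, sig) is not None
tampered = payload.replace(b"phi-handling-v7", b"no-policy-applied")
assert verify_receipt(hardware_key.public_key(), tampered, sig) is None
```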
4. The Accountability Gap
The guidance describes the accountability problem precisely: when multiple autonomous agents collaborate and something goes wrong, fragmented logs, opaque reasoning, and emergent interactions make it nearly impossible to determine what caused the error, assign responsibility, or demonstrate compliance.
Where OPAQUE fits: Every tool call, every data class touched, every policy decision, every agent handoff is captured in the Attested Evidence Pack at the moment of execution. When something goes wrong — or when a regulator asks — you know exactly what happened, when, under what conditions, and which agent did it. The receipt doesn't just prove compliance. It proves accountability.
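A small sketch of how that reconstruction works: if each signed receipt names the acting agent and the agent it handed off to, tracing responsibility becomes a walk over records instead of guesswork. The receipt fields here are hypothetical.

```python
# Sketch of after-the-fact accountability across a multi-agent workflow:
# each receipt records who acted and who they handed off to, so the chain
# of responsibility for a bad outcome can be reconstructed from records.
RECEIPTS = [
    {"agent": "intake-agent",  "action": "parse_claim",   "handoff": "triage-agent"},
    {"agent": "triage-agent",  "action": "classify",      "handoff": "payment-agent"},
    {"agent": "payment-agent", "action": "issue_payment", "handoff": None},
]

def trace(receipts, failed_action):
    """Walk the receipt chain up to the failed action and name who did what."""
    chain = []
    for r in receipts:
        chain.append((r["agent"], r["action"]))
        if r["action"] == failed_action:
            break
    return chain

# When issue_payment goes wrong, the records answer "which agent, after whom?"
for agent, action in trace(RECEIPTS, "issue_payment"):
    print(f"{agent} -> {action}")
```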
The Urgency Is Real
The EU AI Act enforcement provisions go live August 2, 2026. Articles 10 and 12, together with Annex IV, require verifiable proof that governance held at the moment AI processed your data: not documentation, not logs, runtime proof. Five governments just reinforced exactly the same requirement from a security perspective.
The policy gap is real. The proof gap is real. The accountability gap is real.
Five governments and six agencies just put all three in writing — and named cryptographic attestation, runtime enforcement, and comprehensive artifacts as the answer.
The question isn't whether this applies to you. It does. The question is whether you can answer it.
"Can you prove your AI agents did what they were supposed to do — or can you only hope they did?"
Run Your AI. Get Your Receipt.
OPAQUE answers that question automatically. At the hardware level. Without rebuilding your existing workflows. Without changing your cloud. Without adding months to your deployment timeline.
Every AI workload you run should come with a receipt. Simple enough for your team to understand. Rigorous enough to hand to your regulator as proof.
Ready to get started? Contact OPAQUE today at hello@opaque.co or visit opaque.co.
OPAQUE 2026 AI Data Leak Report: 46 exposure vectors across eight categories.