The EU AI Act's Runtime Problem & Why Most Teams Don't Know They Have It
The EU AI Act's obligations for high-risk AI systems take effect on August 2, 2026. Most enterprise AI teams are aware of the deadline. Very few understand what it actually requires.
The assumption most teams are operating under: document your policies, complete your risk assessments, pass your conformity review, and you're covered. That assumption is wrong — and the gap it creates is the most common problem we're hearing about right now across every regulated industry.
What the Act Actually Requires
Articles 10 and 12, together with Annex IV, don't ask for documentation. They ask for proof.
Article 10 (Data Governance) requires demonstrable governance practices at the moment AI processes data: not a policy document describing what should happen, but evidence of what did happen.
Article 12 (Logging & Record-Keeping) requires high-risk AI systems to automatically log events throughout the AI lifecycle. This is a technical requirement, not a manual audit trail. The word "automatically" is doing a lot of work here.
Annex IV (Technical Documentation) requires documentation sufficient for regulators to assess compliance independently. That means a verifiable artifact: not a summary you wrote, but proof that can be validated by a third party.
The bar the Act is setting is not "show me your governance framework." It is "prove to me that governance held at the moment your AI ran."
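To make Article 12's "automatically" concrete, here is a minimal sketch of event logging that fires on every model call rather than relying on developers to remember to log. Everything here is illustrative: the decorator, the field names, and the model ID are assumptions, not anything the Act prescribes.

```python
import functools
import hashlib
import json
import time

def logged(model_id, sink):
    """Wrap an inference function so every call emits a structured event."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(payload):
            event = {
                "model_id": model_id,
                "timestamp": time.time(),
                # Hash, not raw data: the record shows what was processed
                # without duplicating the data itself.
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),
            }
            result = fn(payload)
            event["output_sha256"] = hashlib.sha256(
                json.dumps(result, sort_keys=True).encode()
            ).hexdigest()
            sink.append(event)  # fires on every call, not opt-in
            return result
        return wrapper
    return decorator

events = []

@logged("credit-scoring-v3", events)  # hypothetical model ID
def score(applicant):
    return {"score": 0.42}

score({"income": 50000})
```

The point of the sketch is structural: logging lives in the execution path, so it cannot be skipped by the code it wraps.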
The Two Gaps That Keep Surfacing
In conversations across financial services, healthcare, identity verification, and enterprise software, two gaps come up consistently.
The Policy Gap. Most teams have governance policies — documented in wikis, configured in workflow engines, reviewed in gate processes. But those policies have no enforcement mechanism at the moment AI actually processes data. The policy exists on paper. It doesn't fire at runtime. An auditor asking "prove this policy was enforced when the model ran" gets a document in response, not a proof.
The Proof Gap. Even when policies are configured correctly, there is no cryptographically verifiable artifact proving they executed as intended. Logs are useful. Logs are not proof. Logs can be altered, selectively generated, or simply not cover what the regulator is asking about. The Act is going to require more than that.
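The gap between a log and a proof can be shown in a few lines. Below is a hash-chained log, a common tamper-evidence technique: each record commits to everything before it, so a retroactive edit is detectable. This is a sketch of the idea only; meeting the bar described here would also require an external trust root (such as a hardware signature), which a bare hash chain does not provide.

```python
import hashlib
import json

def chain_append(log, entry):
    """Append an entry whose digest commits to the entire prior log."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"entry": entry, "digest": digest})

def chain_verify(log):
    """Recompute every digest; any altered entry breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

log = []
chain_append(log, {"event": "inference", "policy": "pii-redaction-v2"})
chain_append(log, {"event": "inference", "policy": "pii-redaction-v2"})
assert chain_verify(log)

log[0]["entry"]["policy"] = "none"   # retroactive edit...
assert not chain_verify(log)         # ...is detectable
```

A plain log fails the second assertion silently: the edit simply becomes the record.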
Most teams don't realize they have either gap until they start mapping their stack against the specific articles — and then they realize their current governance infrastructure was designed for a different era of software.
What Closing the Gap Actually Looks Like
The teams getting ahead of this aren't doing more documentation. They're adding a runtime enforcement layer — specifically, one that operates inside the execution environment at the moment AI processes data, not before or after.
The architecture that closes both gaps looks like this:
Before execution: A cryptographic integrity check verifies the workload is exactly what it's supposed to be, before any data is processed. If anything has been tampered with, execution fails.
During execution: Policies are cryptographically bound to the workload and enforced at the moment of processing — not documented as intentions, but enforced as code. Agents can only reach the endpoints they're authorized to reach. Data access is governed at runtime, not just described in policy.
After execution: A hardware-signed cryptographic artifact — an Attested Evidence Pack — is produced automatically. It captures what ran, when, under what policy, and what data was accessed. It is signed by the hardware itself. It can be verified independently by any third party, including regulators, without requiring them to trust the organization that produced it.
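The three phases above can be sketched end to end. This is a toy model under loud assumptions: an HMAC with a local key stands in for the hardware signature (in a real attested deployment the signing key never leaves the hardware, and verification uses asymmetric keys so third parties need no shared secret), and the function names, policy shape, and evidence fields are all illustrative.

```python
import hashlib
import hmac
import json
import time

HW_KEY = b"stand-in-for-hardware-root-of-trust"  # NOT how real attestation keys work

def run_governed(workload_code, expected_digest, policy, payload):
    # Before: integrity check. If the workload was tampered with, refuse to run.
    digest = hashlib.sha256(workload_code.encode()).hexdigest()
    if digest != expected_digest:
        raise RuntimeError("workload integrity check failed")

    # During: the policy is enforced as code at the moment of processing.
    if payload.get("endpoint") not in policy["allowed_endpoints"]:
        raise PermissionError("endpoint not authorized by bound policy")
    result = {"status": "ok"}  # stand-in for the actual workload execution

    # After: emit a signed evidence pack capturing what ran, when, and under
    # what policy.
    evidence = {
        "workload_sha256": digest,
        "policy": policy,
        "input_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
        "timestamp": time.time(),
    }
    blob = json.dumps(evidence, sort_keys=True).encode()
    signature = hmac.new(HW_KEY, blob, hashlib.sha256).hexdigest()
    return result, {"evidence": evidence, "signature": signature}

def verify_pack(pack):
    """Recompute the signature over the evidence; any edit invalidates it."""
    blob = json.dumps(pack["evidence"], sort_keys=True).encode()
    expected = hmac.new(HW_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, pack["signature"])
```

The design point the sketch illustrates: enforcement and evidence generation are part of the execution path itself, so the evidence exists whether or not anyone remembered to create it.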
This is the difference between governance that describes what should happen and governance that proves what did.
Why August 2026 Is Closer Than It Looks
Procurement cycles for enterprise infrastructure average three to six months. POCs take time. Legal and security reviews take time. Organizations that aren't in active evaluation by Q2 2026 will be making compliance decisions under deadline pressure, which is the worst time to make them.
The teams moving fastest right now are the ones who started with a simple question: can we prove our governance held at runtime, or can we only promise it?
The answer to that question determines whether you're compliant in August... or scrambling.
OPAQUE provides runtime policy enforcement and hardware-attested compliance evidence for enterprise AI systems. Learn more at opaque.co