We Value your Privacy
We use cookies in the delivery of our services. To learn about the cookies we use and information about your preferences and opt-out choices, please click here.

Overview

What is OPAQUE?

OPAQUE is a confidential AI platform that enables enterprises to securely build and deploy AI pipelines on sensitive data—without ever exposing it. Whether you're deploying models, fine-tuning agents, or running inference workflows, OPAQUE ensures data remains encrypted throughout.

Built on confidential computing, OPAQUE protects data during processing using hardware-backed security and enforces policies with cryptographic guarantees. It extends these protections with enterprise-grade features like distributed compute engines, verifiable audit logs, and secure collaboration across silos.

With OPAQUE, you can accelerate AI initiatives while preserving privacy, enforcing compliance, and maintaining full control over how data is accessed and used.

Who should use OPAQUE?

OPAQUE is for teams developing AI applications and workflows that rely on sensitive data. Whether you're powering confidential RAG pipelines, deploying LLM agents, or training models across silos, OPAQUE lets you do it securely.

It’s especially valuable for AI/ML engineers, data scientists, stewards, and security teams in industries where data privacy, compliance, and control are non-negotiable, such as tech, finance, insurance, government, and manufacturing.

How does it work?

OPAQUE processes encrypted data directly—without decrypting it—using confidential computing.

When data is provisioned, it stays encrypted at every stage: in transit, at rest, and during processing. Computation happens inside trusted execution environments (TEEs), where only verified code runs and access is tightly controlled.

OPAQUE provides workspaces, APIs, and automation tools so teams can analyze data securely, while the platform enforces policies and compliance behind the scenes.

Where does it add the most value?

OPAQUE is purpose-built for teams deploying AI pipelines and gen-AI applications on sensitive data—without compromising privacy, security, or compliance. You can use OPAQUE to:

  • Train and fine-tune models on high-sensitivity datasets—such as customer, financial, or operational records—while keeping data encrypted throughout.
  • Run confidential RAG workflows, where prompts are sanitized and governed before hitting an LLM, or the model is executed securely inside a TEE.
  • Build production-grade AI workflows that enforce cryptographic data access policies and generate verifiable audit trails.
  • Protect proprietary data assets across sectors like finance, healthcare, insurance, and manufacturing—especially where regulatory compliance is non-negotiable.

Whether you're deploying LLM-powered agents or building out scalable AI infrastructure, OPAQUE helps you move fast—without compromising trust, security, or governance.

It’s especially powerful in regulated and data-sensitive sectors such as tech, finance, insurance, government, and manufacturing, where traditional infrastructure falls short of protecting AI workflows.

Where is OPAQUE hosted?

You can deploy OPAQUE via the Azure Marketplace for a streamlined set-up or manually in your own virtual private cloud (VPC). This setup gives you full control over your data, infrastructure, and integrations. (For more on how OPAQUE is built, see our Architecture and Security white paper.)

Can I try OPAQUE?

You can request a demo and we’ll guide you through the platform. We also support proof-of-concept engagements for teams exploring how OPAQUE fits into their AI workflows.

How OPAQUE compares

What makes OPAQUE different from other confidential computing tools?

Most confidential computing tools focus on infrastructure—like encrypted VMs or secure enclaves—but leave it to your team to build secure workflows on top.


OPAQUE delivers a complete confidential AI platform that includes:

  • Encrypted data processing: Your data remains protected during computation, not just at rest or in transit.
  • Governance and auditability: All actions are policy-controlled and logged with cryptographic integrity.
  • Agent-ready AI infrastructure: Supports confidential RAG and gen-AI agents running on sensitive data without exposing prompts, context, or outputs.

With OPAQUE, you don’t need to piece together tools and infrastructure—you get an enterprise-grade platform purpose-built for secure, governed AI on private and regulated data.

How does OPAQUE compare to CSP confidential computing offerings?

OPAQUE offers a complete confidential AI platform that goes beyond the infrastructure-level tools provided by cloud service providers (CSPs) like AWS, Azure, and GCP. While CSPs offer confidential VMs to protect infrastructure, they lack built-in capabilities for enforcing policy, verifying AI workload integrity, or supporting fully governed AI workflows. Here’s how OPAQUE compares:

Unlike CSPs, OPAQUE delivers a confidential AI platform purpose-built for enterprise use—compliant by design, optimized for AI workloads, and equipped with verifiable controls to protect sensitive data throughout the pipeline. For more details on how OPAQUE complements CSPs, see our Bridging the Gap white paper.

How does OPAQUE compare to IBM Watson, RedHat OpenShift AI, or Amazon Bedrock?

These platforms offer powerful tools for building and deploying AI, but they don’t natively protect data during processing. OPAQUE complements them by enabling AI on encrypted data—ensuring sensitive information stays confidential even during training and inference. Whether you're using Watson’s NLP models, deploying with OpenShift AI, or building with Bedrock, OPAQUE adds a verifiable layer of privacy, compliance, and policy enforcement across your AI workflows.

Security, compliance, and performance

What assurances do you provide that your platform is secure?

OPAQUE uses verifiable, hardware-backed protections to keep your data secure. All computation happens inside trusted execution environments (TEEs), where only authorized, attested code can run. Before any data is processed, the platform proves its integrity through remote attestation.

OPAQUE never has access to your encryption keys. Decryption happens only inside the TEEs—and only after the platform has been verified.

To provide full accountability, OPAQUE also generates cryptographically signed audit logs for every data interaction, giving your security and compliance teams complete visibility into how data is used. (For more detail, see our Architecture and Security white paper.)

How does OPAQUE help meet data privacy regulations like GDPR?

OPAQUE’s Confidential AI platform is designed with compliance in mind, providing robust data protection aligned with regulations like GDPR. It secures data in use—not just at rest or in transit—using confidential computing. Key features include protection mechanisms such as:

  • End-to-end encryption, which keeps data protected throughout its lifecycle.
  • Cryptographic policy enforcement, which ensures data is only accessed and processed as authorized.
  • Privacy-by-design architecture, which minimizes data exposure while enabling analytics and AI.

Together, these capabilities align with EU GDPR guidance from ENISA and support compliance with other global privacy regulations. For full accountability, OPAQUE also provides cryptographically verifiable audit logs.

What compliance and audit capabilities does OPAQUE provide?

OPAQUE generates a cryptographically signed audit trail for every data access and computation. These tamper-proof logs provide:

  • Proof that policies were enforced.
  • Transparent audit records anchored in hardware-based attestation.
  • A clear chain of custody for sensitive workloads.

This proves that data was used only in authorized ways, ensuring it wasn’t misused or tampered with—without requiring blind trust in OPAQUE or your cloud provider. These capabilities support both internal governance and external compliance with regulations like GDPR, HIPAA, and other industry-specific standards.

What regulations and data privacy laws does OPAQUE support?

OPAQUE is designed to help organizations meet global data privacy and governance standards, including:

  • GDPR (Europe)
  • HIPAA (U.S. healthcare)
  • CCPA/CPRA (California)
  • GLBA (U.S. financial services)
  • Emerging AI governance frameworks such as the EU AI Act, NIST AI Risk Management Framework, and OECD AI Principles

Its cryptographic policy enforcement, verifiable audit trails, and support for data sovereignty make OPAQUE especially useful in regulated environments where traditional cloud tools fall short.

Can OPAQUE work with our existing data classification and governance policies?

Yes. You don’t need to rewrite your data governance model to use OPAQUE. You can define access and processing policies based on your existing classification labels—OPAQUE enforces them cryptographically and tracks compliance with a verifiable audit trail.

Does OPAQUE support SSO and identity provider integrations?

Yes. OPAQUE integrates with identity providers through Auth0, allowing support for systems like Okta, Azure AD, and others that use standards-based systems like SAML and OIDC. This gives you flexibility to connect your existing authentication infrastructure without custom development.

How does OPAQUE scale compute workloads, including GPU-intensive jobs?

OPAQUE supports scalable, distributed compute for confidential AI workloads, including CPU- and GPU-based processing. The platform is optimized for production-scale pipelines in confidential computing environments and includes GPU support for demanding tasks such as LLM serving.

What kind of performance can I expect from OPAQUE for RAG and LLM workflows?

OPAQUE is optimized for high-performance AI workloads in confidential environments, including RAG pipelines and large-scale inference. It supports GPU-backed confidential computing environments, such as Azure's H100-enabled confidential VMs, which are engineered for secure, large-scale AI processing.

According to Microsoft, these confidential VMs deliver strong performance for AI workloads while maintaining full hardware-based data protection. OPAQUE builds on this foundation to deliver fast, secure, and scalable confidential AI workflows.

Data and AI workflows

How do I get my data into OPAQUE?

You connect your existing data sources directly to OPAQUE, without needing to replicate or move the data. During provisioning, your data is automatically encrypted using your organization’s key, and you define policies that control access and permitted computations. Once provisioned, your data remains encrypted throughout its lifecycle and is ready for secure ML, RAG, or gen-AI workflows—no masking or redaction required.

How does OPAQUE handle encryption and decryption?

OPAQUE processes data securely inside trusted execution environments (TEEs)—specialized hardware that keeps data isolated from the rest of the system, even during computation. Data is decrypted only within these TEEs and only after the platform proves its integrity through remote attestation. Your encryption keys remain under your control, and the entire process is transparent, tamper-resistant, and designed to keep sensitive data protected during processing.

Can I use OPAQUE with data stored across silos?

Yes. OPAQUE enables secure, multi-party collaboration without exposing raw data. Teams can analyze shared datasets, deploy AI models, and enforce fine-grained policies that govern what computations are allowed—ensuring both privacy and control.

How does OPAQUE improve AI and analytics on sensitive data?

Traditional methods like masking or tokenization degrade data utility. OPAQUE lets you analyze encrypted data directly, so you get high-fidelity insights without exposing raw data—accelerating time to value while maintaining compliance.

Does OPAQUE support gen-AI use cases involving sensitive data?

Yes. OPAQUE powers confidential generative AI workflows by allowing organizations to deploy LLM-powered agents on sensitive data with full privacy, security, and policy controls. These agents run inside trusted execution environments (TEEs), ensuring prompts, context, and outputs are protected throughout the workflow.

Use cases like confidential RAG (retrieval-augmented generation) are supported end to end: data access is governed by cryptographic policies, and every action is logged for compliance. Guardrails ensure that agents operate only within approved boundaries—enabling safe, compliant AI on even the most sensitive datasets.

Architecture and Security

What is confidential RAG and how does it differ from standard RAG?

Confidential RAG is a next-generation approach to Retrieval-Augmented Generation designed for sensitive data. Unlike standard RAG, which retrieves and processes data in plaintext (putting it at risk by exposing it outside the boundaries of the source systems), Confidential RAG maintains data protection throughout the workflow, providing verifiable guarantees of privacy, policy enforcement and auditability. All operations are executed within attested environments, with cryptographic controls that ensure data remains secure from infrastructure providers, admins, and unauthorized agents.

What are "guardrails," and how are they defined, verified, and enforced?

Guardrails are runtime policies that ensure LLM applications behave safely, ethically, and in compliance with enterprise or regulatory standards.

In the OPAQUE platform, guardrails are verifiably enforced inside trusted execution environments (TEEs) and applied step-by-step in agentic workflows—ensuring every agent action is governed, logged, and verifiable. OPAQUE supports the NVIDIA NeMo Guardrails framework as a baseline, enabling users to define rules using a flexible policy language and enforce them across inputs, outputs, and tool use in real time.

Does the guardrail system support user-defined logic or only declarative policy rules?

Yes. The guardrail system supports both declarative policy rules and user-defined logic.

Policies are written using Colang, a human-readable, declarative policy language developed by NVIDIA and utilized in their NeMo Guardrails framework. Colang simplifies the definition of safe, structured AI behaviors—such as blocking sensitive topics, restricting tool use, or requiring fallback responses—without the need for low-level programming.

How do you prevent prompt injection, data leakage, or abuse through RAG queries?

OPAQUE prevents prompt injection, data leakage, and misuse through a layered security model—combining confidential computing, policy enforcement, and verifiable auditing.

All RAG queries are executed inside trusted execution environments (TEEs), where sensitive data stays encrypted and inaccessible to the model, cloud provider, or infrastructure admins. Within this environment, user prompts are parsed, filtered, and evaluated against guardrails—declarative and user-defined policies that block unsafe inputs, restrict tool access, and enforce compliance. The TEE ensures that all policies (on data as well as agent behavior) are verifiably enforced. Further, all agent actions are recorded in an immutable, tamper-proof audit log, enabling verification and accountability. This architecture ensures that every agentic workflow operates safely, predictably, and provably—without exposing sensitive data or allowing unauthorized behavior.

Flexibility

What LLMs do you support today? Are customers locked into a specific model provider?

We currently support a variety of open-source LLMs, including LLaMa, Mistral, and Gemma models, with more to come soon. Customers can easily switch between third-party or confidential open-source LLMs, reducing vendor lock-in.

If we use an external LLM, how is data encrypted and what is the chain of custody?

OPAQUE can serve LLMs directly inside its TEEs, ensuring sensitive data stays protected with verifiable privacy, runtime policy enforcement, and tamper-proof audit logs.

Even when using an external LLM, much of the sensitive data processing—such as ingestion, embedding generation, and retrieval—can still run within OPAQUE's TEE, preserving confidentiality and significantly reducing exposure.

When data must leave for external inference, it's encrypted in transit. To mitigate risk, OPAQUE includes built-in Redaction and Tokenization services that can sanitize sensitive fields before any external call.

Do you support deploying in a private cloud?

Yes, we can support deploying into your private cloud, as long as it's running within an Azure / GCP cloud service provider.

Do you support deploying into existing TEE-enabled infrastructure?

No, we do not currently support deploying into existing TEE-enabled infrastructure. Our offering will be deployed into standalone clusters.

Is it possible to bring my own embedding model, vector database, or retriever—the component responsible for retrieving relevant documents from the vector store—into the agentic workflow?

Yes. The platform supports providing your own embeddings or retriever, and integration with your existing vector database installation.

If I do bring my own vector database, e.g., a vector database SaaS like Azure AI Search, is it augmented with the same guarantees as the rest of the platform?

No, because they wouldn't be attested as they are outside of the TEE.

How do I test or simulate a workflow before going to production?

OPAQUE's platform offers a dedicated testing environment where you can simulate workflows. This allows you to validate logic, evaluate guardrails, and ensure proper behavior of the system—all without exposing real or sensitive information. It's a safe way to iterate and refine your setup before moving to production.

How do I ingest documents or data sources?

You can ingest data through multiple methods, including file upload, API integration, or direct database connectors. We support common file types and can connect to data sources, including REST APIs, SQL databases, and cloud storage. Additional connectors and formats may be supported based on deployment needs.

How can OPAQUE interact with the rest of my existing AI stack?

OPAQUE integrates natively into your existing AI stack—no rewrites, no re-platforming. Confidential agentic workflows, including data ingestion, retrieval, generation, and response, are exposed as standard REST APIs, so you can route requests directly without changing your codebase.

All processing occurs inside attested Trusted Execution Environments (TEEs) running on cloud infrastructure, such as Azure and GCP, ensuring that data remains protected—even during computation—and is inaccessible to cloud providers, administrators, or other workloads.

Data Governance and Compliance

How is data privacy handled? How are redaction, role/region‑based access, and custom governance policies defined and enforced inside the confidential runtime?

OPAQUE enforces data privacy through a hardware-attested confidential runtime that verifies the integrity of the runtime, agents, and data connections before any data access or processing begins. All processing is run within a TEE.

Access controls and governance policies—such as role-based restrictions—are defined at the data connection, runtime, and agentic workflow level and cryptographically bound into the runtime. This ensures policies are not just configured—they are enforced during execution, with tamper-proof, verifiable guarantees.

Can I enforce policies across multi-agent interactions (e.g., Agent A can access PII but Agent B cannot)?

Guardrail policies can be specified on an agent-by-agent basis. As you design the workflow, and specify the interactions between two agents running in the same workflow, a guardrail policy can be applied to both the output of A and the input of B. We don't support policy enforcement across agents running in different workflows, outside of the guardrail policies that can be applied to the agents executing inside OPAQUE's platform.

What audit logs does OPAQUE's platform produce and how are they cryptographically verifiable?

OPAQUE creates a tamper‑proof audit trail showing which workflow ran, who ran it, and which tools the agents accessed, omitting sensitive payloads yet remaining fully cryptographically provable and verifiable with standard third‑party tools.

How do you address data residency regulations (GDPR, HIPAA, etc.) across clouds?

Organizations can define regulatory policies—such as data access constraints, usage rules, and data isolation—and bind them cryptographically into the runtime. These policies are enforced at execution, preventing any data exposure that would violate a regulation. Since policy enforcement is tied to cryptographic keys and attested hardware, no unauthorized access is possible—even from infrastructure or cloud providers.

Is there a way to pre-approve workflows before they go live or require dual approval for high-risk flows?

Yes, OPAQUE's workspace policies allow you to define workflows' approval requirements before deployment. Both the approvals and the deployment events are included in the audit trail records.

Other

What observability and monitoring hooks are exposed? Can we integrate with Datadog, Prometheus, etc.?

OPAQUE exposes OpenTelemetry metrics, traces, and health probes that plug directly into Datadog, Prometheus, Grafana, or any OTLP‑compatible observability stack.

What level of logging is exposed—can we trace guardrail hits, agent paths, and LLM responses?

OPAQUE provides detailed, tamper-proof logging of workflow execution—including guardrail hits, agent paths, and system-level actions—without compromising data privacy. All logs are cryptographically signed and exclude sensitive content by design. LLM responses, external data retrieved, and query payloads are never stored or exposed. Instead, OPAQUE records only metadata and execution traces needed to verify compliance and trace agent behavior.

Do you provide support for disaster recovery in a multi-cloud environment?

The platform scales according to demand and maintains encrypted backups and replication, ensuring that data is always secure and never lost.