
Demonstrating Secure Confidential AI for HR Agentic Experiences

By
James Aliperti | Director of Solution Engineering
2025-07-24
5 min read

In today’s enterprise landscape, using AI to automate complex workflows involving sensitive data requires more than performance: it demands verifiable trust, security, and policy enforcement. In a recent demo, James Aliperti, Director of Solution Engineering at OPAQUE, showcased how our Confidential AI platform enables a secure, end-to-end agentic RAG pipeline, using an internal HR application as the use case.

A Secure Agentic RAG Workflow, Built for Confidential Data

The demo walked through a complete RAG (Retrieval-Augmented Generation) pipeline designed to answer employee HR questions across a large organization. The key technical building blocks include:

  • End-to-End Encrypted Data Sources: HR documents and internal data remain encrypted throughout their lifecycle. Only the data owner retains the decryption key, and the data is never exposed — not even during model inference.
  • Hosted Models and Agents: The pipeline includes a hosted IBM Granite model and multiple specialized agents orchestrated in a secure chat workspace. The agents manage task routing and contextual enrichment.
  • NeMo Guardrails for Policy Enforcement: Integrated NVIDIA NeMo Guardrails enforce predefined rules, ensuring that only permitted questions are processed — protecting against prompt injection, out-of-scope queries, or leakage of sensitive responses.
  • Attested Preflight Verification: Before any interaction begins, OPAQUE performs a trust attestation — a preflight check that verifies each asset (data source, model, agent) is authorized and will interact in only permissible ways.
  • Confidential Compute Runtime: All processing happens inside a trusted execution environment using hardware-based confidential computing, ensuring runtime data protection.

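The building blocks above can be sketched as a minimal pipeline. The following stdlib-only Python is a purely illustrative stand-in (all names, documents, and the keyword-based guardrail and retrieval logic are hypothetical, not the OPAQUE or NeMo Guardrails APIs); in the real platform every step runs inside a TEE against encrypted data and a hosted model.

```python
# Illustrative agentic RAG flow: guardrail gate -> retrieval -> answer.
# Every name and rule here is a hypothetical sketch, not a real API.

BLOCKED_TOPICS = ("salary of", "social security", "home address")

def check_guardrails(question: str) -> bool:
    """Stand-in for NeMo Guardrails: allow only in-scope HR questions."""
    q = question.lower()
    return not any(topic in q for topic in BLOCKED_TOPICS)

# Stand-in for the encrypted HR document store.
HR_DOCS = {
    "pto": "Employees accrue 1.5 days of PTO per month.",
    "parental leave": "Parental leave is 16 weeks, fully paid.",
}

def retrieve(question: str) -> list[str]:
    """Stand-in for retrieval over the encrypted HR corpus."""
    q = question.lower()
    return [text for key, text in HR_DOCS.items() if key in q]

def answer(question: str) -> str:
    if not check_guardrails(question):
        return "Sorry, that question is outside the permitted HR policy scope."
    context = retrieve(question)
    # A hosted model (e.g. IBM Granite) would generate from this context;
    # this sketch simply returns the retrieved passage.
    return context[0] if context else "No relevant HR policy found."

print(answer("How much PTO do I get?"))
print(answer("What is the salary of my manager?"))
```

The point of the shape, not the toy logic: the guardrail gate runs before retrieval or inference ever sees the question, which is what keeps out-of-scope queries from touching sensitive data.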
Compliance Built-In

After each run, the system produces a tamper-proof audit log cryptographically signed by the underlying CPU or GPU. These logs can be exported and verified using third-party tools — supporting both internal governance and external compliance requirements.

A visual trust screen makes the entire flow transparent: from attestation to audit, organizations can see which assets were used, how data moved, and validate that every action followed policy.

Built to Power Sensitive Workflows at Scale

While the demo focused on an HR Q&A scenario, the architecture adapts to any sensitive workflow, whether in finance, healthcare, legal, or enterprise operations. With support for tools such as LangGraph for agent orchestration, the platform handles complex, enterprise-grade deployments.

OPAQUE's Confidential AI platform lets teams securely orchestrate GenAI pipelines with hosted LLMs, policy-bound agents, and protected data, all while maintaining visibility and control.

This is what it means to build with AI you can prove.
