Funding
OPAQUE Secures $24M in Series B at a $300M Valuation to Push Confidential AI Forward

Enterprise AI adoption continues to accelerate, but trust remains one of its biggest constraints. This week, OPAQUE announced a $24 million Series B funding round, valuing the company at approximately $300 million post-money and bringing total funding to $55.5 million. The round was led by Walden Catalyst, with participation from existing backers including Intel Capital, Race Capital, Storm Ventures, and Thomvest, alongside new strategic investor Advanced Technology Research Council (ATRC).
The raise underscores a growing consensus across the enterprise landscape: AI cannot scale on sensitive data without stronger, verifiable guarantees around privacy, governance, and security.
From Experimental AI to Enterprise Mandate
Over the past year, confidential AI has moved from a largely academic concept to a practical requirement for organizations deploying generative models and AI agents in production. As AI systems increasingly touch regulated data, proprietary IP, and mission-critical workflows, traditional approaches to security — focused on data at rest or in transit — have proven insufficient.
OPAQUE’s work is centered on protecting data and models while they are being used, not just before or after. That distinction matters. Many enterprise AI initiatives stall after early pilots because CISOs, legal teams, and compliance leaders cannot verify what happens to sensitive data during AI execution. The result is hesitation, delays, and in many cases, abandoned deployments.
Confidential AI aims to close this gap by offering cryptographic guarantees that data remains private, policies are enforced, and models are not exposed, even at runtime. In practice, this typically means running workloads inside hardware-backed trusted execution environments (TEEs), whose identity and integrity can be verified remotely through attestation before any sensitive data is ever released to them.
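To make that pattern concrete, here is a minimal sketch of attestation-gated data release, the general mechanism behind confidential computing. It is not OPAQUE's actual API: the measurement value, vendor key, and helper functions are simplified stand-ins, with an HMAC standing in for the hardware vendor's certificate-based quote signing.

```python
"""Illustrative sketch of attestation-gated data release.
All names are hypothetical, not OPAQUE's API."""

import hashlib
import hmac
import os
from dataclasses import dataclass

# Hash ("measurement") of the enclave code the data owner is willing to trust.
TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-model-runtime-v1").hexdigest()

# Stand-in for the hardware vendor's attestation key. In a real TEE, quotes
# are signed by keys rooted in the CPU and checked against vendor certificates.
VENDOR_KEY = os.urandom(32)


@dataclass
class AttestationQuote:
    measurement: str   # hash of the code actually loaded in the enclave
    signature: bytes   # vendor-rooted signature over that measurement


def sign_quote(measurement: str) -> AttestationQuote:
    """Simulates the hardware producing a signed quote for a running enclave."""
    sig = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).digest()
    return AttestationQuote(measurement, sig)


def release_data_if_trusted(quote: AttestationQuote, sensitive_data: bytes) -> bytes:
    """The data owner's side of the protocol: verify before sharing anything."""
    expected = hmac.new(VENDOR_KEY, quote.measurement.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(quote.signature, expected):
        raise PermissionError("Quote signature invalid: not a genuine enclave.")
    if quote.measurement != TRUSTED_MEASUREMENT:
        raise PermissionError("Enclave is running unapproved code.")
    # In practice the data would be encrypted to a key held only by the enclave.
    return sensitive_data


# An enclave running approved code receives the data...
good = sign_quote(hashlib.sha256(b"approved-model-runtime-v1").hexdigest())
print(release_data_if_trusted(good, b"patient records"))

# ...while tampered code is refused before any data leaves the owner's hands.
bad = sign_quote(hashlib.sha256(b"tampered-runtime").hexdigest())
try:
    release_data_if_trusted(bad, b"patient records")
except PermissionError as err:
    print(err)
```

The key property is the ordering: the data owner checks what code is running before the data is released, rather than trusting the operator's word after the fact.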
Addressing the Enterprise “Trust Gap”
Enterprises today are eager to deploy AI agents on proprietary data to gain productivity advantages and operational insights. Yet those same data assets are often the most sensitive an organization owns. Without verifiable assurances, AI quickly shifts from opportunity to risk.
OPAQUE positions its platform as a trust layer for enterprise AI, designed to provide provable privacy, policy enforcement, and model integrity before, during, and after AI execution. Rather than relying on assumptions or contractual assurances, the platform focuses on evidence — making it possible to demonstrate compliance and governance in real time.
This approach reflects a broader shift in enterprise thinking. AI systems are no longer evaluated only on performance or accuracy. Increasingly, organizations are asking whether they can prove how an AI system behaved, which data it accessed, and whether it followed approved rules.
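A minimal sketch of what such evidence might look like in code: a policy check that runs before the model ever touches the data, with every decision appended to a hash-chained audit log so tampering is detectable afterward. The policy fields, function names, and log format are illustrative assumptions, not OPAQUE's implementation.

```python
"""Illustrative sketch of runtime policy enforcement with a tamper-evident
audit trail. Hypothetical names; not OPAQUE's platform."""

import hashlib
import json
import time

POLICY = {"allowed_datasets": {"claims_2024"}, "allowed_purposes": {"fraud_review"}}

audit_log: list[dict] = []


def append_audit(event: dict) -> None:
    """Hash-chain each entry: every record commits to the one before it,
    so deleting or editing an entry breaks the chain."""
    prev = audit_log[-1]["entry_hash"] if audit_log else "genesis"
    body = {"ts": time.time(), "prev": prev, **event}
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(body)


def run_agent_step(dataset: str, purpose: str) -> str:
    """Enforce the policy before inference runs, and record the decision."""
    allowed = (dataset in POLICY["allowed_datasets"]
               and purpose in POLICY["allowed_purposes"])
    append_audit({"dataset": dataset, "purpose": purpose, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"Policy denies {purpose!r} access to {dataset!r}")
    return f"model output over {dataset}"  # placeholder for the real inference call


print(run_agent_step("claims_2024", "fraud_review"))   # permitted, and logged
try:
    run_agent_step("claims_2024", "marketing")          # denied, and also logged
except PermissionError as err:
    print(err)
```

Note that denials are logged as well as approvals: the audit trail answers not just "what did the AI do" but "what was it prevented from doing," which is the kind of real-time, verifiable governance the passage above describes.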
What the New Funding Supports
The Series B capital will be used to accelerate development and deployment of OPAQUE’s Confidential AI platform, with a focus on helping enterprises move from experimentation to production more quickly and safely.
In parallel, the company is expanding into areas such as post-quantum security, confidential AI training, and sovereign cloud environments. These initiatives target organizations operating under strict regulatory, national security, or data residency constraints, where visibility and control over AI workloads are non-negotiable.
OPAQUE also recently launched OPAQUE Studio, a development environment aimed at simplifying how teams build and deploy confidential AI agents. The goal is to make runtime-verifiable privacy and compliance a default part of the AI development lifecycle rather than an afterthought.
Broader Implications for Enterprise AI
The rise of confidential AI points to a deeper evolution in how organizations will deploy intelligent systems. As AI becomes embedded in decision-making, automation, and customer interactions, governance must shift from policy documents to technical enforcement.
Technologies that can demonstrate, in real time, that data was protected and rules were followed may become foundational to enterprise AI stacks. This is especially true in regulated industries like financial services, healthcare, and insurance, where compliance requirements are tightening rather than loosening.
Confidential AI could also enable new forms of collaboration. Organizations may be able to analyze shared or pooled datasets without exposing raw data, unlocking insights that were previously out of reach due to privacy concerns. In this sense, trust-preserving infrastructure may not just reduce risk — it could expand what is possible with AI.