
Jonathan Mortensen, Founder and CEO of Confident Security – Interview Series

Jonathan Mortensen, Founder and CEO of Confident Security, currently leads the development of provably-private AI systems for industries with stringent security and compliance requirements. He also serves as a Founder Fellow at South Park Commons, where he explores the future of AI compute, memory, privacy, and ownership. Before launching Confident Security, he was a Staff Software Engineer at Databricks, integrating bit.io’s technology into its data platform with a focus on multi-tenant security, IAM/ACLs, VPC isolation, encryption, and data ownership. Earlier, he founded and served as CTO of bit.io, building a multi-cloud, multi-region serverless PostgreSQL service that supported hundreds of thousands of secure databases and was later acquired by Databricks.

Confident Security builds infrastructure that allows enterprises to run AI workflows without exposing sensitive information. Its platform is designed so that prompts, data, and model outputs remain fully private, never logged, and never reused, giving organizations a secure way to adopt AI while meeting strict regulatory and compliance standards.

You founded Confident Security in 2024 after building bit.io and working at Databricks. What sparked the realization that AI needed a fundamentally different approach to privacy?

My experience building data infrastructure taught me this: if people are putting sensitive information into a system, trust isn’t enough. They need proof. We built infrastructure where customers owned their data, and we gave them ways to validate that.

When I looked at how companies were using LLMs, that proof didn’t exist. Employees were pasting source code, legal documents, and patient records into models run by third parties they couldn’t verify. We’ve already seen private chats accidentally indexed online and policy changes that suddenly made conversations training data by default. Those incidents showed how fragile the current privacy model is.

If AI is going to handle the world’s most sensitive information, we need guarantees that don’t depend on trusting a vendor’s internal promises. That’s what drove me to start Confident Security.

OpenPCC is being described as the “Signal for AI.” Why was it important for this privacy layer to be open, attestable, and interoperable from day one?

End-to-end encryption didn’t take off until it became a standard everyone could adopt. We want the same thing for AI privacy. If only a few companies can offer real guarantees, then privacy won’t scale.

OpenPCC is open source under Apache 2.0, so anyone can build on it or inspect it. There’s no secret trust requirement. Hardware attestation provides cryptographic proof about what’s running and where. And we made sure it works anywhere: any cloud, any model provider, any developer stack.

There’s huge value in a privacy floor that’s consistent and universal. If you’re using OpenPCC, you know your data isn’t visible to model providers, regulators, or even us. A standard only works if the entire ecosystem can participate, so we designed it to be as inclusive as possible from day one.
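
To make the attestation claim concrete, here is a minimal, hypothetical Go sketch of the kind of check a client can run before sending anything sensitive: it verifies that a measurement of the server’s software was signed by a trusted hardware key and matches an approved image. The types and function names are illustrative only; they are not the OpenPCC API.

```go
package main

import (
	"bytes"
	"crypto/ed25519"
	"errors"
	"fmt"
)

// AttestationDoc is a hypothetical, simplified attestation document:
// a measurement (hash) of the code running on the server, signed by a
// hardware-rooted key. Real attestation documents carry far more detail.
type AttestationDoc struct {
	Measurement []byte // hash of the software image
	Signature   []byte // signature over Measurement
}

// verifyAttestation checks two things before the client sends any data:
// 1. the signature comes from a hardware key the client trusts, and
// 2. the measurement matches a software image the client has approved.
func verifyAttestation(doc AttestationDoc, hwKey ed25519.PublicKey, expected []byte) error {
	if !ed25519.Verify(hwKey, doc.Measurement, doc.Signature) {
		return errors.New("attestation signature invalid: not signed by trusted hardware")
	}
	if !bytes.Equal(doc.Measurement, expected) {
		return errors.New("measurement mismatch: server is not running the approved code")
	}
	return nil
}

func main() {
	// Toy example: a locally generated key stands in for the hardware root of trust.
	pub, priv, _ := ed25519.GenerateKey(nil)
	measurement := []byte("sha256-of-approved-image")
	doc := AttestationDoc{
		Measurement: measurement,
		Signature:   ed25519.Sign(priv, measurement),
	}
	if err := verifyAttestation(doc, pub, measurement); err != nil {
		fmt.Println("refusing to send data:", err)
		return
	}
	fmt.Println("attestation verified: safe to send the encrypted prompt")
}
```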

Before Confident Security, you built large-scale systems for multi-tenancy, encryption, and data ownership. How did those experiences shape OpenPCC’s architecture?

Those systems reinforced two truths: if a system can retain data, eventually it will, whether that’s through logs, misconfigurations, or legal requests. And trust isn’t a privacy model. Users need visibility and control.

OpenPCC runs in a stateless mode so prompts disappear after processing. Attestation lets users verify where their data is going and what code is running. And by isolating control from data, OpenPCC prevents private inputs from ever being treated as executable instructions.

Those constraints are what enterprises have been waiting for: guarantees that data won’t reappear somewhere unexpected.
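
As a rough illustration of the stateless, control/data-separated design Mortensen describes, here is a hypothetical Go sketch (not OpenPCC internals): routing information lives in a plaintext control header, the encrypted prompt is decrypted and processed only in memory, nothing is logged or written to disk, and the plaintext is zeroed before the handler returns.

```go
package main

import "fmt"

// Request separates control from data: the header carries only routing
// metadata, while the payload is an opaque ciphertext. The server never
// interprets decrypted payload bytes as configuration or instructions.
type Request struct {
	Model      string // control plane: which model to route to
	Ciphertext []byte // data plane: encrypted prompt, opaque to operators
}

// zero overwrites plaintext in memory once it is no longer needed.
func zero(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

// handle processes one request statelessly: decrypt, run inference,
// re-encrypt the answer, and discard every intermediate value.
// Nothing is written to disk and nothing is logged.
func handle(req Request, decrypt, encrypt func([]byte) []byte, infer func(string, []byte) []byte) []byte {
	plaintext := decrypt(req.Ciphertext)
	defer zero(plaintext) // the prompt disappears after processing

	answer := infer(req.Model, plaintext)
	defer zero(answer)

	return encrypt(answer) // only ciphertext ever leaves the handler
}

func main() {
	// Stub crypto and inference so the sketch runs on its own.
	identity := func(b []byte) []byte { out := make([]byte, len(b)); copy(out, b); return out }
	infer := func(model string, prompt []byte) []byte {
		return []byte(fmt.Sprintf("[%s] echo: %s", model, prompt))
	}
	resp := handle(Request{Model: "demo-model", Ciphertext: []byte("sensitive prompt")}, identity, identity, infer)
	fmt.Println(string(resp))
}
```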

You’ve argued that most “private AI” solutions rely on trust in opaque systems. Why is independent verification essential for true privacy?

Most privacy language today is effectively “just trust us.” That’s not good enough when the stakes include national security and regulated healthcare data. If the user can’t verify the claim, it isn’t a guarantee—it’s marketing.

Verifiable privacy is different. You don’t trust the operator’s intentions. You validate the hardware, the software image, and the data handling guarantees. Cryptography enforces the boundaries. Logs don’t exist for someone to accidentally leak or subpoena.

When privacy is auditable by the user, you create a fundamentally safer system. It’s accountability rooted in math.

Google’s “Private AI” announcement came shortly after OpenPCC. You publicly challenged them to provide a TPU for independent testing. What motivated that call-out, and what would you expect to find?

To claim privacy guarantees, you should let the community validate them. NVIDIA already allows external verification on its H100 GPUs, and we even open sourced a Go version of their attestation library to encourage adoption.

If Google wants to make similar promises on TPUs, we should be able to measure and verify those promises, not just read about them in a blog post. We’d look for the same controls we expect from any privacy system: strict data retention boundaries, auditable attestation, and no secret pathways where logs or telemetry escape. Privacy claims need to survive scrutiny.

For readers unfamiliar with the mechanics, what makes OpenPCC’s fully encrypted channels different from traditional client-side encryption or confidential computing?

Client-side encryption guards data on the way in, and confidential computing guards it while it’s processed, but there are still gaps before and after where operators or attackers can access sensitive information.

OpenPCC closes those gaps. It creates a sealed end-to-end path between the client and the model that protects the prompt, the response, the user’s identity, and even metadata or timing signals that can quietly reveal intent. Operators can’t decrypt anything. Nothing is logged or retained, even under breach conditions.

Privacy shouldn’t depend on hoping the provider does the right thing behind the scenes. It needs to be enforced cryptographically.
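
For a sense of what “enforced cryptographically” can look like, here is a minimal Go sketch assuming the client has already obtained an X25519 public key from a verified attestation document. The prompt is sealed with an ephemeral key exchange and AES-GCM, so any relay or operator in between sees only ciphertext. This is a simplified illustration, not the OpenPCC wire protocol, which also protects responses, identity, and metadata.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdh"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// sealToServer encrypts a prompt so that only the holder of the attested
// server key can read it. The client uses an ephemeral key for the exchange,
// so the operator relaying the ciphertext learns nothing.
func sealToServer(serverPub *ecdh.PublicKey, prompt []byte) (clientPub *ecdh.PublicKey, nonce, ciphertext []byte, err error) {
	ephemeral, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, nil, err
	}
	shared, err := ephemeral.ECDH(serverPub)
	if err != nil {
		return nil, nil, nil, err
	}
	key := sha256.Sum256(shared) // simple key derivation for the sketch; real systems use a proper KDF
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, nil, err
	}
	return ephemeral.PublicKey(), nonce, gcm.Seal(nil, nonce, prompt, nil), nil
}

func main() {
	// The server key would normally come from a verified attestation document;
	// here it is generated locally so the sketch runs end to end.
	serverKey, _ := ecdh.X25519().GenerateKey(rand.Reader)

	clientPub, nonce, ct, err := sealToServer(serverKey.PublicKey(), []byte("confidential prompt"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("ciphertext (%d bytes) is all the relay ever sees\n", len(ct))

	// Inside the attested environment, the server reverses the exchange.
	shared, _ := serverKey.ECDH(clientPub)
	key := sha256.Sum256(shared)
	block, _ := aes.NewCipher(key[:])
	gcm, _ := cipher.NewGCM(block)
	plaintext, _ := gcm.Open(nil, nonce, ct, nil)
	fmt.Println("decrypted inside the attested environment:", string(plaintext))
}
```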

How does verifiable privacy change the equation for regulated industries like finance, healthcare, and defense?

Regulated industries have the most to gain from AI, but also the most to lose if something leaks. Today, 78% of employees paste internal data into AI tools, and one in five cases includes regulated information like PHI or PCI. The exposure is already happening.

Verifiable privacy removes the biggest blocker. Sensitive prompts never exist in plaintext inside a model provider’s environment. Nothing can be used for training. Even lawful requests can’t access what the system itself can’t see.

Risk and compliance teams finally have a path where “yes” becomes the default instead of “no.”

What were the biggest engineering challenges in designing a cloud-agnostic privacy layer that works across any enterprise stack?

Confidential computing and remote attestation are still in their infancy, in my opinion. Each cloud and bare-metal provider does something slightly different, and some providers, like AWS, don’t even have the hardware necessary to do it. So every single feature we add is like a thousand cuts while walking a tightrope. But the whole point is to become an open standard, so we need to make it work on anyone’s cloud. It’s open source, so I encourage folks to add even more supported platforms and configurations!

What does a world with default verifiable encryption look like, and how might it reshape the balance of power between enterprises, cloud vendors, and model providers?

Enterprises keep control of their most valuable asset: their data. Model providers compete on performance and cost rather than who can accumulate the most proprietary information. Clouds enable privacy instead of being silent observers of it.

It’s a healthier balance of power. And the whole ecosystem wins when security is built into the foundation instead of patched on top.

In a future where AI becomes ubiquitous and heavily regulated, how do you see verifiable privacy reshaping the competitive landscape for enterprises, cloud providers, and model developers?

Regulators are already questioning how user data is stored and used. Trust-based privacy won’t satisfy them for long. Users will expect privacy guarantees the way they expect encryption in messaging apps today.

The winners will be the companies that don’t ask users to compromise. If you can prove privacy, you earn the trust of the organizations that have the most valuable data in the world. Data becomes usable in places it’s been locked away.

Thank you for the great interview. Readers who wish to learn more should visit Confident Security.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.