Connect with us

Interviews

Rony Ohayon, CEO and Founder of DeepKeep – Interview Series

mm

Rony Ohayon, CEO and Founder of DeepKeep, is a seasoned entrepreneur and technologist with a career spanning AI, cybersecurity, autonomous systems, and large-scale video technologies. He has founded and led multiple companies across these domains, including major roles in autonomous vehicle connectivity, live video transmission, and advanced engineering, alongside earlier academic work in computer engineering.

DeepKeep is an AI-security platform designed to help enterprises safeguard AI, generative AI, LLMs, computer vision, and multimodal systems across their entire lifecycle. The company focuses on identifying vulnerabilities, blocking adversarial threats, preventing issues such as data leakage and prompt manipulation, supporting regulatory compliance, and providing continuous monitoring to ensure trustworthy, resilient, and protected AI deployments.

You’ve led major innovations across video transmission, autonomous vehicle connectivity, and AI systems. What from your own career convinced you that the next major challenge you needed to solve was securing enterprise AI?

I’ve always been motivated to tackle high-impact challenges that reshape industries. Over the years, I noticed a recurring pattern where new technology, especially AI, often outpaces security measures, leaving critical vulnerabilities behind.

Enterprise adoption of AI, particularly with the rise of large language models (LLMs) and agentic AI, opened up a new frontier of risks, and businesses often feel overwhelmed and unprepared to leverage these systems safely and confidently. My experience in AI systems highlighted how essential it is to integrate security into the heart of these technologies to ensure they’re not just innovative, but also secure and trustworthy, producing reliable results to support businesses proactively. This realization led to the founding of DeepKeep, with a focus on securing AI systems so businesses can confidently embrace AI to boost productivity and business growth, without compromising security or privacy.

When you and your co-founder launched DeepKeep in 2021, what specific AI-security blind spot convinced you there was an urgent need for a dedicated platform, and how did that insight shape the company’s earliest direction?

After years of working with AI in computer vision, which required significant efforts to ensure it is reliable and robust, we realized it was time to design a dedicated solution to ensure trust and security in computer vision.

In 2021, prior to the LLM explosion, we launched DeepKeep to tackle these risks in computer vision models.

Can you walk us back to the earliest prototype—what it actually did, how small the team was, and how you validated that you were on the right path?

When we founded DeepKeep, the focus was still on traditional AI – computer vision, tabular models, and early NLP – well before the rise of large language models. Our first prototype was a system for red-teaming computer vision classifiers to test their robustness against adversarial attacks. Alongside that, we built an early version of an AI firewall that could detect and flag these attacks in real time.

The initial use cases came from automotive, insurance, and financial services, where model misbehavior carries real operational risk. We were a small team of about eight people at the time, which allowed us to iterate quickly and produce a functional prototype early.

We validated that we were on the right path by speaking directly with potential customers, who consistently highlighted adversarial robustness as an emerging concern. Around the same time, frameworks like MITRE’s ATLAS – first released in 2021 – started to appear, which was an important external signal that the field of AI security and adversarial threat modeling was about to grow. The alignment between customer feedback and industry direction gave us confidence we were heading the right way.

DeepKeep was designed from the start to secure AI systems rather than traditional software. How did you prioritize which model types and attack surfaces to focus on first?

From the outset, we knew that securing enterprise AI required a paradigm shift. While many organizations are savvy enough to know they need to perform penetration testing and evaluation for the models they use, we understood that those actions are just the start. The real risks emerge within the full application ecosystem, not just the models themselves.

So while securing standalone AI chatbots with traditional red-teaming was the industry starting point, we quickly moved on to developing solutions that secure custom AI applications and AI agents, and will evolve to secure the next step in which agents interact with each other and there is significant cross-domain intelligence.

We enable model scanning across all model types, but also protect against the most urgent threats such as adversarial attacks, data leakage, system misuse, and trust erosion by red-teaming the models and applying guardrails that secure AI prompts and responses.

Crucially, we also secure AI’s “semantic layer” by understanding the context in which the models operate. This ensures that models cannot be as easily manipulated.

What were the biggest technical or strategic pivots you made between the founding stage and DeepKeep’s current product direction?

One of the biggest decisions that we made was to expand beyond traditional model security into the realm of AI application security. Initially, we focused on securing individual models, but as the AI landscape evolved, we realized that securing entire AI ecosystems, where multiple models, agents, and use cases intersect, was far more critical. This led us to broaden our approach by incorporating red-teaming, a comprehensive AI firewall to protect every interaction agents, employees, and applications have with AI, and run-time monitoring.

Another key decision was to offer full deployment flexibility, including cloud-agnostic, on-prem and air-gapped solutions, allowing enterprises to securely deploy DeepKeep in any environment. We have also recently integrated an upgraded industry-leading Personally Identifiable Information guardrail into our platform, which has provided our customers with an even deeper level of data protection and ensures that enterprises can meet global compliance requirements as they scale their AI usage.

DeepKeep places equal emphasis on security and trustworthiness. At what stage did that dual focus become core to the company’s identity?

The focus on both security and trustworthiness became clear early on, especially as we began forming a deeper understanding of our customers and their needs.

When dealing with AI models, security and trust go hand in hand and play equal roles because, at the end of the day, both can lead to damaging and destructive results. An application cannot be robust and yet unreliable, and vice versa.

Traditional cybersecurity tools weren’t designed for prompt injection, hallucinations, data leakage, or model manipulation. Which of these emerging threat vectors did you see enterprises struggling with most, and how did those real-world issues influence DeepKeep’s architecture?

Of the emerging threats, prompt injection and data leakage are the most pressing concerns that we see enterprises struggling with. As AI applications and AI agents become more integrated into business processes, the risks of prompt manipulation and the accidental exposure of sensitive data are more pronounced. These issues led us to design DeepKeep with a focus on contextual security, protecting not just the models but the entire flow of data and interactions within AI ecosystems. Our infrastructure was built to pentest these layers in the development phase, and protect during run-time by securing every AI interaction.

Your platform combines guardrails, red-teaming, and data-protection layers. From a technical standpoint, which of these has proven most difficult to engineer at enterprise scale, and what did you learn while building a system that has to adapt to fast-moving AI models?

Each layer – guardrails, red-teaming, and data protection – comes with its own engineering challenges, but we found that the challenges all these layers have in common  were actually the most difficult.

The first is the pace of change: new risks, jailbreaks, and attack techniques arise constantly, so anything static becomes obsolete quickly. The second is context adoption: in the enterprise, a one-size-fits-all approach simply doesn’t work because every application has different policies, data sensitivities, and user behaviors.

To address the first challenge, we built a fully modular architecture with plugin-style components, allowing us to add new attacks to the red-teaming engine or new guardrails to the firewall quickly and without disrupting the system.

And to solve the second challenge, we designed an agentic, context-aware system. It analyzes the application’s environment and automatically adapts the relevant security measures – which is essential when the underlying AI models and use cases evolve so rapidly.

Those two capabilities, modularity and context awareness, have been key to operating at enterprise scale, while keeping up with fast-moving AI systems.

AI security is an evolving discipline. What gaps do you see inside enterprises today—whether in policy, tooling, or risk understanding—that most directly shaped how you designed DeepKeep’s security stack and customer onboarding?

Gaps significantly vary depending on industry, company size, and AI adoption maturity.

One of the biggest gaps we see in large enterprises today is the lack of a single solution covering all AI security needs. Many of these businesses are aware of  their need for AI security solutions, but as adoption increases, so does the requirement for additional security coverage. We learned early on that there is value in a robust, end-to-end solution that includes different capabilities operating in tandem and with seamless integration. The customers benefit from a solution where the whole is greater than the sum of its parts.

As the market matured, we identified another gap, which is that organizations are seeking more customized and less generic security tools to safeguard their agents and applications. One of the reasons behind our context-aware approach was to tackle this gap, with the understanding that every application and agent is different and needs to be secured accordingly.

If we look ahead five years, how do you expect enterprise AI risks to evolve—and where do you believe DeepKeep needs to be positioned to stay ahead of that future?

I expect that the biggest AI risks in the next five years will evolve alongside advancements in AI autonomy. As agents become more autonomous, integrated within every business operation and capable of performing complex tasks, the potential for security breaches and misuse will grow. We foresee the development of an Internet of Agents (IoA), in which agents engage with each other, forming an even more complex web of AI interactions to secure.

To stay ahead, DeepKeep will continue to evolve its platform to secure these increasingly complex AI systems, ensuring that we provide run-time protection across multiple AI models and support the growing trend of AI-powered decision making. Our goal is to be the trusted partner that businesses rely on to secure their entire AI ecosystem, no matter how sophisticated it becomes.

Thank you for the great interview, readers who wish to learn more should visit DeepKeep

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.