Yoav Regev, CEO and Co-Founder of Sentra – Interview Series

Yoav Regev, Co-Founder and CEO, is a seasoned cybersecurity professional with over 20 years in Israel’s military intelligence, where he led the Cyber Department. He now heads Sentra, a company dedicated to tackling modern challenges in cloud and data security.
Sentra is a cloud-native, AI-powered data security platform designed to discover, classify, and monitor both structured and unstructured data across IaaS, PaaS, SaaS, and on-prem environments. It emphasizes keeping customer data within their own infrastructure while applying policy enforcement, anomaly detection, least-privilege access, and compliance safeguards to reduce exposure and secure sensitive assets.
You spent over two decades in Israel’s elite Unit 8200, including as Head of the Cyber Department. How did that experience shape your philosophy toward cybersecurity—and eventually lead to founding Sentra?
After leading cyber operations in Unit 8200 for years, one thing became overwhelmingly clear: everything comes back to data. Whether it was a breach, a ransomware attack, or a cyber operation, the root cause was often the same. Someone didn’t know where their data was, who had access to it, or how it was being used. We saw time and again how that lack of visibility and control could unravel entire systems.
That stayed with me, and when I started speaking with CISOs, CIOs, and security leaders across industries, they kept echoing the same struggle. Everyone wanted to move faster, adopt the cloud, harness AI, but they couldn’t because they didn’t fully understand their data. What’s sensitive? What’s stale? What needs protection?
Data is no longer just an asset. It’s the defining factor between a good company and a great one. But only if you can control it. Sentra was built to give organizations the clarity and control they need to actually leverage their data. To innovate confidently, you need to know your data, understand its context, and protect it at every turn.
What specific challenges or patterns did you see in military cyber operations that you now see playing out in the enterprise AI and data security space?
In the military, success depended on speed. You had to move faster than the adversary. That required constant mapping, continuous assessment, and the ability to act before something became a crisis.
In the enterprise, the stakes are different but the challenge is similar. Organizations want to move quickly, especially with the pace of AI developments. They’re pushing more and more data to the edge to feed models, make decisions, and personalize experiences. But that acceleration comes with risk.
Think of Formula 1 racing. Everyone focuses on how fast the car can go, but it’s the braking system that really wins races. Speed without control is just danger. With AI, the controls are just as important as the capabilities. If you want to go faster, you need control by design: the ability to steer, and to stop when necessary. That means understanding your data before it enters the system, not after something goes wrong.
Why are traditional security tools no longer sufficient in the era of AI? What makes securing AI-driven workflows and generative models fundamentally different?
Traditional security tools were never designed for autonomous systems like AI. These tools were built for environments where humans reviewed decisions and managed access with clear perimeters.
AI doesn’t work that way. It processes enormous volumes of data and makes decisions independently. The moment that data is fed in, the opportunity for oversight starts to vanish. This is what makes it so fundamentally different and dangerous if not handled correctly.
With generative AI, you don’t get a second chance. Once the data is in, it’s nearly impossible to pull it back or audit what happened to it. That’s why visibility, classification, and access control prior to the point of ingestion are absolutely critical. AI isn’t just a compute problem. It’s a data problem at its core.
Your platform uses AI-powered classification with over 95% accuracy at petabyte scale. What’s under the hood—how are you combining AI models with security logic to deliver that precision?
We designed Sentra to operate entirely within the customer’s environment. We don’t move data outside. We don’t share it. That’s a core principle. Everything stays under the customer’s control, and our platform adapts to that requirement without compromise.
Under the hood, we use a combination of AI, LLM techniques, and proprietary security logic to classify data based on its context, not just its metadata. That means even if a sensitive data point is buried inside an unstructured file, our models can still identify it and tell you its purpose. And because we do this without ever removing data from its source, we maintain privacy and compliance at every step.
This approach gives customers the best of both worlds: precision and trust. We are able to automate data understanding and decision-making with a high degree of accuracy, and we do it without sacrificing security or exposing their most valuable asset—their data.
Shadow data and multi-cloud sprawl are accelerating rapidly. What are the biggest blind spots enterprises have when it comes to data visibility—and how is Sentra helping close those gaps?
The biggest gap is visibility. Many companies think they know where their sensitive data is, but they’re only seeing part of the picture. AI tools, cloud migration, and SaaS platforms are generating massive volumes of data that are often outside formal control. This is what we call shadow data.
Shadow data is exactly what attackers go after. It’s sensitive, unprotected, and invisible to most tools. Sentra solves this by giving organizations a complete, real-time map of their data — across cloud, SaaS, on-prem, and AI systems. We classify that data by sensitivity and help security teams prioritize where risk lives, how to fix it, and how to stay compliant.
You bring a unique cross-border perspective from your work with international partners. How do global data governance and regulatory frameworks influence Sentra’s platform strategy?
Working with international partners made it clear that compliance isn’t just a checkbox. It’s a core function of trust. Different countries have different rules, but the one thing they all demand is control. You need to control where your data is and who is using it.
Sentra was designed to meet those demands. We support regulatory requirements like GDPR, HIPAA, CCPA, and the new wave of AI-related laws. We give customers the tools to demonstrate compliance and enforce policy. We do that across borders and across systems.
You often emphasize offensive security strategies over reactive ones. How does that mindset show up in the product—and why is it critical in today’s AI threat landscape?
Sentra was built on a proactive security philosophy. We don’t believe organizations should wait for incidents to occur. Instead, we help them discover sensitive data no matter where it lives, monitor for anomalies in near real time, and respond immediately when risk is detected.
Our platform continuously scans environments, classifies sensitive information, and integrates with security tooling to initiate remediation automatically. It’s that forward-leaning approach of taking action before an adversary does that defines offensive security. And in an AI-driven world, that proactive approach is no longer optional.
Sentra integrates across IaaS, PaaS, SaaS, and even AI copilots. What are the most surprising or risky behaviors you’ve seen emerge in these environments recently?
Agentic AI introduces serious new risks. It can allow systems to access, process, or share sensitive data with little or no oversight, which adds a layer of complexity for security teams.
LLMs can expose sensitive information when asked to summarize internal content, or when used in ways that bypass existing data protection protocols and access permission controls. Without the right guardrails, these tools can become an unintentional vector for data leakage or inappropriate disclosure.
With the rapid rise of generative AI in enterprises, what’s one security myth you’d like to debunk—and one critical warning you wish more leaders would take seriously?
One of the biggest myths I hear is that AI systems are somehow secure by default and that their complexity makes them harder to breach. But in cybersecurity, complexity is often the enemy. The more intricate the system, the more opportunities there are for misconfigurations, blind spots, or excessive permissions.
The reality is that AI systems, and especially those with access to sensitive data, can become major points of exposure if not governed properly.
A critical warning I’d share with any leader is this: don’t treat AI systems like black boxes. Enforce least privilege from day one. These models should only be allowed to access the data they truly need to perform their function, and no more. Without that discipline, it becomes very easy for sensitive data to be overexposed, misused, or leaked. And because AI can operate independently or even in cascading sequence, those exposures can scale faster than people realize.
Looking ahead, how do you see the intersection of AI, cybersecurity, and compliance evolving over the next 3–5 years—and how is Sentra preparing for that future?
We’re already seeing the first wave of AI-specific regulation take shape. Colorado passed a landmark AI law focused on consumer protection, and in 2025 alone, more than 700 AI-related bills have been introduced across the U.S. We expect momentum to keep building globally, similar to what we saw with GDPR and CCPA. The message is clear: AI governance isn’t theoretical anymore, and it’s becoming a compliance mandate.
For enterprises, this means two things. First, they’ll need deeper visibility into how AI systems interact with sensitive data. And second, they’ll need the tools to enforce controls at the data layer, where organizations can mitigate risks before they multiply.
At Sentra, we’re building for that future now. Our Data Security for AI Agents solution gives organizations the ability to discover AI tools in use across their environments, map the data they access, classify that data by sensitivity, and monitor outputs to prevent leakage or misuse.
We’ll continue to expand those capabilities while doubling down on our core strengths of comprehensive data discovery, accurate classification, and proactive risk mitigation. AI brings massive potential, but it has to be adopted securely and responsibly. That’s the outcome we’re helping customers achieve.
Thank you for the great interview. Readers who wish to learn more should visit Sentra.