Interviews
David Matalon, CEO and Founder of Venn – Interview Series

David Matalon, CEO and Founder of Venn, is a serial entrepreneur with a long track record of building secure enterprise technology platforms, having previously led OS33—an early leader in secure workspaces for financial firms—and External IT, a pioneer in hosted IT services. With Venn, he is focused on redefining remote work security by enabling organizations to adopt bring-your-own-device (BYOD) models without sacrificing compliance or control, leveraging his deep experience in cloud infrastructure, endpoint security, and regulated industries to address the growing challenges of distributed workforces.
Venn is a cybersecurity and remote work platform designed to secure company data on personal and unmanaged devices through its proprietary Blue Border™ technology, which creates a secure, encrypted enclave on a user’s computer where work applications and data are isolated from personal activity. Unlike traditional virtual desktop infrastructure, Venn allows applications to run locally with native performance while enforcing strict data protection and compliance policies, helping organizations reduce IT overhead, onboard remote workers quickly, and maintain privacy by separating corporate and personal environments on the same device.
You’ve spent more than two decades building technology for secure remote work, from launching Offyx in the early days of application service providers to founding OS33 and now Venn. What lessons from those earlier companies led you to build Venn, and how did those experiences shape the idea behind Blue Border™ and your vision for securing modern Bring Your Own Device (BYOD) workforces?
Over the past two decades, I’ve had the opportunity to build companies at several different stages in the evolution of remote work. At OS33, we spent years delivering secure remote work environments through hosted infrastructure that used technology similar to virtual desktop infrastructure (VDI). While the security model worked, we kept hearing the same feedback from customers: the experience of using remotely hosted applications was often slow, complex to maintain, and frustrating for users.
That feedback was a turning point. Remote hosting introduced unavoidable latency and required significant infrastructure, creating operational complexity for IT teams. We began asking a simple question: what if you could remove hosting from the equation entirely? Instead of running work somewhere else and streaming it to the user, could you securely run work locally on the user’s device while still protecting corporate data?
That thinking ultimately led to Venn and the concept behind Blue Border. Instead of forcing work through remote hosting and virtualization, we created a new model that lets corporate applications run locally on a user’s laptop while keeping company data encrypted and protected. Even on a personal laptop, work remains isolated and protected from personal activity.
Artificial intelligence tools are spreading across enterprises faster than policies can keep up. From your perspective, why has governance struggled to keep pace with AI adoption inside organizations?
Governance has struggled to keep pace with AI adoption because the technology became an everyday tool almost overnight. In the few years since ChatGPT exploded into mainstream use, employees have incorporated AI into their lives and workflows. They’re not waiting for formal IT approval cycles; they’re already using AI to write faster, analyze information, summarize meetings, or generate code in seconds. In most organizations, policy creation, legal review, security validation, and IT deployment happen on a much slower timeline than user behavior. That gap is where AI governance is falling behind.
The deeper problem is that many organizations are trying to apply yesterday’s control model to today’s AI reality. Traditional governance was built around approving or blocking a known set of applications, but AI is now embedded across browsers, SaaS platforms, and even in operating systems. Governance must evolve beyond controlling a predetermined toolset and focus instead on protecting data wherever it resides, securing the work environment and defining the conditions under which sensitive information can be used safely.
Many companies attempt to solve the problem by restricting or banning generative AI tools. Why do you believe this approach fails in practice, and what unintended security risks can it create?
Bans fail because they ignore the reality of how people work. Employees will find ways to use AI tools regardless of official approval. That creates shadow AI: unsanctioned use of tools, personal accounts, copy-paste workflows, and browser-based interactions that happen outside approved oversight. The company then loses visibility, putting its sensitive data at risk.
In many cases, restrictive policies can increase risk rather than lower it. When employees cannot use these tools securely, they often find workarounds. Sensitive company data may end up flowing into tools that IT or security teams do not monitor or control. The better approach is not prohibition for its own sake, but enabling safe usage through isolation, data controls, and clear guardrails that let the business move forward without exposing critical information.
AI capabilities are increasingly embedded directly into everyday applications rather than existing as standalone tools. How does this shift change the way security teams should think about monitoring and controlling data exposure?
This shift is significant because it breaks the old mental model of “risky app versus approved app.” If AI is embedded inside email, CRM, conferencing, document editing, and search, then data exposure is no longer tied to whether a user opens a separate AI tool. It’s connected to what data is accessible inside the application, what context the AI can see, and whether that interaction happens inside a secure workspace.
As a result, security teams need to focus on protecting the data rather than on locking down the entire device. The focus should be on isolating work sessions, controlling copy/paste and downloads where appropriate, preventing leakage across personal and business contexts, and ensuring sensitive information stays within a protected environment.
Venn’s Blue Border™ technology isolates work apps and data locally on a user’s personal device instead of relying on traditional virtual desktop infrastructure. How does this architecture fundamentally reshape the endpoint security model for remote work?
Blue Border fundamentally changes the endpoint security model by moving beyond the idea that security requires either full device control or a virtualized desktop. Traditional VDI secures work by hosting it remotely and streaming it to the user. Blue Border secures work directly on the user’s personal device by creating an IT-controlled secure enclave where applications run locally, and company data stays isolated and protected.
The result is a different security model for remote work, where companies can enforce protection around the work itself without issuing company devices or forcing users to deal with the lag and latency that comes from hosting a desktop in the cloud.
From a security architecture standpoint, this shifts the model from controlling the whole endpoint or centralizing security protocols to protecting the workspace itself, wherever the work happens. Blue Border ensures that sensitive data never leaves the protected, local environment and enforces policy within that boundary, preventing leakage to the personal side of the device. As a result, users get native compute and application performance, and they can work from a personal device anywhere in the world instead of being issued a required company device.
Many organizations struggle with balancing employee privacy and corporate oversight when workers use personal devices. How can security teams protect sensitive data without creating the perception of surveillance?
The key is to protect the work, not the personal activity. Employees are understandably uncomfortable when security measures could extend into their private files, messages, browser history, or personal applications. On a BYOD device, trust matters. If the company cannot clearly explain where its visibility begins and ends, employees will assume the worst.
A stronger model is one that creates a distinct workspace for business activity and applies security controls only within that boundary. This gives the organization the ability to protect corporate data while giving employees confidence that their personal activity is not being watched or managed. Privacy and security do not need to compete if the architecture is designed to separate them cleanly.
Remote work and contractor-based teams have made BYOD environments almost unavoidable. What are the biggest security risks associated with unmanaged devices today?
The biggest risk is that unmanaged devices erase the boundary between personal and business activities. On the same machine, a user may have work applications open alongside personal email, consumer AI tools, messaging apps, file-sharing services, and untrusted browser extensions. Without a secure separation layer, it becomes very easy for sensitive data to be copied, cached, downloaded, screen captured, or exposed through channels the company does not control. For organizations that are subject to regulations around data security, this is a huge risk.
Artificial intelligence agents and automated workflows are beginning to interact directly with enterprise applications and data. What new security challenges do these autonomous systems introduce?
Autonomous systems introduce a different class of risk because they do not just generate content; they can also act. AI agents connected to enterprise systems may retrieve or move data, update records, trigger workflows, or communicate externally. That expands the blast radius of a mistake, misconfiguration, or compromised identity significantly beyond what we see with passive AI assistants.
It also creates new questions about access, trust, and accountability. What data is the agent allowed to access? Under what conditions can it act? How is that activity logged, constrained, and reviewed? IT and security teams will need to treat AI agents less like software features and more like privileged digital actors. That means applying principles like least privilege, segmentation, session isolation, and strong auditability from the start.
As organizations integrate generative artificial intelligence into productivity tools, customer support systems, and internal workflows, what kinds of sensitive data exposures worry you the most?
The use of generative AI in the workplace has blurred the line between personal and company data. Employees often access outside tools while working with company information, which makes it very easy for sensitive data like customer records, internal documents, source code, or financial information to slip into external environments. When corporate data flows through personal contexts or unmanaged devices, companies lose visibility and control over where that information goes, how it’s stored, and who might ultimately access it. As AI becomes embedded in everyday workflows, organizations need to address this blurred boundary directly by ensuring that company data stays protected even when work happens on personal devices.
Looking ahead, how do you see endpoint security evolving as AI-driven workflows become more common across distributed and remote workforces?
Endpoint security will need to become much more adaptive, context-aware, and workspace-centric. In the past, endpoint security design assumed a managed device, a defined office perimeter, and a relatively stable set of business applications. The future is distributed, AI-powered, and increasingly autonomous. Security needs to follow the work itself, wherever it happens, without assuming full control over the device or blocking productivity.
The winning model will be one that combines strong separation between the device and sensitive data, context-aware access controls, and an architecture that preserves a clear boundary between work and personal activity. Organizations need environments where employees, contractors, and AI-enabled workflows can operate productively, but within controls that protect data by design. The companies that succeed will not be the ones trying to slow AI adoption; they will be the ones making safe adoption possible at scale.
Thank you for the great interview. Readers who wish to learn more should visit Venn.