Interviews
Shahar Man, Co-founder and CEO of Backslash Security – Interview Series

Shahar Man, Co-founder and CEO of Backslash Security, is a seasoned technology leader with deep expertise in cloud development, cybersecurity, and enterprise software. He currently leads Backslash Security, a company focused on securing AI-native software development environments, protecting everything from IDEs and AI agents to generated code and prompting workflows. Prior to this, he held senior leadership roles at Aqua Security, where he served as both Vice President of Product Management and Vice President of R&D, helping build one of the leading platforms for container security across the development lifecycle. Earlier in his career, Man spent over a decade at SAP, where he led development and product initiatives including SAP Web IDE and worked closely with global enterprise customers, while also contributing to developer ecosystem growth. His career began in technical and leadership roles in both startup environments and Israel’s defense technology units, giving him a strong foundation in both engineering and large-scale systems.
Backslash Security is an emerging cybersecurity platform purpose-built for the era of AI-driven software development. The company focuses on securing the entire AI-native development stack, including AI agents, code generation pipelines, and modern developer workflows, an area that traditional security tools often overlook. By providing visibility, governance, and real-time protection without disrupting developer velocity, Backslash aims to address the growing risks introduced by automated coding and “vibe coding” environments. As software creation increasingly shifts toward AI-assisted systems, the platform is designed to ensure that security evolves in parallel rather than becoming a bottleneck, positioning Backslash at the intersection of DevSecOps and next-generation AI development.
You have held leadership roles in product and R&D at companies like Aqua Security and SAP before founding Backslash. What early signals convinced you that AI-native development and vibe coding would fundamentally reshape software creation, and that security needed to be rebuilt to support it?
I had already lived through one major shift when software moved into cloud-native architectures. At SAP and later at Aqua, we saw firsthand that when development changes this much, security usually lags behind. AI has taken that truth to a whole new level, not just because it can help write code faster, but because it has started to reshape the entire environment around software creation.
Securing code is now less about the code itself and more about the environment around it. In less than a year, what used to be a relatively contained and low-risk development setup has expanded into a sprawling, highly connected attack surface with little oversight or governance. Once that happened, the security questions around code vulnerabilities changed altogether. The real issue is not whether a given piece of code is vulnerable. The issue is that in enabling AI-driven development, we have introduced systems, agents, integrations, and access paths that extend far beyond the code itself. Security can no longer focus only on the output of code. It has to account for the entire environment that makes that code possible.
You describe vibe coding as expanding the attack surface beyond code into prompts, agents, MCP servers, and tooling layers. What are the most misunderstood risks in this new stack that developers and security teams are currently overlooking?
The biggest misunderstanding is that many teams still think the risk lives mainly in the generated code. That is only one layer. In AI-native development, risk is introduced earlier and in many more places. This could be in prompts, in the context supplied to the model, in the permissions granted to agents, in the MCP servers they connect to, or in the external tools and plugins that extend their reach. A single user’s laptop can be taken over and used as the bridgehead of a broader attack. It’s an endpoint pain point masquerading as an AI coding issue. Unlike code vulnerabilities, this doesn’t only put your applications at risk – it can put your entire organization at risk. If you are only looking at the code, you’re missing most of the picture.
Traditional application security has focused heavily on code review. How does security thinking need to evolve when AI agents are generating, modifying, and deploying code in real time?
Security has to move from periodic inspection to continuous oversight. The notion of trust is completely broken — you can have trusted models and trusted MCP servers, but due to the non-deterministic nature of AI, they can still be manipulated or simply misbehave to create unexpected risk.
This also means there has to be a mindset shift in which security operates alongside the development process as it happens and has much deeper governance, guardrails, and detection and response capabilities within that environment. That means thinking critically about which tools are being used, what context they are consuming, what policies should govern them, and what actions they are taking in real time.
Additionally, we cannot ignore the role of AI and AI models in handling vulnerabilities. If a year ago AI models yielded many vulnerabilities by default, things have improved quite dramatically, and other models are now used to find zero days that were never found before. So we are headed toward better outputs – but who minds the shop while we’re doing that? The attackers are looking elsewhere.
Tools like Cursor, Claude Code, and GitHub Copilot are becoming standard in developer workflows. Where do you see the biggest security gaps when teams adopt these tools without a proper governance layer?
The biggest gap is visibility. In many organizations, these tools are spreading fast and without a formal review. Security teams often do not know which agents are being used, how they are configured, what data they can access, or what external systems they are connected to. That creates a shadow AI problem, which is similar to shadow IT in principle, just faster and more dynamic.
The second largest gap is the lack of enforceable policies. Most organizations may have guidelines, but guidelines alone do not help much when a developer is moving quickly inside the IDE. Without governance at the tool and workflow layer, teams risk over-permissioned tools that do not meet enterprise standards. These tools are not inherently bad, but adopting them without governance means you are effectively scaling development speed without scaling control.
A third emerging gap is that everyone can potentially become a developer – what we call citizen-developers, using vibe coding tools. When the finance person uses Claude Code to automate processes and connect to internal systems, this creates potential risk and is a huge blind spot even today.
Backslash focuses on securing the entire AI development ecosystem rather than individual tools. Why is this full-stack approach necessary, and what happens if organizations continue treating these risks in isolation?
Because risk does not sit neatly inside any one product in your stack. AI-native development is inherently an ecosystem problem because it operates in so many different places, using so many different tools. The IDE, the model, the agents, the MCP servers, the external plugins, the identities, and the connected data sources all influence what gets built and how. Organizations are deliberately not standardizing on a single tool because their relative strengths are shifting so quickly. If you secure only one point in that chain, you still miss how risk moves across the system.
Treating these risks in isolation leads to fragmented defenses and dangerous blind spots. You may harden the code scanner, but overlook the MCP server that fed risky context into the model. That is why we believe the right approach is full-stack visibility and real-time protection across the entire AI development ecosystem. Otherwise, organizations will keep solving symptoms while the actual attack surface keeps expanding underneath them.
Prompting is emerging as a new layer of programmability. How should organizations approach securing prompts and preventing issues like prompt injection, data leakage, or manipulation?
Prompts increasingly shape logic and behavior. In many cases, they are effectively a new control plane for software creation. That means they need policy, monitoring, and guardrails just like code or infrastructure definitions would. Practically, that starts with limiting what prompts can access and what downstream actions they can trigger. It also means defining prompt rules that align with security and quality expectations, preventing sensitive data from being exposed through context windows, and watching for manipulation attempts such as prompt injection or indirect instruction hijacking. And it also includes ensuring that the rules themselves are not used as backdoors for prompt injection. The broader point is that you do not secure prompting by instructing developers and agents to “be careful.” You secure it by embedding controls into the environment where prompting actually happens.
MCP servers and agent Skills introduce dynamic connections between systems. From a security perspective, do these represent the most significant new risk vector in AI-driven development?
MCP servers and agent Skills represent a major new layer of risk because they define how AI systems connect to and interact with the real world. Skills define what an agent is empowered to do, while MCP extends its access to context and systems. Together, they shape the agent’s actual behavior. If those layers are not tightly controlled, organizations lose visibility into what their AI tools are capable of and what they are actually doing. The shift from generating code to taking action is what makes this such a critical area for security, and they become more unpredictable when you chain them together.
One of your core themes is “being the department of Yes” – enabling security without slowing developers down. How do you balance real-time protection with developer velocity in environments where speed is critical?
Security creates friction when it happens late or is disconnected from how developers actually work. It becomes much more effective when it is embedded directly in the workflow and focused on what really matters. That has been part of our thinking since Backslash began, and it matters even more now in AI-driven development.
In practice, that means surfacing the few issues that represent real risk, not flooding developers with everything that looks theoretically suspicious. It means enforcing policy in the IDE and agent workflow, not after the fact. And it means creating transparent, deterministic guardrails so teams can move quickly while still knowing which tools are in use, what permissions they have, and when something abnormal is happening. The goal is not to slow AI adoption down, but to help organizations adopt it confidently without losing control. In real terms, this means that a developer would have less room to make mistakes in the first place, but if she does make one, it will be caught and handled quickly.
We are seeing non-technical users increasingly build software using AI tools. How does the rise of non-developer vibe coders change the threat landscape?
It broadens the threat landscape in two ways. First, it dramatically increases the number of people who can produce software-like outputs without understanding the security implications. Second, it creates a false sense of safety because the tools make development feel conversational and low-friction.
That means organizations will see more applications, automations, and integrations created by people who are not trained to consider trust boundaries, input validation, dependency hygiene, access control, or data exposure. In other words, the attack surface expands not just because AI writes more code, but because more people can now generate workflows and systems that behave like software without applying basic, hygienic engineering discipline. That makes visibility and built-in safeguards even more important, because you can no longer assume security knowledge at the point of creation.
Looking ahead 12 to 24 months, what types of attacks or vulnerabilities do you expect to emerge specifically because of AI-native development workflows?
We expect many of the common code vulnerabilities to be avoided upfront through improvements in the LLMs themselves, or through better embedded prompt rules in the “harness” that surrounds those tools. If we’re now seeing an increase in the volume of vulnerabilities merely due to increased velocity, this will correct itself. And what isn’t corrected will be chased down by AI-enabled SAST and SCA (some of which will also be provided by the AI platform vendors, e.g., Claude Code Security and project Glasswing).
However, I expect much worse outcomes when it comes to exposures due to the use of unvetted and unsupervised AI tools in application development – such as open-source agents (OpenClaw is a good example), which have very poor security defaults coupled with a user base whose knowledge of security is far surpassed by their enthusiasm for vibe coding.
As a consequence, I think we will see a shift toward attacks targeting the development ecosystem itself rather than just production systems. As AI becomes part of how software is created, attackers will focus on manipulating the tools and connections that shape that process, effectively compromising software before it is ever deployed.
Thank you for the great interview, readers who wish to learn more should visit Backslash Security.












