
Zaid Al Hamani, CEO and Founder of Boost Security – Interview Series


Zaid Al Hamani, CEO and Founder of Boost Security, is a cybersecurity and DevSecOps leader with over two decades of experience building and scaling global technology operations. Since founding Boost Security in 2020, he has focused on modernizing how organizations secure software development, drawing on prior roles including VP of Application Security at Trend Micro and Co-Founder/CEO of IMMUNIO. Earlier, he held senior leadership positions at Canonical, leading product, engineering, and global support initiatives, and at SITA, where he managed large-scale, mission-critical IT operations. His career reflects a strong track record of building teams, optimizing systems, and advancing modern security practices.

Boost Security is a cybersecurity company focused on securing the modern software supply chain through a developer-first DevSecOps platform. Its technology integrates directly into CI/CD pipelines to automatically detect, prioritize, and remediate vulnerabilities, reducing manual overhead while maintaining development speed. By unifying application and supply chain security into a single system, the platform provides full visibility across code, dependencies, and infrastructure, helping organizations strengthen resilience in complex, cloud-native environments.

You previously led application security at Trend Micro and co-founded IMMUNIO. What led you to found Boost Security, and what gap in the market were you uniquely positioned to identify early?

IMMUN.IO was one of the first RASP companies to be founded. Our experience up to that point was that WAFs, as a runtime security technology, were impossible to maintain and not very effective. We envisioned replacing the WAF with a more accurate, easier-to-maintain solution: instrumenting the application itself.

That was in 2012, when DevOps was still early, most teams were not Agile, and Kubernetes was not a thing yet.

Trend Micro acquired IMMUN.IO in 2017. By that time, DevOps practices were far more widespread: CI/CD pipelines, agile development, faster iterations and release cycles, cloud, and so on. Software development teams had gotten better at building software and shipping it faster. Security was still broken, though:

  • Scans were too slow, or results arrived too late
  • Results were too complex for developers to act on
  • False positive rates were generally unacceptable
  • Many new types of artifacts were not scanned at all: infrastructure as code, containers, and APIs, for example

Producing software fast was easier. Producing secure software fast was still hard.

That was the original problem we set out to solve: make DevSecOps work in the real world. Can you get a software development team to easily add security into the SDLC, at a speed that matches the new velocity standards? Can you make the coverage broad enough that one platform is all you need? Can you make it so that developers not only adopt the technology, but embrace it and see the benefits? Can you make it scale so that you don’t need armies of security professionals to keep up with the amount of code being written?

We helped companies inject security into the SDLC during the DevOps era. That was going from 1 to 10. We are now in the era of agentic coding, where agents write an enormous amount of code, but it is fundamentally the same problem: the speed and volume of code just went from 10 to 100, and we aim to continue the same trajectory.

You’ve argued that the software development lifecycle (SDLC) has fundamentally shifted upstream. What was the moment you realized traditional DevSecOps approaches were no longer sufficient?

It was watching how attackers were actually getting in. We kept seeing the same pattern: an exposed GitHub Actions workflow nobody had reviewed since the repo was forked, a token with production cloud access embedded in a runner config, a legitimate CI job hijacked to deploy attacker payloads. These became known as “living off the pipeline” attacks, because the adversary uses your own automation against you, with credentials your security team already approved.

The DevSecOps stack we had built up over a decade had no answer for that. SAST scans application source. SCA scans application dependencies. Both assume the pipeline running them is trustworthy. Meanwhile, the pipeline itself is a YAML file with shell commands, network access, and sensitive credentials, and almost nobody reviews it.

When that becomes the path of least resistance, you can ship perfectly clean code and still hand attackers your cloud.
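To make the pattern concrete, here is a minimal sketch of the kind of review those pipelines rarely get: a scan of GitHub Actions workflow files for two classic “living off the pipeline” indicators. The risky-pattern list and heuristics are illustrative assumptions for the sake of the example, not a complete detection ruleset.

```python
import sys
from pathlib import Path

import yaml  # PyYAML: pip install pyyaml

# Illustrative "living off the pipeline" indicators, not an exhaustive list.
# These triggers run with repository secrets on fork-influenced events.
RISKY_TRIGGERS = {"pull_request_target", "workflow_run"}

def audit_workflow(path: Path) -> list[str]:
    findings = []
    doc = yaml.safe_load(path.read_text())
    # PyYAML parses the bare key `on:` as the boolean True.
    triggers = doc.get("on", doc.get(True, {}))
    if isinstance(triggers, str):
        triggers = {triggers: None}
    if isinstance(triggers, list):
        triggers = {t: None for t in triggers}
    privileged = RISKY_TRIGGERS & set(triggers)
    for job in (doc.get("jobs") or {}).values():
        for step in job.get("steps") or []:
            uses = step.get("uses") or ""
            ref = str((step.get("with") or {}).get("ref", ""))
            # Privileged trigger + checkout of the PR head:
            # untrusted code running with your secrets.
            if privileged and uses.startswith("actions/checkout") and "head" in ref:
                findings.append(f"{path.name}: checks out untrusted PR head under {sorted(privileged)}")
            # Secrets exposed to a shell step that makes outbound network calls.
            if "secrets." in str(step.get("env", "")) and "curl" in (step.get("run") or ""):
                findings.append(f"{path.name}: shell step mixes secrets with outbound network calls")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".github/workflows")
    for workflow in sorted(root.glob("*.y*ml")):
        for finding in audit_workflow(workflow):
            print(finding)
```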

How should enterprises rethink the SDLC in a world where AI agents are generating code continuously rather than developers writing it step by step?

We’ve all got to stop thinking about the SDLC as a sequence of checkpoints. AI agents have collapsed the time between “someone wrote this” and “this is in production” from weeks to minutes. The old model assumed a human cadence between code review, SAST, SCA, and deploy, but we’re beyond that now.

Security has to live where the agent operates: on the developer’s machine, inside the prompt context, in the agent’s connections to MCP servers and external models. By the time code reaches the pipeline, you have already lost the chance to shape it. The agent already pulled the dependency. The model already saw the credential. Move the controls upstream, to where the work actually happens.

Many organizations still treat AI coding tools as simple productivity layers. Why do you believe they represent an entirely new attack surface rather than just an extension of existing workflows?

Treating an AI coding tool as a productivity layer is like treating a junior developer with root access as a productivity layer. The label is technically accurate, but it gives you no useful framework for thinking about what could go wrong.

A coding agent reads your filesystem, scrapes environment variables for context, fetches dependencies from public registries, opens outbound connections to remote model providers and MCP servers, and executes shell commands. Each of those actions used to require a human in the loop. Now they happen in milliseconds, with the same privileges as the developer who launched the agent.

That collapse fuses trust boundaries that used to be separate: the developer’s authority, what an external tool can fetch, and what untrusted code can execute. It creates new opportunities for attackers, and blind spots that defenders cannot see into, much less defend.

Boost frames the developer laptop as the new control plane. What risks exist at the endpoint that security teams are currently overlooking?

The biggest one is inventory. Most security teams cannot tell you which AI agents are running on which laptops, which MCP servers those agents are connected to, or which IDE extensions are scraping repository content right now. EDR has no visibility into the agent layer; SIEM cannot see what those agents do locally either. It is a shadow IT problem with code-execution privileges.

Underneath that sits the credential mess. We built an open-source tool called Bagel partly to make this concrete. A typical developer laptop holds GitHub tokens with write access to production repos, cloud credentials that can spin up infrastructure, npm or PyPI tokens that can publish to millions of users, and AI service keys that attackers resell. None of that is hardened the way a CI runner is hardened. The same machine that holds those credentials also browses the web and installs random VS Code extensions.

Pair the two and you have the actual attack surface. An untrusted extension running with developer privileges in an environment full of cloud keys is the highest-leverage target in the modern enterprise. Most teams have not started looking at it.
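Bagel is a real open-source tool from Boost; the sketch below is not its code, just a minimal illustration of the inventory problem it makes concrete: enumerating the credential material a typical developer laptop accumulates. The file paths are common defaults, and the ~/src project root is an assumption.

```python
from pathlib import Path

# Common credential locations on a developer laptop. These are
# illustrative defaults, not an exhaustive or guaranteed list.
CANDIDATES = {
    "AWS credentials": "~/.aws/credentials",
    "GitHub CLI token": "~/.config/gh/hosts.yml",
    "npm publish token": "~/.npmrc",
    "PyPI publish token": "~/.pypirc",
    "kubeconfig": "~/.kube/config",
    "SSH private key": "~/.ssh/id_ed25519",
}

def inventory(project_root: Path = Path.home() / "src") -> list[tuple[str, Path]]:
    found = []
    for label, raw in CANDIDATES.items():
        path = Path(raw).expanduser()
        if path.exists():
            found.append((label, path))
    # Stray .env files scattered through project directories are the long tail.
    if project_root.exists():
        for env_file in project_root.glob("**/.env"):
            found.append((".env file", env_file))
    return found

if __name__ == "__main__":
    for label, path in inventory():
        print(f"{label:20} {path}")
```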

You’ve highlighted the “context trap,” where AI agents can access local files, environment variables, and configurations. How widespread is the risk of sensitive data leaking through prompts, and why is it so difficult to detect?

Widespread enough that we treat it as the default state of any unmanaged developer environment. Every coding agent we have inspected pulls local context aggressively. They read dotfiles, environment variables, recent files, sometimes whole directory trees, and ship that context to a remote model. The tools are designed to work this way; aggressive context grabbing is what makes them useful.

The detection problem starts because the traffic from a leak looks identical to normal product usage. It is TLS to api.openai.com or api.anthropic.com. It comes from an approved business application. Standard DLP sees a developer using the AI tool the company just bought a license for. It does not see that one of the strings in that prompt is an AWS secret key the agent grabbed from a half-forgotten .env file in a sibling directory.

You only catch it by inspecting prompts before they leave the laptop, which is exactly where almost no security stack is currently positioned.
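Here is a minimal sketch of what prompt-level inspection can look like, assuming you can intercept the prompt text before it leaves the machine (for example, via a local proxy or an agent hook). The secret patterns and entropy threshold are illustrative; production scanners carry hundreds of tuned signatures.

```python
import math
import re

# Illustrative secret signatures; real scanners carry far more.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def shannon_entropy(s: str) -> float:
    # High-entropy strings are a cheap heuristic for opaque secrets.
    freq = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in freq)

def scan_prompt(prompt: str) -> list[str]:
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
    for token in re.findall(r"\b[A-Za-z0-9+/=_\-]{32,}\b", prompt):
        if shannon_entropy(token) > 4.0:  # threshold is a tuning choice
            hits.append(f"high-entropy string ({token[:6]}…)")
    return hits

if __name__ == "__main__":
    # The agent grabbed a stray .env value into its context (both values
    # are Amazon's published documentation examples, not live keys).
    prompt = (
        "Add retries to this client. Context from .env: "
        "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE "
        "AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    )
    for hit in scan_prompt(prompt):
        print("block or redact before egress:", hit)
```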

You mention machine-speed supply chain attacks. Can you walk through a realistic scenario where an AI agent introduces a vulnerability faster than traditional security tools can identify it?

Here is one we have seen variations of repeatedly. Developer asks an agent to add a feature that needs an HTTP retry library. Agent suggests a package name. The package is plausible-sounding but does not actually exist on npm. Within an hour, an attacker registers it, populates it with working retry logic plus a small post-install script that reads ~/.aws/credentials and posts the contents to a webhook. The agent runs npm install without checking, because agents do not check reputation. The credential is gone before the developer even runs the code.

The attack itself is not technically sophisticated, but traditional supply-chain security is built around known vulnerabilities in known packages: CVEs, SBOMs, license scanning. That framework has nothing to say about a package that did not exist when the scan was last run, was created specifically to match an AI hallucination, and gets ingested before any threat feed updates.

The window from publication to compromise is now measured in minutes. Anything checking after the fact is checking too late.
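A minimal pre-install gate against exactly this scenario might check the registry before anything is fetched. The sketch below queries the public npm registry for a package’s existence and first-publication date; the 30-day threshold is an illustrative policy choice, not a recommendation.

```python
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timedelta, timezone

REGISTRY_URL = "https://registry.npmjs.org/{}"
MIN_AGE = timedelta(days=30)  # illustrative policy: distrust very new packages

def check_package(name: str) -> str:
    try:
        with urllib.request.urlopen(REGISTRY_URL.format(name), timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"BLOCK {name}: not on the registry (possible hallucination bait)"
        raise
    created = datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - created
    if age < MIN_AGE:
        return f"BLOCK {name}: first published only {age.days} days ago"
    return f"allow {name}: first published {created.date()}"

if __name__ == "__main__":
    # Run as a gate before `npm install <name>` ever touches disk.
    print(check_package(sys.argv[1]))
```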

Are hallucinated dependencies becoming one of the biggest risks in AI-driven development, and what practical steps can organizations take to defend against them?

They’re already one of the biggest. Attackers actively monitor popular AI tools for hallucinations and register the suggested package names within minutes. When this first started happening a couple of years ago, researchers called it slopsquatting, and the name stuck. Once a dependency name gets hallucinated frequently enough, sitting on it becomes a passive supply-chain attack with near-zero effort.

The practical defenses look different from what most teams currently have:

  • Start at ingestion. Block typosquatted and newly registered packages at the moment npm install or pip install runs, on the developer’s machine, before anything hits disk. Post-mortem detection in CI does not help when a post-install script has already exfiltrated a credential.
  • Give the agent guardrails to operate inside. Inject your approved-dependency list directly into the agent’s context, so the model sees what is allowed before it generates a suggestion (see the sketch below). Asking developers to write “secure prompts” is not a strategy; the strategy is that security sets the boundary and the agent inherits it.
  • Track an AI Bill of Materials. Most teams cannot tell you which agents, models, and packages are touching which repositories. You cannot defend what you cannot inventory.
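A minimal sketch of that guardrail injection, assuming the coding agent accepts a system-prompt prefix at launch. The file name, JSON shape, and integration point are all hypothetical; the idea is simply that the policy is rendered into the context the model sees before it generates anything.

```python
import json
from pathlib import Path

def render_dependency_policy(allowlist_path: str = "approved-deps.json") -> str:
    """Turn a machine-readable allow-list into a context fragment for the agent."""
    # Hypothetical file shape: {"npm": ["axios", "p-retry"], "pypi": ["requests"]}
    approved = json.loads(Path(allowlist_path).read_text())
    lines = ["DEPENDENCY POLICY (non-negotiable):"]
    for ecosystem, packages in sorted(approved.items()):
        lines.append(f"- {ecosystem}: use only {', '.join(sorted(packages))}")
    lines.append("- If no approved package fits, say so instead of inventing one.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical integration point: prepend the policy to whatever
    # system prompt the coding agent is launched with.
    system_prompt = render_dependency_policy() + "\n\nYou are a coding assistant..."
    print(system_prompt)
```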

You’ve said security can no longer begin at CI/CD. What does a modern security pipeline look like when protection needs to start earlier in the development process?

If security starts at CI/CD, you have ceded the entire pre-commit phase to an environment you do not control. The agent has already ingested context; your credential may already be in someone else’s logs. You are scanning a carcass.

A modern pipeline starts on the laptop. That means inventorying the agents and extensions running there, validating which MCP servers and models they are allowed to talk to, sanitizing what leaves the machine, and blocking malicious packages before they install. From there, the policy follows the work into the IDE: we inject security standards directly into the agent’s context window, so generated code stays inside the guardrails from the first token.

The pipeline itself does not disappear. Its role becomes verification: confirming that the upstream controls held.
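The validation step for MCP connections can start very simply. The sketch below reads a local agent config (the path shown is Claude Desktop’s default on macOS; other agents keep similar JSON configs elsewhere) and compares the declared servers against an allow-list. The allow-list itself is hypothetical.

```python
import json
from pathlib import Path

# Hypothetical org allow-list of MCP servers developers may connect to.
APPROVED_SERVERS = {"filesystem", "github", "internal-docs"}

def audit_mcp_config(config_path: Path) -> list[str]:
    config = json.loads(config_path.read_text())
    findings = []
    for name, spec in config.get("mcpServers", {}).items():
        if name not in APPROVED_SERVERS:
            command = " ".join([spec.get("command", "")] + spec.get("args", []))
            findings.append(f"unapproved MCP server '{name}' (runs: {command})")
    return findings

if __name__ == "__main__":
    # Claude Desktop's default config location on macOS.
    path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
    if path.exists():
        for finding in audit_mcp_config(path):
            print(finding)
```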

As organizations continue adopting AI coding agents, what are the most critical changes they must make today to ensure their development environments remain secure over the next few years?

The biggest mistake is securing only what gets committed. The interesting risk now lives in the eight hours before a commit happens: on the laptop, in the prompt, in the package install. If your tools start at the PR, you are protecting the wrong half of the workflow.

Closely related: stop treating coding agents as productivity software. They are non-human users with shell access, repository write privileges, and outbound network connections. Govern them the way you govern any other privileged identity, with an inventory, approved capabilities, and audit logs.

The last shift is harder culturally. Most current “AI security” tools surface findings and route them to humans. Humans cannot triage at the speed agents generate. Whatever you adopt has to fix issues automatically inside the workflow, with traceable reasoning, or it becomes another dashboard nobody reads.

Thank you for the great interview. Readers who wish to learn more should visit Boost Security.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.