Funding
XBOW Raises $120M Series C to Bring Autonomous Hacking to Enterprise Security
Cybersecurity startup XBOW has raised $120 million in Series C funding, reaching a valuation of over $1 billion as it pushes forward a new category it calls “autonomous offensive security.” The round was led by DFJ Growth and Northzone, with participation from Sofina, Alkeon Capital, Altimeter, NFDG Ventures, and Sequoia Capital.
The raise reflects a broader shift in cybersecurity: as AI enables attackers to scale their efforts, defenders are increasingly turning to AI-driven systems that can operate continuously rather than relying on periodic, human-led testing.
From Manual Pentesting to Autonomous Security
Traditional penetration testing—where human experts probe systems for vulnerabilities—has long been a cornerstone of enterprise security. But that model is struggling to keep up with modern development cycles and AI-powered threats.
XBOW’s approach replaces one-off testing with continuous, automated offensive security. Its platform acts as an autonomous “hacker,” constantly probing applications, identifying weaknesses, and validating whether those weaknesses can actually be exploited.
This shift is significant. Instead of static assessments conducted a few times a year, organizations can now run ongoing testing that mirrors how real attackers behave—persistent, adaptive, and always on.
How XBOW’s Autonomous Hacker Works
At the core of XBOW’s platform is a system of coordinated AI agents designed to behave like real-world adversaries.
The system combines several key components:
- Autonomous agents that independently explore applications and attempt attacks in parallel
- A central coordinator that maps the attack surface and directs strategy
- An attack environment with real-world tools, including browsers and exploit frameworks
- Validation layers that confirm whether a vulnerability is truly exploitable before reporting it
This architecture allows XBOW to run thousands of simultaneous attack paths, adapting in real time based on how an application responds.
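The components above can be sketched in miniature: a coordinator that holds the attack-surface map fans out probe tasks to agents in parallel, then folds their findings back in. This is an illustrative sketch only; the names (`probe`, `Coordinator`), the toy heuristic, and the thread-pool design are assumptions, not XBOW's actual architecture or API.

```python
from concurrent.futures import ThreadPoolExecutor

def probe(endpoint: str) -> dict:
    """Stand-in for one autonomous agent attempting attacks on an endpoint."""
    # A real agent would drive a browser or exploit framework here;
    # this toy heuristic just flags upload endpoints.
    suspicious = endpoint.endswith("/upload")
    return {"endpoint": endpoint,
            "finding": "unrestricted upload" if suspicious else None}

class Coordinator:
    """Maps the attack surface and dispatches agents in parallel."""

    def __init__(self, endpoints):
        self.surface = list(endpoints)  # current map of the attack surface
        self.findings = []

    def run(self, max_agents: int = 8):
        with ThreadPoolExecutor(max_workers=max_agents) as pool:
            for result in pool.map(probe, self.surface):
                if result["finding"]:
                    # In a real system, new findings would feed back into
                    # strategy; here we simply collect them.
                    self.findings.append(result)
        return self.findings

coord = Coordinator(["/login", "/search", "/files/upload"])
print(coord.run())
```

A production version would replace the thread pool with distributed workers and let findings reshape the surface map between rounds, but the fan-out/fold-back structure is the same.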
Crucially, the platform separates discovery from verification. AI handles the creative exploration of potential attack paths, while deterministic logic ensures that only proven, reproducible exploits are surfaced—reducing the false positives that often plague traditional security tools.
The result is a system that doesn’t just flag theoretical risks, but provides concrete evidence of real vulnerabilities.
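The discovery/verification split can be made concrete with a small sketch: a (stubbed) AI stage proposes candidate exploits, and a deterministic validator replays each one, surfacing only candidates that reproduce every time. All function and field names here are hypothetical illustrations, not XBOW's interfaces.

```python
def discover(app_state: dict) -> list:
    """Stand-in for the creative AI stage: propose candidate exploits."""
    return [
        {"path": "/search", "payload": "' OR 1=1--", "kind": "sqli"},
        {"path": "/health", "payload": "", "kind": "noise"},  # likely false positive
    ]

def validate(candidate: dict, replay) -> bool:
    """Deterministic check: replaying the payload must reproduce the effect
    every single time before the candidate is surfaced."""
    return all(replay(candidate) for _ in range(3))

def surfaced_findings(app_state: dict, replay) -> list:
    """Only proven, reproducible exploits make it into the report."""
    return [c for c in discover(app_state) if validate(c, replay)]

# Toy replay oracle: only the SQL-injection candidate actually fires.
replay = lambda c: c["kind"] == "sqli"
print(surfaced_findings({}, replay))
```

The design point is that the nondeterministic, creative component never talks to the user directly; everything it proposes passes through a reproducibility gate first, which is what keeps false positives out of the report.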
Built on AI Reasoning, Not Just Automation
XBOW’s technology goes beyond conventional scanning tools by incorporating AI reasoning into offensive security workflows.
Rather than following predefined checklists, the system dynamically plans and executes attacks, adjusting its strategy as it uncovers new information. This enables it to identify complex, multi-step vulnerabilities that static tools often miss.
The platform has already been validated in real-world settings, including production systems and competitive security testing, where it has demonstrated the ability to uncover exploitable vulnerabilities at scale.
This combination of reasoning, automation, and validation positions XBOW as a system that approximates how attackers think and operate, rather than simply scanning for known issues.
What Autonomous Offensive Security Means for the Future
The emergence of autonomous offensive security signals a deeper structural shift in how software is built and defended.
If systems like XBOW continue to mature, security testing may become embedded directly into the software development lifecycle. Instead of waiting for scheduled audits, applications could be continuously stress-tested in parallel with every code change, creating a feedback loop where vulnerabilities are identified and addressed almost immediately.
This also introduces the possibility of “always-on adversaries” inside enterprise environments—controlled systems that behave like attackers but operate safely within defined boundaries. Over time, this could reduce reliance on external pentesting cycles and reshape compliance frameworks that were built around periodic assessments.
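One plausible shape for those "defined boundaries" is a scope guard: before any action, the agent checks its target against an explicit allowlist of hosts and a do-not-touch list of paths. This is a minimal sketch under assumed names (`ALLOWED_HOSTS`, `in_scope`, `guarded_attack`); real deployments would enforce scope at the network layer as well.

```python
from urllib.parse import urlparse

# Hypothetical scope policy: which hosts the autonomous adversary may touch,
# and which paths are off-limits even on allowed hosts.
ALLOWED_HOSTS = {"staging.example.com", "app-test.example.com"}
FORBIDDEN_PATHS = {"/admin/delete", "/billing"}

def in_scope(url: str) -> bool:
    parsed = urlparse(url)
    return parsed.hostname in ALLOWED_HOSTS and parsed.path not in FORBIDDEN_PATHS

def guarded_attack(url: str, attack) -> str:
    """Every action passes through the scope check before it runs."""
    if not in_scope(url):
        return f"skipped (out of scope): {url}"
    return attack(url)

print(guarded_attack("https://staging.example.com/search", lambda u: f"probed {u}"))
print(guarded_attack("https://prod.example.com/search", lambda u: f"probed {u}"))
```

Keeping the guard outside the agent's own decision loop matters: the agent can be as creative as it likes, but the boundary check is deterministic code it cannot reason its way around.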
At the same time, the widespread adoption of autonomous offensive tools could raise new challenges. Organizations will need to ensure these systems are deployed safely, avoid unintended disruptions in production environments, and establish clear governance over how autonomous agents operate. There is also the broader question of how defensive and offensive AI systems will interact as both sides become increasingly automated.
More broadly, the rise of this category reflects an arms race that is becoming less human-driven and more system-driven. As attackers adopt AI to scale their capabilities, defenders are responding with systems designed to match that scale. The long-term outcome may not be defined by who has more security analysts, but by who builds the most effective autonomous systems.