Connect with us

Interviews

Kara Sprague, CEO of HackerOne – Interview Series

mm

Kara Sprague, CEO of HackerOne, is a veteran technology executive with more than two decades of experience spanning product leadership, general management, and strategic consulting across the software and security sectors. She assumed the CEO role in November 2024 after serving in senior executive positions at F5, including Executive Vice President and Chief Product Officer, where she led major product and platform initiatives, as well as earlier general manager roles overseeing large-scale businesses. Prior to F5, she spent over a decade as a partner at McKinsey & Company, advising leading technology companies on growth and strategy, and began her career as a technical staff member at Oracle. Alongside her role at HackerOne, she also serves on the board of directors at Trimble Inc.

HackerOne is a cybersecurity company best known for pioneering hacker-powered security, connecting organizations with a global community of ethical hackers to identify and remediate vulnerabilities before they can be exploited. The platform supports enterprises and governments through bug bounty programs, vulnerability disclosure, penetration testing, and security testing services that combine human expertise with automation and AI-driven workflows. By shifting security from a reactive to a proactive model, HackerOne has become a critical part of the modern application security stack for organizations looking to reduce risk and improve resilience at scale.

You stepped into the CEO role at HackerOne in November 2024 after decades of leadership across F5, McKinsey, Oracle, and other major technology organizations. What drew you to take on this challenge at this stage of your career, and what were the first priorities you set when you began leading the company?

I’ve spent my career at the intersection of technology, strategy, and risk, helping organizations navigate moments when the stakes are high and the environment is changing fast. What drew me to HackerOne is that we’re at exactly that kind of inflection point again as AI reshapes the cybersecurity landscape.

Security is no longer a back-office function — it’s a core driver of trust, resilience, and business velocity. Enterprises now operate on deeply interconnected systems, constant data flows, and automated decision-making at unprecedented scale. AI is accelerating innovation, but it’s also introducing new seams, dependencies, and failure modes that traditional security models weren’t built to handle.

That’s why HackerOne’s mission matters so much right now. Our mission is to empower the world to build a safer internet, and in an AI-driven world, that mission has never been more urgent. HackerOne is different because we combine a global community of human security researchers with platform intelligence to find and fix vulnerabilities before attackers can exploit them. That human-in-the-loop model isn’t just differentiated — it’s essential.

From day one, I focused on three priorities: expanding our agentic platform capabilities, investing in our researcher community, and deepening trust with customers and partners. That means scaling AI red teaming, evolving Hai from a copilot into a coordinated team of AI agents that help organizations continuously prioritize, validate, and remediate risk faster, and launching HackerOne Code to secure software earlier in the development cycle. Today, more than 90% of our customers use Hai to accelerate their work to validate and fix vulnerabilities.

The landscape is evolving fast, but our focus is constant: contain risk before it defines you. At HackerOne, that means making security continuous, practical, and built for the pace of modern innovation.

HackerOne has seen both a 200% surge in pentesting and AI red teaming and a strategic shift toward continuous threat exposure management. How are these trends reshaping your long-term vision for the company, and what does this momentum signal about the future of enterprise security?

What we’re seeing isn’t a spike — it’s a reset. A 200% surge in pentesting and AI red teaming confirms that point-in-time security simply can’t keep up with how fast modern enterprises change.

That reality is shaping our long-term vision around continuous threat exposure management across the full lifecycle — from code and cloud to AI systems. As AI accelerates both innovation and attack velocity, the challenge isn’t finding vulnerabilities; it’s proving what’s exploitable, prioritizing what matters most, and fixing it fast. We’re building a platform that combines continuous  testing, autonomous validation, intelligent prioritization, and human expertise to do exactly that.

For enterprise leaders, the signal is clear: security is becoming a continuous business discipline, not a periodic audit. The companies that outperform will be the ones that identify risk earlier, act faster, and contain exposure before it becomes a business issue. That shift defines the future of enterprise security.

How does your AI system Hai integrate into the vulnerability discovery workflow, and where does it provide the most leverage to researchers and customers?

Hai is a coordinated team of AI agents embedded directly into the vulnerability management workflow to continuously analyze and contextualize findings to help organizations prioritize, validate, and remediate risks faster. It operates across the lifecycle of a report, acting as a force multiplier for defenders as volumes rise and threats grow more complex.

Hai delivers the most leverage by cutting through noise. It improves triage and understanding by summarizing reports, identifying patterns, validating findings, and highlighting the issues most likely to matter. Our research shows that 20% of users save 6 to 10 hours each week, significantly shortening the path from detection to confident remediation.

Researchers benefit as well. With more than half of them now using AI or automation in their work, Hai helps them produce stronger proof-of-concepts, clearer explanations, and more consistent validation.

What new categories of vulnerabilities have emerged over the past year as enterprises adopt AI more aggressively across their software stacks?

As AI becomes embedded across products and workflows, we’re seeing new vulnerability categories emerge at meaningful scale. In our latest Hacker-Powered Security Report, valid AI vulnerability reports increased 210% year over year and nearly 80% of CISOs now include AI assets in scope for security testing. Prompt injection has been the most visible, increasing by more than half year over year, and remains one of the most common ways attackers influence model behavior. We’re also seeing growth in model manipulation, insecure output handling, data poisoning, and weaknesses tied to training data and response management.

What makes these risks especially consequential is that they don’t just affect systems — they influence decisions, workflows, and customer trust. AI introduces failure paths that traditional testing doesn’t fully cover. As these systems are deployed into production and become more operationally critical, dedicated and continuous AI security testing will become increasingly central to enterprise security programs.

Our approach pairs AI-driven automation to scale discovery and pattern detection with human expertise to uncover subtle failures, novel weaknesses, and real-world impact — allowing defenders to operate at the same speed and scale as attackers.

As the global researcher community expands, how do you maintain trust, quality, and fairness while also advancing your commitments to diversity and inclusion across such a large crowd-powered ecosystem?

Trust is the foundation of a crowd-powered security model, and it has to be built deliberately as the community scales. For us, that starts with clear standards, consistent incentives, and strong governance.

Our community is made up of vetted security researchers who partner with customers to identify, validate, and help remediate real-world vulnerabilities across a wide range of technologies.

We maintain quality and fairness by combining platform intelligence with human oversight — validating findings, enforcing uniform rules of engagement, and rewarding researchers based on impact, not background, geography, or tenure. Reputation systems, transparent triage, and consistent payout models create accountability on both sides of the marketplace.

We’re deeply invested in researcher success. Through onboarding, training, and clear growth paths, we help new researchers build skills and credibility. Over the past six years, 50 researchers have earned more than $1 million each on our platform — a powerful signal of both the caliber of the work and the fairness of the model.

Diversity and inclusion aren’t separate initiatives; they’re core to the strength of the ecosystem. Security challenges are global, and diverse perspectives surface different attack paths and blind spots. The result is a trusted, high-performing community that becomes stronger — not more fragmented — as it grows.

What safeguards has HackerOne put in place to ensure that AI-assisted vulnerability discovery remains responsible and avoids bias or misuse?

Across our platform, AI agents are designed to improve clarity, validate findings, and accelerate remediation — while humans remain accountable for decisions around acceptance, severity, and response.

We hold ourselves to the same standards we expect from customers. We use our AI capabilities internally, pressure-test them continuously in real workflows, and reward our researcher community for identifying high-impact vulnerabilities in our own solutions. That creates a tight feedback loop to surface bias, inconsistency, or misuse early.

As AI becomes more embedded in security operations, our goal is to set a bar teams can trust — grounded in transparency, continuous testing, and human responsibility.

A 120% rise in vulnerability findings and rewards suggests major shifts in the threat landscape. Do you interpret this as progress in detection or a sign that enterprise software is becoming riskier?

It’s both — and that’s the point.

The rise reflects real progress in detection. Researchers are uncovering more actionable, high-quality weaknesses, and increased rewards show that enterprises value surfacing and fixing real risk. More findings don’t automatically mean software is riskier — they mean exposure is finally visible.

At the same time, enterprise software is becoming more complex and interconnected. AI, third-party dependencies, and faster release cycles are expanding the attack surface faster than traditional controls were designed to handle.

The takeaway is simple: risk is dynamic, and security has to be as well. The most resilient organizations assume exposure is inevitable and focus relentlessly on fixing what actually matters.

What do you see as the biggest challenge for crowdsourced security platforms over the next few years as AI becomes more capable?

The biggest challenge in any security platform is maintaining signal and trust as speed and scale increase.

As AI lowers the barrier to discovery, platforms will see a surge in volume from automated and hybrid workflows. The risk isn’t too few findings — it’s noise overwhelming customers and misaligned incentives eroding trust.

The platforms that succeed will be the ones that validate exploitability, prioritize impact, and align rewards with outcomes — while maintaining strong governance and human accountability. The future isn’t about finding more issues; it’s about finding the right ones faster and turning insight into action before business issues can arise.

Do you envision HackerOne expanding beyond vulnerability discovery into areas like continuous monitoring, AI-driven remediation, or predictive threat modeling?

Our focus is solving the core problem enterprises face: understanding and containing real risk in continuously changing environments.

That naturally means moving beyond point-in-time discovery. We already operate across the exposure lifecycle, from code and AI red teaming through validation and prioritization, and we’ll continue investing in capabilities that help customers see exposure earlier, understand impact faster, and drive remediation to closure.

AI plays a central role in that evolution — particularly in prioritization and workflow acceleration — but always with human accountability at the center. Our north star is continuous, practical security that keeps pace with modern innovation.

As adversaries increasingly adopt AI, how does HackerOne plan to stay ahead and ensure defensive tools evolve just as quickly?

We stay ahead of our adversaries by operating where real attacks emerge. Our global researcher community is already testing AI-enabled techniques against live environments, giving us early visibility into how adversaries actually operate.

We pair that human insight with AI-driven automation to scale discovery, validation, prioritization, and remediation.  Just as importantly, we pressure-test our own platform continuously using the same AI red teaming approaches we offer customers.

The goal isn’t to predict every new attack — it’s to build a system that learns faster than attackers do. That’s how defensive tools keep pace in an AI-driven threat landscape.

Thank you for the great interview—readers who wish to learn more about how the company is using human expertise and AI-driven security to help organizations identify and contain real-world risk before it becomes a business issue can visit HackerOne.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.