Funding
OpenAI Fund Extends Investment in Adaptive Security, Bringing Series A to $55M

Adaptive Security, the AI-driven cybersecurity company pioneering defenses against social engineering and impersonation attacks, today announced a follow-on investment from the OpenAI Startup Fund. The move brings the company’s Series A round to a total of $55 million and cements Adaptive as the Fund’s only cybersecurity investment — a clear signal of confidence in its human-centered approach to defending against the most pressing AI threats.
The Rising Tide of AI-Powered Deception
The digital world is shifting rapidly. AI systems once confined to labs are now seamlessly woven into how people shop, work, write, and communicate. Yet as these tools become “invisible infrastructure,” they are also fueling a new class of threats.
In June 2025, U.S. officials, including foreign ministers and a sitting member of Congress, received AI-generated messages impersonating Secretary of State Marco Rubio through encrypted apps, according to a State Department advisory. The following month, OpenAI CEO Sam Altman told a Federal Reserve audience that AI impersonation could trigger a “fraud crisis” arriving “very, very soon.”
For consumers, the threats feel no less immediate. A deepfake video on X falsely promised a 100 million XRP reward program, leading Ripple’s CTO to publicly warn investors. The FBI later revealed that Michigan residents alone lost more than $240 million in 2024 to AI voice-cloning and fake video schemes.
The common thread is clear: attacks that were once highly technical are now highly personal, eroding trust at every level of society.
Adaptive Security’s Vision
“Cybersecurity now begins with people, not just infrastructure,” said Brian Long, co-founder and CEO of Adaptive Security. “Without upgrading how we train and protect individuals, we risk heading into a world where trust itself becomes our greatest vulnerability.”
Adaptive was founded with exactly this vision in mind: that people are the new perimeter. The company’s platform is designed to prepare individuals and organizations for the wave of AI-enabled impersonation, combining simulation, training, and real-time defense into one adaptive system.
The solution includes:
- Deepfake attack simulations across voice, video, and messaging to measure resilience against the most convincing impersonations.
- Dynamic awareness training that adapts to each user’s risk profile, ensuring lessons resonate and stick.
- Instant triage and reporting tools that accelerate the containment of impersonation attempts before they spread.
- AI-powered risk scoring to help security teams focus on their highest-priority vulnerabilities.
This approach not only trains employees but also reconditions institutions to think differently about how fraud unfolds in the AI era.
Why Investors Are Paying Attention
The OpenAI Startup Fund’s continued support underscores how urgent this challenge has become. “Adaptive is moving with incredible product speed to build AI-native defenses for equally advanced threats,” said Ian Hathaway, partner at the OpenAI Startup Fund. “Their platform delivers exactly what modern security teams need — realistic deepfake simulations, AI-powered risk scoring, and training that resonates. We’re proud to back a team that’s reshaping how institutions stay resilient in the age of AI.”
For OpenAI, the investment also aligns with its broader recognition of the dangers posed by generative AI in malicious hands. Altman’s recent warning about the collapse of voice-based authentication methods highlights how fast traditional safeguards are becoming obsolete. “AI has fully defeated most of the ways that people authenticate currently other than passwords,” he cautioned in July.
The Road Ahead: Implications for the Future
The implications of Adaptive’s approach stretch beyond any single company or funding milestone. As generative AI continues to advance, the balance between deception and detection will shape not only the security industry but also how societies function in a world of blurred authenticity.
If voice, video, and text can no longer be trusted at face value, institutions will be forced to rethink verification at every level. Banks may need to abandon biometric voiceprints, courts may need new standards for digital evidence, and employers may need ongoing safeguards against deepfake job applicants. The very mechanics of trust — how we know who is speaking, who we are transacting with, or whether a message is genuine — are being rewritten in real time.
Adaptive’s model points toward one possible future: a world where training, simulation, and adaptive defense become as routine as antivirus scans once were. Employees could one day face deepfake drills as regularly as fire drills, and organizations may rely on AI-powered risk scores to make daily decisions about communication and access.
In that sense, the technology signals a cultural shift in cybersecurity. Protection is no longer just about keeping networks secure — it’s about reinforcing human judgment at scale, ensuring that even in an era of perfect forgeries, people retain the ability to recognize what’s real.
