

How Criminals Are Winning the AI Arms Race Before Businesses Even Start


In an era where AI transforms industries at an unprecedented pace, the dark side of this technological revolution is equally alarming. As businesses race to harness AI’s potential, cybercriminals are exploiting these advancements, shifting the dynamics of cybercrime and fraud.

Changing the Economics of Cybercrime and Fraud

Cybercriminals utilize the same AI models and technologies that enterprises employ, often adapting them within days of their release. One of the earliest examples of such misuse was the automation of CAPTCHA solving with ChatGPT, which demonstrated how quickly generative models could bypass basic security controls.

Since then, each major breakthrough in generative AI has been rapidly mirrored by criminal adaptations, including deepfake voice and video generation that appear almost immediately on darknet platforms. This accelerated cycle allows fraudsters to exploit sophisticated technologies to craft convincing scams, undermining traditional security measures.

In the first quarter of 2025 alone, deepfake‑enabled fraud reportedly caused more than $200 million in financial losses. The profitability of cybercrime has skyrocketed, with platforms offering “fraud‑as‑a‑service” making it easier than ever for criminals to execute complex schemes built on synthetic identities and advanced phishing kits.

As businesses struggle to scale their AI capabilities, criminals are racing ahead, continuously innovating and exploiting gaps left by outdated security frameworks.

Why Legacy Cybersecurity and Trust Frameworks Fail Against AI-Powered Actors

The traditional cybersecurity measures that once provided a semblance of protection are proving inadequate. Legacy systems, relying on blacklists, CAPTCHAs, and single-factor authentication, are ill-equipped to combat the evolving landscape of AI-driven attacks. Criminals employ deepfakes that can fool biometric scanners and synthetic identities that easily bypass KYC protocols.

This failure is compounded by the reality that many organizations still treat cybersecurity as a cost center rather than a critical infrastructure component. As the Pentagon invests millions to hire AI hackers, the technology gap becomes evident. While businesses are mired in compliance theater, criminals are leveraging AI to exploit human vulnerabilities, such as spear-phishing attacks that mimic executive communications.

What “AI-Native” Attacks Look Like in Practice

Modern fraud tactics have evolved far beyond the phishing schemes of the past. Attackers construct elaborate fraud chains that appear legitimate at every step.

Imagine a familiar corporate morning. Tuesday, 9:43 a.m. A CFO receives an email marked “urgent,” appearing to come from the CEO. The tone is familiar. The language matches previous requests. A follow-up message arrives minutes later on a different channel, reinforcing the urgency. By 11:00 a.m., a multimillion-dollar wire transfer is approved; only later is it discovered to have been routed to an offshore account controlled by the attackers.

These AI-native attacks are psychological manipulations leveraging trust and authority. The sophistication of such operations highlights a gap in existing security measures, which cannot detect the nuanced behavioral anomalies that characterize modern fraud.
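
To make that gap concrete, here is a minimal sketch of the kind of combined behavioral check such a request would need to trip. The field names, weights, and threshold are illustrative assumptions rather than a reference implementation; the point is that no single signal (a new beneficiary, urgency language, a channel switch) is suspicious on its own.

    # Hypothetical sketch: scoring a payment request on behavioral signals that
    # legacy controls (sender checks, static approval limits) never combine.
    # Field names, weights, and the 0.6 threshold are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:
        amount_usd: float
        beneficiary_known: bool    # has this account been paid before?
        urgency_language: bool     # "urgent", "confidential", "today"
        channel_switch: bool       # follow-up arrived on a second channel
        outside_normal_hours: bool

    def risk_score(req: PaymentRequest) -> float:
        """Accumulate weak signals into one score; no single signal is decisive."""
        score = 0.0
        if not req.beneficiary_known:
            score += 0.35
        if req.urgency_language:
            score += 0.25
        if req.channel_switch:
            score += 0.20
        if req.outside_normal_hours:
            score += 0.10
        if req.amount_usd > 1_000_000:
            score += 0.10
        return score

    request = PaymentRequest(2_500_000, False, True, True, False)
    if risk_score(request) >= 0.6:
        print("Hold transfer: require an out-of-band callback to a known number")

In the scenario above, the request scores high on almost every dimension at once, which is exactly the combination a signature-based control never sees.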

What Businesses Should Realistically Prioritize Before Deploying More AI Internally

Before deploying more AI internally, businesses need to pause and reassess their assumptions about trust. The acceleration of AI-enabled crime has exposed a structural weakness: organizations are still defending against yesterday’s threats while today’s attacks are designed to look legitimate by default.

1. Companies must rethink how risk itself is defined.

Traditional risk matrices were built around failures such as system outages, data leaks, and policy violations. In the AI era, risk increasingly stems from simulation rather than malfunction. Instead of asking “what could go wrong,” it is more appropriate to ask “what can be convincingly faked, at scale, faster than we can react.”

Synthetic identities, executive impersonation, and AI-generated narratives behave differently from legacy threats: they spread faster, blend into legitimate activity, and exploit trust rather than technical gaps. Unsurprisingly, these risks tend to rank higher and materialize more often than their non-AI predecessors, and they cut across cybersecurity, fraud, reputational risk, and compliance rather than sitting neatly inside any one category.
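
One way to make this reframing concrete is to extend a classic likelihood-times-impact score with a third dimension: how convincingly, and at what scale, the scenario can be faked. The sketch below is purely illustrative; the 1–5 scales and the weighting are assumptions, not an established methodology.

    # Illustrative only: a traditional likelihood x impact score extended with a
    # "simulability" factor (how convincingly and how fast the scenario can be
    # faked at scale). All inputs use 1-5 scales; the weighting is an assumption.
    def legacy_risk(likelihood: int, impact: int) -> int:
        return likelihood * impact

    def ai_era_risk(likelihood: int, impact: int, simulability: int) -> float:
        # Simulability amplifies risk: a faked executive request does not fail
        # loudly the way an outage does, so it deserves extra weight.
        return likelihood * impact * (1 + 0.5 * (simulability - 1))

    print(legacy_risk(2, 4))        # e.g. a system outage -> 8
    print(ai_era_risk(2, 4, 5))     # e.g. executive impersonation at scale -> 24.0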

2. Organizations must accept that prevention alone is no longer enough.

Top enterprises now map AI risks to three defensive layers, which correspond to AI Defender’s modular architecture:

  • Risk Prevention – anticipating attacks that exploit human trust and AI-generated content, not just blocking known threats:
      ◦ AI-aware identity verification
      ◦ Device & session integrity
      ◦ Executive communication protection
  • Threat Detection & Monitoring – combining technical anomaly analysis with behavioral and media monitoring, reflecting the fact that many AI-native attacks manifest in communication patterns rather than code:
      ◦ Continuous monitoring for signals and anomalies
      ◦ AI vs AI detection
      ◦ Narrative & media monitoring
  • Investigation & Attribution – reconstructing events, attributing intent, and producing actionable evidence, so that organizations can respond effectively even when deception scales faster than their initial defenses:
      ◦ Explainability of AI alerts
      ◦ Attribution of suspicious activity
      ◦ Evidence-grade OSINT
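
For teams that want a working artifact rather than a slide, the three layers above can be expressed as a simple capability map and checked for coverage gaps. The data structure below is an illustrative sketch, not a vendor API; the layer and capability names simply mirror the list above.

    # Plain mapping of the three defensive layers to the capabilities listed
    # above. Illustrative structure only; extend it with your own controls.
    DEFENSE_LAYERS = {
        "Risk Prevention": [
            "AI-aware identity verification",
            "Device & session integrity",
            "Executive communication protection",
        ],
        "Threat Detection & Monitoring": [
            "Continuous monitoring for signals and anomalies",
            "AI vs AI detection",
            "Narrative & media monitoring",
        ],
        "Investigation & Attribution": [
            "Explainability of AI alerts",
            "Attribution of suspicious activity",
            "Evidence-grade OSINT",
        ],
    }

    def coverage_gaps(deployed: set[str]) -> dict[str, list[str]]:
        """Return, per layer, the capabilities not yet covered by deployed controls."""
        return {
            layer: [cap for cap in caps if cap not in deployed]
            for layer, caps in DEFENSE_LAYERS.items()
        }

    print(coverage_gaps({"AI-aware identity verification", "AI vs AI detection"}))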

3. Businesses must confront the human dimension of AI-native fraud.

Employees remain the primary entry point for modern attacks, but the nature of exploitation has changed. One pattern increasingly observed in AI-driven fraud involves internal-looking interactions rather than obviously external attacks. An employee might receive a short video call from what appears to be HR, asking them to “quickly verify their identity” to resolve a payroll issue. The face, voice, and branding look authentic. The request itself seems harmless, but it quietly enables an account takeover later that day.

This type of scenario illustrates how AI-powered fraud leverages context, authority, and timing, often mimicking executive communication with unsettling precision. In this environment, traditional security training risks becoming little more than compliance theater, offering reassurance without real resilience.

The challenge lies not in awareness alone, but in how the problem is framed.

Reframe the problem (this is step zero)

Old mental model: “Train employees not to make mistakes.”

New mental model: “Assume employees will be targeted, manipulated, and weaponized.”

Training is not education.

Training is inoculation + muscle memory.

Viewed through this lens, teams must be trained to recognize recurring fraud patterns.

The 5 dominant AI-fraud vectors that pass through employees, and what each looks like in reality – none of these are stopped by awareness posters:

  • Authority spoofing – CEO/CFO voice note, WhatsApp, Zoom deepfake
  • Urgency traps – “5 minutes”, “confidential”, “board-level”
  • Context hijacking – the fraudster knows real projects, names, and timing
  • Process abuse – “just bypass this once”, “normal later”
  • Tool trust abuse – “AI said it’s fine”, “system already approved”
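
To move these vectors out of the awareness-poster category, they can at least be encoded as triage cues that route suspicious requests to a human reviewer. The sketch below is deliberately simplistic and the keyword lists are hypothetical; real detection would combine identity, channel, and behavioral signals rather than string matching.

    # Illustrative triage of an inbound request against the five vectors above.
    # Keyword lists are hypothetical placeholders, not production detection logic.
    VECTOR_CUES = {
        "Authority spoofing": ["ceo", "cfo", "voice note", "zoom", "whatsapp"],
        "Urgency traps": ["5 minutes", "confidential", "board-level", "urgent"],
        "Context hijacking": ["as discussed", "per our timeline", "the project"],
        "Process abuse": ["bypass this once", "normal later"],
        "Tool trust abuse": ["ai said", "system already approved"],
    }

    def flag_vectors(message: str) -> list[str]:
        """Return the fraud vectors whose cues appear in a message."""
        text = message.lower()
        return [v for v, cues in VECTOR_CUES.items() if any(c in text for c in cues)]

    print(flag_vectors("Confidential: the CEO needs this wire in 5 minutes, bypass this once"))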

4. Organizations need to rethink what “identity” means in a world of synthetic reality.

As deepfake voices and videos undermine biometric trust, no single factor can reliably prove authenticity. Increasingly, resilience comes from the accumulation of many weak signals over time: context, continuity, and consistency across devices, sessions, and external data points.

Open and external data, which were long treated as secondary, are gaining strategic importance. When combined with internal behavioral signals, they help answer a critical question: does this identity or action make sense across contexts? In a world where almost anything can be fabricated, coherence becomes one of the few remaining anchors of trust.
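
As a hedged illustration of coherence as an anchor of trust, the sketch below accumulates several weak signals into a single score. The signal names and weights are assumptions made for the example; the underlying idea is simply that no single factor decides, and a step-up check is triggered when the overall picture stops making sense.

    # Illustrative only: accumulating weak identity signals instead of trusting
    # any single factor. Signal names and weights are assumptions for the sketch.
    SIGNALS = {
        "device_continuity": 0.25,       # same device/browser as prior sessions
        "session_consistency": 0.20,     # typing cadence and navigation match history
        "network_context": 0.15,         # location and network plausible for this user
        "external_corroboration": 0.25,  # open and third-party data agree with the claim
        "temporal_coherence": 0.15,      # activity fits the user's normal rhythm
    }

    def coherence_score(observed: dict[str, bool]) -> float:
        """Sum the weights of the weak signals that hold; none decides alone."""
        return sum(w for name, w in SIGNALS.items() if observed.get(name, False))

    observation = {
        "device_continuity": True,
        "session_consistency": True,
        "network_context": False,        # e.g. a sudden, implausible location change
        "external_corroboration": True,
        "temporal_coherence": False,
    }

    score = coherence_score(observation)
    print(f"coherence={score:.2f}",
          "-> step-up verification" if score < 0.75 else "-> proceed")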

Ivan Shkvarun is the CEO and Co-Founder of Social Links and the author of the Darkside AI initiative.

With more than 15 years of experience in automation across multiple industries and leadership roles in international IT companies, he brings deep expertise in technology, strategy, and innovation. He previously led financial and public sector initiatives at SAP, where he focused on enterprise-scale solutions. His academic background is in mathematics, complemented by an MBA in entrepreneurship.

His passion for Open Data began over 20 years ago and has shaped his career ever since. In 2015, he co-founded Social Links with his partners as a side project; by 2018 it had evolved into a rapidly growing company. In both 2023 and 2025, Social Links was recognized by Frost & Sullivan as a global leader in Open Source Intelligence (OSINT), and it now serves more than 500 clients across 80+ countries.