When Evolving Attacks Outrun Old Defenses: Why It Is Time for Proactive AI Security

If you work anywhere near security right now, you probably feel like you are always catching up. There is a new breach in the news, a fresh ransomware story, and another clever trick that defenders did not see coming. At the same time, a lot of protection still leans on ideas from an older internet where networks had clear borders and attackers moved more slowly.

The numbers tell you this is not just a feeling. The latest IBM Cost of a Data Breach Report puts the global average cost of a breach at $4.88 million in 2024, up from $4.45 million the year before. That 10% jump is the largest spike since the pandemic years, and it comes even as security teams invest more in tools and staffing.

The Verizon Data Breach Investigations Report for 2024 looks at more than 30,000 incidents and over 10,000 confirmed breaches. It highlights how attackers rely on stolen credentials, web application exploits, and social actions such as pretexting, and it notes that organizations take around 55 days on average to fix just half of their critical vulnerabilities after patches are released. Those 55 days are a very comfortable window for an attacker who is scanning continuously.

In Europe, the ENISA Threat Landscape report for 2023 also points to a heavy mix of ransomware, denial of service, supply chain attacks, and social engineering. Another ENISA study focused on supply chain incidents estimated that there were likely four times as many such attacks in 2021 as in 2020, and that this trend has continued upward. 

So the picture is simple but uncomfortable. Breaches are becoming more common, more expensive, and more complex, even as tools improve. Something structural is off in the way many organizations still defend themselves.

Why the classic security model is falling behind

For a long time, the mental picture of cyber defense was simple. You had a clear inside and outside. You would build a strong perimeter with firewalls and filters. You would deploy antivirus on endpoints and look for known bad signatures. You would tune rules, watch for alerts, and react when something obvious fired.

That model has three big problems in the current world.

First, the perimeter is mostly gone. People work from everywhere on a mix of managed and unmanaged devices. Data sits in public cloud platforms and software as a service tools. Partners and suppliers connect directly into internal systems. Reports like the ENISA supply chain study show how often intrusions now begin through a trusted partner or software update rather than a direct frontal attack on a central server.

Second, the focus on known signatures leaves a huge blind spot. Modern attackers mix custom malware with what defenders call living off the land. They lean on built-in scripting tools, remote management agents, and everyday administrative actions. Each step viewed alone may look harmless. A simple signature-based approach does not see the bigger pattern, especially when attackers change small details in each campaign.
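
To see why the combined pattern matters, here is a minimal sketch of the idea in Python. The event names, scores, and threshold are invented for illustration, not taken from any real product; real detection engines use far richer context:

```python
# Minimal sketch: scoring a sequence of individually benign events.
# Event names, weights, and the threshold are illustrative only.

SUSPICIOUS_COMBOS = {
    # Each pair of events is more suspicious together than alone.
    ("powershell_encoded_command", "new_scheduled_task"): 40,
    ("certutil_download", "powershell_encoded_command"): 35,
    ("new_scheduled_task", "outbound_to_rare_domain"): 30,
}

BASE_SCORES = {
    "powershell_encoded_command": 10,
    "certutil_download": 10,
    "new_scheduled_task": 5,
    "outbound_to_rare_domain": 15,
}

ALERT_THRESHOLD = 60

def score_host_events(events: list[str]) -> int:
    """Score a host's recent events: base scores plus combo bonuses."""
    score = sum(BASE_SCORES.get(e, 0) for e in events)
    seen = set(events)
    for (first, second), bonus in SUSPICIOUS_COMBOS.items():
        if first in seen and second in seen:
            score += bonus
    return score

# Each event alone stays far below the threshold; the combination
# of "normal admin activity" crosses it.
events = [
    "certutil_download",
    "powershell_encoded_command",
    "new_scheduled_task",
    "outbound_to_rare_domain",
]
print(score_host_events(events))  # 145: 40 from bases + 105 from combos
```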

Third, humans are overloaded. The Verizon report shows that vulnerability exploitation is now a major way into networks and that many organizations struggle to apply patches fast enough. IBM’s research adds that long detection and containment times are a major reason breach costs keep climbing. Analysts sit under a mountain of alerts, logs, and manual triage, while attackers automate as much as they can.

So you have attackers who are faster and more automated, and defenders who still lean heavily on manual investigation and old patterns. Into that gap comes artificial intelligence.

Attackers are already treating AI as a teammate

When people talk about AI in security, they often picture defensive tools that help catch bad actors. The reality is that attackers are just as eager to use AI to make their work easier.

The Microsoft Digital Defense Report 2025 describes how state-backed groups are using AI to create synthetic media, automate parts of intrusion campaigns, and scale influence operations. A separate Associated Press summary of Microsoft threat intelligence reports that, from mid-2024 to mid-2025, incidents involving AI-generated fake content rose to more than 200, more than double the year before and roughly 10 times the number seen in 2023.

In practice, this looks like phishing messages that read as if a native speaker wrote them, in any language you like. It looks like deepfake audio and video that help attackers pretend to be senior leaders or trusted partners. And it looks like AI systems sorting through huge volumes of stolen data to find the most valuable details about your environment, your staff, and your third parties.

A recent Financial Times piece on agentic AI in cyberattacks even describes a largely autonomous espionage operation where an AI coding agent handled most of the steps from reconnaissance to data exfiltration with limited human input. However you feel about that specific case, the direction of travel is clear. Attackers are quite happy to let AI handle the boring parts of the work.

If attackers are using AI to move faster, blend in better, and hit more targets, then defenders cannot expect traditional perimeter tools and manual alert triage to be enough. You either bring similar intelligence into your defense, or the gap keeps widening.

From reactive defense to proactive security thinking

The first real shift is not technical; it is mental.

A reactive posture is built around the idea that you can wait for clear signs of trouble, then respond. A new binary is detected. An alert fires because traffic matches a known pattern. An account shows a blatant sign of compromise. The team jumps in, investigates, cleans up, and maybe updates a rule to prevent exactly that pattern from working again.

In a world with slow and rare attacks, this might be fine. In a world with constant probes, fast-moving exploitation, and AI-supported campaigns, it is too late. By the time a simple rule triggers, attackers have often explored your network, touched sensitive data, and prepared fallback paths.

A proactive posture starts from a different place. It assumes you are always being touched by hostile traffic. It assumes that some controls will fail. It cares about how quickly you spot unusual behavior, how fast you can contain it, and how consistently you learn from it. In that frame, the core questions become very practical.

  • Do you have continuous visibility into your key systems, identities, and data stores?

  • Can you notice small deviations from normal behavior, not just known bad signatures?

  • Can you tie that insight to quick, repeatable action without burning out your team?

AI is not the solution by itself, but it is a powerful way to answer those questions at the scale modern environments demand.

What an AI-driven cybersecurity posture looks like

AI helps you move from a simple yes or no view of threats toward a richer, behavior-based picture. On the detection side, models can watch identity activity, endpoint telemetry, and network flows and learn what looks normal for your environment. Instead of only blocking a known malicious file, they can raise a flag when an account logs in from an unusual location at an unusual time, pivots to a system it has never touched before, and then begins moving large volumes of data. Every single event might be easy to overlook. The combined pattern is interesting.
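
Here is what that kind of behavioral baseline can look like in a minimal sketch, using scikit-learn's isolation forest. The features and numbers are illustrative, not real telemetry; a real pipeline would derive them from identity-provider and endpoint logs:

```python
# Minimal sketch: learning "normal" login behavior with an
# isolation forest. Features and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, km_from_usual_location, new_system (0/1),
#            mb_transferred]
normal_logins = np.array([
    [9, 2, 0, 15], [10, 0, 0, 8], [14, 5, 0, 22],
    [11, 1, 0, 12], [16, 3, 0, 18], [9, 0, 0, 10],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

# A 3 a.m. login, 4,000 km away, first touch of a new system, and a
# large outbound transfer: each feature alone might be explainable,
# but together they score as anomalous.
suspicious = np.array([[3, 4000, 1, 900]])
print(model.predict(suspicious))        # [-1] -> flagged as anomaly
print(model.score_samples(suspicious))  # lower score = more unusual
```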

On the exposure side, AI-supported tools can map your real attack surface. They can scan public cloud accounts, internet-facing services, and internal networks to find forgotten test systems, misconfigured storage, and exposed admin panels. They can group these findings into practical risk stories instead of raw lists. This is particularly important as shadow AI grows inside organizations, with teams spinning up their own models and tools without central oversight, a trend that IBM calls out in its more recent Cost of a Data Breach work as a serious risk area. 
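
The inventory half of that work can start very simply. Here is a minimal sketch that checks hosts you own for commonly exposed services; the addresses and port list are placeholders, and you should only ever scan infrastructure you are authorized to test. The AI layer sits on top, grouping and prioritizing findings like these:

```python
# Minimal sketch: checking your own hosts for commonly exposed
# services. Hosts and ports are placeholders; only scan
# infrastructure you are authorized to test.
import socket

RISKY_PORTS = {
    22: "SSH exposed to the internet",
    3389: "RDP exposed to the internet",
    9200: "Elasticsearch often unauthenticated",
    5432: "PostgreSQL reachable externally",
}

def scan_host(host: str, timeout: float = 1.0) -> list[str]:
    findings = []
    for port, risk in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means it accepted
                findings.append(f"{host}:{port} open - {risk}")
    return findings

for host in ["203.0.113.10", "203.0.113.11"]:  # documentation-range IPs
    for finding in scan_host(host):
        print(finding)
```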

On the response side, AI can help you act faster and more consistently. Some security operations centers already use AI-supported systems to recommend containment steps in real time and to summarize long investigation timelines for human analysts. The United States Cybersecurity and Infrastructure Security Agency describes several such uses in its artificial intelligence resources, showing how AI can help detect unusual network activity and analyze large streams of threat data across federal systems.
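
The summarization piece is easy to picture. Below is a deliberately simple, rule-based sketch that collapses a long event timeline into a per-stage summary; an AI-assisted tool would do this with a language model and much messier data, but the input and output shapes are similar:

```python
# Minimal sketch: collapsing a long event timeline into a short
# analyst-facing summary. Stages and events are illustrative.
from collections import defaultdict
from datetime import datetime

events = [
    ("2025-06-01T03:12", "initial_access", "login from unusual ASN"),
    ("2025-06-01T03:40", "lateral_movement", "SMB session to FILESRV01"),
    ("2025-06-01T04:02", "lateral_movement", "RDP to DBHOST03"),
    ("2025-06-01T05:15", "exfiltration", "800 MB to unknown endpoint"),
]

by_stage = defaultdict(list)
for ts, stage, detail in events:
    by_stage[stage].append((datetime.fromisoformat(ts), detail))

for stage, items in by_stage.items():
    start, end = min(t for t, _ in items), max(t for t, _ in items)
    print(f"{stage}: {len(items)} event(s), {start:%H:%M}-{end:%H:%M}, "
          f"e.g. {items[0][1]}")
```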

None of this removes the need for human judgment. Instead, AI becomes a force multiplier. It takes over the constant watching, the pattern spotting, and part of the early triage, so that human defenders can spend more time on deep investigation and on hard design questions, such as identity strategy and segmentation.

How to start moving in this direction

If you are responsible for security, all of this can sound large and abstract. The good news is that the shift from reactive to proactive usually begins with a few grounded steps rather than a giant transformation.

The first step is to get your data streams in order. AI is only as useful as the signals it can see. If your identity provider, endpoint tools, network controls, and cloud platforms all send logs into separate silos, every model will have blind spots and attackers will have hiding places. Investing in a central view of your most important telemetry is rarely glamorous, but it is the foundation that makes meaningful AI support possible.
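
In code, the goal is a single event schema that every source feeds into. This minimal sketch uses invented field names and two hypothetical sources, but it shows the shape of the work:

```python
# Minimal sketch: normalizing logs from different sources into one
# event schema so downstream models see a single stream. Field
# names and source formats are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: str
    source: str      # "idp", "endpoint", "cloud", ...
    actor: str       # user or service identity
    action: str
    target: str

def from_idp(raw: dict) -> Event:
    # e.g. an identity-provider sign-in record
    return Event(raw["time"], "idp", raw["user"], "login", raw["app"])

def from_cloud(raw: dict) -> Event:
    # e.g. a cloud audit-log entry
    return Event(raw["eventTime"], "cloud", raw["principal"],
                 raw["eventName"], raw["resource"])

stream = [
    from_idp({"time": "2025-06-01T09:00", "user": "alice",
              "app": "vpn"}),
    from_cloud({"eventTime": "2025-06-01T09:05", "principal": "alice",
                "eventName": "GetObject",
                "resource": "s3://finance-exports"}),
]
for event in stream:
    print(event)  # one schema, regardless of where the log came from
```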

The second step is to pick specific use cases rather than trying to sprinkle AI everywhere. Many teams start with behavior analytics for user accounts, anomaly detection in cloud environments, or smarter email and phishing detection. The aim is to choose areas where you already know you have risk and where pattern recognition across large data sets can clearly help.
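
For the email and phishing case, even a toy version shows the pattern-recognition idea. The sketch below trains a classifier on four invented messages; a real system would need thousands of labeled examples plus header and URL features, not just body text:

```python
# Minimal sketch: a text classifier as a starting point for smarter
# phishing detection. The training examples are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice is attached, let me know if you have questions",
    "Team lunch moved to Thursday at noon",
    "Urgent: verify your password now or lose account access",
    "Your CEO needs gift cards purchased immediately, reply fast",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

test = ["Please confirm your password immediately to avoid suspension"]
print(model.predict(test))        # likely [1] on this toy data
print(model.predict_proba(test))  # class probabilities
```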

The third step is to pair every new AI-supported tool with an explicit set of guardrails. That includes defining what the model is allowed to do on its own, what must always involve a human, and how you will measure whether the system stays accurate and useful over time. Here, the thinking in the NIST AI Risk Management Framework and the guidance from agencies like CISA can save you from reinventing everything yourself.
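
Guardrails work best when the allowed and escalate lists are explicit artifacts rather than tribal knowledge. Here is a minimal sketch of that split; the action names and tiers are invented for illustration:

```python
# Minimal sketch: an explicit guardrail between an AI system's
# recommendation and real action. Action names and tiers are
# illustrative; the point is that the split is written down in
# code, not left implicit.
AUTONOMOUS_ACTIONS = {"enrich_alert", "quarantine_email", "open_ticket"}
HUMAN_APPROVAL_ACTIONS = {"disable_account", "isolate_host",
                          "revoke_tokens"}

def execute(action: str, target: str,
            approved_by: str | None = None) -> str:
    if action in AUTONOMOUS_ACTIONS:
        return f"executed {action} on {target} automatically"
    if action in HUMAN_APPROVAL_ACTIONS:
        if approved_by is None:
            return f"queued {action} on {target} for human approval"
        return f"executed {action} on {target}, approved by {approved_by}"
    return f"rejected unknown action {action}"  # default deny

print(execute("quarantine_email", "msg-123"))
print(execute("isolate_host", "LAPTOP-42"))  # waits for a human
print(execute("isolate_host", "LAPTOP-42", approved_by="a.analyst"))
```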

Why proactive AI security cannot wait

Cyber attacks are turning into something closer to a constant background condition than a rare emergency, and attackers are very happy to let artificial intelligence do a lot of the heavy lifting for them. The cost is rising, the entry points are multiplying, and the tooling on the attacker side is getting smarter every year. A reactive model that waits for loud alerts and then scrambles is simply not built for that world.

A proactive AI-driven posture is less about chasing a flashy trend and more about doing the quiet, unglamorous work of getting your data in order, adding behavior-based insight, and putting clear guardrails around new AI systems so that they help your defenders instead of surprising them. The gap between attackers and defenders is real, but it is not fixed, and the choices you make now about how you use AI in your security stack will decide which side is moving faster over the next few years.

Mirgen Hoxha is the CEO of Motomtech, where he leads teams that design and build AI-driven software products for clients in North America and Europe. He works at the intersection of product strategy and applied machine learning, helping organizations turn real-world problems into practical AI solutions.