

Are Businesses Ready for the Next Wave of AI-Powered Cyberattacks?


Analyzing current trends allows experts to predict how cybercriminals will leverage artificial intelligence in the future. With this information, they can identify the biggest emerging threats, determine whether businesses are prepared and perhaps even point toward solutions.

The State of AI Threats in Recent Years

Although AI technology is relatively new, it has already become a prominent tool for hackers. The following trends suggest AI cyberattacks are on the rise.

1. Model Tampering

By targeting large language models (LLMs) directly, threat actors can manipulate model behavior, decrease output accuracy or expose personally identifiable training data. Data poisoning and prompt injection are common attack techniques; the sketch below shows how little access a poisoning attack needs.
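
To make data poisoning concrete, here is a minimal sketch using scikit-learn on a synthetic dataset (the data and numbers are toy values, not drawn from any real incident). An attacker who can corrupt even part of a training set degrades the resulting model without ever touching its code:

```python
# Toy label-flipping data poisoning: corrupt a fraction of training labels,
# retrain, and watch test accuracy fall. Synthetic data stands in for a
# real training corpus.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poisoned_accuracy(flip_fraction: float) -> float:
    """Flip the labels of a random fraction of training rows, then retrain."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} flipped -> test accuracy {poisoned_accuracy(frac):.2f}")
```

The same principle scales up: a model trained on scraped web data inherits whatever poisoned samples the scrape picked up.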

Some attacks are led by threat actors seeking to cause chaos or steal sensitive information. Others are launched by disgruntled artists trying to protect their artwork from AI scraping. Either way, the company and its end users are adversely affected.

2. Impersonation Attacks

In 2024, a Ferrari executive received several WhatsApp messages that appeared to come from CEO Benedetto Vigna. The sender spoke of an impending acquisition, urged the executive to sign a nondisclosure agreement and even called to discuss funding. There was one problem: it wasn’t Vigna.

The deepfake was nearly perfect, mimicking Vigna’s Southern Italian accent exceptionally well. However, slight inconsistencies in the voice tipped off the executive to the scam. The employee asked about the title of a book Vigna had recommended days earlier, a question only the real CEO would know the answer to. The scammer promptly hung up. 

AI can clone a person’s voice, browsing behavior, writing style and likeness. As this technology advances, identifying deepfakes becomes increasingly difficult. Scammers often manufacture urgency so the target doesn’t pause to question minor discrepancies.

3. AI Phishing

In the past, a person could identify a phishing email by looking for bad grammar, suspicious links, generic greetings and out-of-place requests. Now, with natural language processing technology, hackers can craft believable messages with flawless grammar.

Researchers found that fully automated AI-enabled spear phishing emails have a 54% click-through rate, which is on par with phishing emails written by humans. Since these scams are more convincing, they are becoming increasingly common. Studies have found that over 80% of phishing emails show evidence of AI involvement. 

4. Social Engineering

Social engineering involves manipulating someone into taking action or divulging information. AI enables hackers to respond faster and craft more convincing messages. An off-the-shelf natural language processing model can run a semantic analysis to gauge the recipient’s emotional state, letting the attacker tailor the pitch until the target gives in, as the sketch below shows.
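
The bar is low: with a general-purpose library such as Hugging Face’s transformers (an assumption here, not a tool the reporting names), gauging a target’s emotional state from a reply takes only a few lines. A minimal sketch with an invented message:

```python
# Off-the-shelf semantic analysis: classify the sentiment of a victim's
# reply with the library's default sentiment-analysis pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reply = "I'm so sorry, I've been swamped all week and missed your email."
result = classifier(reply)[0]

# A social engineering script could branch on this signal, e.g. pressing
# harder when the target already sounds stressed or apologetic.
print(result)  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
```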

In addition to enhancing social engineering techniques, machine learning technology lowers traditional entry barriers, enabling novices to carry out sophisticated campaigns. If anyone can become a cybercriminal, anyone can become a target. 

The Next Wave of Data-Driven AI Attacks

In early 2026, AI attacks are expected to remain at a low maturity level. However, they are projected to advance rapidly as the year progresses, allowing cybercriminals to enter the optimization, deployment and scaling stages and, eventually, to launch fully automated campaigns. Confirmed examples of AI cyberattacks will not be rare for long.

Polymorphic malware changes its code each time it replicates to avoid detection, and AI is taking the technique further. Attackers can deliver the payload through AI ecosystems, call on LLMs at runtime to generate commands or embed the virus directly into the LLM. The Google Threat Intelligence Group observed adversaries deploying AI-enabled polymorphic malware for the first time in 2025.

Two such malware families are PROMPTFLUX and PROMPTSTEAL. During execution, they query LLMs to request VBScript obfuscation and evasion techniques, rewriting their own code on demand to evade signature-based detection, as the harmless demo below illustrates.
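
Why on-demand rewriting defeats signature matching: blocklists often key on a file’s cryptographic hash, and any change to the source text produces an unrelated digest. A harmless demonstration with benign strings (no actual malware involved):

```python
# Two functionally identical snippets hash to unrelated SHA-256 digests,
# so a blocklist of known-bad hashes never fires on the rewritten variant.
import hashlib

variant_a = b'print("hello")'
variant_b = b'x = "hello"\nprint(x)'  # same behavior, rewritten source

for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
    print(name, hashlib.sha256(code).hexdigest())
```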

Evidence suggests these threats are still in the testing phase: some incomplete features are commented out, and the application programming interface (API) calls are limited. These fledgling AI malware families may still be in development, but their very existence represents a major step toward autonomous, adaptive attack techniques.

NYU Tandon research shows LLMs can already execute ransomware attacks autonomously, an approach dubbed Ransomware 3.0. They can conduct reconnaissance, generate payloads and personalize extortion without human involvement, requiring only natural language prompts embedded in the binary. The model yields polymorphic variants that adapt to the execution environment by generating the malicious code dynamically at runtime.

Are Businesses Prepared for AI Attacks?

Despite billions in cybersecurity spending, private businesses continue to struggle to keep pace with the evolving threat landscape. Machine learning technology could render existing detection and response software obsolete, further complicating defense. It doesn’t help that many organizations fail to meet basic security standards.

The 2024 DIB Cybersecurity Maturity Report surveyed 400 information technology professionals in the United States defense industrial base (DIB). Over half of the respondents reported being years away from Cybersecurity Maturity Model Certification (CMMC) 2.0 compliance, despite the equivalent NIST 800-171 compliance having been outlined in Department of Defense (DoD) contracts since 2016. Many rate their security posture as much better than it actually is. 

The new CMMC requirements went into effect on November 10, 2025. Moving forward, all DoD contracts will require some level of CMMC compliance as a condition of contract award. The new rules are intended to strengthen DIB cybersecurity, but will they be effective in the age of AI?

Is Defensive AI the Answer?

Fighting fire with fire may be the only way to combat the inevitable surge in AI attacks. With defensive AI, organizations can dynamically respond to threats in real time. However, this approach comes with its own security flaws — securing the model against tampering will require continuous oversight and auditing. 
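
What “dynamically respond” can look like in practice: a minimal sketch of anomaly-based detection with scikit-learn’s IsolationForest, an unsupervised model that flags telemetry deviating from a learned baseline instead of matching known signatures. The feature columns and numbers are hypothetical:

```python
# Anomaly-based detection sketch: learn a baseline from normal telemetry,
# then flag events that deviate from it. Feature columns (invented):
# [login_hour, megabytes_uploaded, failed_logins].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

baseline = np.column_stack([
    rng.normal(10, 2, 5000),   # logins cluster around mid-morning
    rng.normal(5, 2, 5000),    # uploads are modest
    rng.poisson(0.2, 5000),    # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two new events: one routine, one resembling a 3 a.m. bulk exfiltration
# after repeated failed logins.
events = np.array([[11.0, 6.0, 0.0], [3.0, 480.0, 7.0]])
print(detector.predict(events))  # 1 = looks normal, -1 = flagged anomaly
```

The tampering risk described above applies here, too: an attacker who can poison the baseline data shifts what the detector considers normal.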

According to Harvard Business Review, conventional solutions leave businesses vulnerable to AI cyberattacks. To achieve cyber resilience, they must use machine learning technology to anticipate and automatically respond to threats. 

There is no simple answer to whether defensive AI is the solution to this problem. Should companies pour their resources into deploying unproven machine learning tools or expanding their information technology teams? It’s impossible to predict which investment will pay off in the long run. 

Large enterprises may see significant returns with automated cybersecurity, while small businesses might struggle to justify the cost. Conventional automation technology may be able to close the gap at a much lower price, but it won’t be able to respond to dynamic threats. 

Steve Durbin, CEO of the Information Security Forum, notes that AI adoption has significant benefits but also major drawbacks. For example, businesses often experience a surge in false-positive alerts, which wastes security teams’ time. Overreliance on AI can also make teams overconfident, resulting in security lapses.

Navigating the AI Threat Landscape

It is impossible to determine the exact extent of AI’s presence in the threat landscape, because attackers can use it offline to create malicious code or draft phishing emails rather than calling on it at runtime, leaving no trace of AI involvement in the attack itself. Lone cybercriminals and state-sponsored threat groups alike could be using it at scale.

Based on the available information, model tampering, AI phishing and polymorphic malware will be the biggest cyberthreats of 2026. Cybercriminals will likely continue using LLMs to generate, deliver and adapt malicious payloads, targeting high-value industries like finance as well as ordinary people.

Zac Amos is a tech writer who focuses on artificial intelligence. He is also the Features Editor at ReHack, where you can read more of his work.