AI in Phishing: Do Attackers or Defenders Benefit More? - Unite.AI
AI in Phishing: Do Attackers or Defenders Benefit More?


As cybercrime has grown, the cybersecurity industry has had to embrace cutting-edge technology to keep up. Artificial intelligence (AI) has quickly become one of the most helpful tools in stopping cyberattacks, but attackers can use it, too. Recent phishing trends are an excellent example of both sides of the issue.

Phishing is the most common type of cybercrime today by far. As more companies have become aware of this growing threat, more have implemented AI tools to stop it. However, cybercriminals are also ramping up their usage of AI in phishing. Here’s a closer look at how both sides use this technology and who’s benefiting from it more.

How AI Helps Fight Phishing

Phishing attacks take advantage of people’s natural tendency toward curiosity and fear. Because this social engineering is so effective, one of the best ways to protect against it is to ensure you don’t see it in the first place. That’s where AI comes in.

Anti-phishing AI tools typically take the form of advanced email filters. These programs scan incoming messages for signs of phishing attempts and automatically route suspicious emails to the junk folder. Some newer solutions reportedly spot phishing emails with 99.9% accuracy, in part by generating variations of real scam messages and training on them so they learn to recognize new forms of the same attack.
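As a rough illustration of the filtering idea only (not any vendor's actual system), a tiny bag-of-words Naive Bayes classifier can separate phishing-style messages from routine mail. All messages, labels, and thresholds below are invented for the sketch; production filters use far larger training sets and many more signals:

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs; returns per-label word counts."""
    counts = {"phish": Counter(), "legit": Counter()}
    totals = {"phish": 0, "legit": 0}
    for text, label in examples:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the label with the higher Laplace-smoothed log-likelihood."""
    vocab = set(counts["phish"]) | set(counts["legit"])
    best, best_lp = None, float("-inf")
    for label in counts:
        lp = 0.0
        for tok in tokenize(text):
            # Add-one smoothing so unseen words don't zero out the score.
            lp += math.log((counts[label][tok] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical training examples standing in for a real labeled corpus.
training = [
    ("urgent verify your account password now", "phish"),
    ("click this link to claim your prize", "phish"),
    ("meeting notes attached for tomorrow", "legit"),
    ("quarterly report draft for review", "legit"),
]
counts, totals = train(training)
print(score("urgent click to verify your password", counts, totals))  # -> phish
```

The "generate variations to train on" idea maps to augmenting the `training` list with paraphrased scam messages, which gives the model more examples of each attack family.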

As security researchers detect more phishing emails, they can provide these models with more data, making them even more accurate. AI’s continuous learning capabilities also help refine models to reduce false positives.

AI can also help stop phishing attacks even after someone clicks a malicious link. Automated monitoring software can establish a baseline of each user's normal behavior and flag the anomalies that typically arise when someone else uses the account. It can then lock down the profile and alert security teams before the intruder does too much damage.
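A minimal sketch of that baseline-and-flag approach, assuming a single numeric signal such as logins per day (real systems model many signals at once; the history values and the three-sigma threshold here are illustrative choices, not a standard):

```python
import statistics

def build_baseline(values):
    """Mean and sample standard deviation of a user's historical metric."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from baseline."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

history = [4, 5, 6, 5, 4, 6, 5]  # hypothetical daily login counts
mean, stdev = build_baseline(history)
print(is_anomalous(40, mean, stdev))  # a sudden burst of 40 logins -> True
print(is_anomalous(5, mean, stdev))   # typical activity -> False
```

When `is_anomalous` fires, the monitoring layer would lock the account and page the security team rather than silently logging the event.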

How Attackers Use AI in Phishing

AI’s potential for stopping phishing attacks is impressive, but it’s also a powerful tool for generating phishing emails. As generative AI like ChatGPT has become more accessible, it’s making phishing attacks more effective.

Spear phishing — which uses personal details to craft user-specific messages — is one of the most effective types of phishing. An email that gets all your personal information right will naturally be a lot more convincing. However, these messages have traditionally been difficult and time-consuming to create, especially at scale. That’s no longer the case with generative AI.

AI can generate massive amounts of tailored phishing messages in a fraction of the time it would take a human. It’s also better than people at writing convincing fakes. In a 2021 study, AI-generated phishing emails saw significantly higher click rates than those humans wrote — and that was before ChatGPT’s release.

Just as marketers use AI to customize their customer outreach campaigns, cybercriminals can use it to create effective, user-specific phishing messages. As generative AI improves, these fakes will only become more convincing.

Attackers Remain in the Lead Thanks to Human Weaknesses

With attackers and defenders taking advantage of AI, which side has seen the most prominent benefits? If you look at recent cybercrime trends, you’ll see cybercriminals have thrived despite more sophisticated protections.

Business email compromise attacks rose 81% in the second half of 2022 and employees opened 28% of these messages. That’s part of a longer-term 175% increase over the past two years, suggesting phishing is growing faster than ever. These attacks are effective, too, stealing $17,700 a minute, which is probably why they’re behind 91% of cyberattacks.

Why has phishing grown so much despite AI improving anti-phishing protections? It likely comes down to the human element. Employees must actually use these tools for them to be effective. Beyond that, workers could engage in other unsafe activities that make them prone to phishing attempts, like logging into their work accounts on unsanctioned, unprotected personal devices.

The same research also found workers report just 2.1% of attacks. This lack of communication makes it difficult to see where and how security measures must improve.

How to Protect Against Rising Phishing Attacks

Given this alarming trend, businesses and individual users should take steps to stay safe. Implementing AI anti-phishing tools is a good start, but it can’t be your only measure. Just 7% of security teams are neither using nor planning to use AI, yet phishing still dominates, so companies must address the human element, too.

Because humans are the weakest link against phishing attacks, they should be the focus of mitigation efforts. Organizations should make security best practices a more prominent part of employee onboarding and ongoing training. These programs should cover how to spot phishing attacks and why they matter, and include simulations to test knowledge retention after training.

Using stronger identity and access management tools is also important, as these help contain attackers who do get into an account. Even seasoned employees can make mistakes, so you need a way to spot and lock down compromised accounts before they cause extensive damage.

AI is a Powerful Tool for Both Good and Bad

AI is one of the most disruptive technologies in recent history. Whether that’s good or bad depends on its usage.

It’s vital to recognize that AI can help cybercriminals just as much as cybersecurity professionals, if not more. When organizations acknowledge these risks, they can take more effective steps to address rising phishing attacks.

Zac Amos is a tech writer who focuses on artificial intelligence. He is also the Features Editor at ReHack, where you can read more of his work.