
Cybercriminals Are Using Fake AI Software to Distribute Malware


Artificial intelligence (AI) is soaring in popularity, and cybercriminals have taken note. As more people search for the latest and greatest AI tools to boost their productivity, some criminals have started slipping fake AI software into the market. These programs look like legitimate AI solutions, but installing them leaves users with a malware infection.

How Do AI Malware Scams Work?

Over the past few months, several security researchers have discovered malware masquerading as AI on social media. Cybercriminals have been using seemingly real AI tools to spread ransomware, spyware and other malware since at least mid-2024, and these threats are still active despite social platforms’ efforts.

These scams come in many forms, but they all follow the same premise. Cybercriminals post ads for AI tools on social sites — sometimes posing as existing apps like ChatGPT and sometimes pretending to be a new solution. In all cases, clicking the link to try the software installs malware on the user’s device.

Many of these fake tools look like generative AI platforms and even function like them until the final step. Users enter a prompt or upload files, but the file the program returns is malicious software rather than actual AI-generated output.

Fake AI Malware Scam Examples

Versatility is one reason these schemes have managed to remain active despite companies like Meta searching for and removing them. The criminals behind them repeatedly change which AI tools they impersonate and rotate through different domains. Despite this variety, many rely on one of a few popular malware strains and follow similar patterns.

Noodlophile Stealer

One of the most prominent is a strain Morphisec researchers discovered called Noodlophile Stealer. It starts with an ad for a generative AI service, sometimes even using a verified account. When you click the ad, which often promises a free trial, you are taken to a legitimate-looking AI generation page. 

After “processing” your request, the site gives you a file to download, which installs the Noodlophile malware. Noodlophile then steals your browser cookies, saved credentials and other information before sending it through encrypted messaging to the attacker. Sometimes, it will install additional malware or create a backdoor for the attackers to do so later.

CyberLock

CyberLock ransomware is another common component of these fake AI scams. It’s one of three threats uncovered by Talos researchers following this same scheme setup. In this case, the attackers spoof an existing, legitimate website, like the AI-driven monetization platform NovaLeads.

The fake sites look remarkably similar to the real versions, but clicking the download button installs the CyberLock ransomware. Once you run the downloaded file, CyberLock activates, encrypts your files and demands $50,000 in exchange for not exposing the sensitive data. However, Talos researchers noted that they found no functionality that would actually let CyberLock leak that data.

Lucky_Gh0$t

Another ransomware strain, Lucky_Gh0$t, poses as ChatGPT. Because ChatGPT is one of the world’s most downloaded apps, it’s fairly easy to convince people to install something claiming to be the popular AI chatbot, even if it’s not from a legitimate source. As with CyberLock, clicking the executable posing as the AI program sets the ransomware into motion.

Lucky_Gh0$t scours your device for files less than 1.2 gigabytes in size and encrypts them, demanding a ransom to get them back. In some cases, it deletes larger files, destroying data for destruction’s sake.

Numero

While security professionals knew of Lucky_Gh0$t and CyberLock before they appeared in fake AI tool scams, some of these schemes use a new malware strain — Numero. Numero pretends to be InVideo AI, a real video-generating AI app. It’s not ransomware like CyberLock or Lucky_Gh0$t, but it is destructive.

Numero manipulates your open windows in an infinite loop, eventually rendering your device unusable. It also employs several clever tricks to avoid detection, so it may not be noticeable until it’s too late.

How to Stay Safe from Fake AI Scams

Across all these examples, the way to stay safe from fake AI software scams remains the same. Keep these five tips in mind before you click any links or ads about a new AI tool.

1. Only Download AI Tools from Trusted Sources

The most fundamental step in avoiding fake AI scams is using only known, trusted sources to download any software. Never download anything directly from an ad, and never use an app store or software distributor outside of first-party, verified options. 

While security professionals have previously found malware on the Google Play Store, first-party app stores generally go to great lengths to keep their listings safe. They’re certainly safer than unknown sites. When uncertain about a site, go to the app developer’s official website to download the software from the source, making sure to avoid any shortcuts through ads or external links.
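
If the developer publishes a checksum (such as a SHA-256 hash) for its installer, comparing that value against your download adds one more layer of assurance. The short Python sketch below illustrates the idea; the script name and arguments are hypothetical, and the expected value would come from the vendor’s official site.

    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a downloaded file in streaming chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage (hypothetical): python verify_download.py <installer-path> <published-sha256>
        installer, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of(installer)
        if actual == expected:
            print("Checksum matches the published value.")
        else:
            print("Checksum mismatch -- do not run this file.")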

2. Always Double-Check URLs and File Extensions

Remember to inspect all URLs closely before clicking them. Mimicking legitimate addresses is a classic phishing technique, and spoofed URLs typically contain slight variations or misspellings. For example, some versions of the CyberLock scam used the domain “novaleadsai” instead of the real site, “novaleads.app.”
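
If you want a systematic way to apply this check, one option is to keep a short allow-list of domains you trust and compare each link’s hostname against it before clicking. The Python sketch below shows the idea; the allow-list entries and test links are purely illustrative.

    from urllib.parse import urlparse

    # Illustrative allow-list -- replace with the official domains of the tools you actually use.
    TRUSTED_DOMAINS = {"openai.com", "chatgpt.com", "novaleads.app"}

    def is_trusted(url: str) -> bool:
        """Return True only if the link's hostname is a trusted domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

    for link in ["https://novaleads.app/pricing",
                 "https://novaleadsai.example.com/free-trial",  # lookalike-style domain (illustrative)
                 "https://chat.openai.com"]:
        print(link, "->", "trusted" if is_trusted(link) else "NOT trusted")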

You should also pay attention to file extensions. Many malware strains come as .exe files but may have misleading names like “document.pdf.exe” to make them look like something else. It’s important to look closely at these names because they may be the only discernible difference between something suspicious and the real thing.
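
For a quick way to spot this trick, one approach is to check whether a file’s real (final) extension is executable while an earlier, decoy extension dresses it up as a document or video. The Python sketch below illustrates that check with a few example filenames.

    from pathlib import Path

    # File types Windows will execute -- an "AI-generated video" should never end in one of these.
    EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".msi", ".js"}

    def looks_deceptive(filename: str) -> bool:
        """Flag names whose final extension is executable but which carry an
        extra, decoy extension earlier in the name (e.g. 'document.pdf.exe')."""
        suffixes = Path(filename).suffixes  # e.g. ['.pdf', '.exe']
        return len(suffixes) > 1 and suffixes[-1].lower() in EXECUTABLE_EXTENSIONS

    for name in ["report.pdf", "document.pdf.exe", "clip.mp4", "clip.mp4.scr"]:
        print(name, "->", "suspicious" if looks_deceptive(name) else "looks normal")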

3. Beware of Anything Too Good to Be True

Be wary of any promises that seem like unusually good deals, such as a free app offering industry-leading AI performance or a year-long free trial for something that usually costs a lot of money. Cybercriminals often use these lures to entice victims.

Similarly, watch out for messages with unusual urgency. You may get an email claiming to be from an AI service you actually use, telling you to follow a link to reset your password or change your payment information. Double-check the email address and log into your profile through the official site — not the provided link — to verify these messages before going through with anything.

4. Avoid Downloads from Social Media Ads

Given the number of these scams and how often they change, avoid following social media ad links. When you see something that interests you, look up the business and find its legitimate website to learn more.

While many social media ads are harmless, fraudulent ones can be nearly indistinguishable from legitimate ones. Consequently, it’s best to play it safe and keep clicks to a minimum.

5. Stay Up to Date on Cybersecurity News

Finally, you can stay safe by keeping up with security news, especially regarding AI and social media scams. Cybercriminals often change tactics, so staying informed helps you know which warning signs to watch for.

Schemes like fake AI ads will not go away anytime soon. Scammers stole or extorted $1.03 trillion in 2024 alone. Cybercrime is far too profitable to abandon, so users must stay on their toes and watch out for suspicious signs.

AI’s Popularity Has Big Cybersecurity Implications

AI has an unusual relationship with cybersecurity. On the one hand, it can make systems more secure by improving security software and reacting faster than any human could. On the other hand, cybercriminals can exploit it, too, whether by using AI directly or simply by trading on its popularity to entice clicks.

Recognizing these tactics is the first step toward staying safe. Follow the tips above to avoid falling for fake AI ads and keep your device malware-free.