Combatting AI-Driven Romance Scams: How Platforms Can Protect Users

Image: a hooded fraudster at a laptop manipulates a glitching deepfake avatar of a man, which floats toward a smiling woman browsing a dating app on her phone.

Each year, millions of people flock to social media and dating apps in search of connection. Unfortunately, fraudsters are right behind them. The rise of AI technologies such as deepfakes and advanced chatbots has revolutionized romance scams, making them more believable and widespread than ever before. These scams are plaguing the digital landscape, targeting vulnerable individuals seeking companionship or love.

The Scope of the Problem

Scams of all types are growing in scale and sophistication with the help of new technologies like AI. According to the Global Anti-Scam Alliance, 57% of adults worldwide experienced a scam in 2025 and 23% lost money. In the U.S., 70% of adults experienced a scam, and the average American now encounters 377 scam attempts per year. In a single year, Americans lost an estimated $64.8 billion to all types of scams, averaging around $1,087 per victim.

Romance scams can be particularly costly to consumers across the globe. In the UK, for example, Santander reported that customers lost over £6.8 million to romance scams in the past year alone. Figures like these are a stark reminder of how costly these scams can be to individuals and society, particularly when AI tools such as deepfakes and chatbots are used to deceive unsuspecting victims.

The Role of AI in Modern Romance Scams

Recent research found that 69% of global consumers say AI-powered fraud now poses a greater threat to personal security than traditional forms of identity theft. This statistic highlights the growing anxiety around AI-driven scams, including those in the romance sector, where scammers use sophisticated techniques to deceive victims. Scammers now deploy generative AI to craft convincing narratives, generate lifelike deepfake images, and create interactions that feel intimate and personal. These tools allow fraudsters to scale their operations, targeting thousands of potential victims at once and exploiting emotional vulnerabilities along the way. The integration of advanced AI in scam techniques has made these crimes harder to detect and even harder to avoid.

What Can Platforms Do to Fight Back?

The good news is that platforms have powerful tools at their disposal to help curb these scams and protect users, and users are eager to have these safeguards. Here are several steps platforms can take to mitigate the risks:

  • Implement Robust Identity Verification
    While many platforms are beginning to use stronger identity verification measures, there is still work to be done. Implementing systems that require users to submit government-issued IDs alongside selfies can prevent the creation of fake profiles. This helps ensure that users are who they say they are, reducing the chances of a scammer operating undetected.
  • Use AI for Real-Time Risk Detection
    Platforms can leverage AI to detect suspicious activity in real time. By monitoring patterns such as multiple accounts from the same IP, rapid messaging to many different users, or other behaviors typical of scammers, platforms can proactively identify and stop scammers before they cause harm. This data-driven approach allows for immediate responses, potentially preventing significant losses for users.
  • Strengthen Reporting and Support Pathways
    Even when users spot fraud, many don’t know where to turn for help. Studies have shown that only a small percentage of victims report the crime to law enforcement or their bank. To address this, platforms must simplify their reporting mechanisms, making it easy for users to alert them to suspicious activity. Additionally, users should be given clear support channels to guide them through the reporting process and help them take appropriate action.
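To make the real-time detection idea concrete, here is a minimal sketch of the kind of rule-based signal monitoring described above, scoring accounts on two of the behaviors mentioned: many registrations from one IP address and rapid messaging to many users. All class names, thresholds, and field names are illustrative assumptions, not any particular platform's implementation; production systems typically combine many more signals with machine-learned models.

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds -- a real platform would tune these
# against labeled fraud data, not hard-code them.
MAX_ACCOUNTS_PER_IP = 3    # distinct accounts from one IP before flagging
MAX_MSGS_PER_WINDOW = 20   # outbound messages per window before flagging
WINDOW_SECONDS = 60        # sliding window for message velocity

class RiskMonitor:
    """Hypothetical sketch of rule-based, real-time scam-signal detection."""

    def __init__(self):
        self.accounts_by_ip = defaultdict(set)   # ip -> set of account ids
        self.message_times = defaultdict(deque)  # account id -> send timestamps

    def register_account(self, account_id, ip):
        # Track which accounts were created from which IP address.
        self.accounts_by_ip[ip].add(account_id)

    def record_message(self, account_id, now=None):
        # Record an outbound message and trim sends outside the window.
        now = time.time() if now is None else now
        times = self.message_times[account_id]
        times.append(now)
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()

    def risk_flags(self, account_id, ip):
        # Return the list of triggered risk signals for this account.
        flags = []
        if len(self.accounts_by_ip[ip]) > MAX_ACCOUNTS_PER_IP:
            flags.append("many_accounts_same_ip")
        if len(self.message_times[account_id]) > MAX_MSGS_PER_WINDOW:
            flags.append("high_message_velocity")
        return flags
```

A flagged account would then be routed to step-up verification or manual review rather than blocked outright, since each signal alone can have benign explanations (shared household IPs, chatty but legitimate users).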

Platform Responsibility & Building a Safer Digital Environment

As fraud techniques evolve, platforms must go beyond relying solely on user education. The responsibility to protect users cannot rest on individuals alone. Platforms must implement proactive strategies to combat AI-driven scams. With tools like advanced identity verification, AI-enhanced risk detection, and accessible support systems, platforms can provide users with the necessary safeguards to stay safe online.

Scammers are becoming more sophisticated, leveraging tools like deepfakes and AI chatbots to exploit emotional vulnerabilities. However, by combining advanced technology with user education, platforms can disrupt these scams and create a safer online environment. Embracing real-time AI monitoring, robust identity verification, and user-friendly reporting systems will help prevent future scams, strengthen user trust, and foster safer digital communities.

Reinhard is the SVP of Product and Technology at Jumio, where he drives the strategy and execution for Jumio’s ID Verification and Biometrics portfolio. He leads cross-functional teams focused on building secure, scalable, AI-driven solutions that help businesses drive conversion and prevent fraud at scale.