How Banks Must Leverage Responsible AI to Tackle Financial Crime - Unite.AI

Thought Leaders

How Banks Must Leverage Responsible AI to Tackle Financial Crime


Fraud is certainly nothing new in the financial services sector, but recently there’s been an acceleration that’s worth analyzing in greater detail. As technology develops and evolves at a rapid pace, criminals have found even more routes to break through compliance barriers, leading to a technological arms race between those attempting to protect consumers and those looking to cause them harm. Fraudsters are combining emerging technologies with emotional manipulation to scam people out of thousands of dollars, leaving the onus firmly on banks to upgrade their defenses to effectively combat the evolving threat.

To tackle the growing fraud epidemic, banks themselves are starting to take advantage of new technology. Banks are sitting on a wealth of data that hasn't previously been used to its full potential, and by analyzing these vast data sets, AI can empower banks to spot criminal behavior before it even happens.

Increased fraud risks

It’s positive to see governments across the world take a proactive approach to AI, particularly in the US and across Europe. In April, the Biden administration announced a $140 million investment in artificial intelligence research and development – a strong step forward, no doubt. However, the scale of the fraud epidemic, and the role this new technology plays in facilitating criminal behavior, cannot be overstated – something I believe the government needs to have firmly on its radar.

Fraud cost consumers $8.8 billion in 2022, up 44% from 2021. This drastic increase can largely be attributed to increasingly accessible technology, including AI, that scammers are starting to exploit.

The Federal Trade Commission (FTC) noted that the most prevalent form of fraud reported is imposter scams, with losses of $2.6 billion reported last year. Imposter scams take multiple forms, ranging from criminals posing as government bodies like the IRS to “family members” claiming to be in trouble – both tactics designed to trick vulnerable consumers into willingly transferring money or assets.

In March this year, the FTC issued a further warning about criminals using existing audio clips to clone the voices of relatives through AI. The warning states, “Don’t trust the voice” – a stark reminder intended to stop consumers from unintentionally sending money to fraudsters.

The types of fraud employed by criminals are becoming increasingly varied and advanced, with romance scams continuing to be a key issue. Feedzai’s recent report, The Human Impact of Fraud and Financial Crime on Customer Trust in Banks, found that 42% of people in the US have fallen victim to a romance scam.

Generative AI, capable of producing text, images and other media in response to prompts, has empowered criminals to work en masse, finding new ways to trick consumers into handing over their money. ChatGPT has already been exploited by fraudsters to craft highly realistic messages that convince victims they are talking to someone they trust – and that’s just the tip of the iceberg.

As generative AI becomes more sophisticated, it’s going to become even more difficult for people to differentiate between what’s real and what’s not. Subsequently, it’s vital that banks act quickly to strengthen their defenses and protect their customer bases.

AI as a defensive tool

However, just as AI can be used as a criminal tool, it can also effectively protect consumers. AI can analyze vast amounts of data at speed, reaching intelligent decisions in the blink of an eye. At a time when compliance teams are hugely overworked, AI is helping to decide which transactions are fraudulent and which aren’t.

By embracing AI, some banks are building complete pictures of customers, enabling them to identify any unusual behavior rapidly. Behavioral data, such as transaction trends or the times people typically access their online banking, can all help to build a picture of a person’s usual “good” behavior.

This is particularly helpful when spotting account takeover fraud, a technique in which criminals pose as genuine customers and gain control of an account to make unauthorized payments. If a login comes from an unexpected time zone, or someone erratically attempts to access the account, the system will flag the behavior as suspicious and raise a suspicious activity report (SAR). AI can speed this process up by automatically generating and pre-filling these reports, saving compliance teams time and cost.
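In practice, real systems use machine-learned models over rich behavioral features, but the idea above can be illustrated with a minimal rule-based sketch. Everything here – the profile fields, scoring weights, and the `maybe_raise_sar` helper – is hypothetical, invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical behavioral profile built from a customer's login history.
PROFILE = {
    "usual_hours": set(range(7, 23)),   # customer is normally active 07:00-22:59 UTC
    "usual_countries": {"US"},
    "max_failed_attempts": 3,
}

def score_login(login_hour_utc, country, failed_attempts, profile=PROFILE):
    """Return a simple risk score plus the reasons behind it."""
    score, reasons = 0, []
    if login_hour_utc not in profile["usual_hours"]:
        score += 1
        reasons.append("login outside usual hours")
    if country not in profile["usual_countries"]:
        score += 2
        reasons.append("login from unusual country")
    if failed_attempts > profile["max_failed_attempts"]:
        score += 2
        reasons.append("erratic access attempts")
    return score, reasons

def maybe_raise_sar(score, reasons, threshold=3):
    """Auto-generate a pre-filled SAR stub once risk crosses the threshold."""
    if score < threshold:
        return None
    return {
        "report_type": "SAR",
        "risk_score": score,
        "reasons": reasons,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

# A 3 a.m. login from an unusual country with repeated failures scores high
# and produces a pre-filled report; a routine login produces nothing.
score, reasons = score_login(3, "RU", 5)
report = maybe_raise_sar(score, reasons)
```

The pre-filled dictionary stands in for the report a compliance analyst would otherwise draft by hand, which is where the time savings come from.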

Well-trained AI can also help reduce false positives, a huge burden for financial institutions. False positives occur when legitimate transactions are flagged as suspicious, which can lead to a customer’s transaction – or worse, their account – being blocked.

Mistakenly identifying a customer as a fraudster is one of the leading issues faced by banks. Feedzai research found that half of consumers would leave their bank if it stopped a legitimate transaction, even if it were to resolve it quickly. AI can help reduce this burden by building a better, single view of the customer that can work at speed to decipher if a transaction is legitimate.

However, it’s paramount that financial institutions adopt AI that is responsible and free of bias. AI is still a relatively new technology and, because it learns from existing behavioral data, it can pick up biased patterns and make incorrect decisions – something that, if not properly managed, can harm consumers and the institutions themselves.

Financial institutions have a responsibility to learn more about ethical and responsible AI and to align with technology partners to monitor and mitigate AI bias, while also protecting consumers from fraud.

Trust is the most important currency a bank holds and customers want to feel secure in the knowledge that their bank is doing the utmost to protect them. By acting quickly and responsibly, financial institutions can leverage AI to build barriers against fraudsters and be in the best position to protect their customers from ever-evolving criminal threats.

Pedro Bizarro is co-founder and Chief Science Officer of Feedzai. Drawing on a history in academia and research, Pedro has turned his technical expertise into entrepreneurial success as he has helped to develop Feedzai’s industry-leading artificial intelligence platform to fight fraud. Pedro has been an official member of the Forbes Technology Council, a visiting professor at Carnegie Mellon University, a Fulbright Fellow, and has worked with CERN, the European Organization for Nuclear Research. Pedro holds a Computer Science PhD from the University of Wisconsin-Madison.