Building Trust in AI with ID Verification


Generative AI has captured interest across businesses globally. In fact, 60% of organizations with reported AI adoption are now using generative AI. Today’s leaders are racing to determine how to incorporate AI tools into their tech stacks to remain competitive and relevant, and AI developers are creating more tools than ever before. But given the pace of adoption and the nature of the technology, many security and ethical concerns are not being fully considered as businesses rush to adopt the latest and greatest capabilities. As a result, trust is waning.

A recent survey found only 48% of Americans believe AI is safe and secure, while 78% say they are very or somewhat concerned that AI can be used for malicious intent. While AI has been shown to improve daily workflows, consumers are worried about bad actors and their ability to manipulate AI. Deepfake capabilities, for example, are becoming more of a threat as the technology grows more accessible to the masses.

Having an AI tool is no longer enough. For AI to reach its true, beneficial potential, businesses need to incorporate it into solutions that demonstrate responsible and viable use of the technology, giving consumers greater confidence, especially in cybersecurity, where trust is key.

AI Cybersecurity Challenges

Generative AI technology is progressing at a rapid rate, and developers are only now grasping the significance of bringing this technology to the enterprise, as seen in the recent launch of ChatGPT Enterprise.

Current AI technology is capable of achieving things that were confined to science fiction less than a decade ago. How it operates is impressive, but the speed at which it is all happening is even more so. That is what makes AI technology so scalable and accessible to companies, individuals, and, of course, fraudsters. While the capabilities of AI have spearheaded innovation, its widespread use has also led to dangerous offerings such as deepfakes-as-a-service. The term “deepfake” reflects the fact that creating this particular style of manipulated content (or “fake”) requires deep learning techniques.

Fraudsters will always follow the money that promises the greatest ROI, so any business with a high potential return will be a target. This means fintech companies, businesses paying invoices, government services, and high-value goods retailers will always be at the top of their list.

We are at a point where trust is on the line, and consumers are increasingly less trusting, giving amateur fraudsters more opportunities than ever to attack. With the newfound accessibility and increasingly low cost of AI tools, it is easier for bad actors of any skill level to manipulate others’ images and identities. Deepfake capabilities are reaching the masses through deepfake apps and websites, and creating a sophisticated deepfake now requires very little time and a relatively low level of skill.

With the use of AI, we have also seen an increase in account takeovers. AI-generated deepfakes make it easy for anyone to create impersonations or synthetic identities, whether of celebrities or even your boss.

AI and Large Language Model (LLM) generative language applications can be used to create more sophisticated and evasive fraud that is difficult to detect and remove. LLMs in particular have enabled widespread phishing attacks that speak the victim’s mother tongue perfectly. They also create a risk of “romance fraud” at scale, in which a person makes a connection with someone through a dating website or app, but the individual they are communicating with is a scammer using a fake profile. All of this is leading many social platforms to consider deploying “proof of humanity” checks to remain viable at scale.

However, the current security solutions in place, which rely on metadata analysis, cannot stop these bad actors. Deepfake detection has been built on classifiers that look for differences between real and fake content, but classification alone is no longer powerful enough: these advanced threats require more data points to detect.
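To make that limitation concrete, below is a minimal sketch of the classifier-based detection described above, assuming PyTorch and torchvision with a pretrained ResNet-18 backbone; the model choice, preprocessing, and scoring are illustrative, and the replacement classification head would still need training on labeled real/fake images.

```python
# Illustrative sketch only: a binary real-vs-fake image classifier,
# assuming PyTorch/torchvision. Not any specific vendor's detector.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

class DeepfakeClassifier(nn.Module):
    """ResNet-18 backbone with a single-logit head for real vs. fake."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Replace the 1000-class ImageNet head with one logit (needs training).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return self.backbone(x)

# Standard ImageNet preprocessing to match the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(model: DeepfakeClassifier, image_path: str) -> float:
    """Score one image; higher means more likely manipulated."""
    model.eval()
    with torch.no_grad():
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        return torch.sigmoid(model(x)).item()
```

Because the whole decision rests on one pixel-level score, a generator trained against such detectors can learn to push its fakes past the boundary, which is why the layered, multi-signal approach below matters.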

AI and Identity Verification: Working Together

Developers of AI need to focus on using the technology to strengthen proven cybersecurity measures. Not only will this provide a more reliable use case for AI, it can also promote more responsible use, encouraging better cybersecurity practices while advancing the capabilities of existing solutions.

One main use case for this technology is identity verification. The AI threat landscape is constantly evolving, and teams need to be equipped with technology that can quickly and easily adjust to and implement new techniques.

Some opportunities for using AI with identity verification technology include the following (a combined-signal sketch follows the list):

  • Examining key device attributes
  • Using counter-AI to identify manipulation: counter-AI can identify the manipulation of incoming images, helping avoid fraud and protect important data
  • Treating the ‘absence of data’ as a risk factor in certain circumstances
  • Actively looking for patterns across multiple sessions and customers
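
As a purely hypothetical illustration of how these signals could feed one decision, here is a short sketch; every name, weight, and threshold below is an assumption for illustration, not a description of any specific product.

```python
# Hypothetical sketch: combining layered identity-verification signals
# into one session risk score. Weights and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_suspicious: bool      # key device attributes (e.g. a virtual camera)
    manipulation_score: float    # 0..1 counter-AI score for incoming images
    missing_expected_data: bool  # 'absence of data' treated as a risk factor
    cross_session_hits: int      # pattern matches across sessions and customers

def session_risk(s: SessionSignals) -> float:
    """Blend the layered signals into a 0..1 score (higher = riskier)."""
    risk = 0.5 * s.manipulation_score
    if s.device_suspicious:
        risk += 0.3
    if s.missing_expected_data:
        risk += 0.1
    risk += min(0.2, 0.05 * s.cross_session_hits)  # cap the pattern signal
    return min(risk, 1.0)

# Example: clean device, but the selfie looks manipulated and the session
# matches two known fraud patterns.
signals = SessionSignals(False, 0.85, False, 2)
if session_risk(signals) > 0.5:  # illustrative escalation threshold
    print("Escalate to step-up verification")
```

The point is the layering itself: no single signal decides, so a fraudster who defeats one check still has to defeat the rest.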

These layered defenses, provided by both AI and identity verification technology, investigate the person, their asserted identity document, their network, and their device, minimizing the risk of manipulation by deepfakes and ensuring that only trusted, genuine people gain access to your services.

AI and identity verification need to continue to work together. The more robust and complete the training data, the better the model gets; and because AI is only as good as the data it is fed, the more data points we have, the more accurate identity verification and AI can be.

Future of AI and ID Verification

It's hard to trust anything online unless proven by a reliable source. Today, the core of online trust lies in proven identity. Accessibility to LLMs and deepfake tools poses an increasing online fraud risk. Organized crime groups are well funded and now they're able to leverage the latest technology at a larger scale.

Companies need to widen their defense landscape and cannot be afraid to invest in technology, even if it adds a bit of friction. There can no longer be just one defense point: they need to look at all of the data points associated with the individual trying to gain access to their systems, goods, or services, and keep verifying throughout that individual’s journey.

Deepfakes will continue to evolve and become more sophisticated, so business leaders must continuously review data from solution deployments to identify new fraud patterns and evolve their cybersecurity strategies alongside the threats.

As CEO and co-founder, Kaarel is the strategic thinker and visionary behind Veriff. With plenty of energy and enthusiasm, he encourages the Veriff team to stand for honesty in the digital world and keeps the team one step ahead of fraud and competition in the dynamic world of online verification. In 2023, Kaarel was honored in the EU Forbes 30 Under 30; in 2020 he was named EY Entrepreneur of the Year in Estonia; and Nordic Business Report has named him one of the 25 most influential young entrepreneurs in Northern Europe.