

Lack of Trustworthy AI Can Stunt Innovation and Business Value


A recent survey of global business leaders shows that trustworthy AI is a major priority, yet many are not taking enough steps to achieve it. At what cost?

Indeed, the IBM survey revealed that a staggering 85% of respondents agree that consumers are more likely to choose a company that’s transparent about how its AI models are built, managed, and used.

However, the majority admitted they haven’t taken key steps to ensure their AI is trustworthy and responsible, such as reducing bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%). This is worrying, especially when you consider that AI usage keeps growing: 35% say they now use AI in their business, up from 31% a year ago.
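To make "tracking performance variations and model drift" more concrete, here is a minimal sketch (not a prescribed method) that compares a feature's training-time distribution against recent production data using the population stability index (PSI); the data, threshold, and names are illustrative assumptions.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of the same feature; larger values indicate drift."""
    # Bin edges come from the baseline distribution so both samples are comparable.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero / log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Hypothetical usage: flag drift when PSI exceeds a commonly used 0.2 rule of thumb.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(600, 50, 10_000)   # e.g. credit scores at training time
recent_scores = rng.normal(620, 60, 2_000)      # e.g. scores seen in production this month
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:
    print(f"PSI={psi:.3f}: significant drift, consider review or retraining")
else:
    print(f"PSI={psi:.3f}: distribution looks stable")
```

Even a simple check like this, run on a schedule, gives teams an early signal that a model's inputs no longer look like the data it was trained on.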

I recently attended the invitation-only Corporate Innovation Summit in Toronto, where attendees exchanged innovative ideas and showcased technologies poised to shape the future. I had the privilege of participating in three roundtables within the financial services, insurance, and retail segments, from which three key areas emerged: the need for more transparency to foster trust in AI, democratization of AI through no-code/low-code development to deliver faster time-to-value, and risk mitigation through AI regulatory governance best practices.

Increase trust in AI technologies. COVID-19 amplified and accelerated the trend toward adopting AI-powered chatbots, virtual financial assistants, and touchless customer onboarding. This trend will continue, as confirmed by Capgemini research showing that 78% of consumers surveyed plan to increase their use of AI technologies, including digital identity management, in their interactions with financial services organizations.

The inherent benefits notwithstanding, a number of challenges arise. Chief among them is continued consumer distrust of AI technologies and of how their ubiquitous nature impacts privacy and security rights. Thirty percent of consumers stated that they would be more comfortable sharing their biometric information if their financial service providers were more transparent in explaining how that information is collected, managed, and secured.

CIOs must adopt trustworthy AI principles and institute rigorous measures that safeguard privacy and security rights. They can achieve this through encryption, data minimization, and safer authentication, including considering emerging decentralized digital identity standards. As a result, intelligent automation efforts and self-service offerings will see more adoption and require less human intervention.
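To make "data minimization" concrete, here is a minimal, hypothetical sketch of trimming a customer record down to only the fields an onboarding model actually needs and pseudonymizing the identifier before the data is stored or shared; the field names and schema are illustrative assumptions, not a prescribed design.

```python
import hashlib

# Fields the (hypothetical) onboarding model actually needs.
REQUIRED_FIELDS = {"age_band", "income_band", "account_tenure_months"}

def minimize_record(record: dict, salt: str) -> dict:
    """Keep only required fields and replace the direct identifier with a salted hash."""
    minimized = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    # Pseudonymize: the raw customer ID never leaves this function.
    minimized["customer_ref"] = hashlib.sha256((salt + record["customer_id"]).encode()).hexdigest()
    return minimized

raw = {
    "customer_id": "C-1029",
    "full_name": "Jane Example",     # dropped: not needed by the model
    "email": "jane@example.com",     # dropped
    "age_band": "35-44",
    "income_band": "50k-75k",
    "account_tenure_months": 18,
}
print(minimize_record(raw, salt="rotate-me-regularly"))
```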

Remove barriers to the democratization of AI. There is a growing shift toward no-code/low-code AI application development, a market that research forecasts will reach $45.5 billion by 2025. The main driver is faster time to value, with application development productivity improving by as much as 10x.

For example, 56% of financial service organizations surveyed consider data collection from borrowers to be one of the most challenging and inefficient steps within the loan application process, resulting in high abandonment rates. While AI-driven biometric identification and data collection technologies are proven to improve efficiencies in the loan application process, they may also create compliance risks, particularly around data privacy, confidentiality, and AI algorithmic bias.

To mitigate and remediate such risks, low-code/no-code applications must undergo comprehensive testing to ensure that they perform in accordance with their initial design objectives, that potential bias in the training data set (such as sampling bias and labeling bias) is removed, and that they are secure from adversarial AI attacks that can adversely impact algorithmic outcomes. Consideration of the responsible data science principles of fairness, accuracy, confidentiality, and security is paramount; a minimal sketch of one such fairness check follows.
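As one illustration of the kind of bias testing described above, the sketch below computes a disparate impact ratio (the approval rate of one group relative to another) on a hypothetical loan decision dataset; the 0.8 threshold reflects the commonly cited "four-fifths rule", and all column names and data are assumptions.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

# Hypothetical loan decisions: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})
ratio = disparate_impact(decisions, "group", "approved", protected="B", reference="A")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Disparate impact ratio {ratio:.2f}: review the model and training data for bias")
else:
    print(f"Disparate impact ratio {ratio:.2f}: within the common threshold")
```

A check like this is only one slice of a broader test plan, but it shows how fairness criteria can be expressed as measurable gates rather than aspirations.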

Develop an AI governance and regulatory framework. AI governance is no longer a nice-to-have initiative but an imperative. According to the OECD’s tracker of national AI policies, there are over 700 AI regulatory initiatives under development in over 60 countries. In the meantime, there are voluntary codes of conduct and ethical AI principles developed by international standards organizations such as the Institute of Electrical and Electronics Engineers (IEEE) and the National Institute of Standards and Technology (NIST).

Organizations are concerned that AI regulations will impose more rigorous compliance obligations on them, backed by onerous enforcement mechanisms, including penalties for noncompliance. Yet AI regulation is inevitable.

Europe and North America are taking proactive stances that will require CIOs to collaborate with their technology and business counterparts to form effective policies. For example, the European Commission’s proposed Artificial Intelligence Act would institute risk-based obligations on AI providers to protect consumer rights while at the same time promoting innovation and the economic opportunities associated with AI technologies.

Additionally, in June 2022, the Canadian federal government released its much-awaited Digital Charter Implementation Act, which protects against adverse impacts of high-risk AI systems. The US is also proceeding with AI regulatory initiatives, albeit on a sectoral basis. The Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Federal Reserve Board are all flexing their regulatory muscles through their enforcement mechanisms to protect consumers against adverse impacts arising from the increased application of AI that may result in discriminatory outcomes, however unintended. An AI regulatory framework is a must for any innovative company.

Achieving Trustworthy AI Requires Data-Driven Insights

Trustworthy AI cannot be implemented without a data-driven approach to determining where AI technologies may have the greatest impact before proceeding with implementation. Is the goal to improve customer engagement, to realize operational efficiencies, or to mitigate compliance risks?

Each of these business drivers requires an understanding of how processes execute, how escalations and exceptions are handled, and where variations and roadblocks in process execution occur, along with their root causes. Based on such data-driven analysis, organizations can make informed business decisions about the impact and outcomes associated with implementing AI-based solutions to reduce customer onboarding friction and improve operational efficiencies. Once organizations have the benefit of data-driven insights, they can automate highly labor-intensive processes such as meeting AI compliance mandates, compliance auditing, and KYC and AML in financial services.
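As a minimal illustration of this kind of data-driven process analysis, the sketch below counts process variants (the ordered sequence of steps per case) in a hypothetical onboarding event log, surfacing where cases deviate from the expected "happy path"; the event log and step names are made up for illustration and not drawn from any real system.

```python
from collections import Counter

# Hypothetical onboarding event log: (case_id, activity), already ordered by timestamp.
event_log = [
    ("c1", "submit_application"), ("c1", "verify_identity"), ("c1", "approve"),
    ("c2", "submit_application"), ("c2", "verify_identity"), ("c2", "manual_review"), ("c2", "approve"),
    ("c3", "submit_application"), ("c3", "verify_identity"), ("c3", "approve"),
    ("c4", "submit_application"), ("c4", "manual_review"), ("c4", "reject"),
]

# Group activities into one variant (ordered trace) per case.
traces: dict[str, list[str]] = {}
for case_id, activity in event_log:
    traces.setdefault(case_id, []).append(activity)

variant_counts = Counter(tuple(t) for t in traces.values())
for variant, count in variant_counts.most_common():
    print(f"{count} case(s): {' -> '.join(variant)}")
# Variants that diverge from the most frequent path are candidates for
# root-cause analysis before the process is automated.
```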

The main takeaway is that implementing trustworthy AI best practices is an integral part of AI-enabled process automation. Ethical use of AI should not be considered only a legal and moral obligation but also a business imperative. It makes good business sense to be transparent in the application of AI: transparency fosters trust and engenders brand loyalty.

Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. Pery has more than 25 years of experience spearheading product management programs for leading global technology companies. His expertise is in intelligent document process automation and process intelligence, with a particular focus on AI technologies, application software, data privacy, and AI ethics. He holds a Master of Law degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Data Privacy Professional.