

How Companies Can Create Responsible and Transparent AI – Thought Leaders


By Eric Paternoster, CEO of Infosys Public Services

Sundar Pichai, CEO of Google parent company Alphabet, has described developments in AI as “more profound than fire or electricity,” and COVID-19 has brought fresh urgency to unleashing this technology’s promise. Applications of AI are now firmly in the spotlight: improving COVID treatments, tracing potential COVID carriers, and deploying real-time chatbots for retail customers facing supply shortages. These applications have shown that AI improves a business’s resilience and benefits broader society.

So along with “cloud-native,” the buzzword of the last quarter might just be “AI-first transformation,” a term that industry practitioners believe will hold true even after COVID goes away. For many firms, the promise of lower costs (e.g., supply chain algorithms that match supply with demand) and substantial boosts in productivity (e.g., banks verifying documents and identities in real time) is just too good to ignore.

Why AI-First Transformation?

In AI-first transformation, an enterprise treats AI as its North Star, working to use it not only intelligently but also in a way that shapes the decisions made by people, processes, and systems at scale. This attunes organizations to the changing dynamics among employees, partners, and customers, enabling them to pivot quickly to meet shifting demands while building long-term competitive advantage.

But not all firms are at the same level of AI maturity. Some can be termed the “conventional AI group,” or H1. These firms, which have less experience and investment, generally use classical algorithms such as naïve Bayes (whose underlying theorem dates back roughly 250 years) or random forest (developed by Tin Kam Ho in 1995) to augment fragmented intelligence within existing systems. Such uses of AI are strictly rules-based and quite rigid, lacking the ability to generalize from the rules they discover. Then there is the “deep learning group,” or H2. These firms embrace more complex AI, including neural machine translation and transcription-based systems, to mine conversational insights. Such systems are more powerful but opaque: they cannot easily explain why they make the decisions they do. For both groups, the AI in use often isn’t trustworthy or reliable, and it can make biased decisions that draw negative attention from governments, regulators, and the general public.
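To make the H1 toolkit concrete, here is a minimal sketch (in Python, using scikit-learn) of the two classical algorithms named above applied to a synthetic tabular task. The dataset and hyperparameters are illustrative assumptions, not recommendations.

```python
# A minimal sketch of the "conventional AI" (H1) toolkit: naive Bayes and
# random forest applied to a synthetic tabular classification task.
# Everything here (data, settings) is illustrative, not a recommendation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the structured data an H1 firm typically holds.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (GaussianNB(),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))
```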

These firms need to make moves now to take their AI implementations a step further — to a third camp (H3) where AI is self-learning and generative. At this point, AI systems are semi-supervised or even unsupervised. They are transparent and achieve “common sense” through multitask learning. These systems deliver richer intelligence and provide real-time, actionable insights. This is done through well-managed, governed AI that is interpretable and explainable at all stages.
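As a rough illustration of what “semi-supervised” can mean in practice, the sketch below uses scikit-learn’s self-training wrapper, in which a model pseudo-labels its own unlabeled data and retrains. The synthetic data and the 90% unlabeled split are assumptions made purely for illustration.

```python
# A minimal sketch of one H3-style technique: semi-supervised self-training.
# The data is synthetic and the 90% "unlabeled" split is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Hide 90% of the labels; scikit-learn marks unlabeled points with -1.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1
n_seed = (y_partial != -1).sum()

# The wrapper pseudo-labels confident unlabeled points, then retrains.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1_000),
                               threshold=0.8)
model.fit(X, y_partial)
n_final = (model.transduction_ != -1).sum()
print(f"started with {n_seed} labels, ended with {n_final} after self-training")
```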

How to Work Toward More Responsible, Transparent AI

AI is increasingly being used to manage schools, workspaces, and other public entities. In these settings, it is more important than ever that the AI is fair and transparent. However, as society works through this explosion of AI adoption, regulatory bodies are providing limited guidance on appropriate development and deployment of AI technologies. Thus, the onus is on companies to take the lead. The broader tech industry must put financial muscle and human capital to work, transforming initial implementations of fragmented AI into efficient, creative, responsible, and transparent intelligence-driven ecosystems. To move into this space, firms should do the following four things:

  • Keep humans in the loop: AI models are often designed to operate independently of humans, but in many cases the human element is crucial. People need to review decisions to catch the biases and mistakes that often sidetrack AI projects; fraud detection and law-enforcement applications are two such cases. We recommend that firms hire AI practitioners slowly yet consistently over time to get a leg up on their AI-first journey.
  • Eliminate biased datasets: An unbiased dataset is a critical prerequisite for reliable, fair, and nondiscriminatory AI models. To get a sense of the stakes: AI is already used to shortlist résumés, to score credit at banks, and even in some judicial systems. In this landscape, unchecked biases have had very real consequences.
  • Ensure decisions are explainable: This feature has been covered by many of the big news outlets, and rightly so. Explainable AI (XAI) helps show why an AI system made a certain decision by uncovering which features the model weighted most heavily in making its prediction or hypothesis. Understanding feature importance, and being able to justify how decisions are reached, is crucial in use cases such as autonomous vehicles and computer vision used in medical biopsies.
  • Reliably reproduce findings: As in any research project, AI models should give consistent predictions over time and should not be fazed when presented with new data. The sketch after this list illustrates simple checks for bias, explainability, and reproducibility.
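As a minimal sketch of how these checks might look in code, the snippet below audits positive-prediction rates across a purely hypothetical sensitive-group column (bias), inspects permutation feature importance (explainability), and pins random seeds (reproducibility). All names and thresholds are assumptions for illustration.

```python
# Illustrative checks for three of the practices above: bias auditing,
# explainability, and reproducibility. The "group" column is hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

SEED = 42  # pinned seed so the run reproduces (the reproducibility practice)

X, y = make_classification(n_samples=2_000, n_features=10, random_state=SEED)
group = (X[:, 0] > 0).astype(int)  # hypothetical sensitive attribute
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=SEED)

model = RandomForestClassifier(random_state=SEED).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Bias check: compare positive-prediction rates per group (demographic parity).
for g in (0, 1):
    print(f"group {g}: positive-prediction rate {pred[g_te == g].mean():.2f}")

# Explainability check: which features drive the held-out score the most?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=SEED)
print("most influential feature index:", imp.importances_mean.argmax())
```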

These four practices will create transparent, intelligence-driven ecosystems, moving toward what we term a “live enterprise.” Here, unbiased, explainable decisions are made in near-real time, with the whole enterprise acting as a sentient organism governed by humans. Read the Infosys Knowledge Institute white paper to find out more.

Eric Paternoster is Chief Executive Officer of Infosys Public Services, an Infosys subsidiary focused on the public sector in the US and Canada. In this role, he oversees company strategy and execution for profitable growth, and advises public sector organizations on strategy, technology, and operations. He also serves on the boards of Infosys Public Services and the McCamish subsidiary of Infosys BPM.

Eric has over 30 years of experience in the public sector, healthcare, consulting, and business technology with multiple firms. Prior to his current role, he was Senior Vice President and head of the Insurance, Healthcare and Life Sciences business unit, where he grew the business from $90 million to over $700 million with 60+ clients across the Americas, Europe, and Asia. Eric joined Infosys in 2002 as Head of Business Consulting for the Eastern US and Canada.