
Introducing New Levels of Transparency with AI – Thought Leaders


By Balakrishna D R, Senior Vice President, Service Offering Head – Energy, Communications, Services, and AI and Automation services, at Infosys.

On January 9, 2020, the World Health Organization notified the public of the coronavirus outbreak in China. Three days earlier, the US Centers for Disease Control and Prevention had gotten the word out. But a Canadian health monitoring platform had beaten them both to the punch, alerting its customers to the outbreak as early as December 31, 2019. The platform, BlueDot, uses artificial intelligence-driven algorithms that scour foreign-language news reports, animal and plant disease networks, and official proclamations to give its clients advance warning to avoid danger zones like Wuhan.

Over the past few years, artificial intelligence has become a key source of transformation, disruption and competitive advantage in today’s fast-changing economy. From epidemic tracking to defense, healthcare, autonomous vehicles and everything in between, AI is gaining widespread adoption. PwC predicts that, at its current growth rate, AI could contribute up to $15.7 trillion to the global economy by 2030.

Yet, for all the hope that AI brings, it still poses unanswered questions around transparency and trustworthiness. The need to understand, predict and trust the decision-making of AI systems is particularly important in areas critical to life, death and personal wellness.

 

Into the unknown

When automated reasoning systems were first introduced to support decision-making, they relied on hand-crafted rules. While this made their behavior easy to interpret and modify, it did not scale. Machine learning models arrived to address that limitation: they require little human intervention and learn from data, the more the better. Deep learning models are unsurpassed in their modelling capacity and scope of applicability, but the fact that they are, for the most part, black boxes raises disturbing questions about their veracity, trustworthiness and biases, given how widely they are used.

There is currently no direct mechanism to trace the reasoning implicitly used by deep learning models. With black-box machine learning models, the primary form of explainability is post-hoc explainability, meaning that explanations are derived from the properties of the outputs the model generates rather than from its internals. Early attempts to extract rules from neural networks (as deep learning was earlier known) are no longer pursued, since the networks have become too large and diverse for tractable rule extraction. There is, therefore, an urgent need to build interpretability and transparency into the very fabric of AI modelling.
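
To make post-hoc explainability concrete, here is a minimal sketch using scikit-learn’s permutation importance, a model-agnostic technique that explains a trained model purely from its inputs and outputs. The dataset and model are illustrative stand-ins, not systems discussed in this article.

```python
# A minimal sketch of post-hoc explainability: the model is treated as a black
# box, and explanations are derived only from its inputs and outputs.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and an opaque model whose internals are not directly readable.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance explains the model after the fact, by measuring how
# much held-out accuracy drops when each feature's values are shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

The output ranks the features the model leans on most, which is an explanation of the model’s behaviour rather than of its internal reasoning.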

 

Exit night, enter light

This concern has created a need for transparency in machine learning, which has led to the growth of explainable AI, or XAI. It seeks to address the major issues that hinder our ability to fully trust AI decision-making, including bias and a lack of transparency. This new field of AI brings accountability to ensure that AI benefits society with better outcomes for all involved.

XAI will be critical in addressing the bias inherent in AI systems and algorithms, which are built by people whose backgrounds and experiences can unintentionally lead to systems that exhibit bias. Unwanted biases, such as discrimination against a particular nationality or ethnicity, may creep in because the system learns to weight such attributes from real data. To illustrate, it may be found that typical loan defaulters come from a particular ethnic background; however, implementing any restrictive policy on that basis would be against fair practices. Erroneous data is another cause of bias: for example, if a face recognition scanner is inaccurate 5% of the time because of a person’s complexion or the light falling on their face, it can introduce bias. Lastly, if the sample data is not a true representation of the whole population, bias is inevitable.
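
As a small, hedged illustration of the sampling and accuracy problems above, the sketch below audits a simulated classifier’s accuracy per demographic group; the group names, error rates and data are synthetic placeholders, not real figures.

```python
# A hypothetical bias audit: compare error rates across groups to surface the
# kind of disparity an unrepresentative sample or an erroneous sensor can cause.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# group_b is deliberately under-represented (10% of the sample).
groups = rng.choice(["group_a", "group_b"], size=n, p=[0.9, 0.1])
y_true = rng.integers(0, 2, size=n)

# Simulate a classifier that is markedly less accurate on the minority group.
error_rate = np.where(groups == "group_a", 0.05, 0.20)
y_pred = np.where(rng.random(n) < error_rate, 1 - y_true, y_true)

for g in ("group_a", "group_b"):
    mask = groups == g
    accuracy = (y_pred[mask] == y_true[mask]).mean()
    print(f"{g}: accuracy {accuracy:.1%}, share of data {mask.mean():.0%}")
```

A gap of this kind between groups is the sort of signal that should trigger a review of the training data before the system is deployed.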

XAI aims to show how the black-box decisions of AI systems are arrived at. It inspects and tries to understand the steps and models involved in making decisions. It answers crucial questions such as: Why did the AI system make a specific prediction or decision? Why didn’t it do something else? When did it succeed or fail? When does it give enough confidence in a decision that you can trust it, and how can it correct its errors?

 

Explainable, predictable and traceable AI

One way to gain explainability in AI systems is to use machine learning algorithms that are inherently explainable. Simpler forms of machine learning, such as decision trees, Bayesian classifiers and other algorithms with a degree of traceability and transparency in their decision-making, can provide the visibility needed for critical AI systems without sacrificing too much performance or accuracy.
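
As a minimal sketch of such an inherently explainable model, the snippet below trains a shallow decision tree with scikit-learn and prints its complete decision logic; the dataset is a standard toy example, not one referenced in this article.

```python
# An inherently interpretable model: a shallow decision tree whose entire
# decision process can be printed and audited by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y, feature_names = iris.data, iris.target, list(iris.feature_names)

# Limiting depth keeps the model small enough to trace end to end.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction can be explained as a path of explicit if/else rules.
print(export_text(tree, feature_names=feature_names))
```

The trade-off is capacity: such models offer the traceability described above, at some cost in accuracy on harder problems.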

Recognizing the need to provide explainability for deep learning and other more complex algorithmic approaches, the US Defense Advanced Research Projects Agency (DARPA) is pursuing explainable AI solutions through a number of funded research initiatives. DARPA describes AI explainability in three parts: prediction accuracy, meaning models explain how conclusions are reached so as to improve future decision-making; decision understanding and trust from human users and operators; and inspection and traceability of the actions undertaken by AI systems.

Traceability will allow humans to get into AI decision loops and to stop or control an AI system’s tasks whenever the need arises. An AI system is expected not only to perform a certain task or render decisions, but also to provide a transparent report of why it took specific decisions, with the supporting rationale.
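
One hypothetical way to realize such a transparent report is to wrap a model so that every decision is logged with its inputs, output and a rationale a human can inspect and act on; the wrapper and field names below are illustrative assumptions, not a standard API.

```python
# A sketch of an audit trail for AI decisions: each prediction is recorded as a
# structured, human-reviewable report, giving operators a point of intervention.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

class AuditedModel:
    def __init__(self, model, feature_names):
        self.model = model                  # any fitted model exposing .predict()
        self.feature_names = feature_names

    def predict_with_rationale(self, row, rationale):
        prediction = self.model.predict([row])[0]
        report = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": dict(zip(self.feature_names, row)),
            "prediction": str(prediction),
            "rationale": rationale,         # e.g. top feature attributions or the decision path
        }
        logging.info(json.dumps(report))    # the transparent record a human can review or contest
        return prediction
```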

Standardizing algorithms, or even XAI approaches, isn’t currently possible, but it may well be possible to standardize levels of transparency and explainability. Standards organizations are working towards a common understanding of these levels to facilitate communication between end users and technology vendors.

As governments, institutions, enterprises and the general public come to depend on AI-based systems, winning their trust through greater transparency of the decision-making process is going to be fundamental. The launch of the first global conference exclusively dedicated to XAI, the International Joint Conference on Artificial Intelligence’s Workshop on Explainable Artificial Intelligence, is further proof that the age of XAI has come.

Balakrishna, popularly known as Bali D.R., is the Head of AI and Automation at Infosys, where he both drives internal automation for Infosys and provides independent automation services for clients, leveraging products. Bali has been with Infosys for more than 25 years and has played sales, program management and delivery roles across different geographies and industry verticals.