
Thought Leaders

Explainability Can Address Every Industry’s AI Problem: The Lack of Transparency


By: Miguel Jetté, VP of R&D Speech, Rev.

In its nascent stages, AI may have been able to rest on the laurels of newness. It was okay for machine learning to learn slowly and maintain an opaque process where the AI’s calculation is impossible for the average consumer to penetrate. That’s changing. As more industries such as healthcare, finance and the criminal justice system begin to leverage AI in ways that can have real impact on people’s lives, more people want to know how the algorithms are being used, how the data is being sourced, and just how accurate those capabilities are. If companies want to stay at the forefront of innovation in their markets, they need to rely on AI that their audience will trust. AI explainability is the key ingredient to deepen that relationship.

AI explainability differs from standard AI procedures because it offers people a way to understand how the machine learning algorithms create output. Explainable AI is a system that can show people its potential outcomes and its shortcomings. It’s a machine learning system that can fulfill the very human desire for fairness, accountability and respect for privacy. Explainable AI is imperative for businesses to build trust with consumers.

While AI is expanding, AI providers need to understand that the black box can’t expand with it. Black-box models are created directly from the data, and oftentimes not even the developer who created the algorithm can identify what drove the machine’s learned habits. But the conscientious consumer doesn’t want to engage with something so impenetrable it can’t be held accountable. People want to know how an AI algorithm arrives at a specific result without the mystery of sourced input and controlled output, especially when AI’s miscalculations are often due to machine biases. As AI becomes more advanced, people want access to the machine learning process to understand how the algorithm came to its specific result. Leaders in every industry must understand that sooner or later, people will no longer merely prefer this access but demand it as a necessary level of transparency.

ASR systems such as voice-enabled assistants, transcription technology and other services that convert human speech into text are especially plagued by biases. When the service is used for safety measures, mistakes caused by accents, a person’s age or background can be grave, so the problem has to be taken seriously. ASR can be used effectively in police body cams, for example, to automatically record and transcribe interactions, keeping a record that, if transcribed accurately, could save lives. The practice of explainability will require that the AI doesn’t just rely on purchased datasets, but seeks to understand the characteristics of the incoming audio that might contribute to errors if any exist. What is the acoustic profile? Is there noise in the background? Is the speaker from a country where English is not the first language, or from a generation that uses a vocabulary the AI hasn’t yet learned? Machine learning needs to be proactive in learning faster, and it can start by collecting data that can address these variables.
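To make that concrete, here is a minimal sketch, in Python, of the kind of audio profiling a pipeline could run alongside transcription so that errors can later be explained. It is an illustration only, not Rev’s system: the 16 kHz mono assumption, the percentile-based noise estimate and the 15 dB threshold are all assumptions chosen for clarity.

```python
# Minimal sketch: summarize acoustic properties of a clip so that later
# transcription errors can be traced back to audio conditions.
# Assumes 16 kHz mono audio as a NumPy float array; thresholds are illustrative.
import numpy as np

def audio_profile(samples: np.ndarray, sample_rate: int = 16000) -> dict:
    """Return simple acoustic metadata (duration, rough SNR, noise flag)."""
    frame = int(0.02 * sample_rate)                    # 20 ms frames
    n = (len(samples) // frame) * frame
    frames = samples[:n].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1) + 1e-12)  # per-frame energy
    noise_floor = np.percentile(rms, 10)               # quietest frames ~ background noise
    speech_level = np.percentile(rms, 90)              # loudest frames ~ speech
    snr_db = 20 * np.log10(speech_level / noise_floor)
    return {
        "duration_sec": round(len(samples) / sample_rate, 2),
        "estimated_snr_db": round(float(snr_db), 1),
        "likely_noisy": bool(snr_db < 15),              # heuristic cutoff, an assumption
    }

# Example: a synthetic clip with constant background noise gets flagged,
# giving a reviewer one reason why the transcript might contain errors.
rng = np.random.default_rng(0)
t = np.arange(16000 * 3) / 16000
clip = 0.1 * np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)
print(audio_profile(clip))
```

Metadata like this does not fix the error, but it gives both the model and the humans reviewing its output something concrete to point to when explaining why a clip was hard.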

The necessity is becoming obvious, but the path to implementing this methodology won’t always have an easy solution. The traditional answer is to add more data, but a more sophisticated approach will be necessary, especially when the purchased datasets many companies use are inherently biased. Historically, it has been difficult to explain a particular decision rendered by the AI because of the complexity of end-to-end models. However, we now can, and we can start by asking how people lost trust in AI in the first place.

Inevitably, AI will make mistakes. Companies need to build models that are aware of potential shortcomings, identify when and where the issues are happening, and create ongoing solutions to build stronger AI models:

  1. When something goes wrong, developers need to explain what happened and develop an immediate plan for improving the model to reduce similar mistakes in the future.
  2. For the machine to actually know whether it was right or wrong, scientists need to create a feedback loop so that AI can learn its shortcomings and evolve.
  3. Another way for ASR to build trust while the AI is still improving is to create a system that can provide confidence scores and offer reasons why the AI is less confident. For example, companies typically generate scores from zero to 100 to reflect their own AI’s imperfections and establish transparency with their customers. In the future, systems may provide post-hoc explanations for why the audio was challenging by offering more metadata about the audio, such as perceived noise level or a less understood accent, as sketched after this list.
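As a rough illustration of the third point, the sketch below attaches a zero-to-100 confidence score and a few human-readable reasons to a transcription segment. The field names, inputs and thresholds are hypothetical, not any vendor’s actual API; the point is only that the score travels together with metadata a reviewer can act on.

```python
# Minimal sketch: a transcription segment that carries its confidence score
# plus the reasons the system was less sure. All names and cutoffs are
# illustrative assumptions, not a real product's schema.
from dataclasses import dataclass, field

@dataclass
class SegmentExplanation:
    text: str
    confidence: int                      # 0-100, higher means more certain
    reasons: list = field(default_factory=list)

def explain_segment(text: str, model_confidence: float,
                    snr_db: float, accent_familiarity: float) -> SegmentExplanation:
    """Combine the model's own confidence with audio metadata into a reviewable record."""
    reasons = []
    if snr_db < 15:
        reasons.append(f"high background noise (estimated SNR {snr_db:.0f} dB)")
    if accent_familiarity < 0.5:
        reasons.append("accent under-represented in training data")
    return SegmentExplanation(text=text,
                              confidence=round(model_confidence * 100),
                              reasons=reasons)

# Example: a low-confidence segment ships with the reasons a human should check first.
print(explain_segment("meet me at the the station", 0.62,
                      snr_db=9.0, accent_familiarity=0.3))
```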

Additional transparency will result in better human oversight of AI training and performance. The more open we are about where we need to improve, the more accountable we are for taking action on those improvements. For example, a researcher may want to know why erroneous text was output so they can mitigate the problem, while a transcriptionist may want evidence as to why ASR misinterpreted the input to help with their assessment of its validity. Keeping humans in the loop can mitigate some of the most obvious problems that arise when AI goes unchecked. It can also speed up the time required for AI to catch its errors, improve and eventually correct itself in real time.

AI has the capability to improve people’s lives, but only if humans build it to perform properly. We need to hold not only these systems accountable but also the people behind the innovation. AI systems of the future are expected to adhere to the principles set forth by people, and only then will we have a system people trust. It’s time to lay the groundwork and strive for those principles now, while it is ultimately still humans serving ourselves.

Miguel Jetté is the head of AI R&D at Rev, a speech-to-text transcription platform combining AI with skilled humans. He leads the team responsible for developing the world’s most accurate speech-to-text AI platform. Passionate about solving complex problems while improving lives, he is dedicated to increasing inclusion and equality through technology. Over more than two decades, he has worked to implement voice technologies with companies including Nuance Communications and VoiceBox. He earned a master’s degree in mathematics and statistics from McGill University in Montreal. When not advancing communication through AI, he spends his time as a photographer for rock climbing competitions.