
Thought Leaders

AI and the Arc of Trust


Early last year, when our team was making predictions about what lay ahead, the consensus was that the wrappers were coming off artificial intelligence (AI). We’d finally see what it could do and, hopefully, gain clarity on its impact on business and society and how to proceed. It wasn’t a novel prediction, but its core was accurate, though we still grapple with the implications of AI and how (or if) it should be controlled.

We’d end up seeing Apple, Microsoft and Google build AI into devices and bring its power to a broader audience. Then again, one chatbot in an AI-powered search tool threatened users and claimed it spied on employees, while another suggested using Elmer’s glue in homemade pizza to keep the cheese from slipping off. Still, business adoption of AI grew dramatically, and so did the market. According to data from Crunchbase, nearly a third of all global venture funding last year went to companies in AI-related fields.

But even as OpenAI delivered truly remarkable enhanced reasoning, renowned AI researcher Yoshua Bengio urged the adoption of safety measures for “frontier models” with the potential to cause catastrophic harm. And though the technology would take home two Nobel Prizes for applications in science, one of the recipients expressed worry about having systems “more intelligent than us that eventually take control.”

In such a fast-moving space, it’s difficult for expert technologists to keep up, never mind those in the mainstream.

And those AI hiccups – and concerns over privacy and unchecked use – have only raised public wariness.

Just trust me

We find ourselves midway on an arc of trust, one that requires advancing public acceptance of AI while ensuring business and technology communities act responsibly. The first part is where the game is being played right now, at the corporate level, gaining steam with each safe and sound deployment. We’re building upon our proof points, which will eventually lead to increased public trust. The second part, however, is far trickier.

Who defines and enforces the responsible use of AI? Can an industry create guidelines when it’s the one that needs regulating? If technologists are having trouble keeping up with AI, will policymakers be informed enough, and will they keep their political agendas out of the discussion? And when an AI billionaire suddenly wants to talk about control, did they have an epiphany, or are they just trying to pump the brakes in order to catch up?

“Just trust me” isn’t going to cut it with AI, no matter who you are.

Trust but verify

People look at generative AI like ChatGPT and wonder if it’ll be their next Google. Well, the top result of a Google search is now created by its AI model, Gemini. The problem is you can’t rely on it for business at scale when a simple question like “Is raw meat safe to consume?” returns the answer, “Yes, frozen.” You need to inspect the data.

The arc of public trust will mirror the one traced by conventional search: gained over time and with proof of reliability. Ironically, in our quest to simplify and improve search, we’ve taken a step backward. Now, after getting that AI-generated answer up top, you must scroll past a long list of sponsored links, click through the next five results and still verify the information.

That’s a lot of work when you’re looking for a fast answer – and you can’t have an entire company doing the same. However, enrich the data you’ve been mining with tens of thousands of your own service tickets and you bring in actual knowledge about your environment. Algorithmically, you can then tune hallucinations down, but it remains a “trust but verify” situation.
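For readers curious what that enrichment looks like in practice, here is a minimal sketch of the grounding idea: retrieve the most relevant of your own service tickets for a question, then constrain the model to answer only from that retrieved context. Everything here – the ticket data, the naive keyword scoring, the function names and prompt wording – is illustrative, not any particular vendor’s API; production systems would use embedding-based retrieval rather than word overlap.

```python
# Illustrative sketch: grounding answers in your own service-ticket data
# to curb hallucination. All names and data here are hypothetical.

def retrieve(query, tickets, k=2):
    """Rank tickets by naive keyword overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(
        tickets,
        key=lambda t: len(q_words & set(t.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, tickets):
    """Constrain the model to the retrieved context, not open-ended recall."""
    context = "\n".join(retrieve(query, tickets))
    return (
        "Answer ONLY from the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical internal knowledge: a few of those tens of thousands of tickets.
tickets = [
    "Ticket 101: login fails after password reset on SSO portal",
    "Ticket 102: invoice export times out for large accounts",
    "Ticket 103: password reset email delayed by spam filter",
]

prompt = build_prompt("why does login fail after password reset", tickets)
print(prompt)
```

The point isn’t the toy scoring function; it’s that the model is handed knowledge about your environment and told to stay inside it – which is exactly why “trust but verify” still applies to whatever comes back.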

Keep politics out of it

When it comes to regulating AI, some argue the horse has already left the barn and likely won’t be caught. For example, there are no truly effective tools for checking whether a student drafted a paper or used GenAI. The technology is just too far ahead.

Regulating this would be very complex, and to be honest, we’d be veering onto thin ice. We know tech companies are decades ahead of proposed outside regulators. But we’re good at carving a path, not being held back. Still, at the end of the day, it may fall to innovators to try to apply governance. Who else could do it – responsibly?

There are many politicians who’d love to give it a go. The risk, in addition to a lack of understanding, is that they may have a personal and political agenda to advance. Their focus, potentially, would be less on cultivating AI and more on doing what’s in their best political interests. They might play off those public fears, using a heavy hand to stymie AI’s progress.

The CHIPS and Science Act was a good example of healthy governmental action, producing a dramatic 15-fold increase in the building of manufacturing facilities for computing and electronic devices. But this was made possible through bipartisanship – increasingly a relic of a past era.

Are we worthy?

There’s a lot of money flowing into AI, and over the next two decades a lot will be made by tech companies. How much, how fast and how safely remains to be seen. On any given day, a deepfake could circulate showing someone in the spotlight juggling the eggs of an endangered bird. The public would react in horror, and while it may later be revealed to be AI-generated, that egg does not go back into the shell – the damage is done.

We need such things to be regulated by informed technologists. What form that takes – a council, a standards body, an international framework – remains to be seen. What is known is that AI is on an arc of trust, and as an industry, we need to prove we’re worthy of it.

Eduardo Mota is a senior cloud data architect and AI/ML specialist at DoiT. He holds a Bachelor of Business Administration and multiple related certifications, reflecting his relentless pursuit of knowledge. His journey includes pivotal roles at DoiT and AWS, where his expertise in AWS and GCP cloud architecture and optimization strategies significantly improved operational efficiency and cost savings for multiple organizations.