Why Blind Trust in AI Could Be Your Worst Decision Yet

In 1979, an IBM training manual issued a simple but striking warning: “A computer can never be held accountable; therefore, a computer must never make a management decision.” More than 45 years later, that statement reads like an ignored prophecy.

In 2025, AI doesn’t just assist; in many cases it decides, and increasingly it leads. In fact, around 74% of executives trust AI more than colleagues or friends for business advice, 38% trust AI to make business decisions for them, and 44% defer to the technology’s reasoning over their own insights. The shift is clear: AI is the new gut instinct.

But there’s a problem. Trust in AI is only justified if the algorithm is worth trusting. When trust is placed blindly, especially in black boxes we can’t understand or audit, it’s a risk disguised as progress. As with human leadership, confidence without accountability is dangerous. And when AI gets it wrong, who takes the fall?

When the Tool Becomes the Boss

What began as a tool for streamlining back-office operations is now embedded in core business processes. Companies aren’t only using AI to support human decisions; they’re trusting AI, particularly generative AI (GenAI), to make those decisions outright, from corporate strategy to customer service, financial modeling, and more.

This shift is understandable. AI doesn’t get distracted, forget instructions, or let emotions cloud its judgment. For many companies, this offers an appealing antidote to the risks of human error. However, a key question remains: can we trust AI to be the boss and make decisions independently?

There’s no straightforward answer, but one way to approach it is the same way we judge people’s trustworthiness: by their competence, reliability, and clarity of intent. The same principles apply to AI.

To be trusted, an AI system must deliver results that are accurate, timely, and appropriate. But the level of trust and the margin for error vary with context. When diagnosing cancer from medical imagery, the tolerance for error is close to zero. When generating ideas for a marketing campaign, there is far more room for experimentation.

We’ve seen AI used to make autonomous decisions in areas like credit approvals, with banks leveraging algorithms to determine loan eligibility in seconds. Retailers use AI to manage inventory and pricing without human input. But we’ve also seen failures—like self-driving cars misjudging road conditions.

One cautionary tale shows the risks of placing too much trust in AI without adequate oversight. Derek Mobley, a Black man over 40, applied to more than 100 positions through Workday’s AI-driven hiring platform since 2017 and was rejected every time. He alleged discrimination based on age and race, and in May 2025 a court allowed the case to proceed as a nationwide collective action covering applicants aged 40 and over who applied through Workday since September 2020 and were rejected on the basis of AI recommendations.

This example makes an important point: AI lacks emotional intelligence, moral reasoning, and an innate sense of fairness. And as AI moves from human assistant to independent decision-maker, an accountability void opens up. When algorithms run without human checks and balances, they can and do make bad decisions and reinforce existing biases.

The Question Around Black Boxes

Black boxes, AI systems whose internal workings and logic are not fully visible, are increasingly common. Even when a model’s layers can be inspected, developers and users still cannot explain what happens at each layer, which makes the system opaque.

ChatGPT is a good example of a black box: even its creators cannot fully explain how it arrives at a particular output, because it is trained on such vast datasets. Given that lack of transparency, is it ever okay to ‘trust’ an AI model without fully understanding how it works?

In short, no, not blindly: AI hallucinations are, if anything, getting worse. That means in high-stakes scenarios, such as financial decisions, legal advice, and medical insights, AI output demands rigorous validation, cross-referencing, and human oversight.
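
To make the oversight point concrete, here is a minimal sketch, in Python, of what a human-in-the-loop gate could look like. It assumes a hypothetical model output carrying a label and a confidence score; the names, threshold, and routing labels are illustrative, not a prescribed design.

```python
# A minimal sketch (not a production pattern) of a human-in-the-loop gate,
# assuming a hypothetical model output with a label and a confidence score.
from dataclasses import dataclass


@dataclass
class Decision:
    label: str         # e.g. "deny_loan" (hypothetical label)
    confidence: float  # model confidence between 0 and 1


def route(decision: Decision, high_stakes: bool, threshold: float = 0.9) -> str:
    """Escalate high-stakes or low-confidence outputs to a human reviewer."""
    if high_stakes or decision.confidence < threshold:
        return "escalate_to_human"
    return "auto_approve"


# A confident loan denial still goes to a person, because the stakes are high.
print(route(Decision(label="deny_loan", confidence=0.97), high_stakes=True))
```

In this framing, the model recommends and a person remains the decision-maker of record whenever the stakes justify it.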

The lawsuit Disney and Universal filed in June 2025 reinforces this point. The studios allege that GenAI tools were trained on copyrighted materials and used to create new content without consent. The case highlights a new reality: when companies deploy AI models they don’t fully understand, they can still be held responsible for what those models produce. Ignorance is not a defense; it’s a liability.

However, we often put trust in complex systems we don’t understand. For example, most air travel passengers can’t explain the physics of flight, yet people board planes with confidence because we’ve built trust through repeated exposure, collective experience, and a strong track record of safety.

The same logic can apply to AI. It’s unreasonable to expect everyone to understand how LLMs actually work, but trust isn’t built on comprehension alone; it requires familiarity, transparency about limitations, and a proven pattern of reliable performance. Aerospace engineers know what tests to put in place and what failure looks like, and we must demand the same from GenAI providers. The foundational principle for AI adoption should be: trust, but verify.

Furthermore, business leaders often believe AI will be the silver bullet that solves all of their business problems, and this myth trips up many companies during integration. Leaders may gravitate toward complex, sophisticated models, but a cost-benefit analysis will often show that a simpler solution is better suited to the task. AI is a powerful instrument, but it’s not appropriate for every job. Companies need to define the problem before they select the tool.

Rebuilding Trust in AI

While it’s clear that blind trust in AI is a problem, AI systems and algorithms can be the greatest tool a business ever owns when used safely.

For businesses looking to leverage AI tools, the first step is vendor due diligence. Once a business has identified an area that could benefit from AI, leaders should evaluate vendors not only on performance claims but on governance controls. That includes reviewing how models are developed, whether explainability tools are in place, how bias is monitored, and whether audit trails are available. Choosing a vendor with transparent processes is essential to mitigating risk from the start.

Perhaps the most important step in building trust in AI systems is data governance: clean, representative, and well-documented datasets. As the saying goes, garbage in, garbage out. If the data is incomplete, biased, or inaccurate, even the most advanced model will produce unreliable results.

To ensure data is AI-ready, businesses should:

  • Audit existing datasets for gaps and duplication, and check for sources of bias (see the sketch after this list)

  • Standardize data formats

  • Implement data governance policies that define ownership and access controls
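
As a concrete illustration of the audit step, here is a minimal sketch assuming a pandas DataFrame of loan applications with hypothetical age_band, income, and approved columns; the column names, data, and checks are illustrative only, not a prescribed standard.

```python
# A minimal sketch of a pre-AI data audit on a small, hypothetical dataset.
import pandas as pd


def audit_dataset(df, protected_col, label_col):
    """Report gaps, duplication, and one simple bias signal for a dataset."""
    return {
        # Share of missing values per column (gaps).
        "missing_share": df.isna().mean().round(2).to_dict(),
        # Fully duplicated rows that would double-count examples.
        "duplicate_rows": int(df.duplicated().sum()),
        # Positive-outcome rate per protected group: a large spread is a
        # prompt for closer human review, not proof of bias on its own.
        "outcome_rate_by_group": df.groupby(protected_col)[label_col].mean().to_dict(),
    }


applications = pd.DataFrame({
    "age_band": ["under_40", "40_plus", "under_40", "40_plus", "under_40"],
    "income":   [52_000, 61_000, None, 58_000, 52_000],
    "approved": [1, 0, 1, 0, 1],
})
print(audit_dataset(applications, protected_col="age_band", label_col="approved"))
```

A report like this doesn’t settle whether a dataset is fair; it surfaces the questions a human reviewer should ask before any model is trained on it.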

Another key step for business leaders is stress testing under different conditions. A model might perform well in controlled tests, but it is critical to understand its limitations when it faces new or unexpected data. That’s why it’s important to test AI across a variety of situations: with different types of users, different use cases, and data from different time periods.
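
A minimal sketch of what such slice-based stress testing could look like is below. It assumes a model object with a scikit-learn-style predict() method and a labeled evaluation DataFrame with hypothetical segment, quarter, income, and approved columns; the ThresholdModel is a stand-in that only exists to make the example runnable.

```python
# A minimal sketch of slice-based stress testing across segments and periods.
import pandas as pd
from sklearn.metrics import accuracy_score


class ThresholdModel:
    """Stand-in for a trained model: approves when income is above a cutoff."""

    def predict(self, X):
        return (X["income"] > 55_000).astype(int).to_numpy()


def stress_test(model, df, feature_cols, label_col, slice_cols):
    """Score the model separately on each slice (e.g. user segment, quarter)."""
    rows = []
    for col in slice_cols:
        for value, group in df.groupby(col):
            preds = model.predict(group[feature_cols])
            rows.append({
                "slice": f"{col}={value}",
                "n": len(group),
                "accuracy": accuracy_score(group[label_col], preds),
            })
    # Slices with markedly lower accuracy than the overall score are the
    # model's blind spots and warrant human review before any wider rollout.
    return pd.DataFrame(rows).sort_values("accuracy")


eval_df = pd.DataFrame({
    "income":   [70_000, 40_000, 65_000, 30_000, 80_000, 45_000],
    "segment":  ["new", "new", "returning", "returning", "new", "returning"],
    "quarter":  ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "approved": [1, 0, 1, 1, 1, 0],
})
print(stress_test(ThresholdModel(), eval_df, ["income"], "approved",
                  slice_cols=["segment", "quarter"]))
```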

AI validation is also an ongoing task. As data changes over time, even reliable AI models can lose accuracy. That’s why regular monitoring matters. Businesses need to monitor how the model is performing day to day: is it still accurate? Or are the false positives creeping up? And just like any system that needs upkeep, models should be retrained regularly with fresh data to stay relevant.
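
As an illustration of that day-to-day monitoring, the sketch below assumes daily prediction logs in a pandas DataFrame with hypothetical date, prediction, and actual columns, plus an agreed baseline false positive rate; the tolerance and retraining trigger are placeholders a team would tune for its own context.

```python
# A minimal sketch of ongoing monitoring for false positive drift.
import pandas as pd


def monitor_false_positives(log, baseline_fpr, tolerance=0.05):
    """Flag days where the false positive rate drifts above the baseline."""
    log = log.copy()
    log["false_positive"] = (log["prediction"] == 1) & (log["actual"] == 0)
    log["negative"] = log["actual"] == 0
    daily = log.groupby("date").agg(
        false_positives=("false_positive", "sum"),
        negatives=("negative", "sum"),
    )
    daily["fpr"] = daily["false_positives"] / daily["negatives"]
    # A sustained breach of the tolerance is the cue to investigate and retrain.
    daily["retrain_flag"] = daily["fpr"] > baseline_fpr + tolerance
    return daily


log = pd.DataFrame({
    "date": ["2025-06-01"] * 4 + ["2025-06-02"] * 4,
    "prediction": [1, 0, 0, 0, 1, 1, 1, 0],
    "actual":     [1, 0, 0, 0, 1, 0, 0, 0],
})
print(monitor_false_positives(log, baseline_fpr=0.2))
```

A flagged day is a cue for human review and possible retraining, not an automatic verdict that the model has failed.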

AI isn’t inherently trustworthy or untrustworthy; it’s shaped by the data it learns from, the people who build it, and the rules that govern it. As AI evolves from useful tool to business advisor, leaders have the choice not only to use it, but to use it thoughtfully and ethically. If we get this right, AI won’t just be powerful in the future; it will also be responsible, with accountability sitting clearly with its developers and supervisors.

Martin Lewit is Senior Vice President (SVP) at Nisum, a global consulting partner specializing in digital commerce and evolution that builds AI-powered platforms and tailor-made solutions to unlock growth, optimize operations, and create long-term value.

Martin has vast experience in solving complex business challenges with innovative solutions. His interests include developing and training the people who work with him and building connections that create new and exciting opportunities, while providing effective leadership, strategic vision, and a daily focus on fostering an innovative culture under the company's motto, "Building success together".