Thought Leaders
AI in the Enterprise: Counting the Full Cost

AI has moved beyond theory and novelty. For many organisations, it now sits alongside core systems as part of the infrastructure. Yet a lot of people still think about it mainly in terms of public LLMs and chatbots – something you can dip in and out of in a browser and discard when you’re done. Viewing AI through this lens misses the wider range of models and techniques that can improve outcomes more efficiently and with less risk.
The truth is AI should be assessed like any other major infrastructure investment – with a clear view of cost, benefit, and operational risk from the start. Bringing it to a usable state means investing in quality data pipelines, observability, governance, and the people who keep it aligned with the desired business outcomes. Cut corners here, and the bill is simply moved to the future with added interest.
How AI Costs Really Scale
Understandably, teams may assume AI costs grow in a straight line: do twice as much work, pay twice as much money. In reality, effort, cost, and outcome can move independently of each other in surprising ways. Ask an LLM to read a long document all at once and it has to consider every word in relation to every other word. In most popular models, that means the work involved, and therefore the cost, doesn’t grow the way people naturally expect – double the input, double the cost – but roughly with the square of the input length.
An awareness of fundamentals like this can have a real impact on the bottom line of any AI deployment. If an organisation that handles large volumes of text every day – a regulator, for example – designs from the outset around graph search or retrieval pipelines instead of sending entire documents to the model, the end-user experience is still “ask a question, get an answer in seconds,” much like a public chatbot. That keeps AI-enthusiastic executives happy with instant interactions, while under the surface the system is doing far less unnecessary work and the compute bill is much lower as a result.
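To put rough numbers on that difference, the sketch below compares the relative compute of sending a whole document with every question against retrieving a handful of relevant chunks. It is a minimal back-of-the-envelope illustration in Python, assuming attention work proportional to the square of the token count; the document size, chunk size, and retrieval depth are hypothetical figures, not vendor pricing.

    def attention_units(tokens: int) -> int:
        """Relative compute for attending over `tokens` tokens (grows with n^2)."""
        return tokens * tokens

    DOCUMENT_TOKENS = 100_000   # one large filing, sent whole with every question
    CHUNK_TOKENS = 1_000        # size of each indexed chunk
    TOP_K = 5                   # chunks retrieved per question

    full_prompt = attention_units(DOCUMENT_TOKENS)
    retrieval = attention_units(CHUNK_TOKENS * TOP_K)

    print(f"full document: {full_prompt:,} units")
    print(f"retrieval:     {retrieval:,} units")
    print(f"ratio:         {full_prompt / retrieval:.0f}x more work per question")

Even with a generous retrieval depth, sending the whole document does several hundred times more work per question under these assumptions, and that gap compounds across every query the organisation handles.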
How AI Spend Gets Spread Across the Organisation
Technology choices are only part of the story; the rest is how organisations approach AI to begin with. In many organisations, data preparation sits with engineering. Compliance reviews sit with legal. Cloud spend lives with platform or infrastructure teams. Model selection, configuration, and any fine-tuning usually sit with a few specialist operators. Each group sees its own slice of the work and its own budget line. The spend shows up as compute here, contractor time there, and people’s time absorbed into “business as usual” across several teams. With the numbers scattered across cost centres, the full landed cost of a single AI initiative may not be visible in any one place and is easy to underestimate. In that environment, AI costs can spiral quietly, simply because no one is tracking the whole number in one place.
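One way to counter that is to tag spend to the initiative it serves, wherever it is booked, and reassemble the total in one place. The sketch below shows the idea in its simplest form; the cost centres and figures are hypothetical, and real numbers would come from finance systems rather than a hard-coded list.

    from collections import defaultdict

    # Each line of spend is tagged with the initiative it serves, even though
    # it is booked in a different cost centre. All figures are illustrative.
    cost_lines = [
        ("doc-triage",  "platform/cloud",   8_200),   # inference compute
        ("doc-triage",  "engineering",     12_500),   # data pipeline work
        ("doc-triage",  "legal/compliance", 3_000),   # review effort
        ("doc-triage",  "operations",       4_800),   # staff time absorbed as BAU
        ("chat-assist", "platform/cloud",   2_100),
    ]

    landed_cost = defaultdict(int)
    for initiative, _cost_centre, monthly_spend in cost_lines:
        landed_cost[initiative] += monthly_spend

    for initiative, total in landed_cost.items():
        print(f"{initiative}: {total:,} per month across all cost centres")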
A Practical Approach to AI Cost Management
Avoiding AI is not the right move for organisations, but neither is treating it as a catch-all technology. A good strategy is always to start with the desired outcome and work backwards. Not every case needs a cutting-edge, expensive-to-run, large general-purpose model. Many tasks can be handled by well-understood machine learning techniques that fall under the AI umbrella and can run on existing infrastructure.
Start small, with pilot projects that measure the total cost of ownership, not just model usage. That means looking at compute, of course, but also integration work, engineering time, change management, and compliance effort. The aim is to choose the smallest and simplest model that delivers an acceptable result, rather than assuming “more model” means “more benefit”.
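In code, that selection rule is almost trivial, which is rather the point: the hard work is producing honest quality scores and total-cost estimates for each candidate. The sketch below assumes those numbers already exist; the model names, scores, and costs are hypothetical.

    # Candidate setups evaluated in the pilot. Quality is the score from a
    # shared evaluation set; TCO folds in compute, integration, engineering,
    # change management, and compliance effort. All numbers are illustrative.
    candidates = [
        ("classical-ml-classifier", 0.82,  3_000),
        ("small-fine-tuned-model",  0.88,  9_000),
        ("large-general-model",     0.91, 40_000),
    ]

    ACCEPTABLE_QUALITY = 0.85   # the bar agreed with the business up front

    # Take the cheapest candidate that clears the bar, not the most capable.
    viable = [c for c in candidates if c[1] >= ACCEPTABLE_QUALITY]
    name, quality, tco = min(viable, key=lambda c: c[2])
    print(f"chosen: {name} (quality {quality:.2f}, TCO {tco:,}/month)")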
AI is not one thing. It is a combination of techniques and tools that can be used in different ways. Seeing it this way breaks down the mystique around impressive results and allows businesses to leverage its power with more responsibility and greater effect.
People, Time, and AI
Every AI deployment is in practice a collaboration between people and software. Whether or not that is acknowledged formally, it is how the work gets done. The current shift toward more agentic AI – tools that can chain steps, call other systems, and act with less prompting – doesn’t change that; if anything, it raises the stakes for getting the human side of the workflow right.
These tools can be easy to over-trust. When a system presents answers fluently and with confidence, it is natural for people to assume it is usually right. If that kind of tool is dropped into a workflow without proper training, clear boundaries, and sensible checks, it can quietly generate a stream of small mistakes. Each one has to be discovered, understood, and fixed by a person. On paper the AI looks efficient; in practice there is a hidden cost in extra human time spent cleaning up after it. In customer-facing or regulated settings, those small errors can also carry a reputational cost. However the tools are delivered or used, accountability for their outputs still sits with the organisation and, in day-to-day terms, with the human operators using them. That needs to be explicitly understood for these tools to be truly useful.
A better pattern is a deliberate partnership: skilled people stay clearly in charge of outcomes, and AI is used to speed up the parts of the work that suit it, such as summarising, drafting, sorting, searching. Even when some checking and correction is still needed, the overall effect of well-introduced and well-managed AI in workflows can be more speed, more consistency, and more capacity than a team could achieve on its own.
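As a minimal illustration of that pattern, the sketch below puts a human approval gate between the model’s draft and anything that actually goes out. The generate_draft function is a hypothetical placeholder for whatever model call an organisation actually uses; the point is the shape of the workflow, not the specifics.

    def generate_draft(request: str) -> str:
        """Placeholder for whatever model call an organisation actually uses."""
        return f"[draft reply to: {request}]"

    def publish(text: str) -> None:
        print(f"SENT: {text}")

    def handle(request: str) -> None:
        draft = generate_draft(request)            # AI does the fast part
        verdict = input(f"Draft:\n{draft}\nApprove? [y/N] ")
        if verdict.strip().lower() == "y":
            publish(draft)                         # a person owns the outcome
        else:
            publish(input("Corrected reply: "))    # the human rewrite wins

    if __name__ == "__main__":
        handle("customer asks about the refund policy")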
Governance as Part of the AI Budget
Even when technical choices are sound and usage is efficient, a growing share of AI spend will be tied up in governance rather than raw compute. For organisations operating in the EU, the AI Act makes that very clear. It takes a risk-based view of AI, and importantly it does not apply only to public-facing products. Internal systems used in areas like hiring and promotion, worker management and monitoring, and certain safety-related decision making can fall into scope, bringing expectations around risk management, documentation, logging, and human oversight with them. Other regions are moving in a similar direction; even if the rules look slightly different, the overall trend is the same: larger organisations are expected to know where AI is used, what it is doing, and how it is controlled.
The practical effect of this is that internal AI projects now come with a governance workload of their own, and it is not optional. Each new use case can mean a new risk or impact assessment, more monitoring, and more questions from compliance, audit, or risk teams. None of that will appear in model usage metrics, but it is real effort that must be paid for.
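A lightweight way to keep that workload visible is an internal register of AI use cases. The sketch below shows one possible minimal record; the fields and risk tiers are assumptions for illustration, not a reading of any specific regulation.

    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str
        purpose: str
        risk_tier: str          # e.g. "minimal", "limited", "high"
        owner: str              # named person accountable for outcomes
        human_oversight: str    # how people check and override the system
        last_assessment: str    # date of the most recent risk review

    register = [
        AIUseCase(
            name="cv-screening",
            purpose="shortlist incoming job applications",
            risk_tier="high",
            owner="head-of-recruitment",
            human_oversight="a recruiter reviews every shortlist",
            last_assessment="2025-01-15",
        ),
    ]

    for use_case in register:
        print(f"{use_case.name}: tier={use_case.risk_tier}, owner={use_case.owner}")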
Again, none of this is a reason to avoid AI. It is a reminder that the running cost of an internal AI-enabled process is not just the price of invoking a model. Governance and regulatory expectations are now part of the total cost of ownership.
Where AI Rollouts Go Wrong
A familiar pattern in AI projects is the gap between how a system looks in a demo and how it behaves at scale in the wild. In a controlled setting, with a narrow set of questions and friendly data, the results can look flawless. It is easy in that moment to assume the system is ready to take on a whole category of work.
The problems tend to appear later, when the system is exposed to the full variety and volume of real use: unusual queries, stressed users, incomplete records, messy edge cases. Cracks that were invisible in the demo start to show up as misdirected answers, missed nuance, support loops, longer handling times, and quiet damage to trust. Internal metrics such as “queries handled” and “time saved” may look good on paper, but the lived experience for end users may tell a different story.
Jumping straight from a polished demo or small pilot to a broad rollout, treating success under controlled conditions as evidence of readiness, can be a costly mistake. In the real world, users bring messy queries, incomplete data, and their own assumptions about what the tool can do. If expectations are not managed, and the workflow around the system is not designed with fallbacks and escalation in mind, the organisation pays twice: once for the build, and again in extra support, rework, complaints, and lost trust. The technology may look impressive on paper, but without a pragmatic approach to how it meets real people and real processes, the return on that investment quickly erodes.
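Designing for fallbacks does not have to be elaborate. The sketch below wraps a model call so that errors and low-confidence answers are routed to a person instead of being forced through; the model_answer function and the confidence threshold are hypothetical stand-ins.

    def model_answer(query: str) -> tuple[str, float]:
        """Stand-in for a real model call; returns (answer, confidence)."""
        return "draft answer", 0.62

    CONFIDENCE_FLOOR = 0.80     # below this, a person takes over

    def escalate(query: str, reason: str) -> str:
        return f"routed to a human agent ({reason}): {query}"

    def answer_with_fallbacks(query: str) -> str:
        try:
            answer, confidence = model_answer(query)
        except Exception:
            return escalate(query, reason="model error")
        if confidence < CONFIDENCE_FLOOR:
            return escalate(query, reason="low confidence")
        return answer

    print(answer_with_fallbacks("complex multi-part complaint"))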
The flip side is that well-designed systems, with clear boundaries and human ownership built in from the start, can do things no human team could manage alone: scanning huge volumes of information in seconds, spotting patterns across years of data, and handling routine decisions at a scale that would otherwise be out of reach. The point is that to earn those benefits, organisations must match their ambition with a realistic view of how the technology will behave once it is out in the world.
Closing Thoughts
None of this is an argument against AI. It’s an argument for treating it with the same seriousness as any other system that can materially change how a business operates.
Used well, AI helps small teams act at a larger scale, uncover patterns that would be hard to spot manually, and make expert judgment go further. But getting there requires a clear view of where AI is used, what it costs in total, and how it is governed. That means making deliberate choices about models and architectures, investing in data and observability, and designing processes where people stay in the loop.
“Move fast and break things” was a slogan written for human teams working on human-scale systems: if something broke, you rolled it back, patched it, and moved on. Once AI is woven into decisions about customers, employees, or citizens, the same attitude can produce problems that spread faster, hit harder, and are much more difficult to unwind. Speed still matters, and AI can certainly help here, but it has to be matched by a clear view of risk, cost and accountability.
There is no way to remove cost or risk entirely. But there is a clear difference between organisations that rely on ad-hoc experiments and those that build AI into their operations in a measured way, with line of sight from spend to success. Across the variety of problems and outcomes organisations face, there is no single AI solution that can solve them all. Effective use of AI in enterprise should always be specialised, supervised and carefully scoped.