Thought Leaders
Behind the Numbers: How AI Became Finance’s Most Profitable Employee

In the media, banking and finance more broadly are often portrayed as people in sharp suits making business decisions from the top floors of skyscrapers, or as gifted traders who can read the market’s state from scraps of data. Because this is one of the most powerful images of finance, many discussions of new technologies in the field focus on how they will change this front-desk work.
AI is no exception, and a large part of the debate around its adoption in finance centres on whether agents will replace traders or whether they can allocate capital more effectively than advisors. However, the most profitable application of AI has turned out to be far from this glamorous image. In practice, artificial intelligence is bringing in more money from what might be called the “boring” side of finance: the day-to-day operations.
Where AI Actually Creates Value
The main benefit of AI is that it can handle routine tasks far more cheaply and several times faster than humans can, and in doing so it generates profit through improved operational efficiency.
For example, with the help of AI tools, Citigroup cut document review time prior to account opening from over an hour to just 15 minutes. Faster decisions please customers and may even make them more loyal, but those saved 45 minutes also translate into hundreds of thousands of dollars in cost savings for the bank, because the tools free up hours upon hours of human labour for higher-value work.
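The arithmetic behind that claim is simple. As a back-of-envelope sketch, here is how saved minutes become an annual dollar figure; every number below is a hypothetical assumption chosen for illustration, not a figure reported by Citigroup:

```python
# Illustrative back-of-envelope estimate. All inputs are assumed values,
# not real Citigroup numbers.
MINUTES_SAVED_PER_REVIEW = 45
REVIEWS_PER_YEAR = 50_000        # hypothetical review volume for a large bank
LOADED_HOURLY_COST = 60.0        # hypothetical fully loaded analyst cost, USD

hours_saved = MINUTES_SAVED_PER_REVIEW / 60 * REVIEWS_PER_YEAR
annual_savings = hours_saved * LOADED_HOURLY_COST

print(f"Analyst hours freed per year: {hours_saved:,.0f}")
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

Even with modest assumptions, the savings land in the millions of dollars before counting the value of the work those freed-up hours go into instead.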
AI helps optimize the vast layer of financial bureaucracy and internal frameworks on which companies rely. That is why the most valuable use cases often turn out to be far from the most spectacular ones. Autonomous traders or a chatbot that suggests the best deals to a client may sound impressive, but automated KYC procedures and due diligence checks are likely to bring much greater economic value to a bank or a financial company.
That said, just as with Citigroup’s document review process, nothing prevents these improvements from benefiting customers as well. Users may appreciate a personal AI assistant in an app, but they would appreciate it even more if loan decisions were cut from days to minutes, or if their transactions stopped being incorrectly flagged as fraud because false-positive rates had dropped by dozens of percentage points.
How Did AI Become the Most Profitable “Employee”?
Usually, when a bank’s customer base grows, its staff must grow almost proportionally. Reviewing an ever-larger volume of transactions and client documents with the same team size used to be impossible. Various technological solutions helped to some extent, but business growth still inevitably meant headcount growth, and the more employees a company has, the more managers it needs and the more expensive the entire structure becomes to supervise.
Now that AI has emerged, this problem is beginning to disappear: fewer employees can effectively serve a growing number of clients with the help of AI tools. Some companies are already exploiting this logic. Klarna, for example, has claimed that one AI assistant does the work of 700 people. Whatever such tools cost to deploy, it is unlikely to come close to the payroll of several hundred employees.
However, to actually capture this value, a company must integrate AI properly into its workflows, beyond mere experiments. In finance, many projects still stall at the pilot stage, which obviously cannot generate much value. While one company debates whether to adopt new tools or how to scale AI agents, its competitors are not standing still; they are building their own AI capabilities.
Lagging behind in this race leads to significant financial losses: companies that fail to shift operations onto AI rails early could lose up to 9% of their profits. Closing such a gap later will not be easy, which is why finance companies need a solid AI strategy now.
How to Govern AI Decisions
Here comes the biggest challenge: embedding AI agents in finance operations inevitably means delegating some decision-making authority to them. In finance, where AI has become a kind of bottomless source of free “junior employees” by optimizing basic back-office operations, this poses a significant risk, because mistakes in exactly this type of work are often the most expensive.
Generally, regulators keep financial organizations from taking undue risks and write rules to minimize possible harm. Yet when it comes to AI, the industry is moving much faster than its supervision: only a quarter of authorities collect data on AI use from regulated entities. That is clearly not enough to keep up with the growing number of companies adding agents to their operations.
As a result, financial companies have to regulate AI-driven tools themselves, which is understandable given that any mistake here can lead to multimillion-dollar losses. In modern banks, for example, agents are given limited permissions, much like real employees: an AI that works with client documents clearly does not need the right to change a client’s risk rating. The agent is assigned a strict operational role and is not allowed to exceed it.
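In software terms, this is a least-privilege permission check in front of every agent action. A minimal sketch, assuming hypothetical role and action names (real systems would map these to actual back-office operations and enforce them at the infrastructure level, not in application code alone):

```python
# Least-privilege sketch: each agent role gets an explicit allow-list.
# Role and action names here are hypothetical examples.
ROLE_PERMISSIONS = {
    "document_reviewer": {"read_documents", "flag_for_review"},
    "fraud_screener": {"read_transactions", "flag_transaction"},
}

def perform(role: str, action: str) -> str:
    """Execute an action only if the agent's role explicitly allows it."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    if action not in allowed:
        # Anything not on the allow-list is denied by default.
        raise PermissionError(f"role '{role}' may not perform '{action}'")
    return f"{role} performed {action}"
```

Under this scheme a document-review agent can read documents, but an attempt to call `perform("document_reviewer", "change_risk_rating")` fails immediately, mirroring the rule that the agent must not exceed its assigned role.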
Another necessary mechanism is keeping detailed records of all AI actions, so that if an error occurs, every step the agent took can be traced. In areas such as KYC and fraud detection, questions about a client may arise months later, so banks absolutely need to retain a complete record of the AI assistant’s reasoning.
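The core of such a record is an append-only audit trail: one entry per agent action, capturing who acted, on what inputs, and with what decision. A minimal in-memory sketch (all identifiers are hypothetical; a production system would write to durable, tamper-evident storage rather than a Python list):

```python
# Append-only audit trail sketch. Agent and action names are hypothetical.
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_action(agent_id: str, action: str, inputs: dict, decision: str) -> dict:
    """Append one record of what the agent did, on what data, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        # Canonical JSON snapshot of the inputs, so the case can be
        # reconstructed months later exactly as the agent saw it.
        "inputs": json.dumps(inputs, sort_keys=True),
        "decision": decision,
    }
    AUDIT_LOG.append(entry)
    return entry

record_action("kyc-agent-1", "verify_identity", {"doc": "passport"}, "approved")
```

When a question about a client surfaces later, the bank can replay the log entry by entry instead of guessing at what the assistant saw and decided.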
AI behavior can also be tested in a sandbox. The Bank of England, for example, has begun simulating AI trading sessions to understand how agents would interact with one another and with the real market. Such testing helps pinpoint exactly where an agent makes mistakes so the problem can be fixed before it goes live.
Ultimately, it is worth remembering that any AI decision must be confirmed by a human, who remains responsible for it. In the event of losses, no one will accept “because the model decided so” as an answer; a senior manager still has to approve the AI’s actions and take responsibility for them.
From “Banks-vs-Fintech” to “Fast-vs-Slow”
AI regulation also shapes competition in the financial market. Customers may be pleased when a document is processed 30 minutes faster, but they will certainly not be happy if an AI bot damages their credit history or costs them money. To avoid such problems, they are more likely to trust their money to companies that explain their AI strategy transparently and honestly, and which, of course, have fewer issues managing it.
Fintech companies have an obvious advantage here, simply because they are not weighed down by legacy systems. A modern fintech can build its services around AI from the start and automate its processes immediately; building something new is often far easier than integrating AI agents into organizations that still rely on fax machines and decades-old COBOL systems. It is no wonder that almost half of fintech companies have already reached an advanced stage of AI adoption, compared with less than a third of traditional financial institutions.
Banks are not doomed to extinction. They have, after all, survived the Great Depression, the crises of the 1970s, the Great Recession, and more; they know how to adapt. Over their long histories they have accumulated huge amounts of customer data, capital, and reputation. To put these advantages to meaningful use, however, they must integrate AI fully across their processes, since simply bolting it onto a side product will not help much.