Connect with us

Thought Leaders

AI Agents and Market Dynamics: Risk, Opportunity, and Strategy

mm

2026 will be a year of testing AI agents for resilience: the market has grown from $7 billion to nearly $10 billion, regulators are launching standardization, venture capital funds and corporations are either scaling up or cutting back on resources. Euphoria has given way to pragmatism: analysts warn that GenAI is now in a phase of disappointment, and it is important to answer the question of where exactly agents create measurable value, as well as at what cost and how to safely integrate them into critical processes.

What is an AI agent in practice?

In the media, an agent is defined as almost anything that can invoke tools, but for the market and regulators, a more down-to-earth definition is important.

An AI agent is a system that not only responds to user requests, but also independently plans a chain of actions and invokes external services within the framework of specified policies and restrictions. Unlike the co-pilots we are used to, which help people with specific tasks, such as writing a letter or summarizing a document, an agent takes over the entire work process.

In fintech, agents analyze the client’s portfolio and collect market data. In the operations unit, the agent can request missing KYC documents, check statuses in external registries, and prepare a draft onboarding decision.

How the market distorted the value of AI agents

The information boom surrounding the introduction of AI agents has been powerful: companies are incorporating this functionality into separate products, creating new business units, and actively promoting a new wave of autonomy for corporate clients. A significant part of future AI budgets in fintech is already being reallocated in favor of agent solutions.

The capital market has interpreted this in its own way: public companies are rushing to demonstrate their strategy so as not to appear behind the times; startups are repositioning themselves en masse from ML products to agent platforms; investors are risking overpaying for any revenue growth that can be attributed to agents, even if it is actually related to traditional automation.

As a result, agents are credited as a source of value where real returns are still generated by well-established processes, data, and control.

Where agents are already showing measurable results

Today, only a small number of players use an agentic approach in production, with most still in the experimental stage. The first tangible ROI can be seen in the same areas where artificial intelligence took off earlier – high-volume, formalizable workflows with clear pre- and post-cycle times and costs, repetitive customer requests and meeting preparation, operational anti-fraud and monitoring of suspicious activities, where agents are integrated into existing warning and investigation systems.

As an example, a European bank has implemented AI agents for the initial processing of correspondent accounts. The agents automatically sort documents, extract data for KYC, and check for missing information. As a result, data collection time has been reduced by 99%, costs by 94%, and the accuracy of analysts’ work has increased.

The real asset is the infrastructure, not the agent itself

Investors should ask questions about how the data architecture is structured under agents, whether there is a single layer of access rights and auditing for all agent actions, and how privacy and sensitive data storage issues are addressed when using external models.

After all, the most important asset is the workflow in which the agent is embedded: KYC, onboarding, anti-fraud, liquidity management, and customer communications. Companies that manage these processes through market share, depth of integration, or regulatory status benefit from agents more than others: they can increase margins and reduce losses without losing control.

A startup that sells a conditionally universal agent but does not own any critical processes or domains finds itself in the least advantageous position: it can be relatively easily replaced by another framework.

We see the real value of an agent in its access to reliable, clean, and legally secure data and in its integration with existing systems.

Without control, there is no scaling

Regulators in various countries already require AI systems to be transparent, controllable, and verifiable. Therefore, a company’s ability to control and document the work of agents is already a prerequisite for operating in the market.

This leads to the next logical step: companies need a comprehensive control infrastructure. This includes logging all agent actions, constant monitoring, alerts for deviations, and stress tests.

A successful example is Sumsub, which has deployed the AI co-pilot “Summy” for compliance and fraud investigation specialists. Unlike black boxes, the system does not make autonomous decisions, but analyzes transaction arrays and generates audit-ready reports upon request in natural language, reducing incident processing time by three times while maintaining full human control.

Suppliers who embed such an add-on into their agent platforms and solutions gain not only a technological advantage, but also a regulatory one: they reduce the time and cost of approvals and simplify due diligence and auditing.

What should an investor check besides the product?

Investors often underestimate risks because they rarely manifest themselves immediately. More often than not, it is a gradual, almost imperceptible system failure that accumulates over time and leads to serious consequences.

If a company does not set strict limits and does not implement a monitoring process, the problem is only noticed when it is pointed out by regulators or customers.

Moreover, prompt injection, data poisoning, and circumvention of access policies become a real threat, as attackers can exploit all of these. In fintech, such attacks directly affect anti-fraud, KYC, and payment operations.

One example of such a risk: a financial employee of a multinational corporation transferred $25 million to fraudsters’ accounts after participating in a video conference where attackers used real-time generative AI to clone the faces and voices of the CFO and several colleagues.

This and many other similar examples show that traditional video or voice verification methods no longer provide reliable protection in a corporate environment.

For investors, this means looking not only at the product itself, but also at who it depends on. Who supplies the technology? Can it be quickly replaced? Is there a plan in case of failures or changes in license terms?

It’s a time for a mature approach

Right now, what’s important for the market’s growth isn’t revolutionary marketing, but three simple things: knowing how to work with real processes, normal control, and being honest about risks.

Investors should ask what the company really has under control. Startups need to honestly decide whether they want to be multifunctional or deeply knowledgeable in one specific area. And corporations need to remember that agents do not replace existing systems, but rather reinforce them. But this only works where there is order in processes and management.

Alexander Rugaev is a serial entrepreneur and venture capital expert with over 20 years of experience in technology, public markets, and startup development. He has founded and scaled multiple companies in AI, robotics, and blockchain, bridging early-stage innovation with institutional and public investors worldwide.