Thought Leaders

The Struggle for AI Ownership – Why Data Centers Matter More Than Ever

A few years ago, data centers seemed like something purely technical and invisible – infrastructure hidden deep in the backend, rarely discussed outside professional circles. But the explosive growth of AI has completely changed that picture. Today, data centers have become the new “oil well” of the digital economy: a strategic asset around which billions in investment, government policies, and corporate strategies are being built.

Recent news confirms this. Anthropic announced the construction of its own data centers in the U.S. at a cost of $50 billion, a figure comparable to the budgets of major energy megaprojects. Almost simultaneously, xAI and Nvidia revealed a joint project in Saudi Arabia, one of the largest data centers in the region.

Why has the topic of data centers become so global? Why are major players moving away from pure cloud models and investing tens of billions into their own capacity? And how does this shift influence AI architecture, energy systems, geopolitics, and the rise of alternative models, from Arctic installations to space-based data centers?

This is what the column below explores.

The global surge of interest in owning data centers

When computing resource consumption is measured in millions of dollars per year, renting cloud servers is indeed more cost-effective: businesses don’t need to build and maintain buildings, pay for electricity and cooling, purchase equipment, or regularly upgrade it. But when expenses reach tens of billions of dollars, the logic shifts.

At that point, it becomes more cost-effective to build your own data centers, hire engineers, purchase equipment, and optimize the infrastructure for your specific needs. The company stops overpaying for cloud providers’ margins and gains much greater control over the cost and efficiency of computing.

This is why the trend of building private data centers is most relevant for giants like OpenAI or Anthropic, companies whose needs are so large that the cloud is no longer economically justified.

At the same time, it’s important to understand that the concept of a “data center” is multi-layered. For some companies, it is primarily a data storage facility: disks, databases, and user information. For others, it is also a computational hub: servers running models like GPT, Claude, or LLaMA, simultaneously storing data and performing complex operations. Essentially, a data center today is a massive technological “warehouse” housing thousands of specialized computers.

And the higher the demand for AI capacity, the more strategic and debated this “warehouse” becomes, which is why data centers are now discussed not just by engineers, but also by investors, policymakers, and top executives.

What matters more in building AI data centers: speed or quality?

In reality, neither construction speed nor the formal “quality” of a data center is the primary driver. Large companies invest in their own infrastructure to reduce costs and gain maximum control over computing.

The quality of the models themselves concerns top-level players far less than one might think. The reason is simple: the quality gap between market leaders is minimal. It’s very much like the automotive industry: Volkswagen, Toyota, Honda – all different, but none can pull far enough ahead to monopolize the market. Each maintains its stable share.

The AI market follows a similar logic. Advanced users already use multiple models simultaneously: one for programming, another for text generation, a third for analytics or search. Corporate clients do the same. For example, services like Grammarly don’t have their own model at all. They purchase tokens from multiple providers (Anthropic, OpenAI, Meta). When a request comes in, the system automatically selects the provider that is currently cheaper, faster, or more accurate. If the text is in English – it goes to GPT; if in Hindi – to Claude; if LLaMA currently has the lowest rates – it goes there. This is essentially a stock-exchange-style load distribution model.

In conversations with corporate clients of Keymakr, I increasingly see the same trend: large companies have long abandoned the “one model – one provider” approach. They build multi-model pipelines where requests are routed between different AI systems depending on cost, latency, or language specificity. However, this architecture places significantly higher demands on data: its cleanliness, annotation, validation, and consistency. In this sense, data infrastructure becomes as strategic as the data centers themselves: without high-quality input, a multi-model system simply doesn’t work.
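The routing logic described above can be sketched in a few lines. This is a minimal illustration, not any provider’s actual API: the provider names, per-token prices, latencies, and language sets below are all hypothetical assumptions chosen only to show how a cost-and-latency-aware router might pick a model.

```python
# A minimal sketch of multi-model request routing.
# All provider names, prices, latencies, and language sets are
# hypothetical assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_1k_tokens: float  # hypothetical USD rate
    latency_ms: int             # hypothetical median latency
    languages: set              # languages this deployment handles well

PROVIDERS = [
    Provider("gpt",    0.0100, 300, {"en"}),
    Provider("claude", 0.0080, 350, {"en", "hi"}),
    Provider("llama",  0.0040, 500, {"en", "hi", "es"}),
]

def route(language: str, max_latency_ms: int) -> Provider:
    """Pick the cheapest provider that supports the request's language
    and fits within the latency budget."""
    candidates = [
        p for p in PROVIDERS
        if language in p.languages and p.latency_ms <= max_latency_ms
    ]
    if not candidates:
        raise ValueError(f"no provider for language={language!r}")
    return min(candidates, key=lambda p: p.price_per_1k_tokens)

# A Hindi request with a relaxed latency budget goes to the cheapest
# Hindi-capable provider; tightening the budget changes the answer.
print(route("hi", max_latency_ms=600).name)  # llama
print(route("hi", max_latency_ms=400).name)  # claude
```

In a production pipeline the price and latency fields would be refreshed continuously from live provider metrics, which is what makes the “stock-exchange” analogy apt.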

Ultimately, in this architecture, model quality becomes just one of many parameters. The key is maintaining speed, scalability, and the ability to handle massive computing loads. And this is precisely what gives private data centers their strategic value: they allow companies to control cost, throughput, and stability, while having little impact on the final model quality.

In other words, today, companies build data centers not for speed or perfect quality, but for economics and control.

The geography of data

By “control,” I mean the geography of data. If a company works with government agencies, the law often prohibits data from leaving the country. Governmental and quasi-military applications actively use AI in intelligence, defense IT units, and municipal services. But it’s impossible to give these systems access to a model if the data center is located in a region with uncertain jurisdiction or low trust. That’s why governments require computing capacity to be physically located within the country.

Large companies understand this perfectly. If they want to participate in government tenders, sign contracts, or process sensitive data, they need infrastructure in specific regions and the ability to guarantee compliance with security standards. This geographic constraint also significantly impacts another critical factor in building and operating data centers – energy.

AI data centers consume enormous amounts of electricity, both to run servers and to cool them. Cooling often costs more than the computation itself. This creates strict limitations. In some regions, data centers are limited to drawing a certain amount of power from the grid; in others, heat emissions to the environment are strictly regulated. Exceeding limits results in fines and costly engineering upgrades.

Moreover, electricity is purchased chiefly from state-owned energy companies, which have their own tariff structures. You cannot simply “buy as much as you want.” For example, up to a certain threshold, the price is one rate; above it, another. If a data center draws more power than allowed during peak periods, it automatically incurs fines. For this reason, large companies often find it more economical to build their own data centers near their own power plants.
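The tiered-tariff economics described above can be made concrete with a toy calculation. Every number here – rates, the consumption threshold, the peak allowance, the fine – is a made-up assumption for illustration; real industrial tariffs vary by country and contract.

```python
# Toy model of a tiered industrial electricity tariff with a
# peak-demand penalty. All rates and thresholds are hypothetical.

def monthly_cost(kwh_used: float, peak_draw_mw: float) -> float:
    BASE_RATE = 0.06        # $/kWh up to the contracted threshold (assumed)
    SURCHARGE_RATE = 0.11   # $/kWh above the threshold (assumed)
    THRESHOLD_KWH = 10_000_000
    ALLOWED_PEAK_MW = 50    # contracted peak draw (assumed)
    FINE_PER_MW = 25_000    # flat fine per MW over the allowance (assumed)

    base = min(kwh_used, THRESHOLD_KWH) * BASE_RATE
    surcharge = max(kwh_used - THRESHOLD_KWH, 0) * SURCHARGE_RATE
    fine = max(peak_draw_mw - ALLOWED_PEAK_MW, 0) * FINE_PER_MW
    return base + surcharge + fine

# 14 GWh consumed in a month, peaking at 55 MW:
# 10M kWh at the base rate, 4M kWh at the surcharge rate,
# plus a fine for 5 MW over the peak allowance.
print(monthly_cost(14_000_000, 55))
```

Even in this toy version, the surcharge tier and peak fine dominate the marginal cost of growth, which is the arithmetic pushing large operators toward dedicated generation.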

This naturally leads to the idea of developing private power generation, such as solar farms, gas-fired plants, or small hydroelectric stations. But all of these solutions have limitations. Gas and coal plants produce emissions. Hydropower alters river ecosystems. Nuclear energy is the cleanest in terms of emissions, but in practice nuclear plants are built only by governments or state-backed utilities.

And it is precisely at this point that new concepts begin to emerge…

Alternative solutions

The most apparent option is relocating data centers to regions with naturally cold climates, such as northern Canada, the northern territories of Scandinavia, or remote areas of the Arctic. There, nature itself solves the cooling problem, drastically reducing operating costs.

The next step is “underwater data centers.” Computing takes place underwater, with the cold marine environment providing natural cooling. But this approach also has downsides. Environmentalists have already raised concerns. For example, near southern Iceland, where the Gulf Stream passes, some have suggested that the large-scale deployment of underwater data centers could affect local climate processes, potentially even altering ocean current behavior. Initial observations of such deviations have already been recorded.

There are also more futuristic options. Recently, I discussed the concept of space-based data centers with colleagues. The idea of launching computing infrastructure into orbit has existed for a long time, but the underlying technology has now brought it to the brink of practical feasibility.

Why does space seem attractive? It promises to ease two major constraints: cooling and electricity. Strictly speaking, the vacuum of space offers no air or water to carry heat away; instead, heat is radiated into the cold of deep space through large panels, a passive process that requires no chillers or water supply. Electricity is also no problem: massive solar arrays can be deployed, much like space telescopes unfold their mirrors. In orbit there is no dust, no weather, and no shading, and in a suitable orbit the panels receive near-continuous sunlight with virtually no maintenance required.

Communication with Earth is a separate engineering challenge, but it is entirely solvable. One approach is to use satellite systems like Starlink, but with much wider channels. Radio links can, in principle, handle these volumes, and optical links (light-based channels with enormous bandwidth) can be used if needed. Engineers will definitely find a solution here.

Overall, space infrastructure is more of a future branch of development, but discussing it no longer seems like science fiction, especially as demand for computing is growing far faster than new capacity on Earth.

It’s worth noting the most recent news: Google announced its Suncatcher project, aimed at creating AI orbital data centers. According to the plan, satellites equipped with TPU chips will be powered by solar energy and transmit data through optical channels. Google claims that this solution could provide up to eight times greater energy production efficiency than terrestrial systems. The first satellite prototypes are scheduled for launch as early as 2027.

The impact of regulations

When it comes to regulations affecting data centers, their environmental impact, and whether legal frameworks could actually “push” this market into space or underwater, the question remains open.

Each country acts differently, implementing regulations in accordance with its long-term plans. It’s no secret that Europe, for example, has stricter rules, which slow down AI development. The U.S., by contrast, takes a more pragmatic approach: laws are usually written to allow innovation and growth to continue. A strong tech lobby, anchored in California and backed by companies like Nvidia, Apple, Microsoft, and Meta, makes a total ban on AI unlikely. That means technology will continue to move forward.

We live in an era where “thinking outside the box” is cultivated both in the West and Asia, and the examples of Elon Musk and Steve Jobs continue to inspire ambitious projects. So, perhaps computing in space is the next logical step after all.

Michael Abramov is the founder & CEO of Introspector, bringing more than 15 years of software engineering and computer vision AI systems experience to building enterprise-grade labelling tools.

Michael began his career as a software engineer and R&D manager, building scalable data systems and managing cross-functional engineering teams. Until 2025, he served as the CEO of Keymakr, a data labelling service company, where he pioneered human-in-the-loop workflows, advanced QA systems, and bespoke tooling to support large-scale computer vision and autonomy data needs.

He holds a B.Sc. in Computer Science and a background in engineering and creative arts, bringing a multidisciplinary lens to solving hard problems. Michael lives at the intersection of technology innovation, strategic product leadership, and real-world impact, driving forward the next frontier of autonomous systems and intelligent automation.