Google and Intel Expand Chip Partnership for AI Infrastructure

Google and Intel have announced a multiyear expansion of their cloud infrastructure partnership, committing to continued deployment of Intel Xeon processors across Google Cloud and expanded co-development of custom infrastructure processing units (IPUs) designed for AI workloads.
The agreement, announced April 9, covers two areas. First, Google Cloud will continue using multiple generations of Intel Xeon processors, including the latest Xeon 6 chips that power its C4 and N4 virtual machine instances, for AI inference, training coordination, and general-purpose computing. Second, the two companies will expand their joint development of custom ASIC-based IPUs: programmable accelerators that offload networking, storage, and security functions from host CPUs in data centers.
Intel and Google have worked together on IPUs since 2022, when the first custom IPU — codenamed Mount Evans — launched alongside Google Cloud’s C3 instances. Those IPUs operate at 200 Gbps and handle tasks like virtual networking and storage operations that would otherwise consume CPU resources meant for customer workloads. The next generation of co-developed IPUs has not been detailed, though industry observers expect higher speeds given the networking demands of modern AI compute clusters.
Why CPUs Still Matter for AI
The partnership highlights a shift in how the industry thinks about AI infrastructure. While GPUs and custom accelerators like Google’s TPUs handle the heavy computation of training and running AI models, CPUs remain essential for orchestrating distributed workloads, managing data pipelines, and running the supporting infrastructure that keeps large-scale AI systems operational.
Intel CEO Lip-Bu Tan framed the deal around this reality in the company's press release, arguing that scaling AI requires balanced systems in which CPUs and IPUs work alongside accelerators, rather than systems built on accelerators alone.
Amin Vahdat, Google’s SVP and Chief Technologist for AI Infrastructure, noted that Intel has been a partner for nearly two decades and that the Xeon roadmap gives Google confidence in meeting performance and efficiency demands going forward.
The deal comes at a time of significant CPU supply constraints. Intel is currently wrestling with shortages across its Intel 10 and Intel 7 manufacturing nodes, where the bulk of its Xeon production sits. Lead times for server CPUs have stretched to six months in some cases, and Intel has confirmed price increases as demand outpaces supply. The company is prioritizing data center chips over consumer processors to address the crunch.
The Broader AI Chip Landscape
Intel’s custom ASIC business, which includes the IPU co-development work with Google, has become a significant revenue stream. Intel CFO David Zinsner said during the company’s Q4 2025 earnings call that the custom chip division grew more than 50% in 2025 and exited the fourth quarter at an annualized revenue run rate above $1 billion.
The deal also matters competitively. Google operates its own Arm-based CPU, Axion, for both internal and customer-facing workloads. Amazon builds custom Nitro NICs through its Annapurna Labs division, and Microsoft uses FPGA-based solutions for similar infrastructure offloading. By continuing to co-develop IPUs with Intel rather than building entirely in-house, Google maintains a different approach from its hyperscaler peers — one that keeps Intel in the loop as both a CPU supplier and a custom silicon partner.
For Intel, the partnership provides a high-profile validation of its data center strategy under Tan’s leadership. The company has faced questions about its relevance as cloud providers increasingly design their own chips. Maintaining a deep custom silicon relationship with one of the world’s largest cloud operators signals that Intel’s foundry and design capabilities remain competitive for infrastructure-critical workloads.
No financial terms were disclosed. The AI arms race among cloud providers shows no signs of slowing, and securing reliable CPU and custom chip supply chains is becoming as strategically important as GPU procurement. Whether Intel can scale its manufacturing fast enough to capitalize on this demand — while managing its ongoing supply constraints — remains the open question.