
OpenAI and Oracle Scrap Stargate Expansion in Texas

OpenAI and Oracle have abandoned plans to expand their flagship Stargate data center campus in Abilene, Texas, after prolonged negotiations complicated by financing challenges and what Bloomberg described as OpenAI’s “often-changing demand forecasting.” The collapse marks a significant setback for the most high-profile AI infrastructure project in the United States, though OpenAI is already pivoting toward Nvidia’s next-generation Vera Rubin chips at new locations.

The canceled component is a planned 600-megawatt expansion that OpenAI and Oracle announced in September 2025 as part of a broader buildout with SoftBank. The existing Abilene campus — planned to support up to 450,000 Nvidia GB200 Blackwell GPUs distributed across eight buildings — remains operational. Construction has progressed significantly, with Crusoe topping out the final building in November 2025 and the full campus expected to complete by mid-2026. However, reliability problems have strained the partnership: earlier this year, multiple buildings went offline for days when winter weather knocked out portions of the facility’s liquid cooling infrastructure.

The halt does not affect other Stargate sites. Projects in Shackelford County, Texas; Doña Ana County, New Mexico; Milam County, Texas; Lordstown, Ohio; and Wisconsin are all still proceeding on schedule. Stargate’s total planned capacity stands at nearly 7 gigawatts across all locations, representing over $400 billion in investment over three years. Oracle, SoftBank, and OpenAI reviewed more than 300 proposals from over 30 states before selecting the current portfolio of sites, and additional locations are expected as the project works toward its 10-gigawatt target.

Vera Rubin Chips Replace Blackwell at New Sites

Rather than expand Blackwell capacity at Abilene, OpenAI is redirecting its next phase of compute buildout toward Nvidia’s Vera Rubin platform. The two companies signed a letter of intent on September 22, 2025, to deploy at least 10 gigawatts of Nvidia systems, with Nvidia investing up to $100 billion in OpenAI progressively as each gigawatt comes online.

The first gigawatt of Vera Rubin capacity is targeted for the second half of 2026. The Rubin platform features Vera Rubin Superchips with sixth-generation NVLink interconnects delivering up to 260 terabytes per second of bandwidth per NVL72 rack — a substantial upgrade over the Blackwell architecture currently deployed at Abilene. Multiple cloud providers — including AWS, Google Cloud, Microsoft Azure, and Oracle Cloud — are among the first to deploy Rubin-based instances this year.

“Everything starts with compute,” OpenAI CEO Sam Altman said when the Nvidia partnership was announced. “Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale.”

OpenAI’s pivot toward Vera Rubin at new locations reflects a practical calculation: building where power and financing align is faster than negotiating expansions at an existing site with unresolved infrastructure questions. The move also positions OpenAI to run its next-generation models on more advanced hardware from day one.

Power and Financing Strain AI’s Buildout

The Abilene setback underscores a broader constraint facing the AI industry: data center ambitions are colliding with the realities of power delivery, financing, and construction timelines. Texas lawmakers have raised concerns that large data centers are driving up load forecasts faster than utilities can bring new generation and transmission online. The Abilene campus alone is designed for 1.2 gigawatts of total power capacity, placing it among the single largest loads on the Texas grid.

OpenAI’s Stargate project envisions consuming 10 gigawatts at full scale — enough to power roughly 7.5 million homes. The company has been actively diversifying its compute supply chain, partnering with Cerebras for 750 megawatts of low-latency AI compute capacity through 2028, and working with Microsoft, SoftBank, and CoreWeave alongside Oracle.
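For a rough sense of scale, the homes-powered figure can be sanity-checked with simple arithmetic. The sketch below assumes an average US household draws about 1.2 kilowatts continuously (roughly 10,500 kWh per year, a commonly cited approximation); the article's 7.5-million-home figure implies a slightly higher per-home assumption of about 1.3 kW.

```python
# Back-of-the-envelope check: how many average homes does 10 GW cover?
stargate_watts = 10e9      # Stargate's full-scale target: 10 gigawatts
avg_home_watts = 1.2e3     # assumed average continuous household load (~1.2 kW)

homes_millions = stargate_watts / avg_home_watts / 1e6
print(f"{homes_millions:.1f} million homes")  # → 8.3 million homes
```

The result lands in the same ballpark as the article's estimate; the exact figure depends entirely on the per-household load one assumes.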

The challenge of securing sufficient power and capital for AI infrastructure is not unique to OpenAI. Microsoft, Google, and Meta are all racing to lock down energy contracts for their own data center expansions. Meta is negotiating to acquire portions of Crusoe’s Abilene capacity, with Nvidia facilitating the discussions, suggesting the site’s existing infrastructure may find new tenants even as OpenAI’s own expansion there stalls.

Several questions remain unanswered. OpenAI has not disclosed specific locations for its Vera Rubin deployments beyond confirming they will be at sites with existing power capacity. Whether Oracle will continue developing Abilene independently or seek other anchor tenants is an open question. And whether the 600-megawatt expansion is permanently dead or merely deferred depends on whether Abilene’s power and financing infrastructure can catch up to the scale of ambition that was built on top of it.

Alex McFarland is an AI journalist and writer exploring the latest developments in artificial intelligence. He has collaborated with numerous AI startups and publications worldwide.