
Key Takeaways:
- OpenAI expands Stargate past 5 GW in partnership with Oracle, pushing toward its $500 billion U.S. AI infrastructure vision.
- xAI, led by Elon Musk, aims to deploy 50 million H100-scale chips within five years, roughly 500× the compute of today's top systems.
- Despite early challenges, Stargate is expected to create over 100,000 U.S. jobs and transform power and data networks nationwide.
OpenAI Expands Stargate With Major 5 GW Power Leap
OpenAI expands Stargate once again, this time through a major new partnership with Oracle that adds a massive 4.5 gigawatts (GW) of power.
That’s enough energy to support over two million AI chips, giving OpenAI the infrastructure to scale its AI models even further.
This move pushes Stargate’s total pipeline past 5 GW, moving closer to the company’s long-term goal of reaching 10 GW of compute capacity across the U.S.
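As a rough sanity check on those figures, here is a back-of-envelope sketch derived purely from the numbers in this article (the per-chip figure is an all-in estimate including cooling and overhead, not an official chip spec):

```python
# Back-of-envelope check: power budget per AI chip if 4.5 GW
# supports "over two million AI chips," as reported.
NEW_CAPACITY_GW = 4.5          # Oracle partnership addition
CHIPS_SUPPORTED = 2_000_000    # "over two million AI chips"

watts_per_chip = NEW_CAPACITY_GW * 1e9 / CHIPS_SUPPORTED
print(f"~{watts_per_chip / 1000:.2f} kW per chip, all-in")  # ~2.25 kW

# How far the expanded pipeline gets toward the stated 10 GW goal:
TOTAL_PIPELINE_GW = 5.0        # "past 5 GW"
GOAL_GW = 10.0                 # long-term U.S. compute goal
print(f"Pipeline is {TOTAL_PIPELINE_GW / GOAL_GW:.0%} of the 10 GW goal")
```

At roughly 2.25 kW per chip, the arithmetic is consistent with datacenter-scale AI accelerators once cooling and networking overhead are included.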
The expansion also signals that OpenAI isn't just talking about big plans; it's actively building them.
In a recent post on X (formerly Twitter), CEO Sam Altman shared photos from the Abilene, Texas site, calling the project “gigantic.”
He added that over one million GPUs will be online by the end of 2025. He then made a half-joking challenge to his team: “100x that.”
It hasn’t always been easy sailing, though. A report from The Wall Street Journal revealed that tensions between key partners, especially OpenAI and SoftBank, have caused delays.
So instead of launching multiple sites this year, the team is now focusing on getting at least one data center up and running by year’s end.
OpenAI Expands Stargate to Build America’s AI Future
Even with some bumps along the road, OpenAI is expanding Stargate as a strategic push to reshape America’s tech infrastructure.
The company is moving quickly to lock down new data center sites in energy-friendly states like Texas, Michigan, Georgia, Wisconsin, and Ohio.
These locations were chosen for low electricity costs, space, and easier regulations.
This isn’t just about AI models anymore; it’s about real-world economic impact.
The Stargate project is expected to create over 100,000 new jobs in construction, operations, and energy.
For Oracle, it’s an opportunity to help power the future of AI through its rapidly growing cloud network.
Their Abilene campus alone will soon draw up to 2 GW of power.
Moreover, OpenAI isn’t placing all its bets on just one provider.
It’s also working with CoreWeave and others to make sure compute needs are met, no matter how fast things scale.
In short, Stargate is turning AI into a national infrastructure priority, and it’s doing so at a speed rarely seen in the tech or energy sectors.
Meanwhile, Musk’s xAI Aims for 50 Million AI Chips
Not to be outdone, Elon Musk recently announced an ambitious roadmap for his AI startup, xAI.
In a post on X, Musk said the company plans to deploy 50 million H100-equivalent AI units over the next five years.
That’s a massive leap even by Musk’s standards.
To give some perspective, just one year ago the world’s most powerful AI supercomputer had a fraction of that compute.
According to estimates from tech followers on X, Musk’s vision would represent 500 times more computing power than what was top-of-the-line in 2024.
xAI’s upcoming Colossus 2 supercomputer already aims to use 550,000 Nvidia GB200 chips, which is the equivalent of 5.5 million H100s.
So if Musk’s full plan comes to life, xAI would scale its compute capacity to roughly nine times Colossus 2’s planned total in just a few years.
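The scaling math behind that claim can be sketched directly from the article’s figures (the 10-to-1 H100-equivalence ratio for the GB200 is inferred from the numbers above, not an Nvidia specification):

```python
# Rough compute-scaling math using only the figures in this article.
GB200_CHIPS_COLOSSUS_2 = 550_000
H100_PER_GB200 = 10            # 550k GB200 ~= 5.5M H100s, per the article

colossus2_h100_equiv = GB200_CHIPS_COLOSSUS_2 * H100_PER_GB200  # 5,500,000
TARGET_H100_EQUIV = 50_000_000  # Musk's stated five-year goal

scale_factor = TARGET_H100_EQUIV / colossus2_h100_equiv
print(f"Target is ~{scale_factor:.1f}x Colossus 2's planned compute")
```

That works out to just over 9×, which the article rounds to an order-of-magnitude jump.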
Of course, such a bold plan requires massive funding. Musk is reportedly working on a $12 billion debt package to help make it all possible.
OpenAI Expands Stargate, But xAI Isn’t Far Behind
So what does all of this mean? It means we’re witnessing the beginning of a new kind of arms race, not in weapons but in AI infrastructure.
OpenAI is expanding Stargate by building out the energy and compute backbone to power its models.
At the same time, Musk’s xAI is laser-focused on raw chip count and efficiency.
The two companies are taking different routes:
- OpenAI is scaling up power, infrastructure, and nationwide data centers.
- xAI is targeting ultra-dense compute clusters and chip efficiency.
If both succeed, the result could fundamentally change how artificial intelligence is built, deployed, and used across fields such as robotics, finance, and healthcare.


