OpenAI and NVIDIA moved to deepen their relationship with a letter of intent to deploy at least 10 gigawatts of NVIDIA systems for OpenAI’s next wave of AI infrastructure. The plan, announced today, aims to support the training and deployment of the company’s next-generation models and accelerate its path to what it calls superintelligence.
The agreement signals a significant increase in spending on computing capacity. It points to a multi-year buildout spanning data centers, power, networking, and supply chains. The companies did not disclose timelines, locations, or financing details.
What the Companies Said
In a joint statement, the companies said: "OpenAI and NVIDIA today announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of NVIDIA systems for OpenAI's next-generation AI infrastructure to train and run its next generation of models on the path to deploying superintelligence."
The statement frames the effort as a step toward far larger systems. A letter of intent is not a final contract, but it sets the direction for negotiations and technical planning.
Why 10 Gigawatts Matters
Ten gigawatts equals 10,000 megawatts of capacity, roughly the output of ten large nuclear reactors, each of which typically generates about 1 gigawatt. For comparison, many hyperscale data centers draw 50 to 100 megawatts.
At that scale, the plan implies a network of many facilities or multi-phase campuses. Power availability, grid interconnection, and cooling will be central challenges. Sourcing transformers, switchgear, and fiber at this level often takes years.
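A back-of-the-envelope calculation makes the scale concrete. The per-facility draw below is an illustrative midpoint of the 50 to 100 megawatt range cited above, not a figure from the announcement:

```python
# Rough scale of a 10 GW buildout.
# Assumed values are illustrative, not from the announcement.

total_gw = 10
total_mw = total_gw * 1_000             # 10 GW = 10,000 MW

per_site_mw = 75                        # assumed midpoint of the 50-100 MW hyperscale range
implied_sites = total_mw / per_site_mw  # equivalent number of typical hyperscale facilities

hours_per_year = 8_760
annual_twh = total_gw * hours_per_year / 1_000  # energy if run at full load all year

print(f"Implied facilities at {per_site_mw} MW each: ~{implied_sites:.0f}")  # ~133
print(f"Annual energy at full load: ~{annual_twh:.1f} TWh")                  # ~87.6 TWh
```

Even under these simple assumptions, the plan implies on the order of a hundred conventional hyperscale sites, or a smaller number of far larger campuses, consuming tens of terawatt-hours per year.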
Context: The Race for AI Compute
NVIDIA is the dominant supplier of AI accelerators used to train and run large models. Its recent platforms, including systems built around H100, H200, and the Blackwell architecture announced in 2024, drive most state-of-the-art training. OpenAI’s models require tightly networked clusters with high-bandwidth interconnects and advanced software stacks.
OpenAI has previously said that progress in AI tracks the amount of high-quality compute it can apply. The company has also spoken about safety research and the need for governance as systems grow more capable. This plan suggests a sustained push to scale capacity and model size.
Potential Impact on Industry and Markets
A 10-gigawatt deployment would affect chip supply, server manufacturing, and data center construction worldwide. It could amplify demand for GPUs, memory, and power-efficient cooling systems. It may also pressure networking vendors to deliver more high-speed links at lower latency.
Cloud providers, colocation firms, and utility partners may compete to host portions of the buildout. Competitors in the AI sector could face longer lead times for similar hardware if supply tightens. Startups relying on rented compute might see higher prices or longer waits.
Energy, Siting, and Environmental Considerations
Power is the gating factor for large AI clusters. Securing 10 gigawatts will likely require a mix of grid power, renewable contracts, and long-term build-transfer deals with utilities. Regions with ample transmission capacity and friendly permitting will have an edge.
Cooling loads at this size often push operators toward liquid cooling and heat reuse where possible. Communities may seek firm plans on water use, noise, and local hiring. Regulators will look at emissions, reliability, and grid impacts.
- Power procurement and grid interconnects can take 24 to 60 months.
- Transformer and switchgear lead times remain tight across many markets.
- Liquid cooling adoption is rising as rack densities increase.
What It Means for AI Progress
More compute can reduce training time and enable larger context windows, higher-quality multi-modal systems, and faster iteration. It can also support broader deployment with lower latency. But bigger models raise safety, bias, and reliability questions that require careful testing and monitoring.
Experts note that software efficiency still matters. Compiler advances, sparsity, caching, and better data pipelines can stretch each watt and dollar. Even with 10 gigawatts, returns depend on algorithmic gains and careful system design.
Open Questions and Next Steps
Key unknowns include the delivery schedule, the share of capacity hosted by cloud partners, and how much will be built in new sites. Financing terms and the mix of on-premises versus leased facilities will shape costs. The parties did not name specific products or generations for the systems.
Analysts will watch for follow-on contracts, construction permits, utility filings, and supplier guidance. Early signs could appear in GPU order backlogs, networking bookings, and data center real estate transactions.
The announcement marks an aggressive push to scale AI infrastructure. If executed, the plan could reset supply dynamics across chips, power, and data center construction. The next milestones to watch are binding agreements, site selections, and utility deals that turn the letter of intent into steel in the ground and systems online.