
A conceptual rendering of OpenAI and Broadcom’s 10GW AI accelerator infrastructure, designed for global AI deployment and next-generation compute efficiency. Image Source: ChatGPT-5
OpenAI and Broadcom Partner on 10-Gigawatt AI Accelerator Infrastructure
Key Takeaways: OpenAI–Broadcom AI Accelerator Collaboration
10 gigawatts of custom AI accelerators to be designed by OpenAI and deployed with Broadcom.
Broadcom to provide Ethernet, PCIe, and optical connectivity for scale-up and scale-out systems.
Deployment to begin in late 2026 and complete by end of 2029.
Collaboration reinforces OpenAI’s hardware independence and Broadcom’s leadership in AI networking.
The partnership supports OpenAI’s mission to ensure AGI benefits all of humanity.
OpenAI and Broadcom: Strategic Partnership to Build 10GW AI Accelerator Infrastructure
OpenAI and Broadcom have announced a sweeping multi-year collaboration to design, manufacture, and deploy 10 gigawatts (GW) of OpenAI-designed AI accelerators — specialized chips engineered to power the next wave of artificial intelligence systems.
The initiative marks a major expansion in AI infrastructure capacity, positioning OpenAI alongside companies such as Nvidia, Google, and AMD that design their own AI silicon. It reflects both companies’ commitment to building scalable, energy-efficient compute systems that can support the rapidly growing demands of advanced AI models. Deployment of the new racks is scheduled to begin in the second half of 2026, with full rollout expected by the end of 2029.
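To put the 10 GW figure in context, here is a rough back-of-envelope sketch. The per-accelerator power budget below is an illustrative assumption; neither company has disclosed per-chip specifications.

```python
# Rough scale estimate for 10 GW of accelerator capacity.
# WATTS_PER_ACCELERATOR is assumed for illustration only; the
# announcement does not disclose per-chip power draw.

TOTAL_CAPACITY_WATTS = 10e9      # 10 gigawatts, per the announcement
WATTS_PER_ACCELERATOR = 1_500    # assumed: chip plus cooling and networking overhead

accelerators = TOTAL_CAPACITY_WATTS / WATTS_PER_ACCELERATOR
print(f"Roughly {accelerators / 1e6:.1f} million accelerators")  # ~6.7 million
```

Even under generous power assumptions, the figure implies millions of devices, which helps explain the multi-year rollout window.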
Inside the 10GW System Design Powering OpenAI’s AI Clusters
Under the agreement, OpenAI will design custom accelerators and systems informed by its experience developing frontier AI models and products. Broadcom will co-develop the hardware and deploy racks of custom AI chips and accelerator systems, equipped with Ethernet-based networking to meet the surging demand for compute power.
The two companies have signed a term sheet to jointly roll out racks integrating OpenAI’s chip and accelerator designs with Broadcom’s connectivity technologies across OpenAI data centers and partner facilities worldwide.
“Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, co-founder and CEO of OpenAI. “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.”
By the Numbers: AI Scale and Timeline
10 GW total planned compute capacity
2026–2029 rollout window
800 million+ weekly active users on OpenAI platforms
4 senior executives (Altman, Brockman, Tan, Kawwas) leading the collaboration
Global deployment across OpenAI facilities and partner data centers
Custom Accelerators and Ethernet Scale: Broadcom’s Role
For Broadcom, the collaboration highlights the growing importance of custom accelerators and Ethernet networking in the expanding landscape of AI data centers. The company will leverage its end-to-end portfolio of Ethernet, PCIe, and optical connectivity to power the racks.
“OpenAI has been at the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI,” said Hock Tan, President and CEO of Broadcom.
Charlie Kawwas, Ph.D., President of Broadcom’s Semiconductor Solutions Group, added that the collaboration sets “new industry benchmarks for the design and deployment of open, scalable, and power-efficient AI clusters.”
Accelerating Toward AGI Infrastructure
OpenAI says that designing its own hardware allows it to integrate lessons from developing advanced AI systems directly into its chips — improving performance, efficiency, and model alignment across its platforms. The company plans to use these accelerators to support its 800 million weekly active users and expanding enterprise adoption across industries.
“By building our own chip, we can embed what we’ve learned from creating frontier models and products directly into the hardware, unlocking new levels of capability and intelligence,” said Greg Brockman, co-founder and President of OpenAI.
To learn more about this partnership, you can listen to OpenAI’s new podcast where Sam Altman and Greg Brockman sit down with Broadcom’s Hock Tan and Charlie Kawwas.
Q&A: OpenAI and Broadcom AI Accelerator Partnership
Q1: What is the goal of the OpenAI–Broadcom partnership?
A: To co-develop and deploy 10 GW of OpenAI-designed AI accelerators and Ethernet-based network systems supporting next-generation AI clusters.
Q2: When will the deployment begin and finish?
A: Broadcom will begin deploying racks in the second half of 2026, with full completion targeted by the end of 2029.
Q3: Why is OpenAI designing its own accelerators?
A: OpenAI aims to embed its frontier model learnings directly into hardware, boosting efficiency, scalability, and performance across AI systems.
Q4: How does Broadcom contribute to the project?
A: Broadcom provides Ethernet, PCIe, and optical connectivity solutions, enabling scalable, power-efficient AI infrastructure.
Q5: How does this collaboration align with OpenAI’s mission?
A: It supports OpenAI’s mission to ensure that artificial general intelligence (AGI) benefits all of humanity by expanding the capacity to train and deploy advanced models.
What This Means: A New Model for Scalable, Sustainable AI Systems
The OpenAI–Broadcom collaboration represents a decisive moment in the evolution of global AI infrastructure. By moving from reliance on third-party hardware to custom, co-designed systems, OpenAI is asserting greater control over the performance, cost, and availability of the compute resources that power its frontier models. This signals a shift toward vertical integration — where leading AI companies increasingly design both the intelligence and the machines that run it.
For Broadcom, the partnership reinforces its position at the center of the emerging AI hardware supply chain, validating Ethernet as a scalable backbone for large language model training and inference. It also suggests that future data centers may rely less on proprietary interconnects and more on open, standards-based architectures.
At a broader level, this project underscores the mounting demand for energy-efficient compute. Building 10 gigawatts of accelerator capacity will require innovations not only in chip design but also in power management, networking, and sustainability — areas now critical to the AI industry’s long-term viability.
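As a minimal sketch of what that demand means in energy terms, assuming the full 10 GW runs at an average utilization of 80 percent (an assumption for illustration, not a disclosed figure):

```python
# Annual energy implied by 10 GW of capacity.
# UTILIZATION is an assumed average load factor, not a disclosed number.

CAPACITY_GW = 10
HOURS_PER_YEAR = 8_760
UTILIZATION = 0.8

annual_twh = CAPACITY_GW * HOURS_PER_YEAR * UTILIZATION / 1_000
print(f"~{annual_twh:.0f} TWh per year")  # ~70 TWh
```

That is on the order of the annual electricity consumption of a mid-size European country, which underscores why power efficiency sits at the center of the design brief.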
If successful, the collaboration could redefine how infrastructure for artificial general intelligence (AGI) is conceived — integrating AI-optimized hardware directly into the foundation of digital economies. What began as a chip design initiative may ultimately shape how intelligence itself is scaled and distributed across the global network.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant used for research and drafting. The final perspective and editorial choices, however, are solely Alicia Shapiro’s.