Broadcom Signs Long-Term Google TPU and AI Rack Agreements
Broadcom said it has entered into a long-term agreement with Google to develop and supply custom Tensor Processing Units for future generations of Google’s AI infrastructure. The deal deepens a partnership that has supported one of the most important chip programs in the industry.
The company also disclosed a separate supply assurance agreement covering networking and other components for Google’s next-generation AI racks through 2031. Together, the agreements extend Broadcom’s role across both custom AI processors and the broader hardware stack needed to support large-scale AI systems.
Broadcom, Google, and Anthropic Expand TPU Collaboration
In a related step, Broadcom, Google, and Anthropic expanded their three-way collaboration. Under the arrangement, Anthropic will gain access to about 3.5 gigawatts of next-generation TPU-based AI compute capacity beginning in 2027.
The filing stated that Anthropic’s use of the expanded capacity depends on its continued commercial success. It also said the parties are discussing support for the deployment with operational and financial partners.
Anthropic’s TPU Capacity Access Starting in 2027
Anthropic’s involvement significantly broadens TPU use beyond Google’s own operations. The company, which develops the Claude family of models, had already announced a major arrangement with Google in October 2025 to access up to one million TPUs, a deal described as being worth tens of billions of dollars.
Broadcom CEO Hock Tan later said on a December 2025 earnings call that Anthropic was behind a $21 billion order for Ironwood-based TPU systems. On the company’s March 2026 earnings call, Tan said Broadcom had shipped one gigawatt of TPU capacity to Anthropic and expected that figure to grow to more than three gigawatts in 2027.
The Broadcom and Google TPU Partnership Has Been Building for Over a Decade
These new agreements formalize and extend a relationship that has been developing for more than ten years. Broadcom has co-designed every generation of Google’s TPU since the program began, and those chips have become central to Google’s AI strategy.
They power both the training and serving of Google’s Gemini family of models. That long-running collaboration helps explain why this latest announcement matters: it is not a new experiment, but an expansion of a deeply established infrastructure partnership.
Google’s Ironwood TPU and the Next Generation of AI Infrastructure
Google released its seventh-generation TPU, called Ironwood, in late 2025. It is also preparing an eighth-generation chip for mass production on TSMC’s 3nm process node.
That roadmap shows the TPU program continuing to move forward at scale. And with Broadcom tied into future generations of Google’s AI infrastructure, the company’s position inside that roadmap looks even more entrenched.
Broadcom Pushes to Lead the Hyperscale Custom Silicon Market
The agreements arrive as Broadcom moves aggressively to establish itself as the leading partner for hyperscale custom silicon. The company is clearly aiming to be at the center of the infrastructure buildout behind large AI model deployment.
In its fiscal first quarter ended February 2026, Broadcom’s AI-related revenue rose 106 percent year over year to $8.4 billion. Tan has said the company has “line of sight” to more than $100 billion in AI chip sales by 2027, pointing to planned gigawatt-scale deployments at Google, Anthropic, Meta, and OpenAI.
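As a quick sanity check on the 106 percent growth figure (the calculation below is ours, using only the numbers reported in the article), the implied AI revenue a year earlier works out to roughly $4.1 billion:

```python
# Sanity check: if AI revenue rose 106% year over year to $8.4B,
# the year-earlier base is 8.4 / (1 + 1.06).
current_revenue = 8.4   # fiscal Q1 AI revenue, in billions of dollars
growth_rate = 1.06      # 106% year-over-year growth

prior_revenue = current_revenue / (1 + growth_rate)
print(f"Implied prior-year AI revenue: ${prior_revenue:.1f}B")  # roughly $4.1B
```

In other words, Broadcom's quarterly AI revenue a bit more than doubled from a base of about $4.1 billion a year earlier.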
Why Broadcom’s AI Revenue Growth Matters
The revenue jump gives more weight to Broadcom’s broader AI ambitions. It suggests that the company’s custom silicon and infrastructure strategy is already translating into material business growth, not just future promise.
The structure of the new agreements matters too: they cover not only chip development but also networking, rack-level infrastructure, and long-term supply assurance, which strengthens Broadcom’s position across the full AI hardware stack.
Investor Response Highlights Confidence in Broadcom’s AI Position
Broadcom shares rose in after-hours trading following the announcement. The market reaction pointed to investor confidence in the company’s growing importance at the center of large-scale AI infrastructure.
That response fits the broader picture. Broadcom is not just participating in the AI buildout. It is expanding its role through long-duration agreements, deeper integration with Google’s TPU program, and a larger compute relationship that brings Anthropic further into the fold.
What the Google, Broadcom, and Anthropic Expansion Signals
The expanded arrangements show how custom AI infrastructure is becoming more interconnected across chip design, compute access, and deployment planning. Broadcom’s relationship with Google continues to deepen through future TPU generations and AI rack components, while Anthropic’s access to large-scale TPU capacity points to a broader use case for Google’s infrastructure.
At the same time, the details around Anthropic’s future consumption make clear that this capacity expansion is tied to business performance and ongoing deployment planning. The collaboration is growing, but it is also structured around execution, scale, and commercial demand.