Meta and Broadcom Extend MTIA Chip Partnership

Meta is deepening its investment in custom AI silicon by extending its partnership with Broadcom through 2029. The expanded agreement focuses on the design and manufacturing of Meta Training and Inference Accelerators, known as MTIA chips, which are intended to support future large-scale compute deployments.

Under the agreement, Meta has committed to an initial deployment of 1 gigawatt of MTIA capacity. The company also plans to scale that figure to multiple gigawatts of Broadcom-based accelerators as its AI infrastructure expands. Broadcom said these chips will be the first AI silicon built on a 2-nanometer process.

Broadcom’s Role in Meta’s AI Infrastructure

Chip Design, Packaging, and Networking

Broadcom’s involvement goes beyond manufacturing. Meta is working with the company on chip design, advanced packaging, and networking for MTIA, giving Broadcom a wide-ranging role across Meta’s AI compute infrastructure.

This positions Broadcom as a key partner in the technical foundation behind Meta’s custom accelerator roadmap, not just a supplier of silicon.

MTIA Roadmap Remains Active

The expanded deal also serves as a direct response to questions about how much Broadcom stands to gain from Meta’s custom chip efforts. Broadcom emphasized that Meta’s MTIA roadmap remains active and on schedule.

During Broadcom’s March earnings call, CEO Hock Tan said that Meta’s custom accelerator plan is moving forward. He stated that shipments are already underway and that the next generation of XPUs will scale to multiple gigawatts in 2027 and beyond.

Why Meta Is Investing in Custom AI Chips

Large cloud providers are increasingly looking for ways to reduce dependence on expensive and hard-to-secure GPUs from Nvidia and AMD. One approach is to build custom ASICs tailored to specific AI workloads.

These chips do not offer the same flexibility as general-purpose GPUs, but they can provide lower costs and better efficiency when handling a defined set of AI tasks. That tradeoff is becoming more attractive as AI infrastructure demands continue to grow.

How Meta’s Approach Differs From Other Hyperscalers

Google began the first major hyperscaler ASIC effort with Tensor Processing Units in 2015, and Amazon followed with its own custom chips in 2018. Meta’s approach stands apart because it uses MTIA silicon entirely for internal workloads rather than exposing those accelerators through a cloud platform.

That makes MTIA a more focused internal infrastructure strategy, built to support Meta’s own AI systems rather than external customers.

MTIA Development and Data Center Expansion

Meta introduced MTIA in 2023 and unveiled four new versions in March, showing a rapid pace of iteration between chip generations. The Broadcom agreement comes after Meta also secured multi-gigawatt GPU deals with AMD and Nvidia, along with a new custom-chip partnership with Arm Holdings.

Meta plans to deploy this mix of GPUs and accelerators across 31 data centers, including 27 located in the US. That combination suggests the company is building a broad AI hardware stack rather than relying on a single compute path.

Meta’s Larger AI Capital Plan

The custom silicon strategy is part of a much bigger AI spending push. In January, Meta said it could spend up to $135 billion on AI this year as it works to keep pace with Google, Amazon, Anthropic, and OpenAI.

Broadcom is also tied to other large-scale AI infrastructure efforts. The company has a separate long-term agreement with Google to develop and supply future TPUs, and beginning in 2027, Anthropic is set to access about 3.5 gigawatts of that TPU capacity.

Board Change Alongside the Expanded Deal

Technical and financial commitments are unfolding alongside changes at the board level. According to a securities filing, Hock Tan told Meta last week that he will not stand for reelection to Meta’s board, which he joined in 2024.