A Strategy Years in the Making

For a long time, Google's Tensor Processing Units — TPUs — were the secret sauce you never got to taste. Built in 2015 purely for internal use, these chips powered Google's own workloads; outsiders could only rent access through Google Cloud, which began offering TPUs in 2018. You could lease compute time, sure, but owning the hardware? Not a chance.

That's changing now. On its Q1 2026 earnings call, CEO Sundar Pichai announced that Google will sell TPU chips directly to select customers — a genuine shift in how the company thinks about its silicon business. It's a big deal, and honestly, it's been telegraphed for a while if you were paying attention.

Reports surfaced back in late 2024 that Google was in serious talks with Meta about direct TPU sales, with Meta reportedly eyeing purchases worth billions starting in 2027. Then in December 2025, Broadcom announced it would sell Google's TPU Ironwood rack systems to Anthropic. By April 2026, Broadcom had formalized expanded long-term supply agreements with both Google and Anthropic. So the pieces were already falling into place — the Q1 earnings call just made it official.

What Google's 8th-Generation TPUs Actually Bring

The timing of the announcement wasn't random. It landed just one week after Google unveiled its eighth-generation TPUs at Google Cloud Next 2026 in Las Vegas. And the specs are genuinely impressive.

Google introduced two distinct chips for two distinct jobs:

  • TPU 8t — built for training, offering up to 3x faster AI model training and 2.8x better price-to-performance than its predecessor
  • TPU 8i — built for inference, delivering an 80% improvement in inference performance
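To put those multipliers in concrete terms, here's a quick illustrative sketch. Only the 3x, 2.8x, and 80% figures come from Google's announcement; the baseline workload numbers (hours, costs, queries per second) are made up for the example.

```python
# Illustrative arithmetic on the cited generation-over-generation gains.
# Only the 3x, 2.8x, and 80% multipliers are from the announcement;
# every baseline figure below is hypothetical.

baseline_training_hours = 100.0   # hypothetical job on the prior-gen TPU
baseline_cost_per_run = 10_000.0  # hypothetical dollars per training run
baseline_qps = 1_000.0            # hypothetical inference queries/sec

# TPU 8t: up to 3x faster training, 2.8x better price-to-performance
tpu8t_hours = baseline_training_hours / 3    # ~33.3 hours instead of 100
tpu8t_cost = baseline_cost_per_run / 2.8     # ~$3,571 instead of $10,000

# TPU 8i: an 80% improvement in inference performance
tpu8i_qps = baseline_qps * 1.8               # 1,800 qps instead of 1,000

print(f"8t: {tpu8t_hours:.1f}h per run, ${tpu8t_cost:,.0f} per run")
print(f"8i: {tpu8i_qps:,.0f} queries/sec")
```

The point of the split shows up in the math: the training chip is optimized to drive down cost-per-run, while the inference chip is optimized to drive up throughput at a given cost.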

Splitting training and inference into separate chips is a deliberate architectural choice, and it signals that Google is thinking seriously about competing at data center scale — not just offering a general-purpose accelerator and calling it a day.

Going Head-to-Head With Nvidia (Sort Of)

Here's where it gets interesting. Google selling TPUs externally puts it in more direct competition with Nvidia in the AI chip market — but the strategy isn't exactly "our chip beats their chip." It's more nuanced than that.

Google's approach, as Forbes noted, centers on beating Nvidia not chip-for-chip but on integrated cost at the data center level. And the company isn't even walking away from Nvidia entirely — it continues to offer Nvidia chips through its cloud platform and has committed to making Nvidia's upcoming Vera Rubin chip available later this year.

The scale projections, though, tell a story. TPU shipments are expected to hit 4.3 million units in 2026, scaling to more than 35 million by 2028. Morgan Stanley estimated that every 500,000 TPUs sold externally could generate roughly $13 billion in additional revenue for Google. That's not a side hustle — that's a serious business line.
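It's worth sanity-checking what those two figures imply together. The sketch below scales the Morgan Stanley per-500,000-unit estimate linearly across the projected shipment counts. Treat the results as a rough upper bound: the shipment projections cover all TPUs, not just externally sold ones, and only external sales count toward the estimate.

```python
# Back-of-envelope check on the figures cited above. Both inputs come
# from the article; the linear scaling (and the assumption that every
# shipped unit is an external sale) is illustrative only.

REVENUE_PER_BATCH = 13e9   # Morgan Stanley: ~$13B per 500,000 external TPUs
BATCH_SIZE = 500_000

def implied_revenue(units_shipped: float) -> float:
    """Linearly scale the per-batch revenue estimate to a shipment count."""
    return units_shipped / BATCH_SIZE * REVENUE_PER_BATCH

# 2026 projection: 4.3 million units
print(f"2026: ${implied_revenue(4.3e6) / 1e9:.1f}B")  # → $111.8B
# 2028 projection: 35+ million units
print(f"2028: ${implied_revenue(35e6) / 1e9:.1f}B")   # → $910.0B
```

Even if only a fraction of those shipments end up as external sales, the implied revenue is in the tens of billions, which is why "serious business line" is not an overstatement.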

The Bigger Picture: Google's AI Infrastructure Bet

The TPU announcement didn't happen in isolation. Alongside it, Alphabet revealed plans to invest up to $40 billion in Anthropic — the largest outside bet the company has made — with the deal including substantial TPU compute commitments in addition to cash. That's not just a financial investment; it's a validation loop. Anthropic using Google's TPUs at scale is exactly the kind of proof-of-concept that makes other customers take the hardware seriously.

Google also expanded its chip supply chain in meaningful ways. It partnered with Marvell Technology for future chip designs alongside its existing Broadcom relationship, and deepened its collaboration with Intel on AI data center processors. The message is pretty clear: Google isn't betting everything on one manufacturing partner.

DA Davidson analyst Gil Luria put it starkly — Google's chip business "could ultimately be worth more than Google Cloud." And Google Cloud itself is no small thing, with analysts expecting roughly 47% revenue growth in the quarter.