News
Google and Broadcom have developed over ten generations of TPU AI accelerators
Saturday, February 28, 2026 at 12:41 AM
Google has collaborated with Broadcom to develop more than ten generations of TPU chips, highlighting the extensive history and commitment required for AI ASIC development.
Context
Google and Broadcom have built a decade-long partnership, developing more than ten generations of custom TPU AI accelerators. What began as early internal experiments has grown into a mature hardware ecosystem that powers Google’s most advanced models, including Gemini 3. By co-designing these specialized ASICs, the companies have created a proprietary alternative to general-purpose GPUs, achieving a depth of architectural integration that competitors struggle to replicate.
For investors, this relationship is a primary driver of Broadcom's growth, with AI-related revenue projected to jump from $20 billion in fiscal 2025 to as much as $46 billion in 2026. The latest TPU, v7 (codenamed Ironwood), is built on a 3nm manufacturing process and is reported to be 67% more energy-efficient for inference than standard merchant silicon. These efficiencies let Google scale its infrastructure at a unit cost significantly below what operators relying solely on third-party hardware can achieve.
The partnership is now entering a new commercial phase as Broadcom begins supplying these TPU designs to external partners such as Anthropic. With an eighth generation already in development for late 2026 and a total semiconductor backlog exceeding $73 billion, Broadcom has positioned itself as the essential architect of hyperscale AI infrastructure.
Related Companies
Broadcom (AVGO)
Google (GOOGL)