News
Alphabet co-designs TPUs across hardware and software stacks for optimized AI infrastructure
Sunday, February 8, 2026 at 03:24 PM
Alphabet is applying cross-stack co-design to its Tensor Processing Units (TPUs), folding in requirements from Google DeepMind and internal services including Search, YouTube, and Google Cloud to optimize the hardware for specific AI and data center workloads.
Context
Alphabet is accelerating its custom silicon strategy by co-designing its sixth-generation Tensor Processing Unit, known as Trillium, across its entire hardware and software stack. By integrating feedback from Google DeepMind and core services like Search and YouTube, the company achieved a 4.7x increase in peak compute performance per chip over its predecessor. This vertical integration lets Alphabet tune the hardware for its AI workloads; Trillium also improved energy efficiency by 67% and reached general availability in early 2025.
To support this scaling, Alphabet forecast capital expenditures of $175 billion to $185 billion for 2026, nearly doubling its $91.4 billion investment in 2025. The aggressive spending marks Alphabet's shift toward becoming an AI infrastructure provider. By partnering with Broadcom on custom ASIC development, the company is building a more self-sufficient supply chain that reduces its reliance on third-party GPUs while scaling its Gemini models at a lower total cost.
Related Companies
Google
GOOGL