News

Google Axion and Amazon Graviton CPUs utilize DDR5 memory architectures for AI server workloads

Thursday, December 25, 2025 at 01:00 PM

Memory standards for AI server CPUs vary by vendor: Nvidia uses the LPDDR series, while Amazon Graviton and Google Axion use the DDR series. Google Axion designs in particular feature high-density DDR5 DIMM slot configurations to support data preprocessing and job instruction handling in AI ASIC systems.

Context

Google and Amazon are accelerating the deployment of custom ARM-based CPUs, specifically the Axion and Graviton processors, to manage heavy AI server workloads. While Nvidia GPUs handle primary computation, these custom CPUs perform critical data preprocessing and instruction management, which demands substantial memory capacity. Unlike Nvidia, which frequently pairs its specialized chips with LPDDR memory, Google and Amazon are integrating high-density DDR5 architectures into their server designs. This choice points to growing demand for standard DDR5 DIMMs in the AI supply chain, as evidenced by Axion's high-capacity memory slot configurations.

For investors, this shift underscores significant diversification in the AI infrastructure market. Google Axion reportedly offers up to 60% better energy efficiency than comparable x86 chips, while Amazon Graviton4 provides 50% more cores than its predecessor.

As these hyperscalers scale their internal silicon through 2025, the reliance on DDR5 for AI ASICs keeps memory manufacturers central to the AI boom, even outside the high-bandwidth memory (HBM) used in premium GPUs. This trend solidifies the role of custom ARM silicon as a primary driver of next-generation server memory demand.

Related Companies

Nvidia (NVDA, US)
Google (GOOGL, US)
Amazon (AMZN, US)