News
NVIDIA highlights complementary roles of HBM and LPDDR in hardware performance
Sunday, February 1, 2026 at 12:05 AM
Jensen Huang emphasized the dual memory requirements of NVIDIA's hardware architecture: HBM is essential for high-performance computing, while LPDDR serves low-power memory applications.
Context
NVIDIA CEO Jensen Huang recently underscored the complementary roles of High Bandwidth Memory (HBM) and LPDDR in AI hardware performance. While HBM provides the extreme bandwidth required for GPU-intensive AI training, NVIDIA uses LPDDR5X on its processors to supply high-capacity, low-power memory for broader system tasks. This hybrid architecture is central to the latest superchips, which achieve a unified memory footprint of 1.5 TB by pairing HBM3E with LPDDR5X.
This strategy lets NVIDIA balance the high cost of HBM against the efficiency of LPDDR5X, which offers 1 TB/s of bandwidth at a significantly lower power envelope. Looking ahead, the next-generation architecture is scheduled for full production in H2 2026, incorporating HBM4 to reach 22 TB/s of bandwidth and 288 GB of capacity per GPU. By aggressively securing supply through prepayments to SK Hynix and Micron, NVIDIA is reinforcing its market dominance while tightening global availability for the smartphone and PC sectors.
Related Companies
NVIDIA
NVDA