News
Nvidia H100 infrastructure noted for its use of older HBM2e and HBM3 memory standards
Saturday, March 28, 2026 at 09:10 AM
The tweet discusses older GPU generations, specifically the H100 and its use of HBM2e and HBM3 memory compared with the newer H200's HBM3e, and notes the age of these components in the context of current infrastructure.
Context
As of March 28, 2026, market attention is shifting back to the legacy hardware powering established AI clusters. While Nvidia has moved into the Blackwell and Rubin eras, the H100 remains a foundational asset. Recent analysis highlights that the H100 SXM uses HBM3 memory, while the PCIe variant and older infrastructure still rely on HBM2e. This distinguishes the H100 from the H200, which was the first Nvidia data-center GPU to adopt the significantly faster HBM3e standard, aimed at memory-bandwidth-bound workloads.
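For reference, a minimal Python sketch tabulating the configurations discussed above, using the public spec-sheet figures for each variant (the HBM_SPECS mapping and this comparison are illustrative constructs, not drawn from any Nvidia tooling):

    # Memory configurations of the Hopper-era variants discussed above.
    # Capacity and bandwidth figures are from public spec sheets; the
    # dictionary itself is an illustrative construct, not an Nvidia API.
    HBM_SPECS = {
        "H100 PCIe": {"memory": "HBM2e", "capacity_gb": 80,  "bandwidth_tb_s": 2.0},
        "H100 SXM":  {"memory": "HBM3",  "capacity_gb": 80,  "bandwidth_tb_s": 3.35},
        "H200":      {"memory": "HBM3e", "capacity_gb": 141, "bandwidth_tb_s": 4.8},
    }

    for gpu, spec in HBM_SPECS.items():
        print(f"{gpu}: {spec['memory']}, {spec['capacity_gb']} GB, "
              f"{spec['bandwidth_tb_s']} TB/s")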
This technical distinction is critical for infrastructure efficiency. The H100 SXM delivers approximately 3.35 TB/s of memory bandwidth (the HBM2e-based PCIe variant roughly 2 TB/s), compared with 4.8 TB/s in the H200. For investors, the continued use of HBM2e/HBM3 in H100 systems reflects the long tail of the Hopper architecture's lifecycle and the supply chain's transition toward HBM4, which SK Hynix and Samsung are expected to bring into mass production later this year.
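One way to see why those bandwidth figures matter for memory-bandwidth-bound workloads is a back-of-the-envelope roofline estimate: batch-1 LLM decode must stream every model weight from HBM once per token, so tokens per second is capped by bandwidth divided by model bytes. A minimal Python sketch, assuming a hypothetical 70B-parameter dense model at FP8 (the function, model size, and precision are illustrative assumptions, not taken from the cited sources):

    # Upper-bound tokens/s for batch-1 decode of a dense model, which is
    # typically memory-bound: every parameter is read from HBM per token.
    def decode_tokens_per_sec(bandwidth_tb_s: float,
                              params_billions: float,
                              bytes_per_param: float = 1.0) -> float:
        bytes_per_token = params_billions * 1e9 * bytes_per_param  # weights streamed per token
        return bandwidth_tb_s * 1e12 / bytes_per_token

    # Hypothetical 70B-parameter model at FP8 (1 byte per parameter):
    print(decode_tokens_per_sec(3.35, 70))  # H100 SXM ceiling: ~47.9 tok/s
    print(decode_tokens_per_sec(4.8, 70))   # H200 ceiling:     ~68.6 tok/s

Under those assumptions, the H200's HBM3e raises the decode ceiling by roughly 43 percent, which simply tracks the 4.8 / 3.35 bandwidth ratio.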
Sources (10)
H100 GPU - NVIDIA
HBM chip war intensifies as SK Hynix hunts for Samsung talent - KED Global
Hopper (microarchitecture) - Wikipedia
Scaling the Memory Wall: The Rise and Roadmap of HBM
H100 NVL vs. SXM5: NVIDIA's Supercomputing GPUs - Vast.ai
Comparing Blackwell vs Hopper | B200 & B100 vs H200 & H100 | Exxact Blog
Comparing NVIDIA H100 vs A100 GPUs for AI Workloads | OpenMetal IaaS
NVIDIA GPUs H200 vs. H100 - A detailed comparison guide | TRG Datacenters
Related Companies
Nvidia
NVDA