Rambus introduces HBM4E memory controller achieving 4.1 TB/s bandwidth for AI accelerators

Thursday, March 5, 2026 at 12:33 AM

Rambus has unveiled a new HBM4E memory controller IP designed to handle data transfer speeds of up to 4.1 TB/s per memory device. The technology delivers a 60% performance increase over standard HBM4 and targets high-performance AI accelerators and data center infrastructure that require massive memory bandwidth.

Context

Rambus has announced the industry's first HBM4E memory controller IP, a critical infrastructure component designed to manage next-generation high-bandwidth memory for AI accelerators and GPUs. The controller achieves a bandwidth of 4.1 TB/s per memory device, a 60% increase over standard HBM4 solutions. By supporting speeds of up to 16 Gbps per pin, it addresses the massive data throughput requirements of generative AI training and high-performance computing (HPC) workloads.

The release reinforces Rambus's market leadership in the silicon IP sector, building on a portfolio of over 100 HBM design wins. As the semiconductor industry shifts toward the HBM4 and HBM4E standards to overcome memory bottlenecks, the controller gives chip designers a low-latency path to integrating faster memory stacks. The solution is expected to be a key enabler for AI hardware arriving in the 2026-2027 timeframe.
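The headline figures can be cross-checked with simple arithmetic. The sketch below assumes the 2048-bit per-stack interface defined in the JEDEC HBM4 standard and a 10 Gbps per-pin baseline for standard HBM4; neither number appears in the article itself, so they are assumptions for illustration.

```python
# Sketch of the bandwidth arithmetic behind the headline figures.
# Assumptions (not stated in the article): a 2048-bit HBM4 stack
# interface per JEDEC, and 10 Gbps/pin for the standard-HBM4 baseline.

INTERFACE_BITS = 2048  # data pins per HBM4/HBM4E stack (JEDEC HBM4)

def stack_bandwidth_tbs(gbps_per_pin: float) -> float:
    """Aggregate bandwidth of one stack in TB/s (8 bits/byte, 1000 GB/TB)."""
    return gbps_per_pin * INTERFACE_BITS / 8 / 1000

hbm4e = stack_bandwidth_tbs(16)   # 16 Gbps/pin, as quoted for this controller
hbm4 = stack_bandwidth_tbs(10)    # assumed standard-HBM4 baseline
uplift = (hbm4e - hbm4) / hbm4    # relative gain over the baseline

print(f"HBM4E: {hbm4e:.2f} TB/s")   # ~4.10 TB/s, matching the article
print(f"HBM4:  {hbm4:.2f} TB/s")
print(f"Uplift: {uplift:.0%}")      # ~60%, matching the quoted gain
```

Under these assumptions, 16 Gbps × 2048 pins works out to about 4.1 TB/s per stack, and the gap to the 10 Gbps baseline is exactly the quoted 60%.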

Related Companies

Rambus (RMBS)