Rumor
Nvidia Feynman GPU expected to utilize up to 1.3TB of HBM5 memory, according to JPMorgan
Saturday, March 14, 2026 at 02:02 AM
JPMorgan analysts estimate that the upcoming Feynman GPU architecture will feature significantly higher memory capacity than previously anticipated, with each GPU carrying between 1TB and 1.3TB of HBM5. This memory specification exceeds that of the Rubin Ultra, indicating a massive increase in high-bandwidth memory content per GPU.
Context
According to updated JPMorgan research, Nvidia's next-generation Feynman GPU is projected to use between 1.0TB and 1.3TB of HBM5 memory. This represents a significant capacity increase over the upcoming Rubin Ultra architecture and signals a massive demand spike for high-bandwidth memory. The Feynman chip, expected to be teased at GTC 2026 and officially launched in 2028, will reportedly be Nvidia's first GPU built on a 1nm-class process, likely TSMC's A16 node. The roadmap underscores a continuing 'memory wall' challenge, in which AI model scaling demands exponentially denser memory hardware.
This development is highly bullish for major memory providers such as Micron, SK Hynix, and Samsung. To reach the density needed for 1.3TB of capacity, the industry is moving toward 20-Hi and 24-Hi stacks built with hybrid bonding, which fuses copper pads directly rather than using microbumps, keeping taller stacks within package height limits. JPMorgan expects this specialized packaging equipment to be commercialized by 2H27, positioning firms like ASMPT, Besi, and Hanmi as critical enablers of the Feynman ramp.
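The capacity math behind those stack heights can be sketched with a quick back-of-envelope calculation. All figures below (die density, stack count per GPU) are illustrative assumptions, not reported specifications; the point is simply that reaching ~1.3TB per GPU plausibly requires 24-Hi stacks:

```python
def hbm_capacity_tb(stacks: int, dies_per_stack: int, gb_per_die: float) -> float:
    """Total HBM capacity per GPU in TB (1 TB = 1024 GB)."""
    return stacks * dies_per_stack * gb_per_die / 1024

# Assumed: 32Gb (4GB) DRAM dies, a hypothetical 14 stacks per GPU.
# With 12-Hi stacks (today's common height), capacity falls well short:
print(hbm_capacity_tb(14, 12, 4))  # 0.65625 TB

# With 24-Hi stacks, the same footprint roughly doubles to the rumored range:
print(hbm_capacity_tb(14, 24, 4))  # 1.3125 TB
```

Doubling stack height rather than stack count is attractive because each extra stack consumes interposer area and interface beachfront on the GPU die, whereas hybrid bonding adds dies vertically within the same footprint.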
Sources (5)
The HBM4 Arms Race: SK Hynix, Samsung, and Micron Deliver 16 ...
[News] NVIDIA May Offer First Look at Feynman at GTC 2026, TSMC A16 and Taiwan Supply Chain in Focus
Scaling the Memory Wall: The Rise and Roadmap of HBM
SK hynix reveals DRAM development roadmap through 2031 — DDR6, GDDR8, LPDDR6, and 3D DRAM incoming | Tom's Hardware
Global Memory Market JPM 250925 | PDF | Flash Memory | Solid State Drive
Related Companies
JPMorgan Chase
JPM
Nvidia
NVDA
Micron
MU