News

NVIDIA Blackwell and Hopper architectures compared against AMD MI355X for AI inference

Monday, February 16, 2026 at 05:26 PM

The tweet references an analysis comparing NVIDIA's Blackwell (GB200, B200, GB300 NVL72) and Hopper (H100) architectures against AMD's MI355X for inference workloads. The analysis comes from the benchmark suite formerly known as InferenceMAX, now called InferenceX v2, and likely details performance and architectural differences between the platforms.

Context

Nvidia and AMD are entering a critical showdown in AI inference following the release of the InferenceX v2 benchmarks. These results indicate that while AMD's MI355X architecture is highly competitive in single-node and mid-range configurations, offering comparable performance-per-dollar in FP8 workloads, Nvidia continues to dominate the high-end frontier. Nvidia's Blackwell GB300 NVL72 systems leverage a superior software stack to pull ahead in complex scenarios, delivering up to 100x better performance in specific reasoning models compared to legacy architectures.

The technical battleground has shifted to memory and precision. Both the Blackwell Ultra GB300 and the MI355X now feature 288GB of HBM3e memory per GPU, but Nvidia's rack-scale integration allows for 1,400 PFLOPS of FP4 performance. While AMD has doubled its software performance since late 2025, it still struggles to combine FP4 precision with disaggregated serving as effectively as its rival.

Nvidia's systems are currently shipping to hyperscalers, with broad enterprise availability expected throughout 2026.

Related Companies

Nvidia (NVDA, US)
AMD (AMD, US)