Rumor

Nvidia shifts AI infrastructure focus to Blackwell GB200 as H100 orders decline

Wednesday, March 11, 2026 at 04:50 AM

Analyst Ming-Chi Kuo reports that Nvidia is shifting its AI server focus toward the Blackwell architecture and the GB200, cutting orders for the H100 and older components. The move marks a strategic pivot in AI infrastructure procurement as major cloud providers prioritize next-generation hardware density.

Context

As the AI hardware market evolves, Nvidia is moving its primary infrastructure focus from the Hopper architecture to the next-generation Blackwell GB200 superchip. The transition comes as demand for the older H100 and H200 units softens in favor of more efficient, higher-performance systems. The GB200 NVL72 rack-scale system is the centerpiece of this shift: Nvidia claims up to 30x faster large language model inference and 25x lower energy consumption than its Hopper predecessor.

Key manufacturing partners, including ASUS, Supermicro, and Foxconn, ramped up production for a shipment window that began in March 2025, and cloud providers such as CoreWeave have already launched the first GB200-based instances. For investors, the cycle brings a significant increase in average selling prices: a single GB200 NVL72 rack is estimated to cost approximately $3 million, nearly doubling the revenue potential per unit compared to previous generations.

Related Companies

Nvidia (NVDA, US)