News

Gigawatt data center builds reach $35 billion as GPU infrastructure dominates capital expenditure

Friday, February 27, 2026 at 03:55 PM

A report on AI cloud economics details that Microsoft and other hyperscalers are extending GPU depreciation cycles to as long as 9 years for older chips such as the V100, while newer H100 and GB200 systems face higher failure risks and thermal constraints. Standalone gigawatt-scale data centers now cost $30 billion to $35 billion to build, with 80% of that expenditure allocated to IT infrastructure. Nvidia is reportedly extending warranties on H100 systems to absorb reliability risks for cloud providers. Supply chain constraints are expected to drive data center capital expenditures up by an additional 5%.

Context

Microsoft and the other major hyperscalers face unprecedented capital requirements, with gigawatt-scale data center builds now reaching $35 billion. Roughly 80% of that investment goes to IT infrastructure, dominated by Nvidia silicon. A single GB200 node is priced at $60,000, pushing rack costs to $600,000 and driving expectations of a further 5% increase in overall CapEx as Microsoft and its peers race to secure constrained supply for the next generation of generative AI services.

The long-term return on this investment is increasingly tied to extended GPU depreciation cycles, which have stretched from 3 years to as many as 9. While older chips like the V100 remain profitable, newer H100 systems face tighter thermal constraints and limited repairability. To protect cloud margins of at least 30%, Nvidia is offering hyperscalers extended warranties, effectively absorbing reliability risk and helping to ensure that these massive infrastructure outlays remain accretive even as hardware utilization peaks.
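The headline figures reduce to simple arithmetic. A minimal sketch in Python, using only the numbers reported above; the 10-nodes-per-rack count is an assumption inferred from the $60,000 node and $600,000 rack figures, not stated in the report:

```python
# Back-of-envelope math from the reported figures.
NODE_PRICE = 60_000           # reported GB200 node price ($)
NODES_PER_RACK = 10           # assumed, consistent with the $600k rack figure
BUILD_COST = 35_000_000_000   # reported gigawatt-scale build cost ($)
IT_SHARE = 0.80               # reported share of spend going to IT infrastructure

rack_cost = NODE_PRICE * NODES_PER_RACK
it_spend = BUILD_COST * IT_SHARE

def annual_depreciation(capex: float, years: int) -> float:
    """Straight-line annual depreciation expense over the given lifetime."""
    return capex / years

# Stretching the cycle from 3 to 9 years cuts the annual expense to a third,
# which is what makes the ROI math workable for the hyperscalers.
dep_3yr = annual_depreciation(it_spend, 3)
dep_9yr = annual_depreciation(it_spend, 9)

print(f"Rack cost:                 ${rack_cost:,}")
print(f"IT infrastructure spend:   ${it_spend:,.0f}")
print(f"Annual depreciation @ 3y:  ${dep_3yr:,.0f}")
print(f"Annual depreciation @ 9y:  ${dep_9yr:,.0f}")
```

Running this reproduces the $600,000 rack figure and shows roughly $28 billion of a single build flowing to IT hardware, with the 9-year schedule expensing about a third of what a 3-year schedule would each year.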

Related Companies

Microsoft
MSFT
US
Nvidia
NVDA
US