News · Unverified

Dylan Patel projects OpenAI and Anthropic to reach 10 GW capacity by 2027 amid EUV and memory supply constraints

Sunday, March 15, 2026 at 11:59 AM

AI labs OpenAI and Anthropic are projected to scale data center capacity from roughly 2-2.5 GW today to 5-6 GW by late 2026, targeting 10 GW by 2027. Anthropic faces compute shortages due to conservative contracting, forcing it into high-premium spot deals. Supply chain bottlenecks are expected to shift from memory in 2026-2027 to ASML EUV lithography tools by 2028-2029, as each gigawatt of capacity implies demand for roughly 3.5 EUV tools. Memory costs remain elevated due to a lack of fab expansion in 2023, while H100 GPUs retain high residual value because newer, more efficient models increase the revenue generated per token.

Context

Semiconductor analyst Dylan Patel of SemiAnalysis recently projected that OpenAI and Anthropic will each reach 10 GW of power capacity by the end of 2027, a massive leap from their current levels of roughly 2 to 2.5 GW. This aggressive expansion is driven by surging inference needs, with Anthropic alone estimated to require 4 GW of new capacity just to support its incremental revenue growth.

However, this growth faces severe structural headwinds, particularly a memory crunch where DRAM prices have tripled and relief is not expected until late 2027. Furthermore, by 2028, the primary bottleneck is expected to shift to ASML and its EUV lithography tools, which the supply chain cannot produce fast enough to match downstream data center demand.

During a recent appearance on the Dwarkesh Podcast, Patel challenged the prevailing bear thesis regarding hardware depreciation, stating: "An H100 is worth more today than it was three years ago." He argues that because newer models like GPT-5.4 are more efficient, the marginal revenue generated per token by an Nvidia H100 actually increases over time. This creates a divergence in procurement strategies: while OpenAI secured early capacity, Anthropic is now forced to pay steep premiums, up to $2.40/hr for H100 rentals, to keep pace. With 30% of Big Tech's 2026 CapEx allocated to memory alone, the industry remains in a high-utilization, supply-constrained regime that favors providers with existing scale like TSMC, Google, and Nvidia.
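The EUV bottleneck can be sanity-checked with a back-of-envelope calculation using the figures quoted above: if each gigawatt of data center capacity implies demand for roughly 3.5 EUV lithography tools, the jump from ~2.5 GW to 10 GW per lab has a direct upstream cost in tool count. This is a rough sketch of that arithmetic, not a model of actual ASML allocation:

```python
# Back-of-envelope check of the EUV demand implied by the capacity
# projections in the article. The 3.5 tools/GW ratio is the figure
# attributed to Patel; everything else is simple multiplication.

EUV_TOOLS_PER_GW = 3.5  # EUV lithography tools implied per GW of capacity


def implied_euv_tools(capacity_gw: float) -> float:
    """EUV tool demand implied by a given data center capacity."""
    return capacity_gw * EUV_TOOLS_PER_GW


current = implied_euv_tools(2.5)      # roughly today's footprint per lab
target_2027 = implied_euv_tools(10)   # projected 2027 footprint per lab

print(f"Implied EUV demand today:   {current:.2f} tools")
print(f"Implied EUV demand at 10 GW: {target_2027:.2f} tools")
print(f"Incremental tools per lab:   {target_2027 - current:.2f}")
```

At roughly 35 implied tools per lab at the 10 GW mark, against ASML's output of a few dozen EUV systems per year for the entire industry, the projected 2028-2029 bottleneck follows directly from the quoted ratio.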

Related Companies

Nvidia
NVDA
US