Odd Lots

Why Paul Kedrosky Says AI Is Like Every Bubble All Rolled Into One

November 14, 2025

Key Takeaways

  • The current AI boom is considered a "meta-bubble" because it uniquely combines the core ingredients of every major historical bubble: real estate speculation, disruptive technology, loose credit (especially private credit), and the potential for a government backstop. 
  • The massive AI capital expenditure (CapEx) creates a Schrödinger's cat scenario where it is simultaneously a strength (building moats) and a weakness (if returns don't materialize) for companies, leading to unsustainable investment binges. 
  • The financing structures for AI data centers, involving complex SPVs and private credit, introduce significant temporal mismatch risk, particularly between long-duration debt (like 30-year loans) and the short useful lifespan of the underlying collateral (GPUs, potentially 18 months to two years when run flat out). 

Segments

AI CapEx as Economic Stimulus
(00:02:24)
  • Key Takeaway: AI CapEx is currently acting as a massive, unintentional private sector stimulus program driving a significant fraction of U.S. GDP growth.
  • Summary: Data center spending accounted for an estimated 50% of U.S. GDP growth in the first quarter. Spending at this scale could become a problem for the wider economy if the investments fail to generate returns. Data centers are an unusual asset class, sitting at the intersection of industrial spending and real estate.
The AI Meta-Bubble Ingredients
(00:08:45)
  • Key Takeaway: The AI bubble is historically unique because it combines all major ingredients of past bubbles: real estate, technology, loose credit, and a notional government backstop.
  • Summary: This episode frames the AI boom as a ‘meta-bubble’ that incorporates elements of previous crises. The notion of a government backstop, a feature of past bubbles such as housing, exacerbates the risk profile. The combination of these factors suggests that a gentle landing is unlikely.
Financing Structures and SPVs
(00:10:07)
  • Key Takeaway: Massively profitable hyperscalers are increasingly using Special Purpose Vehicles (SPVs) to raise capital for data centers and keep debt off their balance sheets to avoid impacting credit ratings.
  • Summary: SPVs are legal structures used to contribute capital and retain legal title to the project, allowing debt to be raised without immediately rolling onto the primary company’s balance sheet. This complexity creates future uncertainty regarding recourse and ownership if performance falters. The shift of tech financing from purely an equity story to a credit story is evidenced by tracking credit default swaps (CDS) for big tech firms.
GPU Lifespan and Depreciation Mismatch
(00:17:04)
  • Key Takeaway: A critical structural risk is the temporal mismatch between long-term debt financing (e.g., 30-year loans) and the short useful lifespan of income-producing GPU collateral, which can be as short as 18-24 months under heavy training loads.
  • Summary: The useful lifespan of GPUs used for intensive AI training is much shorter than traditional data center assets due to thermal degradation from running 24/7. This forces constant refinancing risk as the underlying collateral rapidly depreciates. This mismatch is compounded by energy scarcity, pushing some builders toward long-lived assets like natural gas plants, risking stranded assets.
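The mismatch described above can be made concrete with a toy calculation. The sketch below (all figures hypothetical, chosen only to match the segment's rough parameters: a 30-year loan against GPUs with a roughly two-year useful life) compares the outstanding balance of a level-payment loan with the straight-line book value of the collateral:

```python
# Illustrative sketch, not a model of any actual deal: all numbers
# (loan size, rate, GPU life) are assumptions for demonstration only.

def loan_balance(principal, annual_rate, years, t):
    """Outstanding balance of a level-payment loan after t years."""
    r = annual_rate
    payment = principal * r / (1 - (1 + r) ** -years)
    return principal * (1 + r) ** t - payment * ((1 + r) ** t - 1) / r

def gpu_book_value(cost, useful_life_years, t):
    """Straight-line depreciation to zero over the useful life."""
    return max(0.0, cost * (1 - t / useful_life_years))

principal = 1_000_000_000  # hypothetical $1B data-center loan at 6% over 30 years
for t in [0, 1, 2, 5]:
    debt = loan_balance(principal, 0.06, 30, t)
    collateral = gpu_book_value(principal, 2.0, t)  # ~2-year GPU useful life
    print(f"year {t}: debt ${debt/1e9:.2f}B vs collateral ${collateral/1e9:.2f}B")
```

By year two, nearly the full loan balance is still outstanding while the GPU collateral has depreciated to zero, which is the refinancing problem the segment describes.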
Unit Economics and Demand Projections
(00:24:46)
  • Key Takeaway: Current-generation large language models generally have dire unit economics: costs rise roughly linearly with usage, in contrast to traditional software, where fixed costs are spread over more users at near-zero marginal cost.
  • Summary: Justifying the current CapEx requires achieving revenue scales, such as $3-4 trillion annually, through highly optimistic assumptions about consumer adoption or capturing a large share of the global labor market TAM. The success of smaller, efficient, distilled models suggests that projections based on current bloated, inefficient large language models may drastically overestimate future compute demand.
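The contrast in cost structure can be sketched in a few lines. All figures below are hypothetical, chosen only to illustrate the shape of the argument: traditional software's average cost per user falls toward zero with scale, while LLM serving's average cost floors at the per-query compute cost, no matter how many users are added.

```python
# Hypothetical illustration of the unit-economics contrast: every number
# here (fixed cost, queries per user, cost per query) is an assumption.

def avg_cost_traditional(fixed_cost, users):
    """Average cost per user when marginal cost is ~zero."""
    return fixed_cost / users

def avg_cost_llm(fixed_cost, users, queries_per_user, cost_per_query):
    """Average cost per user when inference cost scales with usage."""
    return fixed_cost / users + queries_per_user * cost_per_query

for users in [1_000, 100_000, 10_000_000]:
    trad = avg_cost_traditional(10_000_000, users)
    llm = avg_cost_llm(10_000_000, users, queries_per_user=500, cost_per_query=0.01)
    print(f"{users:>10} users: traditional ${trad:,.2f}/user, LLM ${llm:,.2f}/user")
```

Scale improves the traditional line indefinitely but only amortizes the fixed portion of the LLM line, which is why revenue projections must grow with usage rather than merely with adoption.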
US vs. China AI Strategy
(00:37:33)
  • Key Takeaway: China’s approach, focusing on efficiency gains through distillation and rapid adoption of smaller models, may prove more sustainable than the U.S. strategy of massive spending on large, closed-source models.
  • Summary: The Chinese strategy leverages distillation (training smaller models using larger ones) to achieve efficiency gains that expose how bloated current transformer models are. While the U.S. focuses on building the largest models, the Chinese approach aligns with the historical arc in which technology trends toward cheaper, more accessible solutions. This efficiency focus challenges the demand forecasts underpinning current U.S. CapEx projections.