After the Gold Rush: When the Bubble Deflates

Edward Zitron's "The Hater's Guide to the AI Bubble" presents a damning indictment of artificial intelligence hype: $560 billion in capital expenditures across major tech companies in 2024-2025, generating a mere $35 billion in revenue. He argues that we're witnessing a house of cards built entirely on NVIDIA's ability to sell GPUs to a handful of companies pursuing unprofitable AI services. When the music stops, Zitron warns, the crash will be spectacular—stock markets will tumble, retirement funds will evaporate, and the entire AI edifice will collapse like a punctured balloon.

It's a compelling narrative, backed by meticulous financial analysis and righteous indignation at corporate waste. But it's probably wrong—not about the financial unsustainability or the hype-driven nature of current AI investment, but about what happens when bubbles actually burst.

The collapse may be far more boring than anyone fears, even as its social consequences prove more persistent than anyone hopes. Yet this framing misses something crucial about how technological transitions actually feel to live through. We're not experiencing clear before-and-after states but rather a disorienting liminal space—what Neil Young captured in "After the Gold Rush" as that dreamlike moment when "the loading had begun" and you can see "Mother Nature on the run." Familiar systems that seemed permanent are suddenly in motion, but their destinations remain unclear. The apocalypse isn't dramatic destruction—it's living suspended between worlds, unable to distinguish between normal adaptation and fundamental transformation.

The Boring Pattern of Technological Collapse

Technology bubbles rarely end in apocalyptic destruction. They deflate into new equilibrium states where infrastructure finds unexpected applications, markets normalize, and life continues—just differently than anyone predicted. The pattern repeats with such regularity that we should expect it rather than dramatic collapse or revolutionary transformation.

Consider the most obvious parallel: the dot-com crash of 2000-2002. During the bubble years, telecommunications companies raised $1.6 trillion on Wall Street and floated $600 billion in bonds to crisscross the country with digital infrastructure. They installed 80.2 million miles of fiber optic cable, representing fully 76 percent of all digital wiring installed in the United States up to that point. When the bubble burst, this looked like colossal waste—companies like WorldCom and Global Crossing went bankrupt, and by 2001, only 5 percent of installed fiber optic capacity was actually being used.

Yet this "wasteful" infrastructure became the foundation for everything that followed. By 2004, bandwidth costs had fallen more than 90 percent despite internet usage doubling every few years. As late as 2005, 85 percent of broadband capacity still sat unused, creating massive headroom for new applications. That excess capacity enabled YouTube (founded 2005), Netflix streaming (launched 2007), and the entire Web 2.0 ecosystem. The companies died, but the infrastructure lived, and new applications emerged that nobody had imagined during the original bubble.

The dot-com crash also tilled the soil for the next tech boom. Companies that had burned through venture capital still left behind server farms and networking equipment. These facilities became the foundation for cloud computing—Amazon Web Services launched in 2006, built on infrastructure originally deployed for e-commerce that seemed wildly overbuilt at the time. By 2012, 38 percent of organizations were using cloud services, supported by data centers that had survived their original companies' failures.

GPU Infrastructure's Hidden Resilience

Today's GPU proliferation follows a similar pattern. While Zitron focuses on the unsustainable economics of AI applications, he overlooks a crucial technical reality: modern GPUs are not single-purpose AI chips. They are programmable parallel processors capable of general-purpose computing across multiple domains.

This versatility creates genuine demand resilience beyond AI hype. GPUs excel at workloads that can be broken into many independent, identical operations—a category that includes scientific research, financial modeling, content creation, and data analytics. The same H100 GPUs being deployed for training large language models can accelerate molecular dynamics simulations, enabling researchers to model protein folding for drug discovery. They power real-time risk analysis in financial markets, where microsecond advantages translate to millions in trading profits. They render complex visual effects for entertainment and architectural visualization.
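To make the "programmable parallel processor" point concrete, here is a minimal sketch of a non-AI workload running on the same class of hardware: a Monte Carlo option-pricing routine of the kind used in financial risk analysis. It assumes a CUDA-capable GPU and the CuPy library (neither is mentioned in the essay), and every parameter is illustrative.

```python
# A minimal sketch: the same GPU that trains language models can price options
# by Monte Carlo simulation. Assumes a CUDA-capable GPU and the CuPy library
# (a NumPy-compatible GPU array package); all parameters are illustrative.
import math
import cupy as cp

def monte_carlo_call_price(spot, strike, rate, vol, maturity, n_paths=10_000_000):
    """Price a European call option by simulating terminal prices under
    geometric Brownian motion, with the heavy arithmetic running on the GPU."""
    z = cp.random.standard_normal(n_paths)                  # millions of GPU random draws
    drift = (rate - 0.5 * vol ** 2) * maturity
    terminal = spot * cp.exp(drift + vol * math.sqrt(maturity) * z)
    payoff = cp.maximum(terminal - strike, 0.0)             # call payoff per simulated path
    return math.exp(-rate * maturity) * float(payoff.mean())  # discounted average

if __name__ == "__main__":
    print(monte_carlo_call_price(spot=100.0, strike=105.0, rate=0.03,
                                 vol=0.2, maturity=1.0))
```

The point is not the specific workload but the substitutability: nothing in the hardware cares whether the arrays hold model weights or simulated asset prices.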

Universities and research institutions are perpetually compute-starved, constrained by budgets that can't keep pace with their computational ambitions. Climate scientists need massive processing power to run weather models and climate simulations. Pharmaceutical companies require enormous computational resources to analyze molecular interactions and screen drug candidates. Engineers designing aircraft, automobiles, and buildings rely on GPU-accelerated simulations to test designs virtually rather than building expensive physical prototypes.

The established scientific computing market alone represents tens of billions in annual demand—not at the inflated scale of current AI investment, but sufficient to absorb significant GPU capacity at more reasonable prices. A market correction that brings high-end GPUs from $30,000 to $3,000 would unlock applications previously considered too expensive, dramatically expanding the addressable market.
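A rough back-of-the-envelope sketch of that price sensitivity follows; the lifespan, utilization, and power figures are illustrative assumptions, not claims from the essay.

```python
# Back-of-the-envelope sketch of how a hardware price correction changes the
# effective cost of GPU computing. Every number here is an illustrative
# assumption, not data from the essay.
HOURS_PER_YEAR = 8760

def cost_per_gpu_hour(purchase_price, lifespan_years=5, utilization=0.6,
                      power_kw=0.7, electricity_per_kwh=0.10):
    """Amortized cost of one utilized GPU-hour: hardware spread over its
    useful life plus electricity while running."""
    utilized_hours = lifespan_years * HOURS_PER_YEAR * utilization
    hardware = purchase_price / utilized_hours
    power = power_kw * electricity_per_kwh
    return hardware + power

for price in (30_000, 3_000):
    print(f"${price:>6,} GPU -> ${cost_per_gpu_hour(price):.2f} per utilized hour")
```

Under these assumed figures, the amortized cost falls from roughly $1.20 to under $0.20 per utilized GPU-hour, the kind of shift that brings budget-constrained labs and smaller firms into the market.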

Moreover, the cloud computing ecosystem has already evolved to efficiently redistribute computational resources. Services like Vast.ai, TensorDock, and Lambda Labs offer GPU access at "80% less than traditional clouds" by aggregating spare capacity from multiple providers. These platforms demonstrate functioning markets for computational arbitrage—when AI demand softens, capacity can be redirected to scientific computing, cryptocurrency mining, content rendering, or applications not yet invented.
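The mechanics of that redistribution can be sketched in a few lines. The toy scheduler below simply routes a job to the cheapest offer that meets its requirements; the provider names, prices, and fields are invented for illustration and do not correspond to any real marketplace's API.

```python
# A toy illustration of "computational arbitrage": route a workload to the
# cheapest available capacity that meets its requirements. The providers,
# prices, and fields are invented for the example; this is not any real
# marketplace's API.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    gpu_model: str
    price_per_hour: float   # USD per GPU-hour
    available_gpus: int

def cheapest_offer(offers, gpus_needed, acceptable_models):
    """Return the lowest-priced offer with enough GPUs of an acceptable model."""
    viable = [o for o in offers
              if o.gpu_model in acceptable_models and o.available_gpus >= gpus_needed]
    return min(viable, key=lambda o: o.price_per_hour, default=None)

offers = [
    Offer("provider_a", "H100", 2.10, 64),
    Offer("provider_b", "A100", 0.95, 128),
    Offer("provider_c", "A100", 0.80, 16),
]

# A molecular-dynamics batch job that tolerates older hardware takes whatever is cheapest.
print(cheapest_offer(offers, gpus_needed=32, acceptable_models={"A100", "H100"}))
```

Real marketplaces layer bidding, preemption, and reliability scoring on top, but the core mechanism is this kind of price-driven matching: idle capacity flows to whoever values it most at the moment.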

Why This Time Follows the Pattern

The current GPU buildout resembles historical infrastructure overbuilding more than a pure speculative bubble. Like the railroad boom of the 1840s or the fiber optic expansion of the 1990s, it's creating genuine technological capability even when driven by irrational exuberance.

Key similarities include massive capital investment in physical infrastructure that retains utility beyond original applications. Railroad tracks built during speculative manias still carried trains decades later. Fiber optic cables installed for failed dot-coms still carry internet traffic today. GPU data centers being built for unprofitable AI services can run profitable scientific computing workloads tomorrow.

There's also established demand for the underlying technology, just not at bubble-inflated scale. People wanted internet connectivity during the dot-com era; they just didn't want to pay premium prices for pet food delivery. Similarly, organizations need computational power today; they just don't need $20-per-month AI writing assistants.

The infrastructure is fundamentally flexible rather than purpose-built. Unlike specialized equipment for obsolete technologies, GPUs can be reprogrammed for new applications as they emerge. This adaptability creates options for productive reuse that didn't exist in previous bubbles centered on single-purpose assets.

The Paradox of Persistent Disruption

Here's where the "boring collapse" narrative becomes more complex and concerning. While markets may stabilize and infrastructure may find new applications, the social consequences of AI proliferation won't simply normalize. In fact, gradual deflation might prove more socially disruptive than dramatic collapse because it's harder to recognize, address, or organize against.

This returns us to that liminal quality of living through transformation. The psychological difficulty isn't just about job loss or economic disruption—it's about existing in a state where basic frameworks for understanding work, capability, and social organization are all simultaneously in flux. When you can see "Mother Nature on the run"—when systems that seemed foundational suddenly appear unstable—the resulting anxiety persists regardless of whether stock markets crash or stabilize.

The automation ratchet effect means that jobs don't come back when AI hype dies down. Companies that have already deployed AI tools to reduce customer service staff, automate content creation, or streamline administrative processes rarely reverse those decisions even if the technology was oversold. The cost savings become permanent even when the revolutionary promises prove hollow.

Consider the current wave of AI implementation across industries. Legal firms are using large language models to draft contracts and analyze documents. Financial services companies deploy AI for fraud detection and risk assessment. Healthcare systems implement AI tools for diagnostic imaging and patient scheduling. Media companies use AI for content generation and editing. Even if these applications plateau at current capability levels and never achieve artificial general intelligence, they still represent permanent changes to how work gets done.

Workers who invested time, money, and career capital in developing skills that AI can replicate face real losses regardless of whether AI companies remain profitable. A freelance copywriter displaced by ChatGPT doesn't benefit from knowing that OpenAI burns billions of dollars annually. A radiologist whose diagnostic work is automated away won't be rehired when AI capabilities plateau rather than achieve medical superintelligence.

The geographic distribution of impacts adds another layer of complexity. AI infrastructure is concentrated in specific regions—Northern Virginia's data center corridor, Silicon Valley's tech campuses, select Texas facilities. But job displacement spreads much more broadly across industries and communities with little connection to the original AI investment. This creates political tensions between areas that benefit from infrastructure spending and those that bear the costs of automation.

Where Markets Stabilize, People Don't

The most troubling aspect of the "boring collapse" scenario is how it might obscure ongoing social transformation. Dramatic market crashes trigger policy responses—bailouts, retraining programs, unemployment extensions. But gradual displacement that occurs during overall market stability receives less attention and fewer resources, leaving people to navigate fundamental uncertainty without clear markers of crisis or resolution.

This is why the liminal experience matters more than market predictions. For workers, families, and communities, living through the "loading" phase—where change is constant but its direction unclear—creates a persistent psychological burden regardless of whether the ultimate outcome proves revolutionary or mundane. The exhaustion comes not from dramatic upheaval but from the cognitive load of perpetual adaptation to systems in motion, stranded between fading structures and nascent ones not yet fully formed.

Skills obsolescence accelerates during technology transitions regardless of market dynamics. Professional translators, junior lawyers, entry-level analysts, and content creators face competitive pressure from AI tools whether or not those tools generate sustainable profits for their creators. The human cost of adaptation doesn't correlate with stock prices or venture capital returns.

Moreover, computational infrastructure concentrates economic and political power regardless of its profitability. Control over large-scale AI systems provides influence over information flows, surveillance capabilities, and economic opportunities even when those systems don't generate traditional returns. A handful of organizations controlling most advanced computational infrastructure represents a form of power concentration that persists through market cycles.

The employment effects may prove particularly insidious because they compound over time. Each wave of AI-driven automation makes the next wave easier by reducing the political constituency for affected workers. As traditional information work becomes increasingly precarious, the organized resistance to further automation weakens.

The reality is that current AI infrastructure has been built for extraction, not inheritance. These facilities are optimized for maximum immediate value extraction from AI hype—cooling systems designed for specific chip densities, power distribution configured for current GPU architectures, locations chosen for tax incentives rather than long-term utility. Unlike the fiber optic cables of the dot-com era that could carry any digital signal for decades, today's data centers are massive industrial complexes requiring constant power, cooling, and hardware refresh cycles.

This creates different challenges than previous infrastructure booms. When a hyperscale data center goes from economic engine to white elephant, it doesn't just sit dormant like unused fiber capacity—it becomes an expensive burden requiring ongoing maintenance while providing diminished returns. In places like Prince William County, Virginia, or small towns across Texas that offered millions in tax abatements expecting decades of growth, communities may find themselves hosting facilities that consume enormous resources while delivering far fewer jobs and less tax revenue than promised.

The question isn't whether this infrastructure will find noble future applications, but how communities, workers, and regions navigate the aftermath of extraction-oriented development they had little control over. What happens to rural towns when their million-square-foot data centers operate at 30 percent capacity? How do displaced workers retrain when the skills they developed were specific to systems optimized for unsustainable business models?

These aren't abstract policy questions—they're immediate challenges facing real communities that made infrastructure bets based on promises of perpetual growth. The computational capacity might eventually find productive uses, but the social and economic disruption happens in real time, in specific places, to particular people.

Living with the Questions

If the AI bubble deflation follows the pattern of gradual normalization rather than catastrophic collapse—infrastructure seeking new applications, markets stabilizing at lower levels, persistent social disruption despite apparent stability—then our task isn't predicting outcomes but learning to navigate ongoing uncertainty.

For communities hosting extraction-oriented infrastructure, this means asking: How do we adapt when facilities built for maximum throughput become underutilized industrial complexes? What does economic development look like when your major employer was optimized for unsustainable business models? How do local governments manage the gap between promised tax revenue and actual returns from facilities designed for extraction rather than long-term community benefit?

For workers experiencing gradual displacement, the questions are equally complex: How do you retrain when the skills you developed were specific to hype-driven industries? What does career planning look like when technological capabilities plateau at levels that automate some work but not others? How do you distinguish between temporary market correction and permanent structural change?

For policymakers, the challenge involves preparing for distributed impacts that may not trigger traditional crisis responses: How do you address job displacement that happens gradually across multiple industries rather than through dramatic plant closures? What social safety nets work for people experiencing technological displacement during apparent economic stability? How do you regulate computational infrastructure built for extraction when the extraction proves unsustainable?

These questions don't have clear answers because we're living through the transition rather than analyzing it from a comfortable historical distance. The liminal experience—existing between what was and whatever is emerging—requires comfort with uncertainty and patience with incomplete information.

Beyond the Binary

The most important insight from examining technology bubble patterns is that binary outcomes—revolutionary success or catastrophic failure—rarely occur. Instead, transformative technologies find their appropriate level of utility through messy, gradual processes that nobody predicts accurately.

The printing press didn't immediately democratize knowledge; it took centuries to achieve widespread literacy. The telegraph didn't instantly create a global economy; it took decades of infrastructure development and institutional adaptation. The internet didn't revolutionize everything overnight; it required patient capital, infrastructure investment, and social learning that continued long after the dot-com crash.

Artificial intelligence will likely follow a similar trajectory—overhyped initially, then dismissed after market correction, then gradually integrated into economic and social systems in ways that prove genuinely transformative over time. The current GPU buildout may represent the infrastructure investment phase of this cycle, creating computational capacity that enables applications we haven't yet imagined.

But unlike previous technology transitions, AI deployment occurs within existing systems of economic inequality and political power that may amplify its disruptive effects. The boring collapse may leave us with persistent social tensions that market normalization doesn't resolve. Managing that tension—between technological possibility and social stability, between efficiency gains and human flourishing—represents the real challenge of the post-bubble era.

The gold rush will eventually end, but the infrastructure will remain—massive, extraction-oriented, and embedded in communities that had little control over its design. Rather than asking what this infrastructure will become, we should focus on how to navigate what it already is: a landscape shaped by decisions made for short-term value extraction rather than long-term community benefit.

The most important choices don't happen during the hype or the crash, but in the quiet years that follow, when communities, workers, and institutions must adapt to systems designed without their input or long-term welfare in mind. We may not have answers to these challenges, but we can insist on asking the right questions: How do we live with the consequences of extraction-oriented development? How do we build adaptive capacity when we didn't control the initial deployment? How do we maintain social cohesion during prolonged technological uncertainty?

For those living through the transition, the experience will remain fundamentally liminal—suspended between the world that was and whatever is emerging. The challenge isn't just managing the technical or economic aspects of change, but acknowledging that this psychological dimension of technological transformation is real and lasting, regardless of how the markets ultimately resolve. The most profound disruptions often happen not through catastrophe but through the gradual recognition that the loading has begun, and we must find ways to live with uncertainty about what's being loaded, where it's going, and who gets to decide. We may not control the system's momentum—but we can still shape its consequences.

Flying Mother Nature’s silver seed to a new home in the sun. — Neil Young
