An AI Challenge to the Midstream Sector, Part 2: Could Chip Innovations Change Energy Demand Forecasts?

Last month, I wrote "An AI Challenge to the Midstream Sector," suggesting that pipeline companies investing in infrastructure built to serve AI data centers should be cautious when analyzing these projects, given rapidly shifting AI market dynamics.

At the time, it seemed reasonable to estimate that technology advancement would require data centers to spend billions of dollars replacing chips every 2-5 years to stay competitive. A month later, a Wall Street Journal article by George Gilder has me asking: What if new technology makes it possible to eliminate the sprawling data centers that consume huge amounts of power?

Gilder's article, "The Microchip Era Is About to End," raises new concerns for midstream investments supporting the AI buildout. Computer chip gearheads have long recognized practical limits on chip size and on how many transistors can fit on a microchip, limits that cap the rate of data processing. The concern for midstream investors? Those same gearheads appear to have figured out a way around these limits.

The centerpiece of state-of-the-art AI data centers today is the Nvidia Blackwell microchip, with 208 billion transistor switches. Each Blackwell chip costs about $30,000-$40,000. But innovators have created a new "wafer" technology they say offers four trillion transistors, with orders-of-magnitude higher memory bandwidth, in a fraction of the space.

That's what a company named Cerebras unveiled last year: a "wafer-scale engine" dubbed the WSE-3, which the company claims offers roughly 7,000 times the memory bandwidth of a conventional GPU. Cerebras can stack 16 WSE-3s, totaling 64 trillion transistors, and pack the computational power of a building-sized data center into a figurative box small enough to fit on the kitchen table, at least for some AI workloads. As Gilder pointed out, other technology companies are on a similar path of advancement.
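For a sense of scale, here is the transistor arithmetic using only the figures cited above. It is a back-of-the-envelope sketch; raw transistor count is a crude proxy, and real-world performance depends on architecture, memory, and software.

```python
# Back-of-the-envelope comparison using only the figures cited in this post.
# Transistor count is a rough proxy for capability, not a benchmark.

blackwell_transistors = 208e9          # Nvidia Blackwell: 208 billion transistors
wse3_transistors = 4e12                # Cerebras WSE-3: 4 trillion transistors
stacked_wse3 = 16 * wse3_transistors   # 16 stacked WSE-3s: 64 trillion transistors

print(f"One WSE-3 is ~{wse3_transistors / blackwell_transistors:.0f}x a Blackwell")
print(f"16 stacked WSE-3s are ~{stacked_wse3 / blackwell_transistors:.0f}x a Blackwell")
# Output: ~19x and ~308x, respectively
```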

The important factor for midstream companies is that these smaller, more powerful chips also use less energy per unit of compute. As with nearly any emerging technology, questions remain about the wafer technology's cost-effectiveness and long-term viability, and about whether increased energy efficiency could simply lead to increased usage, the rebound effect economists know as Jevons paradox. Still, the technology's existence is an indicator that the chip industry is ripe for disruption.
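That rebound question is worth making concrete. The toy calculation below shows how total power demand can rise even as chips get dramatically more efficient; every number in it is a hypothetical placeholder of my own, not a forecast.

```python
# Toy illustration of the efficiency-rebound effect described above.
# All figures are hypothetical, chosen only to show the mechanism.

baseline_power_gw = 10.0   # assumed AI data center load today, in gigawatts
efficiency_gain = 10.0     # suppose chips become 10x more efficient per unit of compute
compute_growth = 15.0      # but suppose demand for compute grows 15x

new_power_gw = baseline_power_gw * compute_growth / efficiency_gain
print(f"New load: {new_power_gw:.1f} GW")  # 15.0 GW: demand rises despite 10x efficiency
```

Flip the two growth numbers and demand falls by a third instead. Which way that ratio breaks is precisely the uncertainty midstream planners now face.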

This brings me to a key question: How should data center-driven midstream investments be assessed when burgeoning technologies are injecting uncertainty into future power needs?

If this technology proves to be cost-effective and viable, wafer-scale devices could dramatically change what AI developers call "compute density," the amount of processing power packed into a given physical footprint. That, in turn, could shrink the size of AI data centers and reduce AI power-demand growth projections.

And what happens when Cerebras releases WSE-4?

The bottom line is much the same as what I wrote back in October: midstream companies must pay close attention to advances in computing power and assess the implications those advances may have for their investments in AI-supporting midstream infrastructure.

Companies will need to model multiple scenarios: status quo growth, faster efficiency gains, and dramatic leaps forward in technology capabilities. Data center power demand may not simply collapse, but shifts in its trajectory could have major impacts on individual projects, especially if AI firms are able to leverage compute density to consolidate operations. A simple model, like the sketch below, shows how quickly those scenarios diverge.
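Here is a minimal sketch of that kind of scenario modeling. Every growth and efficiency figure in it is a hypothetical placeholder; a real analysis would substitute vetted market forecasts and run the model over each project's contract life.

```python
# Minimal scenario sketch: project AI data center power demand under three
# assumptions about compute growth and chip efficiency. All numbers are
# hypothetical placeholders, not forecasts.

BASELINE_GW = 25.0  # assumed AI data center load in year 0, in gigawatts
YEARS = 10          # projection horizon

scenarios = {
    # scenario name: (annual compute growth, annual efficiency gain)
    "status quo growth":        (1.30, 1.10),
    "faster efficiency gains":  (1.30, 1.25),
    "dramatic technology leap": (1.30, 1.60),
}

for name, (compute_growth, efficiency_gain) in scenarios.items():
    # Power demand scales with compute delivered divided by chip efficiency.
    demand_gw = BASELINE_GW * (compute_growth / efficiency_gain) ** YEARS
    print(f"{name:>26}: {demand_gw:6.1f} GW after {YEARS} years")
```

With these placeholder inputs, the ten-year projections range from roughly 3 GW to over 130 GW from the same starting point. The specific numbers are beside the point; the spread between scenarios is what should inform contract terms and capital commitments.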


Thomas Kalb is Director of the Coastal Bend Midstream Program at Texas A&M University-Corpus Christi. He is a regular contributor to the Let’s Clear the Air blog, where he writes about issues, policies, and considerations impacting how we produce and use energy.
