Uptime Institute's Max Smolaks: Power, Racks, and the Economics of the AI Data Center Boom
The latest episode of the Data Center Frontier Show opens not with a sweeping thesis, but with a reminder of just how quickly the industry’s center of gravity has shifted. Editor in Chief Matt Vincent is joined by Max Smolaks, research analyst at Uptime Institute, whom DCF met in person earlier this year at the Open Compute Project (OCP) Global Summit 2025 in San Jose.
Since then, Smolaks has been closely tracking several of the most consequential—and least obvious—threads shaping the AI infrastructure boom. What emerges over the course of the conversation is not a single narrative, but a set of tensions: between power and place, openness and vertical integration, hyperscale ambition and economic reality.
From Crypto to Compute: An Unlikely On-Ramp
One of the clearest structural patterns Smolaks sees in today’s AI buildout is the growing number of large-scale AI data center projects that trace their origins back to cryptocurrency mining.
It is a transition few would have predicted even a handful of years ago. Generative AI was not an anticipated workload in traditional capacity planning cycles. Three years ago, ChatGPT did not exist, and the industry had not yet begun to grapple with the scale, power density, and energy intensity now associated with AI training and inference.
When demand surged, developers were left with only a limited set of viable options. Many leaned heavily on on-site generation—most often natural gas—to bypass grid delays. Others ended up in geographies that had already been “discovered” by crypto miners.
For years, cryptocurrency operators had been quietly mapping underutilized power capacity. Latency did not matter. Proximity to population centers did not matter. Cheap, abundant electricity did—often in remote or unconventional locations that would never have appeared on a traditional data center site-selection short list.
As crypto markets softened, those same sites became attractive to AI developers struggling to place large GPU clusters amid tightening power availability. Many crypto operators already controlled land and power access, and some brought operational experience running GPU-heavy environments. For them, the pivot to AI was not a radical one.
CoreWeave, now one of the most visible AI infrastructure providers, began as a crypto miner. Crusoe Energy followed a similar path and is now developing large-scale AI campuses. Applied Digital offers one of the most striking examples: once focused on crypto, it is now developing roughly half a gigawatt of AI data center capacity in North Dakota.
That geography underscores the broader point. North Dakota is not a traditional data center market. Yet the same attributes that once attracted crypto miners—available land, accessible power, minimal competition—have made it viable for AI infrastructure at scale.
What emerges is a structural insight: AI capacity is being built where power exists, not where the industry once assumed it would go.
Rack-Scale AI and the End of the “Standard” Rack
From there, the conversation pivots to OCP 2025, where Vincent and Smolaks first met—and where rack-scale AI architecture had clearly moved from theory into broad industry alignment.
Smolaks argues that the most significant design shift underway is not a single technology, but the rack itself. OCP was founded on the premise that the rack could be reimagined as a unit of innovation. That transformation is accelerating again—but in a less uniform direction.
Rather than converging on a single standardized form factor, racks are fragmenting. They are becoming taller, heavier, and more specialized. Novel geometries and layouts—once confined to hyperscaler experimentation—are now entering broader production environments.
At the same time, what goes into the rack is changing. With next-generation GPU systems pushing toward 200–300 kilowatts per rack, the traditional “everything-in-one-rack” model is breaking down.
Smolaks describes a move toward rack disaggregation: compute in one rack, power equipment in another, networking in a third. The rack becomes a modular participant in a larger system rather than a self-contained unit.
These architectures have physical consequences. Heavier racks require reinforcement, and vendors offering reinforced designs are seeing strong demand. Power distribution is also being rethought, with equipment migrating out of centralized busways and into the rack envelope itself—sometimes on top of racks, sometimes between them.
The result is a white space that is less orderly, but more purpose-built for extreme density.
Liquid Cooling Moves Inside the System Boundary
As rack-scale AI pushes power density higher, the discussion turns to cooling—specifically, the growing complexity of liquid-based thermal systems.
Air cooling, Smolaks notes, is conceptually simple: move air in, let it absorb heat, move it out. Liquid cooling introduces pressure, flow rates, valves, piping, and new mechanical dependencies that are far less forgiving.
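A back-of-the-envelope sketch (ours, not from the episode) shows why flow rate becomes a first-order design parameter at these densities. The rack power and temperature rise below are illustrative assumptions.

```python
# Illustrative only: rough coolant flow needed to carry away rack heat.
# Assumes a single-phase, water-like coolant and an assumed temperature rise;
# real designs must also account for pressure drop, pump power, and margins.

RACK_POWER_W = 250_000   # assumed ~250 kW rack (mid-range of 200-300 kW)
SPECIFIC_HEAT = 4186     # J/(kg*K), water
DENSITY = 1000           # kg/m^3, water
DELTA_T = 10             # K, assumed coolant temperature rise across the rack

mass_flow = RACK_POWER_W / (SPECIFIC_HEAT * DELTA_T)   # kg/s
volume_flow_lpm = mass_flow / DENSITY * 1000 * 60      # litres per minute

print(f"Mass flow:   {mass_flow:.1f} kg/s")
print(f"Volume flow: {volume_flow_lpm:.0f} L/min per rack")
# ~6 kg/s, roughly 360 L/min -- and that is for a single rack.
```

Multiply that across a row of racks and the piping, valve, and pump decisions stop being afterthoughts.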
When operators attempt to design liquid-cooled environments with Tier IV–level resilience, complexity compounds quickly. Redundancy is no longer additive; it is systemic, requiring tight coordination across mechanical, electrical, and control layers.
Software becomes essential. Smolaks points to a resurgence of computational fluid dynamics (CFD), not just at the room level, but inside the rack itself. Digital twins are increasingly required to model liquid flows and failure modes in real time.
Commissioning is also evolving. Traditional load banks were designed for air-cooled environments. Liquid-cooled data halls require new types of load banks capable of simulating thermal loads before production IT equipment is installed—an emerging but necessary segment of the commissioning ecosystem.
The common thread is clear: complexity is no longer confined to the facility shell. It is moving inward—into the rack, the cooling loops, and the software layers that bind them together.
Operator Resistance and an Inevitable Shakeout
Despite the visibility of liquid cooling in conferences and press releases, Smolaks sees continued operator resistance.
Among Uptime Institute’s global membership, the dominant sentiment is pragmatic rather than ideological: most operators plan to adopt liquid cooling only when workloads force them to—and not before.
The reasons are practical. Bringing liquid to the rack introduces new risk. It requires new skill sets on operations teams. Filtration, corrosion control, and biological growth become front-line concerns rather than background engineering considerations.
At the same time, commercial momentum is unmistakable. Uptime Intelligence is tracking a surge in investments, partnerships, and acquisitions across the liquid cooling ecosystem.
Smolaks sees a familiar pattern: an early explosion of vendors and architectures, followed by consolidation around a smaller set of globally scalable players. For startups, the outlook may be favorable—even if near-term adoption remains cautious.
Power Follows the Rack: 800V DC and the Nvidia Effect
From cooling, the discussion moves naturally to power. One of the most consequential themes at OCP 2025 was the industry’s alignment around 800-volt DC power distribution as a baseline assumption for future AI systems.
Higher-voltage DC is not a new idea. What changed, Smolaks argues, is Nvidia’s explicit commitment to it. By signaling that future GPU platforms will expect 800V DC input, Nvidia effectively put the entire power distribution ecosystem on notice.
The shift is not incremental. New equipment, new safety regimes, and new engineering skills will be required. Racks, busbars, connectors, and protection systems all come back under scrutiny.
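A rough comparison (our illustration, not from the episode) shows why the voltage jump matters: for the same power, current scales inversely with voltage, and resistive losses scale with the square of current.

```python
# Illustrative comparison of busbar current at different distribution voltages.
# 48 V approximates today's common OCP-style rack busbar; 800 V DC is the
# proposed baseline. The conductor resistance is an arbitrary assumed value.

POWER_W = 1_000_000           # assumed 1 MW of rack-scale IT load
BUSBAR_RESISTANCE = 0.0001    # ohms, assumed for illustration only

for voltage in (48, 800):
    current = POWER_W / voltage                  # I = P / V
    i2r_loss = current ** 2 * BUSBAR_RESISTANCE  # P_loss = I^2 * R
    print(f"{voltage:>4} V: {current:>8.0f} A, I^2R loss ~ {i2r_loss/1000:.1f} kW")

# 48 V  -> ~20,833 A; 800 V -> 1,250 A for the same megawatt.
# Lower current means smaller conductors and far lower resistive losses,
# but higher voltage brings new safety and protection requirements.
```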
Smolaks recounts a conversation with a power vendor that began with a simple question—what changes with 800V DC?—and quickly expanded into a long list of cascading impacts across design and operations. None of them made the data center simpler.
Energy Storage and the Problem of AI Spikiness
As power architectures evolve, Vincent steers the conversation toward energy storage—the point where AI theory meets operational reality.
AI workloads are inherently spiky. Rapid swings in demand stress UPS systems, distribution equipment, and upstream grids. Storage is increasingly viewed as the buffer that reconciles those dynamics.
Mitigation is happening at multiple levels. UPS vendors are redesigning systems to tolerate rapid swings, sometimes integrating capacitors directly into their architectures. At the rack level, Nvidia is incorporating capacitors into power shelves to smooth demand locally.
More striking is Nvidia’s use of software. In its Blackwell GPUs, Nvidia has introduced a feature known as GPU PowerBurn, designed to flatten power demand by preventing sharp drops as well as spikes.
When workloads subside and power draw falls too quickly, the system injects artificial work—burning power intentionally to maintain stability. The goal is not efficiency, but predictability.
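To make the idea concrete, here is a toy sketch of ramp-down limiting via synthetic load. It is not Nvidia's implementation; the power levels and ramp limit are invented for illustration.

```python
# Toy model of ramp-down limiting via synthetic "burn" load.
# Not Nvidia's implementation; all numbers are invented.

def smooth_power(trace_kw, max_drop_kw_per_step):
    """Clamp downward ramps by adding artificial load when draw falls too fast."""
    smoothed = [trace_kw[0]]
    for actual in trace_kw[1:]:
        floor = smoothed[-1] - max_drop_kw_per_step  # lowest allowed next value
        smoothed.append(max(actual, floor))          # burn power to stay above it
    return smoothed

# A workload that finishes a training step and collapses abruptly.
raw = [300, 300, 300, 60, 60, 60, 300, 300]
print(smooth_power(raw, max_drop_kw_per_step=40))
# -> [300, 300, 300, 260, 220, 180, 300, 300]
# The gap between raw and smoothed draw is deliberately wasted energy,
# traded for a power profile the UPS and the grid can tolerate.
```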
Smolaks is careful not to overinterpret the move, but the implication is clear: a dominant silicon vendor is explicitly reshaping workload behavior to accommodate infrastructure and grid constraints.
Open Hardware, Custom Silicon, and Diverging Models
This leads to a broader tension at the heart of OCP itself. OCP is rooted in open hardware and shared reference designs. AI factories, by contrast, are increasingly proprietary and vertically integrated.
Smolaks sees this not as a contradiction, but as productive friction. While OCP reference architectures increasingly reflect Nvidia’s preferred designs, hyperscalers are doubling down on custom silicon.
AWS, Google, and Microsoft have all announced new generations of in-house AI accelerators. Google’s Gemini 3—trained entirely on TPU infrastructure—demonstrates that Nvidia is not the only viable path to cutting-edge AI.
At OCP, Nvidia’s presence was unmistakable. But alongside it were quieter conversations with alternative silicon providers, including startups focused on air-cooled inference systems designed to deploy easily into existing enterprise environments.
Not every AI workload, Smolaks notes, requires rack-scale liquid cooling or extreme density. Deployment friction matters—and alternatives are gaining traction.
Bubble or Supercycle?
As the episode closes, Vincent raises the question hanging over the entire discussion: is AI infrastructure a bubble?
From inside the industry, demand feels immediate and inelastic. Training tokens, inference latency, power draw—these are real constraints today, not speculative forecasts.
Smolaks reframes the risk. The question is not demand, but profitability.
Unlike the dot-com era, where fiber infrastructure retained value for decades, modern AI data centers are dominated by depreciating assets. Roughly 80 percent of project cost is tied up in IT hardware—primarily GPUs—with uncertain lifespans measured in years, not decades.
If AI adoption and monetization take longer than expected, the industry may face a timing mismatch: repeated capital refresh cycles before returns fully materialize.
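A simple illustration (our numbers, purely hypothetical) shows how the refresh math bites: the GPU-heavy share of capital turns over several times before a facility built to last decades reaches the end of its life.

```python
# Hypothetical refresh-cycle math for an AI campus. All figures are invented
# to illustrate the timing mismatch Smolaks describes, not real project data.

PROJECT_COST = 10_000_000_000   # assumed $10B campus
IT_SHARE = 0.80                 # ~80% in IT hardware, per the episode
GPU_LIFESPAN_YEARS = 5          # assumed accounting life of accelerators
FACILITY_LIFESPAN_YEARS = 25    # assumed life of shell, power, and cooling

it_capex = PROJECT_COST * IT_SHARE
facility_capex = PROJECT_COST - it_capex

refreshes = FACILITY_LIFESPAN_YEARS // GPU_LIFESPAN_YEARS
lifetime_it_spend = it_capex * refreshes

print(f"Facility (one-time):          ${facility_capex/1e9:.1f}B")
print(f"IT hardware per refresh:      ${it_capex/1e9:.1f}B")
print(f"Refreshes over facility life: {refreshes}")
print(f"Cumulative IT spend:          ${lifetime_it_spend/1e9:.1f}B")
# -> $2.0B of long-lived assets vs roughly $40B of depreciating hardware
#    over the same period, before any return has been proven.
```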
Demand, Smolaks stresses, is real. Nvidia cannot ship GPUs fast enough. But much of the buying is debt-funded, and profits remain concentrated among a small number of players.
The industry is waiting for inference to broaden the value pool—to embed AI into products and workflows at scale. Yet many practical enterprise use cases require surprisingly little infrastructure, relying on open-source models and modest retraining rather than massive AI factories.
Some applications—science, medicine, national infrastructure—may justify gigawatt-scale compute. Others will not.
The AI infrastructure future, Smolaks suggests, will not converge on a single model. It will fragment—by workload, by region, by economics.
For now, the industry continues to build. But the long-term sustainability of that buildout will depend less on technological ambition than on whether AI can ultimately pay for itself.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT-5.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.