From Buildings to Token Factories: Compu Dynamics CEO Steve Altizer On Why AI Is Rewriting the Data Center Design Playbook
On the latest episode of the Data Center Frontier Show podcast, DCF Editor in Chief Matt Vincent spoke with Steve Altizer, CEO and President of Compu Dynamics, about what AI infrastructure now demands, and why the facilities being built for this era are starting to look less like conventional buildings and more like industrial production plants.
One core message was consistent throughout the talk: the industry isn’t failing to support AI. It’s still adapting infrastructure that wasn’t designed for it.
Not Falling Short—Just Not Optimized
Altizer drew a clear distinction. Traditional data centers can run AI workloads, but they weren’t built for them.
“We’re not falling short much, we’re just not optimizing.”
The gap shows up most clearly in density. Legacy facilities were designed for roughly 300 to 400 watts per square foot. AI pushes that to 2,000 to 4,000 watts per square foot—changing not just rack design, but the logic of the entire facility.
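To make the scale of that jump concrete, here is a rough back-of-envelope calculation using the density figures Altizer cites, applied to a hypothetical 10,000-square-foot white space (the footprint is an assumption for illustration, not a figure from the interview):

```python
# Back-of-envelope: total hall power at legacy vs. AI densities.
# Density figures are from the interview; the hall size is hypothetical.
legacy_w_per_sqft = (300, 400)
ai_w_per_sqft = (2000, 4000)

hall_sqft = 10_000  # assumed white-space footprint

legacy_mw = [d * hall_sqft / 1e6 for d in legacy_w_per_sqft]
ai_mw = [d * hall_sqft / 1e6 for d in ai_w_per_sqft]

print(f"Legacy hall: {legacy_mw[0]:.1f}-{legacy_mw[1]:.1f} MW")  # 3.0-4.0 MW
print(f"AI hall:     {ai_mw[0]:.1f}-{ai_mw[1]:.1f} MW")          # 20.0-40.0 MW
```

The same room that once housed a 3 to 4 MW load is now being asked to carry 20 to 40 MW, which is why the change ripples beyond the rack into power distribution, cooling, and structure.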
For Altizer, AI-ready infrastructure starts with fundamentals: access to water for heat rejection, significantly higher power density, and in some cases specific redundancy topologies favored by chip makers. It also requires liquid cooling loops extended to the rack and, critically, flexibility in the white space.
That last point is the hardest to reconcile with traditional design.
“The GPUs change… your power requirements change… your liquid cooling requirements change. The data center needs to change with it.”
Buildings are static. AI is not.
Rethinking Modular: From Containers to Systems
“Modular” has been part of the data center vocabulary for years, but Altizer argues most of the industry is still thinking about it the wrong way.
The old model centered on ISO containers. The emerging model focuses on modularizing the white space itself.
“We’re not building buildings—we’re building assemblies of equipment.”
Compu Dynamics is pushing toward factory-built IT modules that can be delivered and assembled on-site. A standard 5 MW block consists of 10 modules, stacked into a two-story configuration and designed for transport by trailer across the U.S.
From there, scale becomes repeatable. Blocks can be placed adjacent or connected to create larger deployments, moving from 5 MW to 10 MW and beyond. The point is not just scalability; it’s repeatability and speed.
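The repeatability Altizer describes reduces to simple block arithmetic. A minimal sketch, using only the figures from the interview (5 MW per block, 10 modules per block); the function name is illustrative:

```python
# Capacity scales by repeating a standard factory-built block.
MW_PER_BLOCK = 5
MODULES_PER_BLOCK = 10

def campus(blocks: int) -> dict:
    """Total capacity and module count for N adjacent blocks (illustrative)."""
    return {"mw": blocks * MW_PER_BLOCK,
            "modules": blocks * MODULES_PER_BLOCK}

print(campus(1))  # one 5 MW block, 10 modules
print(campus(2))  # 10 MW deployment, 20 modules
```

Because every increment is the same manufactured unit, growing from 5 MW to 10 MW is a logistics exercise rather than a new construction project.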
Altizer ties this directly to a broader shift in how data centers are defined. Referencing UL 2755, he described a future where facilities are treated as equipment assemblies rather than buildings. The emphasis shifts away from office space and toward industrial function.
“I don’t think the data center of the future is going to look like a building at all.”
Instead, he sees a field of interconnected systems including generators, transformers, cooling infrastructure, and IT modules, all optimized for output.
Liquid Cooling: The Real Execution Risk
If modularity defines the future, cooling defines the present, and it is where the most execution risk sits.
Altizer pointed to wide variability in how liquid cooling systems are being installed today. Differences in pipe materials, fabrication, commissioning, and cleaning practices are creating inconsistency across deployments.
“There’s been so much variability… there’s bound to be some future issues.”
The concern is not immediate failure, but latent problems that emerge over time—especially if systems were not installed with pristine cleanliness.
At the same time, the industry is still building expertise. Many engineers and contractors are only now gaining experience with liquid cooling systems, even as chip designs continue to evolve.
That evolution is pushing infrastructure in new directions. Nvidia’s latest platforms, for example, are designed for full liquid cooling using warmer fluid, which favors fluid coolers over traditional chillers. Many existing facilities, however, are built around chiller-based systems.
The result is a wave of interim solutions that sacrifice efficiency. Altizer described setups where heat moves through multiple exchanges (fluid to fluid, fluid to air, air back to fluid), with each step adding complexity and energy loss.
These are not long-term answers. They are transitional.
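The penalty of those stacked exchanges can be sketched with basic heat-transfer reasoning: every exchanger needs an approach temperature (a difference between its two sides) to move heat, so each added stage raises the coolant temperature the racks must tolerate. The approach values below are hypothetical, chosen only to show the compounding effect:

```python
# Each heat-exchange stage needs an approach temperature to move heat,
# so stacking stages eats into the chip-to-ambient thermal budget.
# All temperature values here are assumed, for illustration only.
ambient_c = 30.0
approaches = {
    "fluid-to-fluid (CDU)": 4.0,
    "fluid-to-air": 6.0,
    "air-to-fluid": 6.0,
}

supply_c = ambient_c + sum(approaches.values())
print(f"Required rack coolant supply: {supply_c:.0f} C")  # 46 C in this sketch
```

A single-loop design with one exchange would need only one approach above ambient; three stages triple the overhead, which is the efficiency loss Altizer is pointing to.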
Power: Complexity Inside, Constraint Outside
From Compu Dynamics’ vantage point, the biggest power challenge is not inside the building.
“The power problem is really outside the building.”
Utility availability, interconnection timelines, and self-generation strategies are the gating factors. Inside the data hall, the challenge is configuration. And that, too, is evolving.
Altizer described a landscape of competing approaches: different UPS strategies, battery placements, generator configurations, and even early discussions about shifting from AC to DC distribution.
One potential future path simplifies the stack dramatically, moving from traditional layered systems to a direct DC bus feeding the racks. The industry isn’t there yet, but the direction reflects a broader push toward simplification under extreme density.
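Why a direct DC bus is attractive comes down to multiplied conversion losses: every stage in the layered AC path costs a few percent, and the stages compound. The per-stage efficiencies below are illustrative assumptions, not figures from the interview:

```python
# Compounded efficiency of cascaded power-conversion stages.
# Per-stage efficiencies are assumed, for illustration only.
from math import prod

layered = [0.97, 0.98, 0.96, 0.95]  # e.g., UPS, transformer/PDU, rack PSU, DC-DC
direct_dc = [0.98, 0.97]            # e.g., central rectifier to DC bus, rack DC-DC

print(f"Layered AC path: {prod(layered):.1%}")   # ~86.7% end to end
print(f"Direct DC bus:   {prod(direct_dc):.1%}") # ~95.1% end to end
```

At tens of megawatts per hall, recovering even a few points of end-to-end efficiency is a large absolute number, which is part of the push toward simplification under extreme density.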
Designing for an Uncertain Demand Curve
When asked how to design infrastructure for an uncertain AI demand curve, Altizer answered candidly.
“If I could answer that question, I think I could make a gazillion dollars.”
Historically, colocation providers built highly adaptable facilities to hedge against demand shifts. AI is pushing toward the opposite: purpose-built environments designed for specific customers and chip sets.
That model works today because revenue expectations are high, with some operators expecting to recover infrastructure costs in just a few years. But Altizer offered a note of caution, recalling the overconfidence of the dot-com era.
He stopped short of predicting a downturn, but the implication was clear: assumptions about payback periods may not hold indefinitely.
From Data Centers to Industrial Plants
By the end of the conversation, Altizer’s view of the next two to three years came into focus.
Data centers will no longer be treated as buildings. They will be treated as industrial plants.
“They’re going to look different, act different, and be maintained differently.”
If GPUs continue to displace CPUs as the dominant compute platform, infrastructure will follow. Facilities will become more specialized, more modular, and more tightly aligned with workload requirements.
Altizer is explicit about where that leads.
“I’m actually looking forward to building industrial plants, token factories.”
That may be the clearest expression of the transition underway. AI is not just increasing demand for data centers. It is redefining what a data center is.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.
About the Author
Matt Vincent
Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure.

Since assuming the EIC role in 2023, he has helped guide Data Center Frontier's coverage of the industry's transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand's analytical and multimedia footprint. Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has also served as Head of Content for the Data Center Frontier Trends Summit since its inception.

Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media's digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell's Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.