Roundtable: Beyond the White Space - Managing Complexity Across the Stack
As AI infrastructure scales into unfamiliar territory, the industry is confronting a simple reality: the data center can no longer be managed as a collection of loosely coupled domains. Power, cooling, mechanical systems, construction practices, chemistry, and digital controls are now interdependent in ways that leave little room for sequential thinking. The performance envelope of modern AI facilities is set as early as site prep and design, and is then preserved, or lost, in how faithfully that intent carries through commissioning and into steady-state operations.
What’s emerging instead is a lifecycle view of operations, one that treats data continuity as foundational infrastructure. Across the industry, leading operators are collapsing traditional handoffs by unifying design models, commissioning baselines, and live operational telemetry. Real-time visibility into cooling loops, power systems, and environmental conditions is no longer just about optimization; it’s about sustaining uptime, transparency, and predictability as densities climb from kilowatts to megawatts per rack.
That shift is forcing organizational change as much as technical innovation. AI-era facilities are driving the formation of cross-functional teams that cut across engineering, operations, IT, procurement, and training, rewriting long-established procedures and redefining operational boundaries with customers and partners. New systems from liquid cooling to advanced generator strategies are being integrated not as point solutions, but as adaptive ecosystems that must evolve alongside regulatory, sustainability, and grid realities.
Digital tools are increasingly the connective tissue holding this complexity together. Digital twins, shared dashboards, and data-driven maintenance and modeling practices are giving operators the ability to test assumptions early, align stakeholders around a common operational truth, and reduce risk before capital is locked in. The result is a quieter but profound transformation: data centers designed and operated not as static assets, but as living systems engineered for clarity, resilience, and scale across their entire lifecycle.
Our distinguished slate of panelists for Q4 includes:
- Rob Lowe, Director RD&E – Global High Tech, Ecolab
- Phillip Marangella, Chief Marketing and Product Officer, EdgeConneX
- Ben Rapp, Manager, Strategic Project Development, Rehlko
- Joe Reele, Vice President, Datacenter Solution Architects, Schneider Electric
And now onto our fourth Executive Roundtable question for Q4 of 2025.
Data Center Frontier: AI infrastructure now demands tight choreography among diverse disciplines, including power, cooling, construction, chemistry, and digital systems. How are your teams aligning design and operations data across organizational silos to deliver performance and transparency from site prep to steady-state operation?
Rob Lowe, Ecolab: AI infrastructure requires tight coordination across power, cooling, chemistry, construction, and digital operations.
Ecolab supports this complexity by unifying data from design through steady-state operation, using real-time monitoring and standardized commissioning practices to ensure “Start Clean, Stay Clean” performance.
Shared dashboards and proactive analytics help operators tie cooling performance to uptime, energy use, and environmental impact.
This integrated approach breaks down traditional silos and gives operators full lifecycle visibility across CDUs, glycol loops, and facility cooling systems, enabling more reliable and transparent performance across global portfolios.
Phillip Marangella, EdgeConneX: We have essentially created an entirely new AI-enabled data center product that we call Ingenuity.
Keep in mind that for decades, we have been blowing cold air on servers. Little innovation was needed as rack densities remained in the single digits over that period.
With AI chips, rack densities are rapidly scaling into the triple digits, on the way to over 1 megawatt per rack. In the not-too-distant future, individual racks will draw as much power as some of our first edge data centers, built over a decade ago.
To respond to the pace and scope of change, we formed a large, cross-functional, internal AI task force nearly two years ago. The cross-functional team includes Engineering, Product, Operations, Commissioning, IT, Procurement, and others.
We have redesigned the data center. We have rewritten all of our operational procedures and created new procedures for areas not previously managed, like direct-to-chip liquid cooling.
We have established a new global training center to ensure the teams are prepared. We have updated and integrated our systems to monitor and manage our sites. We have documented the various operational demarcation points between ourselves and our customers, along with their SLA implications.
And the list goes on, all with the intent of ensuring we can continue to operate these AI factories.
Ben Rapp, Rehlko: As AI-driven energy demand surges, coordinating design and operational disciplines isn’t just valuable; it’s essential for resilience and sustainability. We’re bridging these silos by grounding decisions in shared operational and lifecycle performance data.
That includes fuel flexibility considerations, redundancy and response modeling, emissions forecasting, energy storage planning, and real-world load behavior over time.
A key area where this integration is advancing quickly is generator maintenance strategy.
Historically, generators were exercised using legacy assumptions: high-load, high-runtime schedules designed for older engines. By shifting to data-driven maintenance programs and leveraging modern engine capability, operators can dramatically reduce environmental and operational overhead without compromising reliability.
Optimized exercise schedules and no-load testing procedures can cut annual GHG emissions by up to 78% and fuel use by up to 79%, while service contracts and continuous monitoring ensure the system remains ready during critical events.
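As a rough illustration of the arithmetic behind savings of that magnitude, the sketch below compares a hypothetical legacy high-load exercise schedule against an optimized program built around brief no-load checks plus an annual loaded validation run. All runtimes, fuel rates, and schedule choices here are illustrative assumptions for a single diesel generator, not Rehlko figures; only the diesel emissions factor is a commonly cited value.

```python
# Hypothetical comparison of two generator exercise programs.
# All schedule and fuel-rate figures are illustrative assumptions.

DIESEL_KG_CO2E_PER_L = 2.68  # commonly cited CO2e factor for diesel fuel


def annual_fuel_l(runs_per_year: int, hours_per_run: float,
                  fuel_l_per_hour: float) -> float:
    """Annual fuel burned by an exercise program, in litres."""
    return runs_per_year * hours_per_run * fuel_l_per_hour


# Legacy program: monthly 1-hour runs under high load (high fuel rate).
legacy = annual_fuel_l(runs_per_year=12, hours_per_run=1.0,
                       fuel_l_per_hour=220.0)

# Optimized program: monthly 0.5-hour no-load checks (low fuel rate),
# plus one annual 1-hour loaded validation run.
optimized = (annual_fuel_l(12, 0.5, 60.0) + annual_fuel_l(1, 1.0, 220.0))

fuel_saving = 1 - optimized / legacy
# For a fixed fuel type, GHG emissions scale directly with fuel burned,
# so the percentage reduction in CO2e matches the fuel reduction.

print(f"Legacy:    {legacy:,.0f} L/yr "
      f"({legacy * DIESEL_KG_CO2E_PER_L:,.0f} kg CO2e)")
print(f"Optimized: {optimized:,.0f} L/yr "
      f"({optimized * DIESEL_KG_CO2E_PER_L:,.0f} kg CO2e)")
print(f"Reduction: {fuel_saving:.0%} fuel and GHG")
```

Under these particular assumptions the optimized schedule cuts annual fuel and emissions by roughly 78 percent, in the same range as the figures cited above; real results depend on engine, load profile, and site requirements.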
When these operational insights are unified with design intent from the start, rather than becoming separate departmental decisions, operators aren’t just deploying equipment.
They’re implementing adaptive systems that evolve with regulatory expectations, sustainability targets, and operational realities. The result is a more transparent, scalable, and future-ready power and mechanical ecosystem from site development through steady-state operation.
Joe Reele, Schneider Electric: This is a clear use case for digital twins.
The ability to create a true digital twin, from land acquisition through to an operational data center, allows us to model how the facility interacts with and responds to the grid.
This capability is increasingly becoming a requirement for large-scale, high-load data center builds.
It enables operations teams to balance risk and cost, identify efficiencies, and test scenarios before construction begins.
Modeling and refining in the digital world provides significant advantages over discovering issues after physical deployment.
About the Author
Matt Vincent
A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.