Artificial intelligence (AI) is here, and it’s growing at an unprecedented speed. AI power demand currently stands at 4.3 GW and is projected to grow at a CAGR of 26% to 36%, reaching a total demand of 13.5 GW to 20 GW by 2028. While next-gen physical infrastructure systems will eventually leverage more AI, data center operators need to pivot quickly to support AI workloads with the systems available today.
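As a quick sanity check on those figures, the projection is simple compound growth; the sketch below assumes a 2023 baseline and five compounding years to 2028 (an assumption, since the baseline year isn't stated above).

```python
# Sketch of the compound-annual-growth arithmetic behind the projection:
# a 4.3 GW base compounded at 26%-36% CAGR over an assumed 5-year horizon.
def project_demand(base_gw: float, cagr: float, years: int) -> float:
    """Compound a base demand (GW) at a constant annual growth rate."""
    return base_gw * (1 + cagr) ** years

low = project_demand(4.3, 0.26, 5)   # low-end scenario, ~13.7 GW
high = project_demand(4.3, 0.36, 5)  # high-end scenario, ~20.0 GW
print(f"Projected AI power demand by 2028: {low:.1f} GW to {high:.1f} GW")
```

Both scenarios land close to the quoted 13.5 GW to 20 GW range, consistent with a roughly five-year compounding window.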
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, and Steven Carlini, Vice President of Innovation and Data Centers for Schneider Electric, break down the challenges of AI for each physical infrastructure category including power, cooling, racks, and software management.
Listen to this 30-minute podcast to learn more about:
- How liquid cooling technology will evolve.
- The importance of achieving a successful transition from air cooling to liquid cooling to support the growing thermal design power of AI workloads.
- 6 key cooling challenges data center operators must address.
- Why the extreme rack power densities required by AI training servers are so challenging.
- The importance of implementing sustainability strategies based on a standard set of metrics that allow for benchmarking and organizational alignment.
About Steven Carlini:
With extensive global experience, Steven leads the Energy Management BU’s Office of Innovation and Data Center Solutions, a team focused on spearheading Schneider Electric’s data center, digital energy, and residential businesses. He also leads Schneider Electric’s Energy Management Research Center. In 2023, Steven was named to Capacity Media's Capacity POWER 100, a list of the “trailblazers, innovators and leaders driving the global digital infrastructure space.” His areas of focus include innovation, AI, hydrogen, sustainability, 5G and 6G, cloud and edge computing, DCIM, BMS, and EDMS.