The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Harry Handlin, U.S. Data Center Segment Leader for ABB.
Harry Handlin is the U.S. Data Center Segment Leader at ABB, where he is responsible for critical power and electrical distribution solutions serving mission-critical customers. He collaborates with product management to develop new products and technologies for mission-critical applications.
Harry has over 40 years of experience in the electrical industry, including application engineering, and holds two U.S. patents. He served as Global Technical Committee Chairman for the Green Grid from 2012 to 2017.
Harry graduated from Auburn University with a Bachelor of Science in electrical engineering. He and his family reside in Birmingham, Alabama.
Here's the full text of Harry Handlin's insights from our Executive Roundtable.
Data Center Frontier: What are the main considerations for the procurement and deployment of mechanical and electrical infrastructure in data center adaptive reuse projects and sites, versus for new construction? And to what degree are supply chain concerns presently a factor?
Harry Handlin, ABB: Due to the lack of available power, adaptive reuse projects are increasing, especially the reuse and repurposing of industrial sites where utility infrastructure and available power already exist.
These sites include, but are not limited to, steel mills, power plants, aluminum plants, and paper mills.
The main considerations for deployment of data center infrastructure in these sites are design time, equipment footprint, and equipment lead times.
Data Center Frontier: How do you see service level agreements (SLAs) evolving for data center equipment and expansion projects in the age of rapidly escalating AI, HPC and cloud computing demand?
Harry Handlin, ABB: Innovation will be the key driver in the evolution of service level agreements for infrastructure equipment.
AI presents many challenges for service organizations.
First, the rapid growth of the AI market coupled with the increased scale of data centers has created a shortage of qualified service engineers and technicians.
In addition, AI data center locations are not constrained by latency requirements. This has resulted in many data centers being built in areas that are unlikely to be supported by a four-hour response time.
For some remote sites, the location is more than four hours of travel from the closest field service location.
Data Center Frontier: What are some key project management tips and strategies for facility operators and developers seeking to balance a need for great versatility with a mutual need for great specificity in data center designs for the AI era?
Harry Handlin, ABB: With the exponential growth of data generation and storage, data centers are scaling up rapidly and new construction trends are emerging.
Higher power densities are required to support AI data centers.
Being able to adapt is key.
We expect both electrical and HVAC designs to change, driven by the expanded scale of data centers and the significant increases in rack densities.
Data Center Frontier: What's the best path forward for innovation in data center infrastructure optimization, in terms of engineering for ongoing energy efficiency gains and maximum clean energy utilization in the face of AI's exponential power requirements?
Harry Handlin, ABB: Data center PUE will increase for AI workloads.
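PUE, or power usage effectiveness, is the ratio of total facility energy to the energy delivered to IT equipment; for example, a facility drawing 1.3 MW in total to deliver 1.0 MW to IT racks operates at a PUE of 1.3, so a higher PUE means more overhead energy per unit of compute.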
The two issues with higher rack densities are how to get power in and how to get the heat out.
We anticipate innovations in liquid cooling, which will lead to greater efficiencies.
We also expect to see on-site power generation, leading to microgrids with multiple sources of supply.
Another design shift we anticipate is the integration of medium voltage (MV) systems to support data centers' growing scale and larger block sizes.