The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry. Here’s a look at the insights from Sean Farney, Vice President of Data Center Strategy for the Americas at JLL.
As Vice President of Data Center Strategy for the Americas at JLL, Sean Farney helps clients operate more than 900 data centers sustainably and efficiently. Prior roles include Director of Data Center Marketing at Kohler, Founder and Chief Operating Officer of the edge data center startup Ubiquity Critical Environments, and data center manager for Microsoft’s 120 MW Chicago facility.
Sean strives to embody sustainability personally; he hunts, grows, or harvests a majority of his food and burns deadfall to heat his home, earning him a county-leading Energy Star rating. He holds a master’s degree in Information Technology from Northwestern University.
Contact JLL to learn more about how it helps build, buy, occupy, and invest in a variety of assets, including data centers, globally.
Here's the full text of Sean Farney's insights from our Executive Roundtable.
Data Center Frontier: What are the main considerations for the procurement and deployment of mechanical and electrical infrastructure in data center adaptive reuse projects and sites, versus for new construction? And to what degree are supply chain concerns presently a factor?
Sean Farney, JLL: As someone who grew up in the construction trades, I know remodeling can be a lot trickier than a new build.
Adaptive reuse projects have more unknowns and require more experience.
However, a rigorous site selection process generally leaves you with good structural bones and robust, if dated, mechanical and electrical infrastructure.
With several phenomenal modular data center products available today, we are seeing many effective adaptive reuse projects where developers use these "good bones" sites to warehouse packaged data center modules.
Data Center Frontier: How do you see service level agreements (SLAs) evolving for data center equipment and expansion projects in the age of rapidly escalating AI, HPC and cloud computing demand?
Sean Farney, JLL: I've managed SLAs across all levels of infrastructure for products ranging from low-latency trading to natural gas to Happy Meal toys (seriously!), and it's quite operationally complex.
On top of the complexity, appetite for technical downtime is plunging as connectedness becomes more pervasive and we trust digital infrastructure to deliver more mission-critical services related to health, safety, security, finance, autonomous vehicles and more.
I think service level expectations and requirements will continue to creep upward.
To satisfy this need and reduce the brand risk and cost of SLA violations, I see a surge in reliability engineering.
This means taking a systemic approach to designing, building and operating technical infrastructure to higher uptime levels instead of cobbling together services for individual mechanical, electrical and plumbing (MEP) and information technology (IT) components.
JLL's Reliability Engineering practice is inundated with requests to solve these big, impactful problems.
The long-term, holistic approach leads to better performance, lower cost and smoother operations.
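For a rough sense of why the systemic approach pays off, the short sketch below (illustrative numbers only, not JLL benchmarks) shows how individually reasonable component SLAs compound when every subsystem has to be up at once: four "three nines" components chained in series imply far more annual downtime than a single end-to-end "four nines" target.

```python
# Illustrative sketch: per-component SLAs compounding in series versus
# an end-to-end availability target. All figures are hypothetical
# examples, not JLL or industry benchmarks.

MINUTES_PER_YEAR = 365 * 24 * 60

def series_availability(component_availabilities):
    """Availability when every component in the chain must be up."""
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

def annual_downtime_minutes(availability):
    """Expected downtime per year implied by an availability figure."""
    return (1.0 - availability) * MINUTES_PER_YEAR

# Cobbled-together approach: separate SLAs for utility feed, UPS,
# cooling, and the IT/network layer, each at 99.9%.
components = [0.999, 0.999, 0.999, 0.999]
combined = series_availability(components)

# Systemic approach: one end-to-end target engineered at 99.99%.
target = 0.9999

print(f"Series of 99.9% components: {combined:.4%} "
      f"(~{annual_downtime_minutes(combined):.0f} min/yr downtime)")
print(f"End-to-end 99.99% target:   {target:.4%} "
      f"(~{annual_downtime_minutes(target):.0f} min/yr downtime)")
```

The chain is only as reliable as the product of its parts, which is the intuition behind designing, building and operating to a single uptime target rather than stitching together individual MEP and IT service levels.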
Data Center Frontier: What are some key project management tips and strategies for facility operators and developers seeking to balance a need for great versatility with a mutual need for great specificity in data center designs for the AI era?
Sean Farney, JLL: Take the time and energy to learn the technology, because the IT equipment for running AI applications is different from legacy IT assets.
Then, talk with business leaders to understand the current and future needs.
As you move into design, plan for optimal scalability and flexibility, knowing that requirements may change as the project progresses.
Our project management team has seen requests to scale in both directions.
Also, be sure to consider how the facility design can support optimal operational efficiency.
If you engage your facility management team early in the design phase, they can provide valuable tips that lead to more efficient and resilient operations down the line.
Data Center Frontier: What's the best path forward for innovation in data center infrastructure optimization, in terms of engineering for ongoing energy efficiency gains and maximum clean energy utilization in the face of AI's exponential power requirements?
Sean Farney, JLL: The industry was already challenged by sustainability goals and regulatory reporting requirements when market interest in AI productization started surging.
This black swan AI event took even the best and most deep-pocketed planners by surprise, but it also created opportunity.
The immediate imperative to deliver change was the impetus we needed to revamp the way we'd been doing things for many years.
Cooling is a prime example. Liquid cooling has been around since at least the days of mainframes, but operators feared the technology for far too long.
AI's massive power/heat density and dramatic market growth forced innovation that ushered in the new cooling tech we needed to minimize environmental footprints.
Liquid cooling can reduce carbon impact by around 40% compared to computer room air conditioners and handlers (CRACs and CRAHs).
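For rough intuition on where a number like that can come from, the sketch below is purely illustrative (the PUE values, IT load and grid carbon intensity are assumed inputs, not JLL's published methodology): lowering PUE shrinks the non-IT energy overhead, and the carbon associated with that overhead falls with it.

```python
# Back-of-envelope sketch: translating a PUE improvement from liquid
# cooling into energy and carbon savings. The PUE values, IT load, and
# grid carbon intensity are illustrative assumptions, not JLL or
# industry-published figures.

HOURS_PER_YEAR = 8760
IT_LOAD_MW = 10              # assumed constant IT load
GRID_KG_CO2_PER_MWH = 400    # assumed grid carbon intensity

PUE_AIR = 1.5                # assumed CRAC/CRAH-cooled facility
PUE_LIQUID = 1.3             # assumed liquid-cooled facility

def overhead_mwh(it_load_mw, pue):
    """Annual non-IT (largely cooling) energy implied by a PUE value."""
    return it_load_mw * (pue - 1.0) * HOURS_PER_YEAR

for label, pue in (("Air (CRAC/CRAH)", PUE_AIR), ("Liquid", PUE_LIQUID)):
    mwh = overhead_mwh(IT_LOAD_MW, pue)
    tonnes = mwh * GRID_KG_CO2_PER_MWH / 1000
    print(f"{label}: {mwh:,.0f} MWh/yr overhead, ~{tonnes:,.0f} t CO2/yr")

reduction = 1 - overhead_mwh(IT_LOAD_MW, PUE_LIQUID) / overhead_mwh(IT_LOAD_MW, PUE_AIR)
print(f"Cooling-overhead energy/carbon reduction: {reduction:.0%}")
```

With these assumed inputs the cooling-related overhead and its carbon drop by about 40 percent; the actual savings depend on the facility's starting PUE, climate, and grid mix.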
I believe we will see a similar paradigm shift in the use of natural gas power plants and behind-the-meter power systems, including small modular reactors (SMRs), to provide predictable supplies of stable, lower-carbon energy.