Data Center Insights: Mike Connaughton, Leviton Network Solutions

Leviton's Mike Connaughton contends that delivering AI data centers at scale will hinge on globally consistent yet locally responsive supply chains, hyperscaler-led standardization across rapidly evolving technologies, and modular infrastructure designs that enable speed, flexibility, and long-term viability, all while proving the economic and societal value of ever-larger deployments.
March 30, 2026
5 min read

The Data Center Frontier Executive Roundtable features insights from industry executives with lengthy experience in the data center industry.

Here’s a look at the Q1 2026 insights from Mike Connaughton, Senior Product Manager at Leviton Network Solutions.

Mike Connaughton is a Senior Product Manager at Leviton Network Solutions, where he leads strategic support and alliances for data center accounts. He has more than 30 years of experience in fiber-optic cabling, including a key role in developing the SMPTE 311M standard for hybrid fiber-optic HD camera cable. Leviton Network Solutions is a global, single-source manufacturer of end-to-end copper and fiber structured cabling systems, delivering solutions purpose-built for hyperscale and AI data centers.

Data Center Frontier:  The industry has entered what many describe as the execution phase of the AI infrastructure cycle. What capabilities — organizational, technical, or operational — will most clearly separate the projects that deliver on time from those that struggle over the next 24 months?

Mike Connaughton, Leviton Network Solutions:  Because AI networks demand unique architectures, layouts, and performance requirements, no two data centers look alike—and many builds require custom solutions. Delivering that level of customization at scale, across diverse markets, will be a defining challenge.

To stay on schedule, data center operators should rely on network infrastructure partners with vertically integrated, globally distributed manufacturing. Suppliers with capacity across continents and strategic regions—and with a strong understanding of local nuances—offer built-in supply chain resilience, shorter lead times, and reliable local availability.

At the same time, operators need confidence that every site will receive the same materials, testing standards, protocols, and service quality. The providers that can deliver both regional specialization and global consistency will be the ones that keep projects on track.

Data Center Frontier:  As AI campuses scale into multi-hundred-megawatt and gigawatt territory, successful delivery increasingly depends on tight coordination across utilities, suppliers, builders, and operators. Where is the industry still too fragmented, and what models of collaboration are proving most effective?

Mike Connaughton, Leviton Network Solutions:  This question really highlights how quickly multiple technology domains are evolving at once—optics, cooling methods, power distribution, and high-density interconnects are all changing in parallel.

That pace creates fragmentation, because suppliers and builders are innovating on different timelines and sometimes different design assumptions.

Right now, the most effective coordination is happening at the hyperscale end-user level.

Large operators are driving cross-vendor alignment by defining clearer solution requirements and pushing suppliers to ensure interoperability across power, thermal, and networking systems.

In several areas, this collaboration is being formalized through communities like the Open Compute Project (OCP), which provides a neutral forum for developing specifications and accelerating adoption.

Over time, the best practices emerging from these hyperscale-led efforts are likely to be codified into more broadly adopted standards—such as updates to TIA-942 and related frameworks—so that the wider market can implement AI campuses with less friction and fewer custom integrations.

Data Center Frontier:  With AI demand evolving rapidly, many operators are trying to balance speed to market with long-term flexibility. How should developers and suppliers think about future-proofing infrastructure, particularly power and electrical capacity, without overbuilding or locking into the wrong assumptions?

Mike Connaughton, Leviton Network Solutions:  Future-proofing means designing with enough flexibility to incorporate new technologies when they become available. Achieving this in an efficient, cost-effective manner requires a foundation of infrastructure with a modular design. With cabling infrastructure, modular systems and pre-terminated structured cabling enable incremental scaling and simplified installation, significantly improving the overall speed of deployment.

This modular approach also allows data centers to avoid costly rip-and-replace projects and immediately begin leveraging new technologies with minimal downtime. Fixed designs and complex cabling will struggle to keep pace, leading to inefficiencies, compatibility issues, and a critical lack of scalability.

Data Center Frontier:  Public scrutiny of large-scale data center development continues to rise, particularly around power use, land, and community impact. Looking ahead, what will define whether the industry optimally maintains its social license to operate as AI infrastructure expands?

Mike Connaughton, Leviton Network Solutions:  Ultimately, this comes down to the economics of scale: The industry’s social license to operate will hinge on whether operators can prove that bigger, more powerful data centers generate meaningfully better outcomes — greater efficiency and improved services — compared to smaller or more distributed alternatives.

As these facilities grow in size and cost, operators must eventually recoup their investments.

If the business case holds, the public will validate operators' choices through their willingness to pay for the resulting AI-driven products and services. But if the benefits fail to outweigh the perceived costs to communities, land use, or the grid, the social and economic support needed for continued expansion will erode.

At the same time, we are seeing AI campus developers engage proactively with local communities and state partners — well before shovels hit the ground — to identify potential challenges early and design solutions collaboratively.

This early-stage coordination helps mitigate construction-related growing pains, ensures infrastructure and environmental concerns are addressed upfront, and builds long-term trust.


Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, as well as on BlueSky, and signing up for our weekly newsletters using the form below.

About the Author

Matt Vincent

Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure.

Since assuming the EIC role in 2023, he has helped guide Data Center Frontier's coverage of the industry's transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand's analytical and multimedia footprint. Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has also served as Head of Content for the Data Center Frontier Trends Summit since its inception.

Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media's digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell's Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.

You can connect with Matt via LinkedIn or email.

Source: ZinetroN/Shutterstock.com