Key Highlights
- XL Batteries introduces long-duration, non-flammable organic flow batteries that offer durability, safety, and economic competitiveness, transforming energy storage for AI data centers.
- STL shifts focus inward, providing high-density fiber and integrated connectivity solutions that address AI-driven internal traffic growth and deployment speed challenges.
- Belden and OptiCool partner to deliver modular, integrated cooling systems designed for uncertain workloads, enabling scalable and water-free cooling at the rack level.
- ABB revolutionizes power delivery with medium-voltage UPS architectures, simplifying infrastructure and enabling flexible, future-proof data center designs.
- Mechanical innovations like direct drive cooling towers and integrated fan wall systems reduce failure points, improve reliability, and streamline large-scale cooling operations.
What does innovation look like when the data center industry moves from planning to building? At Data Center World 2026, the answer included its share of breakthrough concepts but was hyperfocused on practical execution. Across the show floor and in booth conversations, a consistent pattern emerged: the technologies gaining traction are the ones that reduce friction by being faster to deploy, simpler to operate, and flexible enough to adapt as AI infrastructure requirements continue to evolve.
Power remains the gating constraint. The dominant trend may always be access to megawatts and gigawatts, but it is now especially about how that power is delivered, stored, and managed under real-world conditions: interconnection delays, uncertain workloads, and rising density. The same applies inside the data center, where cooling, connectivity, and electrical design are all being rethought as integrated systems rather than standalone components.
The following innovation spotlights, drawn from conversations with company representatives on the DCW26 exhibit floor, reflect that shift. Each points to a different layer of the stack. Viewed together, they offer a snapshot of where the industry is focusing its attention now: not on what might be possible, but on what can actually be built, and, crucially, scaled in the AI era.
XL Batteries: Long-Duration Storage Enters the AI Data Center Conversation
At Data Center World 2026, one of the clearest signals about where innovation is heading didn’t come from a hyperscale operator or an established vendor—but from a startup focused on one of the industry’s hardest problems: energy storage.
XL Batteries, named one of the “Most Inspiring Startups” in the conference’s 2026 Innovation Challenge, is developing non-toxic, non-flammable organic flow batteries designed for long-duration energy storage—ranging from six hours to more than 250 hours.
The company’s technology, spun out of Columbia University and recently recognized as a BloombergNEF Pioneers winner, is aimed squarely at a new reality for data centers: power is no longer just about access—it’s about endurance, flexibility, and control.
From Grid-Scale to Data Center-Scale
Company representatives described a shift in how energy storage is being deployed.
“What used to be utility-scale or grid-scale… is now just large energy storage systems,” one representative said, noting that data centers increasingly sit on both sides of the meter—drawing from the grid when possible, and operating as microgrids when necessary.
That shift is being driven by familiar constraints:
- Interconnection delays
- Explosive load growth from AI
- The need for behind-the-meter resilience
In that environment, long-duration storage becomes more than a grid tool—it becomes part of the data center architecture.
A Different Take on Flow Batteries
XL’s technology doesn’t replace the established flow battery model—it builds on it.
The system functions similarly to a vanadium redox flow battery, but instead of dissolving vanadium in sulfuric acid, it uses organic molecules in pH-neutral water.
Those molecules are “fundamentally stable in the charged and discharged state,” and can be tuned at the molecular level—something not possible with metal-based chemistries.
The implications are practical:
- No reliance on rare earth metals or constrained mineral supply chains
- Feedstocks sourced from geographically diverse materials
- A non-corrosive system environment that simplifies design and lowers cost
The result is a platform designed for 20+ year lifetimes without degradation.
“At year 10, it’s the same—you get the same output,” a company representative said.
That stands in contrast to lithium-ion systems, which often require augmentation within a decade.
Safety as a Deployment Advantage
In the data center context, another differentiator stands out: non-flammability.
Unlike lithium-ion systems, which carry thermal runaway risk, XL’s flow batteries use liquid electrolytes that are inherently non-combustible.
That opens up new siting possibilities—particularly for:
- High-density campuses
- Urban or constrained environments
- Facilities with strict insurance or safety requirements
It also aligns with a growing undercurrent in the industry, where fire risk is becoming a gating factor for on-site energy storage.
One System, Multiple Use Cases
The company emphasized that its platform can address the full spectrum of data center energy needs:
- Fast-response services (e.g., frequency regulation)
- Multi-hour backup and resilience
- Long-duration load shifting and energy arbitrage
In testing, XL reported performance comparable to lithium-ion systems for short-duration response—while extending far beyond lithium’s practical duration range.
That combination is key.
Rather than forcing operators to choose between performance and duration, the goal is to deliver both in a single system.
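One reason a single flow-battery platform can span fast response and multi-day duration is structural: in a flow battery, power is set by the size of the electrochemical stack while energy is set by electrolyte tank volume, so duration scales by growing the tanks without touching the power hardware. A minimal sizing sketch of that relationship (all figures are hypothetical illustrations, not XL's published specifications):

```python
def flow_battery_duration_hours(tank_energy_kwh: float, stack_power_kw: float) -> float:
    """Discharge duration equals stored tank energy divided by stack power.
    In a flow battery these two quantities are sized independently."""
    return tank_energy_kwh / stack_power_kw

# Hypothetical system: a fixed 1 MW stack paired with progressively larger tanks,
# spanning the six-hour to 250-hour range the article describes.
stack_kw = 1_000
for tank_kwh in (6_000, 100_000, 250_000):
    hours = flow_battery_duration_hours(tank_kwh, stack_kw)
    print(f"{tank_kwh / 1_000:.0f} MWh of electrolyte -> {hours:.0f} h at {stack_kw} kW")
```

The design consequence is the one the company describes: extending duration means adding tank capacity, not replacing power electronics, which is why one system can serve frequency regulation and multi-day load shifting alike.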
Economics as the Path to Adoption
Perhaps the most telling insight from the conversation was philosophical.
XL isn’t positioning itself as a sustainability solution first; it’s positioning itself as an economic one.
“You can’t sacrifice a non-negotiable like uptime for sustainability,” the company said. “The way to get adoption is to compete on specs and cost.”
That framing reflects a broader shift across the industry:
- Sustainability is no longer a standalone objective
- It must align with performance and financial outcomes
Or, as the company put it, technologies that win only on values remain niche. Technologies that win on performance, cost, and values become default.
The DCF Take
As AI infrastructure pushes power systems to their limits, energy storage is moving from the periphery to the core of data center design.
What XL Batteries represents is a new class of solution:
- Long-duration, not just backup
- Non-flammable, not just energy-dense
- Economically competitive, not just aspirational
If the industry is entering an “execution era” defined by power constraints, then storage—especially storage that can operate safely and economically at scale—may become one of the defining innovation battlegrounds.
STL: Moving Up the Stack to the Rack
At Data Center World 2026, company representatives from STL described a deliberate shift in strategy—from its historical position in telecom and outside plant fiber into the interior of the data center, all the way “up to the rack.”
That transition reflects a broader market realignment around AI infrastructure. As one company representative put it, “we are in a period of very interesting transition,” with STL moving from telecom customers toward data center operators as network architectures evolve to become “AI-ready.”
From Fiber Density to Deployment Speed
STL’s new Neuralis platform is positioned as a “soup to nuts” solution—combining ultra-high-density fiber, connectorization, and pre-terminated systems designed to reduce deployment friction.
The company’s legacy strengths remain in play. Representatives pointed to fiber innovations including cables with “almost 7,000 strands” and multi-core fiber designs that compress multiple cores into a single strand to reduce diameter.
But the more immediate bottleneck isn’t just fiber density—it’s installation.
“There’s a huge shortage of trained manpower,” a representative said, noting that traditional on-site fiber termination is both time-consuming and expensive.
The answer: shift more work offsite.
By emphasizing pre-terminated, plug-and-play systems assembled in controlled environments, STL is targeting three pain points at once:
- Labor constraints
- Deployment timelines
- First-time quality assurance
“The more you can do it off site… you’re ensuring that it works first time,” the company said.
AI Is Rewriting the Fiber Equation
The demand signal behind this push is clear—and familiar.
As networks transition from 400G to 800G and beyond, STL sees a massive increase in internal data center fiber requirements—“10 times to 30 times” growth depending on architecture.
That growth is tied directly to AI workloads.
GPU-based infrastructure is driving far more east-west traffic between servers, increasing both fiber density and performance requirements inside the data center.
The implication: fiber is no longer just a connectivity layer—it’s becoming a core scaling constraint for AI systems.
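The multiplier becomes intuitive with back-of-envelope math: in a non-blocking leaf-spine fabric, every server-facing port implies a matching uplink, so internal strand counts grow with both server count and per-server port count. The sketch below is an illustrative model only (topology, port counts, and the two-strands-per-duplex-link assumption are mine, not STL figures; real multipliers also depend on link speeds, breakouts, and oversubscription):

```python
def internal_fiber_strands(servers: int, ports_per_server: int,
                           strands_per_link: int = 2) -> int:
    """Rough strand count for a non-blocking two-tier leaf-spine fabric:
    each server port drives one server-to-leaf link plus equivalent
    leaf-to-spine uplink capacity; each duplex link uses two strands."""
    links = servers * ports_per_server * 2  # downlinks + matching uplinks
    return links * strands_per_link

# Illustrative comparison: general-purpose servers with two network ports
# versus GPU servers with a dedicated high-speed NIC per accelerator.
cpu_era = internal_fiber_strands(servers=500, ports_per_server=2)   # 4,000 strands
gpu_era = internal_fiber_strands(servers=500, ports_per_server=8)   # 16,000 strands
print(cpu_era, gpu_era, gpu_era / cpu_era)
```

Even this simplified model quadruples strand count before accounting for higher speeds or breakout cabling, which is the direction of the "10 times to 30 times" growth the company cites.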
One-Stop Shop—Including Copper
In another notable shift, STL is expanding beyond fiber into copper cabling, positioning itself as a broader infrastructure supplier.
The goal, according to company representatives, is to become a “one-stop shop” for data center connectivity—competing on technology rather than commodity pricing.
Looking Ahead: Hollow Core and Co-Packaged Optics
STL also signaled early engagement with next-generation interconnect technologies, including:
- Hollow core fiber for ultra-low latency use cases
- Co-packaged optics (CPO) as a pathway to reduce both latency and power consumption
These remain emerging areas, but the company is actively building partnerships—including with startups—to position itself “two to three years” ahead of broader adoption.
The DCF Take
What stands out here isn’t just a product launch—it’s a repositioning.
STL is following the AI infrastructure stack inward:
- From long-haul and metro fiber
- To data center interconnect
- To in-building fiber systems
- And now, to rack-level deployment
That mirrors a broader industry transition, where traditional network vendors are moving closer to the point of compute as AI reshapes where performance bottlenecks—and value—reside.
Belden + OptiCool: Modular Cooling for the AI Middle Market
At Data Center World 2026, company representatives from Belden and OptiCool described a joint push into integrated rack-level infrastructure—pairing connectivity, power, and modular cooling into a single deployable system aimed squarely at enterprise and mid-market colocation providers.
The partnership reflects a shift already underway inside Belden itself. Long known as a manufacturer of wire, cable, and connectivity products, the company said it has spent the last several years evolving into a solutions provider—leveraging a broader portfolio that spans industrial networking, automation, and control systems.
That repositioning is now extending into AI infrastructure.
From Components to Fully Integrated Systems
Rather than selling discrete products into bid cycles, Belden is now packaging racks, PDUs, cable management, and cooling into a unified offering—delivered as a manufacturer-backed system rather than a third-party integration.
“We can bring a full solution to the table now,” a company representative said, emphasizing that the company is “standing behind the solution as a manufacturer, not as a system integrator.”
The cooling layer comes via OptiCool, whose rear-door heat exchanger (RDHx) technology is designed to scale alongside uncertain AI workloads.
Two-Phase Rear Door Cooling at Rack Scale
OptiCool’s approach centers on two-phase cooling applied at the rear door, combining the non-invasive characteristics of RDHx with the efficiency gains typically associated with direct-to-chip liquid cooling.
According to company representatives, the system:
- Supports up to 120 kW per rack (with 60 kW demonstrated on the show floor)
- Delivers up to 10x cooling capacity compared to traditional approaches
- Operates at roughly one-third the energy consumption of comparable single-phase systems
Instead of injecting cold air, the system extracts heat using refrigerant as the heat sink, reducing demand on CRAC units and broader facility cooling infrastructure.
Designing for Uncertainty: Modular, Swappable Capacity
The defining feature—and the clearest signal of target market—is modularity.
OptiCool’s units are designed to be swapped in and out in “five minutes or less,” allowing operators to scale cooling capacity from 10 kW to 60 kW within the same rack infrastructure.
That flexibility directly addresses a core problem for enterprise and colo operators: uncertainty.
Customers don’t know what their AI workloads will look like in three to five years—but traditional infrastructure forces them to size for peak demand upfront.
“You have to guess… and spend capex on the maximum capacity you might ever use,” a Belden representative said.
The modular model flips that dynamic, allowing incremental scaling without stranded capital.
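The capital difference between the two sizing philosophies is easy to make concrete. The sketch below compares peak-sized upfront spend against staged modular spend under a demand ramp; all dollar figures and module sizes are hypothetical illustrations, not Belden or OptiCool pricing:

```python
import math

def upfront_capex(peak_kw: float, cost_per_kw: float) -> float:
    """Traditional sizing: pay on day one for the maximum you might ever use."""
    return peak_kw * cost_per_kw

def staged_capex(demand_by_year_kw: list[float], module_kw: float,
                 cost_per_module: float) -> float:
    """Modular sizing: buy swappable modules only as demand actually appears."""
    modules_needed = max(math.ceil(d / module_kw) for d in demand_by_year_kw)
    return modules_needed * cost_per_module

# Hypothetical rack: sized for a possible 60 kW peak, but demand only
# materializes to 30 kW over three years.
demand = [10, 20, 30]  # kW drawn in years 1-3
print(upfront_capex(60, 1_000))           # 60,000: full peak capacity on day one
print(staged_capex(demand, 10, 10_000))   # 30,000: three 10 kW modules as needed
```

The gap between the two numbers is the stranded capital the modular model is designed to avoid when workload forecasts prove wrong.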
Targeting the AI “Middle Market”
Both companies repeatedly pointed to the same opportunity: the gap between hyperscale AI deployments and enterprise reality.
While hyperscale systems are often delivered as fixed, highly standardized architectures, enterprise buyers are asking for something different—customizable, adaptable systems that align with their evolving needs.
“Their stuff is made for hyperscale… they’re not listening to our needs,” one representative said, recounting customer feedback.
That demand is being reinforced by the early stages of the AI inference buildout, which is expected to drive more distributed, enterprise-adjacent deployments.
Not every organization needs—or can support—the “Ferrari” of GPU infrastructure.
“There’s this entire mid-market… that doesn’t need that,” the company said.
Integration as the Product
The Belden–OptiCool partnership is notable less for any single component than for how it’s packaged.
The companies emphasized:
- Single-part-number procurement
- Design-to-deployment support
- End-to-end accountability from a unified vendor stack
For customers lacking in-house expertise, the pitch is straightforward: “We’ve already done the heavy lifting.”
Sustainability and Site Constraints
The solution also aligns with growing scrutiny around data center resource use.
Notably, the system requires no water, addressing one of the most visible points of public and regulatory concern.
At the same time, its efficiency gains reduce overall energy demand from supporting cooling infrastructure—a secondary but increasingly important lever in constrained power environments.
The DCF Take
This is a different kind of AI infrastructure play.
Belden and OptiCool aren’t chasing hyperscale megaclusters. They’re building for the long tail of AI adoption—enterprise and colo environments where:
- Workloads are uncertain
- Expertise is limited
- Capital must be staged, not front-loaded
The key innovation isn’t just two-phase cooling at the rack. It’s the combination of:
- Modular scaling
- Integrated delivery
- And a design philosophy built around uncertainty as a first principle
As AI moves beyond hyperscale into broader enterprise deployment, that combination may prove just as important as raw performance.
ABB: Rewiring the Power Stack—and Simplifying the Mechanical Layer
At Data Center World 2026, company representatives from ABB framed their innovation story around a single idea: simplifying complexity at scale—both in how power is delivered and how cooling systems are built.
That effort spans two fronts:
- A medium-voltage UPS architecture aimed at reshaping how large AI campuses are designed
- A set of mechanical innovations targeting reliability, efficiency, and maintainability in cooling systems
Together, they point to a broader shift: treating the data center not as a collection of subsystems, but as an integrated, scalable platform.
Medium Voltage UPS as the “Cornerstone”
At the center of ABB’s pitch is HyperGuard, described as the industry’s first medium-voltage static UPS—operating directly at grid-level voltages rather than stepping down into traditional low-voltage architectures.
That shift has cascading implications.
By staying at medium voltage, operators can:
- Reduce the number of components in the power chain
- Simplify system architecture
- Lower both capex and opex
- Accelerate deployment timelines
“By building at medium voltage, you can deploy faster,” a company representative said, pointing to reduced wiring, commissioning, and overall system complexity.
The system is designed for scale:
- Configurable in 25 MW blocks
- Expandable to 50 MW blocks via parallelization
- Supporting hyperscale-class deployments in the hundreds of megawatts
But the more interesting concept is architectural.
ABB described the medium-voltage UPS as a “cornerstone” layer—a stable foundation that allows flexibility downstream.
“Last Mile” Flexibility: Designing for an Uncertain Future
The key idea is what ABB calls the “last mile conversion.”
Instead of locking in IT power architecture early, operators can:
- Build infrastructure up to the medium-voltage layer
- Defer decisions about AC vs. DC, rack density, and cooling approach
- Adapt the “last mile” as technology evolves
That flexibility is increasingly critical.
With rack densities climbing past 100 kW—and roadmaps pointing higher—operators are facing real uncertainty about:
- AC vs. DC distribution
- 800VDC architectures
- Liquid cooling adoption paths
ABB’s approach is to future-proof everything upstream, allowing changes only at the edge.
“You can start building without knowing what you are going to deploy later,” a company representative said, noting this is especially valuable for colocation providers.
The result is a system that can:
- Support both AC and DC environments
- Enable retrofits with less stranded infrastructure
- Reduce risk tied to evolving chip and rack designs
Grid-to-Chip Thinking—and the DC Transition
ABB also framed its roadmap within a broader “grid-to-chip” perspective—aligning utility-scale power systems with emerging IT architectures.
That includes:
- Work on 800VDC distribution pathways
- Development of solid-state circuit breakers
- Engagement in standards efforts around interoperability
The company emphasized that no single vendor can define this transition alone, pointing to ongoing collaboration across the industry to avoid fragmentation.
Real-World Deployment: Applied Digital
ABB highlighted its partnership with Applied Digital as a proving ground for the architecture, including a 400 MW-scale facility in North Dakota built around medium-voltage design principles.
The project underscores a key theme: innovation is no longer theoretical—it’s being deployed at scale.
Mechanical Innovation: Removing Failure Points
Alongside power architecture, ABB is targeting another source of friction: mechanical complexity in cooling systems.
Direct Drive Cooling Towers
One example is a direct-drive cooling tower motor that eliminates:
- Gearboxes
- Belts and pulleys
- Associated failure points
By connecting the motor directly to the fan, ABB reduces maintenance requirements and improves reliability—particularly in large-scale deployments where mechanical failure risk multiplies.
Operators are already looking to retrofit existing systems, according to company representatives.
Integrated Motor Drive Systems for Fan Walls
ABB also showcased a redesigned integrated motor drive (IMD) system for fan walls, combining motor and drive into a single unit.
Key characteristics include:
- 60% reduction in material usage
- Plug-and-play replacement (four screws to swap a drive)
- Reduced structural requirements for large fan arrays
- Lower noise and improved efficiency
The scale of demand is striking.
Representatives said hyperscale operators are requesting tens of thousands to 100,000 units for single deployments—reflecting the sheer mechanical footprint of modern cooling systems.
The DCF Take
ABB’s story is about simplification—but at very different layers of the stack.
On the electrical side:
- Medium voltage UPS rethinks how power systems are architected
- “Last mile” flexibility acknowledges uncertainty in AI infrastructure
On the mechanical side:
- Direct drive and integrated systems remove failure points
- Efficiency gains come from eliminating components, not just optimizing them
Taken together, the approach reflects a broader reality:
As data centers scale into the hundreds of megawatts, complexity itself becomes the constraint.
ABB’s answer is to reduce it—upstream in the power chain, and downstream in the physical systems that keep AI infrastructure running.
At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.
About the Author
Matt Vincent
Matt Vincent is Editor in Chief of Data Center Frontier, where he leads editorial strategy and coverage focused on the infrastructure powering cloud computing, artificial intelligence, and the digital economy. A veteran B2B technology journalist with more than two decades of experience, Vincent specializes in the intersection of data centers, power, cooling, and emerging AI-era infrastructure. Since assuming the EIC role in 2023, he has helped guide Data Center Frontier’s coverage of the industry’s transition into the gigawatt-scale AI era, with a focus on hyperscale development, behind-the-meter power strategies, liquid cooling architectures, and the evolving energy demands of high-density compute, while working closely with the Digital Infrastructure Group at Endeavor Business Media to expand the brand’s analytical and multimedia footprint.
Vincent also hosts The Data Center Frontier Show podcast, where he interviews industry leaders across hyperscale, colocation, utilities, and the data center supply chain to examine the technologies and business models reshaping digital infrastructure. He has also served as Head of Content for the Data Center Frontier Trends Summit since its inception.
Before becoming Editor in Chief, he served in multiple senior editorial roles across Endeavor Business Media’s digital infrastructure portfolio, with coverage spanning data centers and hyperscale infrastructure, structured cabling and networking, telecom and datacom, IP physical security, and wireless and Pro AV markets. He began his career in 2005 within PennWell’s Advanced Technology Division and later held senior editorial positions supporting brands such as Cabling Installation & Maintenance, Lightwave Online, Broadband Technology Report, and Smart Buildings Technology.
Vincent is a frequent moderator, interviewer, and keynote speaker at industry events including the HPC Forum, where he delivers forward-looking analysis on how AI and high-performance computing are reshaping digital infrastructure. He graduated with honors from Indiana University Bloomington with a B.A. in English Literature and Creative Writing and lives in southern New Hampshire with his family, remaining an active musician in his spare time.