The Open Compute Project (OCP) is working to enable wider adoption of liquid cooling, citing demand from hyperscale computing providers, as well as new applications in edge computing. The move reflects growing use of liquid cooling to handle high-density workloads for artificial intelligence and other next-generation technologies.
The Open Compute Project (OCP) announced last week that it is creating an Advanced Cooling Solutions project, saying it is “responding to an industry need to collaborate on liquid cooling and other advanced cooling approaches.” Interest in advanced cooling is clearly on the rise in the wake of Google’s revelation that it has shifted to liquid cooling with its latest hardware for artificial intelligence.
The OCP is a growing community of open source hardware hackers who are building on design innovations created for Facebook’s data centers. Over the past five years, a new generation of hardware vendors has leveraged these open source designs to win business in the hyperscale computing market.
The Open Compute Project has been notable for its role in creating an ecosystem – and thus a market – for cutting-edge technology for hyperscale customers. That could give a shot in the arm to liquid cooling, which has historically focused on high-performance computing (HPC) and supercomputing. Hyperscalers have been experimenting with various liquid cooling technologies in their labs for years.
The OCP initiative also raises the prospect of competitive disruption within the world of liquid cooling, where many of the leading providers are specialists offering proprietary solutions. The goal of the project is to “enable a non-proprietary, multi-vendor supply chain for ‘warm water’ cooling.”
“OCP envisions a supply chain offering a variety of IT devices (servers, storage, networking etc.) that can work with a variety of liquid-enabled racks from many solutions providers,” said Bill Carter, chief technology officer of the Open Compute Foundation. “Direct contact, immersion, and other advanced cooling options are within the scope of this project.”
A Google TPUv3 system, which consists of eight racks of liquid-cooled devices featuring Google’s custom ASIC chips for machine learning. (Image: Google)
Google’s adoption of liquid cooling at scale is likely a sign of things to come for hyperscale data centers. Google isn’t the only cloud titan interested in liquid cooling, either. At the Open Compute Summit 2018, Microsoft had a water-cooled server on display in its booth, while Facebook is known to have tested liquid cooling, including submerging its servers in dielectric fluid.
The rise of artificial intelligence, and the hardware that often supports it, is reshaping the data center industry’s relationship with servers. New hardware for AI workloads is packing more computing power into each piece of equipment, boosting the power density – the amount of electricity used by servers and storage in a rack or cabinet – and the accompanying heat. The trend is challenging traditional practices in data center cooling, and prompting data center operators to adopt new strategies and designs.
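As a back-of-the-envelope illustration of the density shift described above (the server counts and wattages here are hypothetical, not figures from the article):

```python
# Illustrative, hypothetical figures: rack power density is simply the
# total draw of the IT equipment installed in a single rack or cabinet.
def rack_power_density_kw(servers_per_rack, watts_per_server):
    """Return total rack draw in kilowatts."""
    return servers_per_rack * watts_per_server / 1000.0

# A traditional rack of modest 1U servers vs. a dense rack of
# GPU-accelerated AI systems:
air_cooled = rack_power_density_kw(40, 250)   # -> 10.0 kW
ai_dense   = rack_power_density_kw(8, 4000)   # -> 32.0 kW
```

Even at these rough numbers, the AI rack draws several times the power – and sheds several times the heat – of a traditional one, which is what pushes operators past the comfortable limits of air cooling.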
The emphasis on warm water cooling is consistent with hyperscale computing’s focus on energy efficiency. Cold water is already widely used to chill air in room-level and row-level systems. The real shift is in designs that bring liquids into the server chassis to cool chips and components. This can be done through enclosed systems featuring pipes and plates, or by immersing servers in fluids. Some vendors integrate water cooling into the rear door of a rack or cabinet.
Bringing water to the chip enables even greater efficiency, creating tightly-designed and controlled environments that focus the cooling as close as possible to the heat-generating components. This technique can cool components using warmer water temperatures.
However, most servers are designed to use air cooling. A small group of HPC specialists offer water-cooled servers, including Asetek, CoolIT, Ebullient and Aquila Systems. There’s also a group of vendors using various approaches to immersion, including 3M, LiquidCool, and GRC (formerly Green Revolution).
One barrier to adoption is that the up-front installation cost can be higher than those for air-cooled systems. Hyperscale providers need to provision servers by the tens of thousands, and even small differences in cost can add up quickly. That’s an area where the Open Compute Project can make a difference, enabling an ecosystem of users and vendors to work from a common set of designs and concepts.
The OCP liquid cooling project has two goals:
“Harmonization of liquid cooling is a logical step for the OCP community, as it will benefit OpenRack and Olympus-based products as well as Project Scorpio in China,” said Carter, mentioning three of the projects advancing rackscale designs.
The open hardware movement has been boosted by the rise of cloud computing providers, who have become the leading purchasers of servers and storage in recent years, even as enterprise sales have plateaued. The rise of open hardware has raised the profile of contract manufacturers and ODMs (original design manufacturers), who have adapted OCP designs into server and storage product lines. Direct server sales by ODM providers have grown to about 25 percent of all sales, according to research from IDC.
“Hyperscale growth continued to drive server volume demand in the first quarter,” said Sanjay Medvitz, senior research analyst, Servers and Storage at IDC. “While various OEMs are finding success in this space, ODMs remain the primary beneficiary from the quickly growing hyperscale server demand, now accounting for roughly a quarter of overall server market revenue and shipments.”
Could that precedent repeat itself as demand for liquid cooling grows? Or will an expanded market be the “tide that floats all boats,” providing plenty of opportunity for both incumbents and ODMs? Time will tell.
In the meantime, it’s important to note that the Open Compute Project sees the opportunity for liquid cooling extending beyond hyperscale data centers to edge computing, where the location and economics of a site may make traditional air-cooling infrastructure problematic.
“With the growth of 5G, IoT, VR, CDN, and latency-sensitive applications, data centers are being constructed closer to their customers and often in regions of the globe where traditional air and mechanical (e.g. chillers) cooling becomes quite expensive,” wrote Carter. “Increased power density also introduces cooling challenges. In these cases liquid cooling, and specifically warm water cooling, becomes an effective alternative for heat extraction.”
Explore the evolving world of edge computing further through Data Center Frontier’s special report series and ongoing coverage.