Advanced Computing in the Age of AI | Saturday, December 3, 2022

NREL’s Skynet Stays Cool with RackCDU 

In an effort to build the most energy-efficient datacenter in the world, the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) is relocating its Skynet HPC cluster to a new datacenter at the Energy Systems Integration Facility (ESIF) in Golden, Colo. At the heart of this move is Asetek's RackCDU direct-to-chip "hot water" cooling system.

Skynet was designed to be an HPC cluster, not an energy-efficient supercomputer. The machine originally used air cooling, and in that form it could not be moved to the new datacenter without significantly degrading the facility's power usage effectiveness (PUE), because the rest of the datacenter is liquid cooled. RackCDU made it possible to move Skynet to its new home.

Asetek has been providing in-chassis cooling systems for over a decade for companies like Dell and HP. The RackCDU is the company's first major installation for datacenters. According to Stephen Empedocles, director of business development for Asetek, the company originally contacted NREL as the product was being developed to get insights on the most important aspects of liquid cooling and determine what enhancements would add the most value.

"They are the foremost thought leaders in liquid cooling and data center efficiency," Empedocles said.

RackCDU's design eliminates the need for customized servers, allowing the cooling system to be installed as a retrofit to existing air-cooled HPC clusters. This reduces energy and water consumption and increases server density within the cluster. RackCDU's design also reduces floor-space and rack infrastructure requirements.

"One of the really exciting things about this installation is that it is being done as a drop-in retrofit of an existing cluster," said Empedocles. "Liquid-cooling is not a new idea; but previous systems have relied on custom server designs that tend to be very expensive and complex. Our system fits into standard off-the-shelf servers, and will work with all brands."

RackCDU enables cooling-energy savings of up to 80 percent and density increases of 2.5x compared with modern air-cooled datacenters, which must remove 100 percent of the heat from the server with air. With the liquid loop carrying away most of that heat, the server fans need only remove the remaining 20 percent and can run at lower speeds. The slower fans draw less electricity, providing a roughly 10 percent reduction in IT load.
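To make that arithmetic concrete, here is a rough sketch of how removing most of the heat with liquid translates into lower fan power. It uses the fan affinity law (fan power scales roughly with the cube of airflow); all of the numbers are illustrative assumptions, not Asetek or NREL figures:

```python
# Illustrative sketch of the fan-power arithmetic above.
# Every number is an assumption for illustration, not a vendor specification.

def fan_power_fraction(heat_fraction_for_fans: float) -> float:
    """Fan affinity law: airflow scales roughly linearly with the heat the
    fans must remove, and fan power scales with the cube of airflow."""
    return heat_fraction_for_fans ** 3

server_power = 400.0          # watts per server (assumed)
fan_share_air_cooled = 0.12   # fans as a fraction of server power (assumed)

# Air cooling: fans remove 100 percent of the heat.
fan_power_air = server_power * fan_share_air_cooled

# With direct-to-chip liquid cooling, fans remove only ~20 percent of the heat.
fan_power_liquid = fan_power_air * fan_power_fraction(0.20)

it_load_reduction = (fan_power_air - fan_power_liquid) / server_power
print(f"Fan power drops from {fan_power_air:.0f} W to {fan_power_liquid:.1f} W")
print(f"IT load reduction: {it_load_reduction:.1%}")
```

With these assumed inputs the reduction lands near the 10 percent figure quoted above, though the real number depends on the fan share and fan curves of the actual servers.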

The liquid cooling system does this by removing heat from CPUs, GPUs, memory modules and other hot spots within servers. The liquid then carries the heat out of the datacenter, where it can be rejected for free using outside air. Up to 60 percent of the heat generated by the datacenter can be recovered in the form of waste heat, which can be recycled and used for building heat and hot water.

Because RackCDU is so efficient, the payback on a system can be anywhere from zero to 12 months. By reducing the air-cooling load, fewer chillers and computer room air conditioners are required. For datacenters being renovated or newly constructed, the savings from not having to install those systems can exceed the cost of the RackCDU system itself.
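The payback claim reduces to simple arithmetic: avoided chiller and air-conditioner capital plus annual cooling-energy savings, set against the cost of the liquid-cooling hardware. A minimal sketch, using placeholder dollar figures rather than actual vendor pricing:

```python
# Simple-payback sketch for a liquid-cooling retrofit.
# Every dollar figure here is a placeholder assumption, not vendor pricing.

def simple_payback_months(capex: float, avoided_capex: float,
                          annual_savings: float) -> float:
    """Months to recover net capital cost from annual energy savings.
    Returns 0 when avoided equipment alone covers the system cost."""
    net_cost = capex - avoided_capex
    if net_cost <= 0:
        return 0.0  # avoided chillers/CRACs pay for the system outright
    return 12.0 * net_cost / annual_savings

# Retrofit of an existing air-cooled cluster: no avoided capital.
print(simple_payback_months(capex=100_000, avoided_capex=0,
                            annual_savings=120_000))   # 10.0 months

# New construction: skipping chillers covers the whole system cost.
print(simple_payback_months(capex=100_000, avoided_capex=150_000,
                            annual_savings=120_000))   # 0.0 months
```

The two cases mirror the article's zero-to-12-month range: retrofits pay back through energy savings alone, while new builds can start at zero net cost.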

The datacenter at NREL's ESIF uses 75-degree Fahrenheit water, or "warm water," to run servers and recover waste heat that can be used as the primary heat source for the building's office space and laboratories. RackCDU uses 105-degree Fahrenheit "hot water," which improves waste-heat recovery and reduces the datacenter's water consumption.

The NREL datacenter currently requires cooling towers and water for evaporative cooling, which limits how much waste heat can be recovered. Elevating the water temperature allows more waste heat to be recovered.

"The hotter the water is, the easier and more efficient it is to reuse it," said Empedocles.

"Ambient water temperature in the hydronic system is a critical factor in datacenter efficiency and sustainability," said Steve Hammond, director of the Computational Science Center at NREL. "Starting with warmer water on the inlet side can create an opportunity for enhanced waste-heat recovery and reduced water consumption, and in many locations can be accomplished without the need for active chilling or evaporative cooling, which could lead to dramatically reduced cooling costs."

The elevated waste heat recovery can be used to preheat the water going into a boiler, further reducing the cost of heat and hot water for the facility.

"We are thrilled to have RackCDU installed at NREL, which is at the forefront of datacenter cooling technology. This latest installation shows that Asetek can improve performance even at the world's most efficient data center," said Andre Eriksen, Asetek's CEO.

With Asetek's RackCDU cooling system in place, the ESIF is designed to be the most energy-efficient datacenter in the world, with a PUE of 1.06. Asetek's liquid cooling technology also cools Cray's Xtreme-Cool supercomputer.
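PUE is simply total facility energy divided by IT equipment energy, so a PUE of 1.06 means only six percent of the facility's power goes to cooling, power distribution and other overhead. A minimal sketch of the metric, with illustrative load figures chosen to reproduce the numbers in the text:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# The kW figures below are illustrative, chosen to match the PUE values cited.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# A facility drawing 1,060 kW in total for 1,000 kW of IT load:
print(pue(1060.0, 1000.0))   # 1.06 -> only 6% overhead

# A legacy air-cooled facility, by contrast, might run near 2.0:
print(pue(2000.0, 1000.0))   # 2.0 -> as much power for overhead as for IT
```

The contrast shows why a 1.06 design target is notable: a typical older datacenter spends roughly as much energy on cooling and distribution as on computing itself.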



EnterpriseAI