
Cooling from the Inside 

To the datacenter professional, keeping equipment cool requires a lot of money and energy, but what if the computer could be cooled from the inside? While there are many ways to maintain proper server operating temperatures, experts say more energy-efficient methods will be necessary to keep up with growing demand.

For the average air-cooled datacenter, cooling costs represent 30-50 percent of total energy expenditure. Increased transistor density and higher clock frequency have led to hotter-running chips. Forced air convection is the current standard for cooling chips, but this will not sustain the next generation of electronics. Going forward, processors will need more efficient and compact cooling mechanisms.
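As a rough back-of-envelope sketch of what that share means in dollars (assuming a hypothetical 1 MW IT load and $0.10 per kWh, and ignoring non-cooling overhead such as power conversion and lighting):

```python
# Back-of-envelope estimate of what a 30-50 percent cooling share means in
# practice. The IT load and electricity price are illustrative assumptions,
# and non-cooling overhead (lighting, power conversion, etc.) is ignored.

IT_LOAD_KW = 1_000       # assumed IT equipment draw for a mid-size facility
PRICE_PER_KWH = 0.10     # assumed average electricity price, USD
HOURS_PER_YEAR = 8_760

for cooling_share in (0.30, 0.50):
    # If cooling is this fraction of total energy, total = IT / (1 - share).
    total_kw = IT_LOAD_KW / (1 - cooling_share)
    cooling_kw = total_kw - IT_LOAD_KW
    annual_cost = cooling_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"cooling at {cooling_share:.0%} of total: {cooling_kw:,.0f} kW, "
          f"~${annual_cost:,.0f} per year")
```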

The thermal issue is one of the biggest challenges facing the computing industry and it's tied to the predicted demise of Moore's Law. Bruno Michel, manager of Advanced Thermal Packaging at IBM Zurich Research Laboratory, observes that "the risk is that development will stop in just a few years after the 30 years of Moore's Law."

Even as Moore's Law is said to be petering out, the demand for data processing is at an all-time high. A full 90 percent of all the data in the world was created in the last two years, and datacenters are responsible for 2 percent of overall US electrical usage. Between rising energy costs and the growing dependence on hyperscale server farms, the old status quo is no longer sufficient. Big computing has a big energy problem, but there is hope in the form of numerous innovative cooling technologies covered in a recent Communications of the ACM article.

At Purdue University, scientists at the Cooling Technologies Research Center are experimenting with high-conductivity thermal heat spreaders, such as vapor chambers and heat pipes. The technology relies on vaporization and condensation principles to transport heat away from the chip and the board.
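The heat such a device can carry is set largely by the working fluid's latent heat of vaporization. A minimal sketch, assuming water as the working fluid and a 150 W chip (illustrative figures, not from the Purdue work):

```python
# Why phase change moves heat so well: the load a heat pipe or vapor chamber
# carries is roughly Q = m_dot * h_fg, where h_fg is the working fluid's
# latent heat of vaporization. Chip power and fluid choice are assumptions.

H_FG_WATER = 2.26e6    # J/kg, approximate latent heat of vaporization of water
CHIP_POWER_W = 150     # assumed heat load from one processor package

mass_flow_kg_s = CHIP_POWER_W / H_FG_WATER
print(f"~{mass_flow_kg_s * 1e6:.0f} mg/s of evaporating water can carry "
      f"{CHIP_POWER_W} W away from the chip")
```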

Similar techniques are being studied for their use in mobile electronic devices. Researchers are exploring the feasibility of using heat pipes in devices that are limited by their overall size and thickness. The goal of the project is to develop commercial ultra-thin heat pipes in a five-year timeframe.

Liquid cooling, a longtime staple in HPC systems, is also a contender. Compared to air, liquids are more efficient coolants because they are better thermal conductors and have a higher heat capacity. In fact, liquids are estimated to be about 4,000 times more effective at removing heat than air.
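That figure lines up roughly with textbook fluid properties. A quick comparison, using approximate room-temperature values for water and air:

```python
# Comparing water and air as coolants with approximate room-temperature
# properties. The quoted ~4,000x figure is close to the ratio of volumetric
# heat capacities (how much heat a given volume absorbs per degree).

properties = {
    #        density (kg/m^3), specific heat (J/kg*K), conductivity (W/m*K)
    "water": (998.0, 4182.0, 0.60),
    "air":   (1.2,   1005.0, 0.026),
}

def volumetric_heat_capacity(fluid):
    density, specific_heat, _ = properties[fluid]
    return density * specific_heat   # J per cubic metre per kelvin

capacity_ratio = volumetric_heat_capacity("water") / volumetric_heat_capacity("air")
conductivity_ratio = properties["water"][2] / properties["air"][2]
print(f"volumetric heat capacity: water ~{capacity_ratio:,.0f}x air")
print(f"thermal conductivity:     water ~{conductivity_ratio:.0f}x air")
```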

In the report Experimental Investigation of a Hot Water Cooled Heat Sink for Efficient Data Center Cooling: Towards Electronic Cooling with High Exergetic Utility, the authors note that "Water cooling using compact manifold micro-channel (MMC) heat sinks is one of the most promising strategies to meet the cooling requirement of chips in next-generation data centers."

While many datacenters are moving to "free" air cooling or ambient cooling, the trend toward higher densities may bring about a resurgence in liquid cooling or a combination of the two.

Chipmakers are also working to reduce the power consumption of chips so they don't generate as much heat in the first place. Intel principal engineer Michael K. Patterson notes: "We've done a lot to reduce the idle power on computers. In the past there was very little difference between idle power and peak power; if the computer was doing nothing, it used pretty much the same amount of power as when it was very active, and at a high utilization rate."

Chipmakers have also developed variable clock-speed mechanisms, which employ the lowest clock speed necessary to get the job done. When the chip is running at lower speeds, energy is saved.
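The savings follow from the physics of CMOS switching: dynamic power scales roughly as P = C·V²·f, and a lower clock usually permits a lower supply voltage as well. A minimal sketch, with illustrative rather than chip-specific values:

```python
# Why a lower clock saves energy: dynamic CMOS power scales roughly as
# P = C * V^2 * f, and running slower usually allows a lower supply voltage
# too. Capacitance, voltage, and frequency values here are illustrative
# assumptions, not figures for any particular processor.

def dynamic_power(c_eff_farads, voltage, freq_hz):
    """Approximate switching power of a CMOS chip."""
    return c_eff_farads * voltage ** 2 * freq_hz

C_EFF = 3e-8   # assumed effective switched capacitance

full_speed = dynamic_power(C_EFF, voltage=1.2, freq_hz=3.0e9)
scaled     = dynamic_power(C_EFF, voltage=0.9, freq_hz=1.5e9)  # lower-speed state

print(f"full speed: {full_speed:.0f} W, scaled down: {scaled:.0f} W "
      f"({scaled / full_speed:.0%} of full-speed power)")
```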

The industry has shifted its focus from pure performance to performance-per-watt. As the IBM report notes, "the new target must be high performance and low net power consumption (and, concomitantly, low net carbon footprint)." There's a competition afoot to develop chips that do more with less energy. The ARM development community thinks it can do this better than x86 computing.

Besides chips, there are other up-and-coming developments that can reduce the heat produced by electronics. Ionic wind cooling and piezoelectric fans, for example, have benefits that could extend to server farms. Piezo fans are based on the principle of piezoelectricity. They can displace air like traditional fans, but they have no moving parts and are nearly silent.

There's also the exciting possibility that waste heat from electronics could be turned into electricity via devices called thermoelectric generators (TEGs) or thermogenerators. By recapturing this "lost" energy, overall energy consumption would go down and battery-powered devices would last longer between charges.
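The amounts recoverable are modest, since real thermoelectric modules capture only a fraction of the Carnot limit. A rough sketch, with assumed temperatures, heat flow, and conversion factor:

```python
# Rough sense of what a thermoelectric generator (TEG) could recover. Real
# modules reach only a fraction of the Carnot limit; temperatures, heat flow,
# and the fraction-of-Carnot factor below are assumptions for illustration.

T_HOT_K = 350.0            # assumed hot-side temperature (~77 C exhaust)
T_COLD_K = 300.0           # assumed cold-side (room) temperature
WASTE_HEAT_W = 100.0       # assumed waste heat flowing through the module
FRACTION_OF_CARNOT = 0.15  # rough figure for commodity thermoelectric modules

carnot_limit = 1.0 - T_COLD_K / T_HOT_K
recovered_w = WASTE_HEAT_W * carnot_limit * FRACTION_OF_CARNOT
print(f"Carnot limit {carnot_limit:.1%}; ~{recovered_w:.1f} W recovered "
      f"from {WASTE_HEAT_W:.0f} W of waste heat")
```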

As the popularity of green computing attests, energy efficiency is the rallying cry for the 21st century. From the transistor to the supercomputer, every aspect of the ecosystem will be held to a new standard.

EnterpriseAI