
Datacenter Revamps Cut Energy Costs At CenturyLink 

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns than by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter, and using that power efficiently can boost processing and storage capacity while dropping savings straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the Prism TV and DirecTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has since been able to expand to 57 datacenters – it just opened its Toronto TR3 facility on September 8 – comprising close to 2.6 million square feet of datacenter floor space is that it started tackling those power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, while others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses, which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink's biggest customers come from the financial services, healthcare, online gaming, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone's guess is that, all told, the datacenters house hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, thinks about it. What goes in the rack is the customers' business, not CenturyLink's.

 

[Image: CenturyLink datacenters]

"We are loading up these facilities and trying to drive our capacity utilization upwards," says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, which surveyed colocation facilities, wholesale datacenter suppliers, and enterprises actually use only around 50 percent of the power that comes into the facilities. "We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher."

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, workloads are growing faster still, so the overall power consumption of the infrastructure in the datacenter continues to grow.

"People walk into datacenters and they have this idea that they should be cold – but they really shouldn't be," says Stone. "Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem."

Companies have to do a few things at the same time to get into that optimal temperature zone, and CenturyLink was shooting for around 75 degrees at the server inlet, compared to 68 degrees in the initial test in the server racks at a 65,000 square foot datacenter in Los Angeles. Here's a rule of thumb: for every degree Fahrenheit the server inlet temperature is raised in the datacenter, the power bill drops by about 2 percent. You can't push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
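To make that rule of thumb concrete, here is a small calculation sketch in Python. The 2 percent figure is the article's; the compounding treatment and the $1 million baseline bill are illustrative assumptions, not CenturyLink numbers.

```python
# Illustrative only: apply the "~2 percent per degree Fahrenheit" rule of thumb
# to a hypothetical facility power bill. The compounding treatment and the
# $1M baseline are assumptions made for the sake of the example.

def estimated_bill(baseline_bill: float, degrees_raised: float,
                   savings_per_degree: float = 0.02) -> float:
    """Estimate the power bill after raising the server inlet temperature.

    Treats each degree as compounding; a simple linear treatment
    (baseline * (1 - degrees_raised * savings_per_degree)) gives a
    similar answer for small temperature changes.
    """
    return baseline_bill * (1 - savings_per_degree) ** degrees_raised

baseline = 1_000_000.0   # hypothetical annual power bill for one facility
raised = 75 - 67         # inlet raised from 67F to the ~75F target
print(f"Estimated bill: ${estimated_bill(baseline, raised):,.0f}")
print(f"Estimated savings: ~{(1 - estimated_bill(1.0, raised)) * 100:.0f}%")
```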

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

"If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event," Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceiling, walls, or columns of datacenters, but more recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone said that Eaton and Schneider Electric also have very good building management systems, by the way.)
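The control loop Stone describes – per-rack inlet sensors feeding a building management system that balances cooling row by row – might look something like the sketch below. The sensor readings, rack names, target values, and setpoint logic are hypothetical placeholders for illustration, not the actual RF Code or Automated Logic interfaces.

```python
# Hypothetical sketch of the per-aisle control loop described above: average
# the inlet temperatures reported by rack-mounted sensors in one contained
# aisle and nudge that aisle's cooling toward the target. The readings, rack
# names, and decision logic are placeholders, not real RF Code or BMS APIs.

from statistics import mean

TARGET_INLET_F = 75.0   # target server inlet temperature from the article
DEADBAND_F = 1.0        # hold steady while within +/- 1 degree of target

def adjust_aisle_cooling(inlet_temps_f: dict[str, float]) -> str:
    """Decide a cooling action for one contained aisle from its rack inlet temps."""
    avg = mean(inlet_temps_f.values())
    if avg > TARGET_INLET_F + DEADBAND_F:
        return f"avg {avg:.1f}F: increase airflow/chilled water to this aisle"
    if avg < TARGET_INLET_F - DEADBAND_F:
        return f"avg {avg:.1f}F: throttle back cooling to this aisle"
    return f"avg {avg:.1f}F: within deadband, hold current setpoint"

# Hypothetical readings from wireless tags on three racks in one cold aisle.
print(adjust_aisle_cooling({"rack-01": 73.8, "rack-02": 74.6, "rack-03": 74.1}))
```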

[Image: CenturyLink energy savings]

The energy efficiency effort doesn't stop there. CenturyLink is now looking at retrofitting its CRAC and CRAH units – short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the latter case, extra cooling capacity is provided by turning on extra units, and in the former it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and at replacing cooling tower fans in some facilities.
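The reason many units at low speed beat a few units at high speed comes down to the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed. The sketch below illustrates that scaling; the unit counts and the 10 kW rating are made-up numbers for illustration, not CenturyLink figures.

```python
# Illustrative fan-affinity-law arithmetic: fan power scales roughly with the
# cube of fan speed, so spreading the same total airflow across more units
# running slower draws less power. Unit counts and the 10 kW rating are made up.

def total_fan_power_kw(units: int, speed_fraction: float, rated_kw: float = 10.0) -> float:
    """Approximate total fan power for `units` fans running at a fraction of full speed."""
    return units * rated_kw * speed_fraction ** 3

# Same aggregate airflow either way (airflow ~ linear in speed):
# 4 units at 100% speed vs. 8 units at 50% speed.
print(f"4 units at 100%: {total_fan_power_kw(4, 1.0):.1f} kW")
print(f"8 units at  50%: {total_fan_power_kw(8, 0.5):.1f} kW")
```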

"We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first," says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest in CenturyLink's fleet of facilities), it did aisle containment, added RF Code sensors, installed variable speed CRAC units and variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that run slower and yet pull more air through the cooling towers. This facility, which is located out near O'Hare International Airport, has 163,747 square feet of datacenter space and a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced energy consumption in the CH2 facility by 7.4 million kilowatt-hours per year, and Stone just last month collected a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of the upgrades in the CH2 facility cost roughly $2.4 million, and the power savings alone put the return on investment on the order of 21 months – and that is before the rebate was factored in.
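For a sense of scale, a quick unit conversion (sketched below, using only the figures quoted above) turns that 7.4 million kilowatt-hours per year into an average continuous load reduction of roughly 845 kilowatts, or about 5 percent of CH2's rated capacity.

```python
# Convert the article's 7.4 million kWh/year reduction at CH2 into an average
# continuous load and compare it with the facility's 17.6 MW total capacity.
# Both inputs are figures from the article; the rest is just unit conversion.

ANNUAL_KWH_SAVED = 7_400_000
HOURS_PER_YEAR = 8_760
FACILITY_CAPACITY_KW = 17_600

avg_kw_reduction = ANNUAL_KWH_SAVED / HOURS_PER_YEAR
share_of_capacity = avg_kw_reduction / FACILITY_CAPACITY_KW

print(f"Average load reduction: ~{avg_kw_reduction:.0f} kW")
print(f"Share of the 17.6 MW capacity: ~{share_of_capacity:.1%}")
```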
