Datacenters Seek Geographical Advantage
According to analysts at IDC, the number of US datacenters is shrinking, but the average size is expanding. Total datacenter space is set to balloon from 611.4 million square feet in 2012 to more than 700 million square feet in 2016.
Virtualization, server consolidation and cloud-based application delivery have driven the decline in the number of datacenters. Over the same period, the shift to shared resource pools – aka public clouds – has led to the creation of larger and larger datacenters. IDC anticipates that service providers will soon account for more than a quarter of all large datacenter capacity in the US.
With big datacenters, however, come big utility bills. Companies are responding to the power challenge with a variety of novel techniques, including modular build designs and innovative air cooling. Microsoft uses a mix of these approaches to achieve an impressive PUE (or power usage effectiveness) reading of 1.15 to 1.2, while the average US datacenter maintains a PUE of about 2.0.
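The gap between those PUE figures is easier to appreciate with a little arithmetic. PUE is simply total facility power divided by the power that actually reaches IT equipment, so a PUE of 2.0 means every kilowatt of computing carries a full kilowatt of overhead. The sketch below illustrates this; the 1,000 kW IT load and the facility-power inputs are invented for illustration, chosen only to reproduce the PUE values quoted above.

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# All wattage figures below are hypothetical, picked to match the article's
# quoted PUE values of 1.15 and 2.0.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of all power drawn by the facility to power reaching IT gear."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0                    # hypothetical IT load in kW
efficient = pue(1150.0, it_load)    # a Microsoft-class facility
average = pue(2000.0, it_load)      # a typical US datacenter

print(f"Efficient facility PUE: {efficient:.2f}")
print(f"Average facility PUE:   {average:.2f}")

# Overhead (cooling, power distribution, lighting) per kW of IT load:
print(f"Overhead at PUE 1.15: {efficient - 1:.2f} kW per IT kW")
print(f"Overhead at PUE 2.00: {average - 1:.2f} kW per IT kW")
```

At a PUE of 2.0, overhead equals the IT load itself; at 1.15, it shrinks to 150 watts per kilowatt of computing, which is what makes the geography-driven cooling strategies below so attractive.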
Software-based power management is another tool to help curb expanding power consumption. But perhaps the most obvious and enticing approach is some version of free or near-free cooling. Microsoft, Facebook, Yahoo and Google among others are actively pursuing these concepts. Instead of bringing the cold to the datacenter, why not bring the datacenter to the cold? That's what prompted Google and Facebook to build sites in Northern Europe.
As datacenter operation and maintenance costs continue to rise, large-scale operations are looking to geographical cures: sites that offer a mix of low-cost and/or green energy, cold and/or dry ambient temperatures, and a "business-friendly" tax structure.
Remote locations do not make sense for all businesses, however. In fact, some operations are willing to pay top dollar to house their servers in expensive urban centers. Financial services companies, in particular, are bucking the "free cooling" trend, but they're not the only ones. Online gaming, online gambling (big in Europe), and to some extent content delivery are all areas where latency matters. "Those datacenters need to be close to their users," observes analyst Jason dePreaux, an associate director for data center and critical infrastructure at IHS.
In all these sectors, it makes sense to prioritize market proximity over cheap energy.
"In terms of proximity, I think in the future we will see a polarization in some ways," notes dePreaux. "With things like high-frequency trading, we're dealing with the speed of light, where milliseconds are critical."
On the other hand, Web-scale datacenters that are mainly oriented to processing many small tasks (think Google searches) are not very latency-sensitive. These operators are looking for cheap, preferably green, power and a climate that is conducive to free cooling. It's OK if the site is neither physically close to users nor as redundant as a traditional datacenter.
According to dePreaux, the market will move in both directions: there will be datacenters close to large population centers and datacenters in more remote locations where power is cheap, the climate is cool and there are tax advantages.
Datacenters are also being located in tandem with renewable energy sources, like solar and wind farms, so that efficiency is not lost to the grid. Taking this model a step further, researchers are exploring the distributed datacenter approach, in which computing loads are redistributed in real-time to sites where electricity is most available and/or cheapest.
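The load-shifting idea described above can be reduced to a simple placement policy: given the current electricity price and spare capacity at each site, route each incoming workload to the cheapest site that can host it. The sketch below illustrates that policy only; the site names, prices and capacities are invented, and real implementations would also weigh latency, data locality and migration cost.

```python
# A minimal sketch of the "follow the cheapest power" idea: greedy placement
# of a workload at the lowest-priced site with spare capacity. Site names,
# prices and capacities are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Site:
    name: str
    price_per_kwh: float    # current electricity price (USD/kWh, assumed)
    free_capacity_kw: float

def place_load(sites: list, load_kw: float) -> Optional[Site]:
    """Send the load to the cheapest site that can accommodate it."""
    candidates = [s for s in sites if s.free_capacity_kw >= load_kw]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: s.price_per_kwh)
    best.free_capacity_kw -= load_kw
    return best

sites = [
    Site("Lulea", 0.04, 500.0),      # cheap hydro power, cool climate
    Site("Virginia", 0.09, 2000.0),  # close to users, pricier power
]
chosen = place_load(sites, 300.0)
print(chosen.name if chosen else "no capacity")  # picks the cheaper site
```

In practice such schedulers would re-evaluate placements as spot electricity prices change, which is what "redistributed in real-time" implies.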
The hyperscale datacenter players – Google, Facebook, Microsoft, et al. – are closest to this scenario of shifting whole computing loads among datacenters. The really interesting question, according to dePreaux, is to what extent this approach will trickle down to the rest of the market and to what extent the rest of the market will transition over to these huge clouds. Will there even be a role for the enterprise datacenter in the future?
As for a future in which computing is meted out by a handful of global cloud providers, dePreaux believes it is still a long way off: companies have millions or billions of dollars invested in their own homegrown applications, and they are not quick to hand them over to the cloud. In his view, the impact of these legacy applications on infrastructure should not be underestimated.
There are many trends that can inform the datacenter sector – microgeneration, colocating an energy source, and using "waste heat" to warm buildings are some examples. The IT industry is responding to pressure to be more energy-efficient, but there are plenty of efficiencies that can be extracted that do not necessitate major infrastructure changes. For example, there are emerging cooling technologies that work hand-in-hand with the latest standards from the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE).
Research firm IHS affirms that emerging cooling technologies – such as water and air-side economizers, evaporative and adiabatic cooling and custom air handlers – will help reduce operational costs. The analyst group anticipates that this niche will grow at nearly three times the forecast rate of the total datacenter cooling market.
"Cooling equipment in general consumes anywhere from 30 to 50 percent of the total energy use of a typical data center," said Andrés Gallardo, research analyst for data center and critical infrastructure at IHS. "The apparatus is necessary to run a data center, but it does not have a direct impact on any company's revenues. Even so, cost-saving strategies could significantly reduce this expense and allow the company to allocate those resources to revenue-generating activities."