
Open Compute In Full Bloom At Facebook North Carolina Datacenter 

The Facebook culture is one of hacking bits of software together into a complex system and constantly tweaking it to improve its performance, efficiency, and scale. That is harder to do with datacenters and the infrastructure that runs inside them, but to its credit Facebook brings the same hacker culture to its hardware as it does to its software, and it is arguably one of the big innovators of the current IT era on both fronts.

Most of the hyperscale giants are very secretive about their server, storage, switching, and datacenter designs. In a very real sense, these are competitive advantages, hence the secrecy. Facebook, being a social media company that wants to leverage community to accelerate innovation, takes a different approach: three years ago it open sourced the designs for its machines and the Prineville, Oregon datacenter that houses them as the Open Compute Project. These efforts, the company said earlier this year, have allowed it to cut $1.2 billion from its infrastructure costs – and that figure does not even include electricity savings.

Facebook has not said much about its second facility in Forest City, North Carolina, or the iron that is inside of it. But EnterpriseTech was invited down to take a tour and to see some of the refinements that have been made to Facebook's datacenter and server designs and the new cold storage facility that the company is constructing to hold exabytes of photos that Facebook users want to keep – even if they never do open them again.

The Forest City datacenter lies between Charlotte and Asheville in the western part of the Tar Heel State, not too far from other big facilities operated by Google (in Lenoir) and Apple (in Maiden). The Facebook datacenter was built on a brownfield site that was once a dyeing facility for the Burlington textile company and then, for a short time, a plant that made boats. Keven McCammon, site director of the Forest City datacenter, said that Facebook had the 160-acre site cleaned up to be environmentally safe and recycled most of the materials left over from the prior factories.

At the moment, Facebook has two massive 350,000 square foot buildings for its main servers and storage: the original Building 1 and the more recent Building 3. A leveled patch of ground adjacent to Building 3 is reserved for Building 2, should it become necessary. Each of these buildings has four complete data halls inside it. Building 4 is a smaller 90,000 square foot facility that will have three data halls used for cold storage, and it employs a miniaturized version of the outside air cooling system used in the larger facilities.

The first thing you notice in the datacenter is that it is dark. The hallways linking portions of the datacenter together, as well as the aisles where the server, storage, and networking gear live, are all equipped with motion sensors that trigger LED lighting only when people are present. (The sensors were overridden in a few places so we could get pictures of the blinking lights.)

McCammon started the tour up on the roof of Building 3 so we could get a sense of the size of the facility and to get a better view of Building 1 with Building 4 hanging off of it.

[Image: facebook-building-3-roof]

The picture above shows just half of the roof. The boxes mounted there are air conditioning units, needed for the 20 or so days a year when the weather in North Carolina gets hot enough, sticky enough, or both (they do tend to go together) that the evaporative cooling methods employed in the Forest City facility would not work. McCammon said that last summer was the second hottest on record in North Carolina, and even then the air conditioners were not necessary.

As you look off in the distance from Building 3 up the hill to Buildings 1 and 4, you can see a giant 250,000 gallon water tank. Each of these facilities has two such tanks, which supply the water used to cool the air coming into the datacenter. On the day EnterpriseTech made its visit, the outside air in the morning was a crisp 32 degrees Fahrenheit, unusually cold for mid-April in North Carolina, so there was not much need for evaporative cooling. In fact, Facebook likes to keep the datacenter at around 83 degrees Fahrenheit, and on that morning it was using some of the exhaust heat from the server infrastructure to warm up and dry out the air coming in from the surrounding countryside.

[Image: facebook-building-3-roof-view-building-1-4]

The first stage of the cooling is getting the air inside the datacenter and filtering out the dust and the pollen – and on that day, there was plenty of pollen in the air.

[Image: facebook-building-3-air-filter]

"When my allergies are acting up, I come to the data hall," quipped McCammon.

The filters have to be swapped out every eight or nine months because they get cruddy, and a team of technicians can replace them for an entire hall, floor to ceiling, in about a day.

The system Facebook has created can change the air temperature by as much as 20 degrees using a combination of dehumidification and evaporative cooling. As the air comes into the facility, hot server air dries it out. In the Prineville datacenter, the air is then drawn through openings in the wall where waterfalls cascade down over vents; the moving air causes the water to evaporate, and that evaporation cools the air relative to the outside temperature or to the mix of outside air and data hall air, as the case may be.

With the Forest City datacenter, Facebook switched from those waterfalls to Munters media, a corrugated cardboard-like material that is kept soaked with water and does the evaporative cooling much more efficiently.
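
To put that 20-degree figure in context, here is a quick back-of-envelope sketch of the standard direct evaporative cooling relationship, where the supply temperature falls between the outside dry-bulb and wet-bulb temperatures depending on how effective the wetted media is. The temperatures and effectiveness value below are illustrative assumptions, not Facebook's numbers.

```python
# Back-of-envelope sketch of direct evaporative cooling.
# T_supply = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb)
# All inputs below are illustrative assumptions, not measured Forest City values.

def evaporative_supply_temp(t_dry_bulb_f: float, t_wet_bulb_f: float,
                            effectiveness: float = 0.9) -> float:
    """Approximate supply air temperature (deg F) leaving wetted media."""
    return t_dry_bulb_f - effectiveness * (t_dry_bulb_f - t_wet_bulb_f)

# Example: a 95 F day with a 72 F wet-bulb temperature (assumed values)
# yields roughly a 20 degree drop with media at about 90 percent effectiveness.
print(evaporative_supply_temp(95.0, 72.0))  # ~74.3 F
```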

Here is a section of the wall one stage in from the air filters that shows the Munters media and the louvers that were open because the evaporative cooling was not needed on the day we visited:

[Image: facebook-building-munters]

The air is pulled through the facility by giant banks of industrial fans, and as you might expect, those fans consume the vast majority of the energy used by the cooling system in the Forest City datacenter. The fans are not particularly large, but they make a pretty decent racket, and importantly, the pull is strong enough that the pressure differential between the various stages of the air cooling system, the data halls, and the outside corridors makes it difficult to open the doors.

[Image: facebook-building-3-fans]

In the Oregon datacenter, in fact, people were snapping off door handles because of the pressure gradients, so in the North Carolina facility Facebook put vestibules between the different areas so the pressure could change gradually and people could open the doors without hurting themselves or the doors.

Once the air is pulled above the data hall, it is cold and drops down into the cold aisles where the switches, servers, and storage receive it. Facebook uses hot aisle containment – meaning it closes off the space between the rows – and those hot aisles can run as hot as 110 to 115 degrees. The hot aisles are linked to another set of holes in the ceiling, which lead to a set of much larger fans with louvers that pull the hot air out of the data hall and push it outside.

At the back of Building 3 are the two massive water tanks as well as a series of backup generators that can keep the machines going in the event of a power outage. Precise megawatt figures for each building were not available at press time.

The North Carolina facility has tens of thousands of servers, and the entire Facebook fleet – including the Oregon and Sweden datacenters it has designed and built as well as some other facilities it leases – numbers in the hundreds of thousands of machines. Facebook has a three-year depreciation schedule for its systems, according to McCammon, which means there are still older machines in the Forest City datacenter from suppliers that predate the Open Compute Project; they account for about 15 percent of the local fleet, a share that is headed to zero. McCammon would not say exactly how frequently the facility gets new servers, but it is more than once a week, though not daily. A truck was expected sometime on the day we visited, in fact, but we didn't see it.

[Image: facebook-building-3-back]

The hall that we saw in Building 3 had three rows of network equipment, which McCammon said came from Cisco Systems. He walked us past rows of database servers based on the "Winterfell" Open Compute design:

[Image: facebook-building-3-winterfell]

And here is a rack of servers that mixes the "Winterfell" server nodes with Open Vault storage arrays. The latter has two 1U trays hooked together by a hinge, with each tray holding 15 SATA disks – currently a mix of 4 TB Western Digital and Seagate drives. Facebook could shift to 6 TB drives to boost capacity in the arrays by 50 percent, and the company is looking to embed storage controllers in the disk trays themselves, freeing up enough room in the rack to add two more systems, or four trays.
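
As a quick sanity check on that 50 percent figure, here is the arithmetic on the drive counts described above (simple multiplication, not Facebook-published specs):

```python
# Capacity arithmetic for an Open Vault unit as described above:
# two trays per unit, 15 SATA drives per tray.
trays_per_unit = 2
disks_per_tray = 15
disks_per_unit = trays_per_unit * disks_per_tray   # 30 drives

capacity_4tb = disks_per_unit * 4   # 120 TB per unit with 4 TB drives
capacity_6tb = disks_per_unit * 6   # 180 TB per unit with 6 TB drives

print(capacity_4tb, capacity_6tb)
print((capacity_6tb - capacity_4tb) / capacity_4tb)  # 0.5, i.e. the 50 percent boost
```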

[Image: facebook-building-3-windmill]

Other racks were equipped with the "Windmill" servers, which are the second generation of Open Compute nodes.

The Open Compute racks used in Building 3 are the triplet Open Rack designs, which also have an outboard battery backup for the systems. Facebook doesn't use uninterruptible power supplies; instead it has batteries with just enough juice to keep the racks serving until the backup generators kick in, which cuts down on complexity and cost in the datacenter. So does running 480 volt power directly to the servers: in a conventional datacenter that power would first be stepped down to 120 volts, a conversion that mostly just creates heat, so Facebook skips it.
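
The logic behind skipping the step-down is that every power conversion stage bleeds off a few percent as heat. Here is a rough sketch of what that costs, assuming a 3 percent loss for the eliminated stage – an assumed figure for illustration, not one Facebook disclosed:

```python
# Rough sketch of power lost to an extra voltage conversion stage.
# The 3 percent per-stage loss is an assumption for illustration only.
it_load_kw = 100.0          # hypothetical IT load behind one conversion stage
loss_per_stage = 0.03       # assumed conversion loss

wasted_kw = it_load_kw * loss_per_stage / (1 - loss_per_stage)
print(round(wasted_kw, 1))  # ~3.1 kW drawn just to feed the conversion loss
```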

The latest addition to the Forest City complex is Building 4, a cold storage facility much like the one that opened in Prineville late last year. This facility uses dense disk arrays based on the Open Vault design, configured in the triplet Open Racks. However, the cold storage facility does not have the battery backups that the production servers and storage do. The assumption is that because most of these pictures are rarely accessed, Facebook users will not care all that much.

The cold storage facility in Building 4 keeps all of the disk drives in the arrays turned off until they are required to spin up to access an old photo, which means this building can get by with a much smaller outside air cooling and movement system than the bigger data halls where the servers are. The data for a particular photo is chopped up by a Reed-Solomon algorithm and spread across multiple disks in multiple storage nodes, and only the disks holding a particular photo's pieces are turned on in a given rack at any one time. The data is accessed in parallel, which speeds up access considerably compared to putting it on one disk drive – enough, in fact, to mask the difference. The cold storage facility is much smaller, as you can see from the photo, and it has only one row of network equipment instead of three like the server halls.
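
The idea is easiest to see in a simplified sketch: a photo is chopped into pieces that land on different drives, and a read only has to wake the drives holding that photo's pieces before reassembling them. The piece counts below are arbitrary, and the Reed-Solomon parity math itself is left out – this illustrates the striping idea, not Facebook's actual implementation.

```python
# Simplified sketch of striping a photo across drives so reads can be parallel
# and only the drives holding that photo's pieces need to spin up.
# Shard counts are illustrative; real Reed-Solomon parity generation is omitted.
from typing import Dict, List

def stripe(photo: bytes, num_shards: int) -> List[bytes]:
    """Chop a blob into num_shards roughly equal pieces (no parity here)."""
    size = -(-len(photo) // num_shards)  # ceiling division
    return [photo[i * size:(i + 1) * size] for i in range(num_shards)]

def read_photo(shard_map: Dict[int, bytes], spin_up) -> bytes:
    """Spin up only the drives that hold this photo's pieces, then reassemble."""
    for drive_id in shard_map:
        spin_up(drive_id)                 # wake just these drives
    return b"".join(shard_map[d] for d in sorted(shard_map))

# Usage: stripe a (fake) photo across 10 drives, then read it back.
photo = bytes(1000)
shard_map = {drive_id: shard for drive_id, shard in enumerate(stripe(photo, 10))}
assert read_photo(shard_map, spin_up=lambda d: None) == photo
```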

Here's the second hall of the cold storage facility that is under construction now, which will open in June:

[Image: facebook-building-4-construction]

A third hall will also open this summer, bringing the cold storage capacity of the North Carolina datacenter up to 3 EB (that's exabytes). That may sound like a lot until you realize that Facebook has over 400 billion photos to archive and that another 350 million are added every day. Even the first hall is still largely empty at this time:

[Image: facebook-building-4-inside]
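
A bit of arithmetic on those figures gives a feel for the scale (treating 3 EB as 3 x 10^18 bytes; this is back-of-envelope math, not a Facebook-published sizing):

```python
# Back-of-envelope arithmetic on the figures quoted above.
capacity_bytes = 3e18              # 3 EB of cold storage at Forest City
photos_total = 400e9               # photos Facebook has to archive
photos_per_day = 350e6             # new photos added daily

bytes_per_photo = capacity_bytes / photos_total
print(bytes_per_photo / 1e6)       # ~7.5 MB of cold capacity per existing photo
print(photos_per_day * 365 / 1e9)  # ~128 billion new photos a year
```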

A rack of cold storage burns only 2 kilowatts, and one of the data halls in the cold storage facility, when fully loaded, will burn only 1.5 megawatts to house 1 EB of data. This storage method costs one-third as much as using regular Open Vault storage where the disks are spinning all the time, and the datacenter cooling costs are one-fifth as high because most of the disk drives are off most of the time. At the moment, only Prineville and Forest City have cold storage facilities, but it stands to reason that the third Facebook datacenter, in Lulea, Sweden, will get one soon, and that the fourth, located in Altoona, Iowa and slated to open later this year, will also have this massive nearline, virtual disk drive for its server halls.
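
Those numbers imply a striking storage density, at least on a back-of-envelope basis (simple division on the figures quoted above, not disclosed specs):

```python
# Back-of-envelope density implied by the cold storage figures above.
hall_power_mw = 1.5      # fully loaded cold storage hall
rack_power_kw = 2.0      # per cold storage rack
hall_capacity_eb = 1.0   # data housed per hall

racks_per_hall = hall_power_mw * 1000 / rack_power_kw
pb_per_rack = hall_capacity_eb * 1000 / racks_per_hall
watts_per_tb = hall_power_mw * 1e6 / (hall_capacity_eb * 1e6)

print(racks_per_hall)          # 750 racks per hall
print(round(pb_per_rack, 2))   # ~1.33 PB per rack
print(watts_per_tb)            # 1.5 watts per terabyte stored
```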

While the Forest City datacenter pays property taxes, which local governments always like, and employs a fair number of people in construction jobs, the massive amount of automation Facebook has created for its systems means that the datacenters are not big job creators once they are running. Facebook employed on the order of 2,500 local construction workers, engineers, and architects to build its North Carolina facility, but it employs only around 80 people, who work in three shifts. As it turns out, one technician can manage about 25,000 servers thanks to the automation systems and redundancy that Facebook has built into its applications and infrastructure.

On the day we visited, across all of the currently operating buildings at the Forest City datacenter, Facebook had a power usage effectiveness (PUE) of 1.07. This is the ratio of the total power consumed by the facility to the total power used by the servers, switches, and storage, and that rating is about as good as anyone on the planet gets. The fact that Facebook and its hyperscale peers can do it in North Carolina is a testament to what can be done when engineers think outside the box. You can see an online dashboard that shows the performance metrics of the Forest City datacenter, and as EnterpriseTech has previously reported, Facebook has open sourced the software behind the performance dashboards it created for its own datacenters so others can employ them.
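
For reference, PUE is just that ratio, so 1.07 means only about 7 percent of the power goes to cooling, lighting, and power distribution overhead. A small sketch with a hypothetical 10 megawatt IT load (an assumption for illustration, not a Forest City figure):

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 10 MW IT load at a PUE of 1.07 means only about
# 0.7 MW goes to cooling, lighting, and power distribution overhead.
it_kw = 10_000.0             # assumed IT load, not a Facebook figure
total_kw = it_kw * 1.07
print(pue(total_kw, it_kw))  # 1.07
print(total_kw - it_kw)      # ~700 kW of overhead
```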
