Facebook not only has an innovative cooling strategy for its new Prineville, Ore., data center; it's also building its own servers that consume less power and are easier to repair.
Those were the highlights from the final morning keynote session on Monday at the Gartner Data Center Conference in Las Vegas, where Gartner’s Raymond Paquet interviewed Frank Frankovsky, Director of Hardware Design and Supply Chain with Facebook.
For its first few years, Facebook's data center strategy was to use leased facilities, but as the company grew, it became clear that model wasn't sustainable from an opex or capex perspective. Thus the company embarked on its project to build the Prineville data center.
The Oregon location was crucial to enabling the data center to be cooled 100% by ambient air, using an innovative water misting technology – not wholly unlike the misters you might find around a hotel pool here in Las Vegas, at least if it weren't so unseasonably cold. Air is brought in from outside, cooled an additional 20 degrees or so by the misters, filtered, then fed into the floor beneath the racks of data center equipment. Fifty-year weather patterns for the area show it is cool and dry, meaning low humidity – both key to making the ambient-air strategy effective.
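For readers curious how much cooling misting can deliver, the standard direct evaporative cooling relation gives a feel for it: the drier the incoming air (the bigger the gap between dry-bulb and wet-bulb temperature), the bigger the temperature drop. The sketch below uses illustrative temperatures and an assumed media effectiveness, not Prineville's actual figures.

```python
# Rough sketch of why a cool, dry climate suits evaporative (misting) cooling.
# Uses the standard direct evaporative cooling relation:
#   T_supply = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb)
# All numbers below are illustrative assumptions, not Facebook's actual figures.

def supply_temp_f(t_dry_bulb_f: float, t_wet_bulb_f: float,
                  effectiveness: float = 0.8) -> float:
    """Estimate supply-air temperature (deg F) after direct evaporative cooling."""
    return t_dry_bulb_f - effectiveness * (t_dry_bulb_f - t_wet_bulb_f)

# Dry air (large gap between dry-bulb and wet-bulb) gives a big temperature drop:
print(supply_temp_f(t_dry_bulb_f=90.0, t_wet_bulb_f=62.0))  # ~67.6 F, a ~22 F drop
# Humid air (small gap) barely cools at all:
print(supply_temp_f(t_dry_bulb_f=90.0, t_wet_bulb_f=84.0))  # ~85.2 F, a ~5 F drop
```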
The cooling system works not only because of the location, but because Facebook doesn’t try to keep its data center as cool as most companies do. Frankovsky said he was always frustrated that he had to wear a jacket in data centers and had air blowing up his pant leg. “Servers do not need to be as comfortable as humans,” he said.
Anyone, regardless of location, can take advantage of Facebook's "vanity-free" approach to server design, which seeks to stifle "gratuitous differentiation." Differentiation is the enemy in any large organization, because it makes equipment harder to troubleshoot and repair. So Facebook's idea was to strip away anything that doesn't contribute to the operation of the server, such as much of the housing. That makes servers much easier to repair when something goes wrong – up to 7 times faster in some cases, Facebook found through numerous time-and-motion studies.
The project began with a small team, “3 engineers and a bucket of bolts,” as Frankovsky put it, that went after the biggest issue Facebook had – servers. “Now we have 6 engineers and 2 buckets of bolts.”
The team also came up with a way to conserve power by sending direct current to the servers, rather than performing the usual conversion from the AC power supplied by the utility grid to the DC power that servers use – a conversion that results in losses of 10% or more. Facebook has also eliminated the need for data center UPS systems, instead using 480-volt battery cabinets; when the AC power fails, the servers fail over to the battery cabinets. The result is a power usage effectiveness (PUE) of about 1.07 – very close to the 1.0 that is considered perfection.
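For context, PUE is simply total facility power divided by the power that actually reaches the IT gear. The quick sketch below shows how trimming cooling and conversion overhead pulls the ratio toward 1.0; the wattage figures are illustrative assumptions, not Facebook's published breakdown.

```python
# Minimal sketch of the PUE (power usage effectiveness) calculation:
#   PUE = total facility power / IT equipment power
# The wattage figures are illustrative assumptions, not Facebook's actual numbers.

def pue(it_power_kw: float, cooling_kw: float,
        conversion_loss_kw: float, other_overhead_kw: float) -> float:
    total = it_power_kw + cooling_kw + conversion_loss_kw + other_overhead_kw
    return total / it_power_kw

# A conventional facility: chillers plus roughly 10% lost in AC/DC and UPS stages.
print(round(pue(1000, 400, 100, 50), 2))   # ~1.55, typical of many enterprise sites

# Ambient-air cooling and fewer conversion stages push the ratio toward 1.0.
print(round(pue(1000, 40, 20, 10), 2))     # ~1.07
```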
What’s more, the servers chew up less power through simple things like slowing down the fans. Ideally, you want very hot air coming out the back of a server, Frankovsky said, because that means you’re getting the most out of your cooling power. Slower fan speeds help in that regard. Hot air containment systems are then employed to keep the hot air from mixing with the cold.
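The logic behind running hot follows basic airflow physics: the heat a server's airflow carries away scales with the temperature rise across the box, and fan power climbs roughly with the cube of fan speed. Here is a rough sketch under those textbook assumptions; the numbers are illustrative, not Facebook measurements.

```python
# Sketch of why a hotter exhaust lets fans spin slower and save power.
# Heat removed ~ airflow * air heat capacity * temperature rise (delta-T),
# and fan power scales roughly with the cube of fan speed (fan affinity laws).
# All figures are illustrative assumptions, not measurements from Facebook.

AIR_DENSITY = 1.2   # kg/m^3, near sea level
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def airflow_needed_m3s(heat_watts: float, delta_t_c: float) -> float:
    """Airflow (m^3/s) required to remove heat_watts at a given exhaust delta-T."""
    return heat_watts / (AIR_DENSITY * AIR_CP * delta_t_c)

server_heat_w = 300.0
low_dt = airflow_needed_m3s(server_heat_w, delta_t_c=10.0)
high_dt = airflow_needed_m3s(server_heat_w, delta_t_c=20.0)
print(low_dt, high_dt)          # doubling delta-T halves the required airflow

# By the affinity laws, fan power ~ (speed ratio)^3, so half the airflow
# needs roughly (0.5)^3 = 1/8 of the fan power.
print((high_dt / low_dt) ** 3)  # ~0.125
```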
All of this is documented through the Open Compute Project, where anyone can see all the specs for what Facebook has done, as well as offer up their own ideas. Now there is a prime example of social networking coming to IT.