Deploying Open Rack, Servers and Storage from the Open Compute Project


The data center industry has traditionally been risk averse, and for good reason: business-critical and life-critical applications cannot be taken lightly. At the same time, the industry is highly collaborative, sharing information on the newest trends, who is doing what, and with what success or failure. The industry has recently taken steps to formalize this information sharing through the Open Compute Project (OCP), which holds a regular meeting called the Open Compute Forum. The members and attendees at these meetings include some of the brightest minds from the largest and most innovative companies in the world.

Some of the recent outcomes from the project include Open Rack, Servers and Storage. They are all "clean sheet" designs, and the racks have different dimensions from the usual 19-inch and 23-inch widths and 42U height; the U height itself is also different. Servers and storage likewise differ from what we are used to: they are standardized, highly modular, and designed for fast installation. Different processors are available, as well as various storage capacities that affect power and cooling requirements.
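For a sense of how the different U height plays out, here is a minimal sketch. It assumes the 48 mm "OpenU" figure from the published Open Rack specification and the 44.45 mm (1.75 in) unit from the EIA-310 standard; neither number appears in this article, so treat them as assumptions.

```python
# Sketch: compare a standard EIA-310 rack unit with the Open Rack "OpenU".
# Assumed figures (not from this article): 48 mm OpenU per the Open Rack
# spec; 44.45 mm (1.75 in) U per EIA-310.

EIA_U_MM = 44.45   # standard rack unit height (1.75 in)
OPENU_MM = 48.0    # Open Rack "OpenU" height

def usable_units(usable_height_mm: float, unit_mm: float) -> int:
    """Whole rack units that fit in a given usable vertical height."""
    return int(usable_height_mm // unit_mm)

# Illustrative example: roughly 2 m of usable vertical space
height_mm = 2000
print(usable_units(height_mm, EIA_U_MM))  # → 44 standard U
print(usable_units(height_mm, OPENU_MM))  # → 41 OpenU
```

The taller unit means fewer slots in the same frame, but each slot offers more vertical room for airflow and for the modular sleds the designs call for.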

Speaking of power, there are standardized BBU (battery back-up unit) and BBS (power supply module back-up solution) options, outputting 12.5 VDC and backed up by lithium-ion batteries in the rack. However, battery back-up from more traditional methods, such as uninterruptible power supplies, can be added as the primary source or for redundancy. Although the goal is to have the Open Compute data center designed "grid to gates," it is not there yet: power distribution and cooling are not yet specified beyond the BBU and BBS.

So how is Open Compute getting deployed? Currently, it is done through projects that power and cool OCP-standardized IT with physical infrastructure. Designs that incorporate Open Rack, Servers and Storage are built from the inside out: from the IT room's power distribution, air containment and CRACs, out to the facility's power distribution and UPS (if applicable), and the mechanical room for heat rejection (chillers, cooling towers, etc.), all managed through data center infrastructure management (DCIM) software.

Read our white paper, "Analysis of Data Center Architectures Supporting Open Compute Project (OCP) Designs," for more information.

