Computing Evolves with the Open Compute Project


If you have been following or working in the data center game for many years, you first saw the original “computer rooms” designed for huge, hot-running mainframes. After a while, minicomputers populated data centers, followed by the advent of clustered servers with x86 processors in rack-mounted single-server and blade configurations. That brings us to today: giant “server farms” are springing up that house up to hundreds of thousands of servers.

At that kind of scale you face many challenges, including massive power needs from the utility and efficient power distribution to bring that power all the way down to the servers. Asset management of those servers is also difficult at that scale. Monitoring, managing, troubleshooting, and repairing the servers takes teams of dedicated full-time staff working around the clock.

Standardization, modularization, and design for fast installation and service would be ideal. And what if you could design and package your own servers (power supply, processors, memory, and so on) to reduce packaging costs and cut out vendor mark-ups? And if you shared those designs with your peers and solicited feedback, you could accelerate improvements in reliability and performance. This is the idea behind the Open Compute Project, and in theory it makes a lot of sense.

I’ll be at the OCP U.S. Summit this Wednesday. Come visit me and my colleagues at the Schneider Electric booth (#B7), or click here to learn more about how Schneider Electric is involved with the Open Compute Project.
