I’ll be at DatacenterDynamics London this week to officially launch our Data Center Genome project. If you haven’t seen my blog post “Your Planet Needs You! Help Cut Waste Caused By Over-Sizing Data Centers”, you may be wondering what we mean by the Data Center Genome, what comprises data center DNA, how we are going to map it, and what problems we will solve as a result.
Let me start by answering the last question first – why should we bother? The simple answer is that throughout the operational lifecycle of a data center, it seems that every facility manager ends up trying to find answers to the same old questions:
- Where does all the power in my data center go? Are there servers in the IT estate that are doing nothing? Which ones are under-utilized and which ones can I switch off?
- How can I plan for expansion of anything from a new application to the deployment of a disaster recovery zone?
- Where do I have enough space and PDU power to deploy new equipment without causing a hotspot?
- How can I ensure compute capacity to meet current and future requirements?
- How do I balance limited space, power and cooling resources?
- How much power do I really need?
If we can provide answers to the first of those questions by adding the DNA of numerous and diverse data centers to the Genome library, then we will be in a position to assess what proportion of the data center industry’s 340 TWh energy consumption is wasted. With that information we can start to realize as much as $10 billion in hard currency savings, not to mention lowering carbon emissions and the unwanted contribution the industry makes to global warming.
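To see how those two figures could fit together, here is a rough back-of-envelope sketch in Python. The waste fraction and electricity price below are illustrative assumptions of mine, not figures from the project:

```python
# Hypothetical back-of-envelope estimate; the waste fraction and
# electricity price are assumptions for illustration only.
industry_consumption_twh = 340   # annual consumption cited above
assumed_waste_fraction = 0.30    # assumption: share lost to over-sizing and idle kit
assumed_price_per_kwh = 0.10     # assumption: blended electricity price in $/kWh

wasted_kwh = industry_consumption_twh * 1e9 * assumed_waste_fraction
savings_usd = wasted_kwh * assumed_price_per_kwh

print(f"Wasted energy: {wasted_kwh / 1e9:.0f} TWh")            # ~102 TWh
print(f"Potential savings: ${savings_usd / 1e9:.1f} billion")  # ~$10.2 billion
```

Under those assumptions the arithmetic lands in the region of the $10 billion figure; the real value depends entirely on what the Genome data tells us the waste fraction actually is.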
So what do we mean by mapping data center DNA? Essentially we mean gaining an understanding of all the components that comprise the data center, from CPU to CRAC, in their various states of work or load. The difference we’re trying to bring to this effort is that we want the data center community to donate this DNA in the form of operational data from the equipment in their own facilities.
The simple reason for taking this approach is that the nameplate values that tend to be used elsewhere will not resolve the widespread problems caused by over-sizing. You can read more about how this situation has arisen in Henrik Leerberg’s post “When It Comes to Data Center Design, Never Make Assumptions”.
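To make that contrast concrete, here is a purely hypothetical sketch of what a donated “DNA” record might look like. The field names and numbers are my own illustration, not a schema published by the project:

```python
from dataclasses import dataclass, field

@dataclass
class EquipmentDNA:
    """Hypothetical donated record for one piece of data center equipment."""
    device_type: str                  # e.g. "1U server" or "CRAC unit"
    nameplate_power_w: float          # rated draw printed on the nameplate
    measured_power_w: dict = field(default_factory=dict)  # observed draw by load state

# Illustrative values only: measured draw is typically well below nameplate.
server = EquipmentDNA(
    device_type="1U server",
    nameplate_power_w=750,
    measured_power_w={"idle": 120, "50% load": 280, "peak": 410},
)

headroom = server.nameplate_power_w - server.measured_power_w["peak"]
print(f"Provisioned-but-unused headroom at peak: {headroom:.0f} W")
```

The point of collecting records like this across many facilities is that design decisions can then be based on how equipment actually behaves under load, rather than on worst-case nameplate ratings.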
Einstein is famously quoted as saying, “We can’t solve problems by using the same kind of thinking we used when we created them.” To solve the problems caused by over-sizing, we need a new paradigm. We believe that a socially sourced data repository – The Data Center Genome – is the way forward. Would you sign up to that? Visit – or go to – and add your ideas or feedback about this project. Or you can follow the campaign on Twitter @dcgenome or by searching for the hashtag #dcgenome.