Let’s play a little game. Say you’re building a data center and you go to management with two proposals, one for a Tier 3 data center that costs $8 million per megawatt and another for a Tier 3 facility that costs $15 million per megawatt. Which do you think management will choose?
The less expensive one, of course. And do you know what management will have bought? A whole lot more risk than they would’ve if they had opted for the more expensive data center.
I’m simplifying matters, of course, but my point is that the tier model that we’ve been using to classify data centers is insufficient to accurately describe the risk profile of a data center.
The Telecommunications Industry Association (TIA) came up with the tier system as part of its TIA-942 standard, which provides guidelines for building data centers. The tier system is meant to describe the level of resiliency and redundancy built into a data center, along with an expected level of availability. Tier 1 is the simplest, with an expected availability of 99.671%, and Tier 4 is intended to be the most reliable, with availability of 99.995%, which equates to annual downtime of no more than about 26 minutes.
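The arithmetic behind those downtime figures is simple enough to sketch: annual downtime is just the unavailable fraction of the 525,600 minutes in a year.

```python
# Convert an availability percentage into expected annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct: float) -> float:
    """Annual downtime in minutes for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(downtime_minutes(99.671))  # Tier 1: ~1,729 minutes (~29 hours)
print(downtime_minutes(99.995))  # Tier 4: ~26 minutes
```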
The thing is, as my example above is intended to illustrate, you can build two data centers that are both Tier 3 (or Tier 1, 2, or 4), but one may have a much higher risk profile than the other. It all depends on how much you spend to build in redundancy and what sort of outages you’re most concerned about protecting against.
For example, the $8 million/megawatt Tier 3 facility may be built in a commercial warehouse that is not hardened to Miami-Dade hurricane rating specs, has four hours of diesel fuel storage and 6 minutes of UPS run time. The $15 million/megawatt data center, on the other hand, is in a hardened facility able to withstand an EF3 tornado, and has 4 days of fuel and water on hand and 15 minutes of UPS run time. Technically, both may be Tier 3 facilities, but where would you rather have your IT equipment during hurricane season?
A better way to approach data center design would be to start with a discussion about the company’s business model and risk appetite. It may turn out that the design includes elements of different tiers. Maybe you’ve got just n generator capacity but 72 hours of fuel storage. Or you’ve got n generators but 2n UPS capacity, because the data center is located in an area with notoriously flaky utilities. This risk-based perspective is absent from a lot of designs I see, where people focus on building a data center such that it falls into a certain tier, and that’s it.
What we really need is a model that can quantify how well a data center is protected against various risks. Then you could have some more informed conversations with management on what they’re really buying. For example, that $8 million data center might have a risk profile of 643 while the $15 million design has a risk profile of 212, and here are all the elements that make up the difference. Now you’re talking about spending dollars to lower your risk profile, which are terms that senior executives will be quite comfortable with.
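To make that idea concrete, here is a minimal sketch of what such a scoring model might look like: a weighted sum of exposure scores across risk categories, where lower is better. The categories, weights, and scores below are invented purely for illustration; this is not Schneider Electric's actual model.

```python
# Hypothetical risk-profile score: a weighted sum of per-category
# exposure scores (0-100, higher = more exposed). All names and
# numbers here are illustrative assumptions, not a real standard.
RISK_WEIGHTS = {
    "structural_hardening": 3.0,   # e.g. hurricane/tornado rating of the shell
    "fuel_storage": 2.0,           # hours of on-site diesel
    "ups_runtime": 1.5,            # minutes of ride-through power
    "utility_reliability": 2.5,    # quality of the local grid
}

def risk_profile(exposure: dict) -> float:
    """Lower is better: each exposure score times its category weight."""
    return sum(RISK_WEIGHTS[k] * v for k, v in exposure.items())

# The cheaper build: warehouse shell, 4 hours of fuel, 6 minutes of UPS.
cheap = risk_profile({
    "structural_hardening": 90, "fuel_storage": 80,
    "ups_runtime": 60, "utility_reliability": 40,
})
# The hardened build: EF3-rated shell, 4 days of fuel, 15 minutes of UPS.
hardened = risk_profile({
    "structural_hardening": 20, "fuel_storage": 10,
    "ups_runtime": 30, "utility_reliability": 40,
})
print(cheap, hardened)  # the cheaper design carries the higher risk score
```

With a model like this, the conversation with management shifts from "which tier?" to "which risks are we paying to reduce, and by how much?"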
Schneider Electric is working to come up with just such a risk profile model. I can’t say for sure when the work will be done but am hopeful we’ll have something out by the end of the year. Let me know if you think this will be a valuable contribution.