Let’s play a little game. Say you’re building a data center and you go to management with two proposals, one for a Tier 3 data center that costs $8 million per megawatt and another for a Tier 3 facility that costs $15 million per megawatt. Which do you think management will choose?
The less expensive one, of course. And do you know what management will have bought? A whole lot more risk than they would’ve if they had opted for the more expensive data center.
I’m simplifying matters, of course, but my point is that the tier model that we’ve been using to classify data centers is insufficient to accurately describe the risk profile of a data center.
The Telecommunications Industry Association (TIA) came up with the tier system as part of its TIA-942 standard, which provides guidelines for building data centers. The tier system is meant to describe the level of resiliency and redundancy built into a data center, along with an expected level of availability. Tier 1 is the simplest, with an expected availability of 99.671%, and Tier 4 is intended to be the most reliable, with an availability of 99.995%, which equates to annual downtime of no more than about 26 minutes.
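As a quick sanity check, those availability percentages translate directly into annual downtime. Here is a minimal Python sketch of that arithmetic, using only the two figures quoted above:

```python
# Convert an availability percentage into maximum annual downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for tier, availability in [("Tier 1", 0.99671), ("Tier 4", 0.99995)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{tier}: {availability:.3%} availability -> "
          f"~{downtime_min:,.0f} minutes of downtime per year")
```

Tier 1 works out to roughly 1,700 minutes (about 29 hours) of allowable downtime per year, Tier 4 to roughly 26 minutes.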
The thing is, as my example above is intended to illustrate, you can build two data centers that are both Tier 3 (or both Tier 1, 2, or 4), but one may have a much higher risk profile than the other. It all depends on how much you spend to build in redundancy and what sort of outages you’re most concerned about protecting against.
For example, the $8 million/megawatt Tier 3 facility may be built in a commercial warehouse that is not hardened to Miami-Dade hurricane rating specs, with four hours of diesel fuel storage and six minutes of UPS run time. The $15 million/megawatt data center, on the other hand, is in a hardened facility able to withstand an EF3 tornado, with four days of fuel and water on hand and 15 minutes of UPS run time. Technically, both may be Tier 3 facilities, but where would you rather have your IT equipment during hurricane season?
A better way to approach data center design would be to start with a discussion about the company’s business model and risk appetite. It may turn out that the design includes elements of different tiers. Maybe you’ve got just an N generator configuration but 72 hours of fuel storage. Or you’ve got an N generator but 2N UPS, because the data center is located in an area with notoriously flaky utilities. This kind of risk-based approach is absent in a lot of designs I see, where people focus on building a data center such that it falls into a certain tier, and that’s it.
What we really need is a model that can quantify how well a data center is protected against various risks. Then you could have more informed conversations with management about what they’re really buying. For example, that $8 million data center might have a risk profile of 643 while the $15 million design has a risk profile of 212, and here are all the elements that make up the difference. Now you’re talking about spending dollars to lower your risk profile, which is language senior executives will be quite comfortable with.
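To make that concrete, here is a purely hypothetical sketch of how such a scoring model might work: each risk category gets an exposure score and a business weight, and the weighted sum becomes the risk index. The categories, weights, and numbers below are invented for illustration only and are not Schneider Electric’s actual model.

```python
# Hypothetical risk-index sketch: each category is scored 0 (fully mitigated)
# to 100 (unprotected) and weighted by how much it matters to the business.
# Categories, weights, and scores are invented for illustration only.

RISK_WEIGHTS = {
    "site_hardening": 0.25,      # storm/flood/seismic resistance
    "onsite_fuel": 0.20,         # hours of generator fuel on hand
    "ups_runtime": 0.15,         # minutes of battery ride-through
    "utility_quality": 0.20,     # reliability of the local grid
    "cooling_redundancy": 0.20,  # N, N+1, or 2N cooling plant
}

def risk_index(scores: dict[str, float]) -> float:
    """Weighted sum of per-category exposure, scaled to roughly 0-1000."""
    return 10 * sum(RISK_WEIGHTS[cat] * score for cat, score in scores.items())

# The cheaper design scores worse in almost every category...
budget_design = {"site_hardening": 80, "onsite_fuel": 70, "ups_runtime": 60,
                 "utility_quality": 50, "cooling_redundancy": 60}
# ...while the more expensive one buys down exposure across the board.
hardened_design = {"site_hardening": 20, "onsite_fuel": 15, "ups_runtime": 25,
                   "utility_quality": 25, "cooling_redundancy": 20}

print(f"$8M/MW design risk index:  {risk_index(budget_design):.0f}")
print(f"$15M/MW design risk index: {risk_index(hardened_design):.0f}")
```

With these made-up inputs, the cheaper design lands around 650 and the hardened one around 208, numbers in the same spirit as the 643 and 212 above; the real work is in defining the categories and scoring them consistently.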
Schneider Electric is working to come up with just such a risk profile model. I can’t say for sure when the work will be done, but I’m hopeful we’ll have something out by the end of the year. Let me know if you think this will be a valuable contribution.
Conversation
Venessa Moffat
11 years ago
Great idea. Makes me wonder though how you would compare data centres that are at different phases of their lifecycle. For example, comparing one that isn’t built yet with an existing one which needs some fit-out to comply with requirements. The former would include all the construction risks as well as schedule and cost etc., whereas the latter would have fewer of these, but possibly more risks relating to future viability. Having a single number to indicate an overall risk rating ignores these differences in types of risk. You’d need some kind of framework which would maybe categorise the risk areas, so that the business could make an informed decision…? I think it’s a great idea though. Even buying colocation has a risk profile that only experienced buyers understand. All data centres are not created equal…
Myron Rodney Sees
11 years ago
Joseph, we have been thinking along the same lines. I think Venessa hits the nail on the head by addressing the data center through its lifecycle. Too many centers are built to day-one redundancy requirements, not the requirements that may exist during the lifetime of the center. We have modified our projection tables, which were initially used for capacity requirements only, to include the value of the data center to the organization over its lifetime, and we design to that end point.
Joseph Reele
11 years ago
Thank you very much for the feedback and your thoughts on this, Venessa. You are correct that a categorization would make looking at the risk easier, essentially comparing apples to apples, so to speak. The risk index model would cover all aspects, by category, and I love your point about where the data center is in its lifecycle, as that plays a significant role in the risk index. A data center being built certainly does not have existing business impact, as there are not yet any critical operations, business processes, applications, and so on that it is supporting, whereas an existing data center in operation would have a different set of risks, since there is or could be impact to the operations taking place inside. And if the criticality or other aspects of its operation change, then its risk index can follow those changes. So the risk index needs to be able to look at all of these in a consistent and measurable manner. If you would like, we could arrange a time to discuss this further. Again, thank you very much for your time and feedback; we certainly appreciate it and are always looking for more. Respectfully – Joe.