Ever since data center energy costs and emissions started to get noticed by people outside the industry, countless trade media column inches have been devoted to, and many of those involved in data centers engaged in, an ongoing debate about two interlinked but often seemingly opposed issues: efficiency and uptime.
Around the globe, high levels of data center power consumption have been highlighted as not only a major issue but also a major cost of ownership. It’s also generally accepted that servers in data centers consume a significant amount of power even when they are idle. Experts across the industry have talked at length about this very subject, whilst manufacturers have introduced all kinds of energy-efficient products and guidance to “help” operators reduce their power consumption and run more efficient facilities.
It is also true that in many countries energy costs have risen and will only continue to do so over time, seemingly another strong reason to work hard to reduce your power consumption (or to move your facility to a low-energy-cost geography).
However, the flip side is that most data center managers are charged primarily with maximising the uptime and resiliency of their facilities, even at the expense of higher energy costs, stranded or lost capacity, or generally less efficient facilities overall. A great deal of discussion within the industry has also centred on identifying products, training, consultancy, and so on to minimise the possibility of IT resources being unavailable at the moment they are needed, as the consequential costs can be enormous depending on your line of business.
But if you’ve been keeping track of the now-regular flow of surveys and data about the data center industry, you’ll also see that power use is still trending upward, along with the amount of capacity projected to be built.
For my part, I can’t help but wonder how big the carrots have to be to make people change their energy consumption habits.
No-one seems to be denying that lower energy use is a good thing – after all, there are wider benefits than a healthier bottom line: lower carbon emissions, less pressure on the power resources local to facilities, and fewer negative headlines comparing your data center’s power draw to that of an entire city. Plus there’s the satisfaction to be drawn from running a tight and efficient operation, and from being a good corporate and global citizen as you minimise the impact of your activities on the Earth’s resources. In the not-too-distant future, there’s probably a raft of taxes that can be avoided as well.
But with all that, it still feels like many companies would rather keep paying their ever-escalating energy bills than aggressively transition their data centers to a much lower energy footprint. Perhaps it’s that some operators have their energy bills paid for by another part of the business. Or perhaps it’s because the costs of potential problems, as well as those of re-engineering facilities, aren’t yet perceived to be compelling enough to encourage change. I’ve even had it put to me by the editor of a well-known magazine that carbon taxes don’t have enough bite to promote a change in habits.
All of which brings me back to the question in the title: How big do the savings need to be?
I’d be interested to know your thoughts, please do send us a comment.