Data Center Lifecycle Principles: Part 1 of 2 – Design for Change


Recently, we’ve begun to hear more about a lifecycle approach to data centers. For years, the approach to data centers tended to be design- and technology-centric: what’s the best design for the need, and what are the best pieces of available technology?

That traditional approach tended to fail, however, as sites ran smack into rapidly evolving information technology (IT) trends like virtualization, blade servers, cloud computing, and rising energy costs. These forces had a way of turning even seemingly well-designed data centers into inefficient or inflexible assets.

As a result, interest is growing in the lifecycle approach to data centers. As the name implies, it involves designs which address long-term concerns, and a strong focus on continuous improvement.

This lifecycle concept sounds all well and good, you might say, but how can an organization excel at it? How can you actually become a data center lifecycle leader?

While many factors are involved in the lifecycle approach, two key principles go a long way toward executing the concept. First, when data centers are designed and built, more attention needs to be paid to how the data center will be operated in the future, and how it might need to change. Second, to make the data center as efficient and reliable as possible over its lifecycle, it’s crucial to establish a foundation for continuous improvement, making use of an audit and upgrade strategy.

In this post, let’s focus on the first principle; the second will be explained in a follow-on post. That first principle really comes down to how you ‘design-in’ the ability to change.

Until recently, not much thought was given to designing a data center for change. The data center was designed and built to handle a projected workload, with little consideration of upgrades down the road. But there are ways of making change easier.

For example, you can design-in certain levels of redundancy in the power infrastructure, so when it comes time to do an upgrade, there is little downtime.
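As a rough illustration of how designed-in redundancy pays off at upgrade time, the sketch below checks whether an N+1 UPS configuration can still carry the IT load with one module taken out of service. The module rating and load figures are hypothetical, not drawn from any particular design.

```python
# Illustrative sketch only: checks whether an N+1 power configuration
# can carry the IT load while one module is offline for an upgrade.
# All figures are hypothetical.

def can_upgrade_without_downtime(it_load_kw: float,
                                 module_rating_kw: float,
                                 modules_installed: int) -> bool:
    """Return True if the remaining modules can carry the load
    with one module removed from service."""
    remaining_capacity_kw = module_rating_kw * (modules_installed - 1)
    return remaining_capacity_kw >= it_load_kw

# Example: 4 x 100 kW modules feeding a 280 kW IT load.
# With one module out for replacement, 300 kW of capacity remains,
# so the upgrade can proceed without planned downtime.
print(can_upgrade_without_downtime(it_load_kw=280,
                                   module_rating_kw=100,
                                   modules_installed=4))  # True
```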

Power and cooling infrastructure also has become more modular in recent years. Row-based cooling or hybrid cooling, for example, tend to be more easily scalable to changing load profiles.

When choosing something such as cooling infrastructure, data center managers should consider not only the “first cost,” but also other factors that play into lifecycle costs, such as the agility and manageability of the system. For a deeper dive on cooling choices, take a look at White Paper 130.
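To make the first-cost-versus-lifecycle-cost point concrete, here is a minimal sketch comparing two hypothetical cooling options over a ten-year horizon. The cost, energy, and maintenance figures are placeholders for illustration only, not numbers from White Paper 130.

```python
# Illustrative sketch only: comparing two cooling options on lifecycle cost
# rather than first cost alone. All figures are hypothetical placeholders.

def lifecycle_cost(first_cost: float,
                   annual_energy_kwh: float,
                   energy_price_per_kwh: float,
                   annual_maintenance: float,
                   years: int) -> float:
    """Simple (undiscounted) lifecycle cost over a planning horizon."""
    annual_opex = annual_energy_kwh * energy_price_per_kwh + annual_maintenance
    return first_cost + annual_opex * years

option_a = lifecycle_cost(first_cost=400_000,      # lower first cost
                          annual_energy_kwh=900_000,
                          energy_price_per_kwh=0.10,
                          annual_maintenance=20_000,
                          years=10)

option_b = lifecycle_cost(first_cost=500_000,      # higher first cost, lower energy use
                          annual_energy_kwh=650_000,
                          energy_price_per_kwh=0.10,
                          annual_maintenance=15_000,
                          years=10)

# The option with the higher first cost can still win over 10 years
# once energy and maintenance are counted.
print(f"Option A: ${option_a:,.0f}")  # $1,500,000
print(f"Option B: ${option_b:,.0f}")  # $1,300,000
```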

It’s not only cooling that has become more modular, but also other key components of data center physical infrastructure (DCPI), as discussed in White Paper 76. Another trend in data center modularity is DCPI “facility modules” that have the key infrastructure prebuilt into a cube-like configuration, allowing for a “Lego-block” approach to adding capacity.

However, designing for change isn’t as simple as opting for modular products. You also have to create models for how a data center might change, and it’s important to start this analysis at the design stage. This typically involves the use of data center infrastructure management (DCIM) tools.

By leveraging these analytical tools early on, the organization has a baseline to make decisions about what kinds of modular equipment are needed, how much redundancy to build in and where, and how the data center can be reconfigured. These tools also are vital for the second principle of lifecycle leadership: establishing a foundation for continuous improvement.
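As a simplified illustration of the kind of baseline analysis such tools support, the sketch below projects load growth against installed capacity to estimate when the next modular capacity block would be needed. The growth rate, load, and capacity figures are hypothetical, and a real DCIM tool would model far more than this.

```python
# Illustrative sketch only: a "what if" capacity baseline of the sort a DCIM
# tool helps build at the design stage. Given a projected annual load growth
# rate, when does the site need its next modular capacity block?
# All figures are hypothetical.

from typing import Optional

def years_until_next_block(current_load_kw: float,
                           installed_capacity_kw: float,
                           annual_growth_rate: float,
                           planning_horizon_years: int = 15) -> Optional[int]:
    """Return the first year the projected load exceeds installed capacity,
    or None if capacity is sufficient over the whole horizon."""
    load = current_load_kw
    for year in range(1, planning_horizon_years + 1):
        load *= 1 + annual_growth_rate
        if load > installed_capacity_kw:
            return year
    return None

# Example: 600 kW of installed capacity, 400 kW of load growing 8% per year.
# The model flags year 6 for the next capacity block, which feeds decisions
# about how much modular equipment and redundancy to design in up front.
print(years_until_next_block(current_load_kw=400,
                             installed_capacity_kw=600,
                             annual_growth_rate=0.08))  # 6
```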

To sum up, the lifecycle approach spans many factors, but the principles are simple. One of the most important of these: right from the start, design for change.
