Reflecting on 10 Years of the InfraStruxure Architecture


As we began 2013, it occurred to me that our InfraStruxure data center architecture was turning 10, making it a good time to reflect on how IT and InfraStruxure itself have changed, as well as on what has remained the same.

Following the dot-com bust of the early 2000s, it was no longer acceptable or viable to overbuild data centers. It became a requirement to think in terms of a scalable, modular, pay-as-you-grow approach to data centers, and to bridge the gap between the facilities and IT teams. InfraStruxure met those requirements. It was revolutionary in its time, delivering a set of blueprints and products that enabled customers to think of data centers in a holistic fashion.

Rather than piecing together data centers from a series of disparate and often incompatible components, InfraStruxure offered a way to fully integrate power, cooling, racks and management tools. It simplified data center design while simultaneously improving scalability, reliability, efficiency and manageability. It enabled facilities teams and IT teams to work jointly, as opposed to the more traditional separate, parallel approach.

That’s one thing that hasn’t changed – InfraStruxure still offers a way to build data centers that are at once more scalable, reliable, efficient and manageable. That’s important because it enables customers to address the performance level they need in each of six areas, which likewise haven’t changed: availability, efficiency, density, manageability, agility and cost. InfraStruxure remains the foundational architecture for a highly agile next-generation data center that can scale up or down, respond instantly to changes in the data center and accommodate the rapid evolution in IT technology.

Another thing that hasn’t changed is the need for an architectural approach to building data centers. The idea has been validated by several other vendors taking a similar approach with their own products, such as the Vblock platform that VMware, Cisco and EMC introduced a few years ago. It combines pre-integrated EMC storage, Cisco networking gear and VMware virtualization software, with the aim of making it easier to implement a virtualization solution.

In terms of what has changed, cloud computing is obviously playing a big role in today’s compute environments. Whether companies build a private cloud, use public cloud providers or adopt some mix of the two, it’s certainly a paradigm shift from the data centers of a decade ago. Just as InfraStruxure was the right architecture following the dot-com bust, it’s the right foundation to support today’s build-out of the cloud. It delivers on the desire to convert capital expense into operating expense through a build-as-you-grow approach, and it allows you to do more with less by delivering critical data center operational and performance status to both facilities and IT.

Few would argue that the IT footprint inside any given company will disappear anytime soon. But the more companies rely on cloud resources, the more important their connections to those resources become. That means ensuring that what remains on-site is reliable, and that not only your data center but also your networking gear has an appropriate level of redundancy and reliable power, which means UPS and generator backup (a topic we’ve covered previously).

In terms of what’s new, I’d say data center operators have a newfound desire to understand what drives efficiency in their data centers. Long ago they came to understand that overbuilding was the greatest contributor to inefficiency. Now they are learning that even small gains in energy efficiency can deliver big savings over time, so they are taking steps to wring out every efficiency they can.

Maybe it’s close-coupled cooling along with a containment system to more efficiently deliver cooling to high-density racks of virtual servers. Or perhaps it’s making more effective use of economizer mode technology for their cooling system.
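To see why even a small gain matters, here’s a rough back-of-the-envelope sketch in Python. All of the figures (IT load, PUE values, electricity price) are hypothetical assumptions chosen for illustration, not measurements from any particular facility.

```python
# Hypothetical illustration of annual savings from a modest efficiency gain.
# Every figure below is an assumption for the example, not measured data.

it_load_kw = 500        # assumed IT equipment load, in kilowatts
pue_before = 1.8        # assumed power usage effectiveness before improvements
pue_after = 1.7         # assumed PUE after, say, adding containment
price_per_kwh = 0.10    # assumed electricity price, in dollars per kWh
hours_per_year = 8760   # 24 hours x 365 days

# Total facility power is IT load multiplied by PUE.
facility_kw_before = it_load_kw * pue_before   # 900 kW
facility_kw_after = it_load_kw * pue_after     # 850 kW

# A 0.1 PUE improvement on a 500 kW IT load frees up 50 kW continuously.
annual_savings = (facility_kw_before - facility_kw_after) * hours_per_year * price_per_kwh
print(f"Annual savings: ${annual_savings:,.0f}")  # prints "Annual savings: $43,800"
```

Under these assumed numbers, trimming PUE by just 0.1 is worth tens of thousands of dollars a year, every year, which is why operators chase even small improvements.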

The good news is that a steady stream of new tools has emerged to help in these efforts. They include new Energy Star qualified high-efficiency UPS systems, reference designs, a suite of software-based analysis tools (Trade-Off) that answer the what-if questions prior to building a data center, and data center infrastructure management software that helps companies see their data center as a whole and make smarter decisions.

It’s gratifying to know that tens of thousands of companies have taken advantage of InfraStruxure and to see an idea that seemed revolutionary 10 years ago remains relevant today. Here’s to another 10 years of data center evolution and innovation.
