Fortune 500 companies around the world are trying to rationalize their data centers for greater efficiency of scale. Some have approached Schneider Electric with the goal of massively consolidating the number of facilities they operate, from as many as 500 data centers down to just 150.
The options available to them include outsourcing and co-location, among others, and the reasons are obvious. A large automotive manufacturer, for example, could well ask itself: “Why should we spend €50 million of our precious capital reserves to build a data center when we could more usefully employ that money to build a new car?”
One outcome of this sort of thinking is a burgeoning market for modular, or containerized, data center solutions. There is some interesting perspective on recent examples of companies implementing modular design strategies on the Pike Research blog “The rise of the modular data center”, and the Data Center Knowledge “DCK Guide To Modular Data Centers” is also a useful resource.
Early modular offerings were hampered by the perception that they were too focused either on disaster recovery or on the unique needs of internet-class data centers. However, increasing demand for faster technology roll-out, together with pressure to reduce capital expenditure, has advanced the case for standardization and modularity: it diminishes uncertainty and balances growth against capital cost. Rising data density, virtualization and cloud services all help to maximize asset potential.
Containerized facility modules make it easy to scale up (or down): the design cycle is measured in weeks, and installation and commissioning in days. Compare that to traditional data centers, with their unique designs, reliance on custom-engineered parts, installation times measured in months, and weeks spent on commissioning.
It is easy to expand a modular data center with additional facility modules, or to build a new one on a greenfield or brownfield site. Modular facilities can even be sited temporarily, for six months or a year at a time, as at the Olympic Games for example, after which they can be dismantled and moved elsewhere. A modular build is also far more predictable, because everything is standardized: factory-tested manufactured systems with pre-programmed software and guaranteed performance.
Modular, standardized power and cooling plant can serve a broad spectrum of user scenarios from a set of standard building blocks. Uniquely designed data centers, by contrast, are almost impossible to predict: their real performance only becomes clear after installation and fine-tuning. The usual response is to over-size the facility, which eats into profits.
There is no doubt that demand for data center space will continue to grow unabated for the foreseeable future. Cost, time and predictable performance are the three major factors weighed when meeting that demand. In all three respects, the standardized modular approach to data center deployment offers data center developers so significant an advantage that it could render traditional design-and-build projects obsolete.