It’s common today for businesses that need hundreds of trillions of floating-point calculations per second to build their own “supercomputer”. Every supercomputer runs the very latest processors in parallel to solve a complicated problem – the cure for a specific disease, financial arbitrage, and so on. The process is to pick the processors, the power supplies, the chassis, the Ethernet switch, and the operating system, cluster everything together, and start crunching numbers. It’s very similar to how some people still put the foundation layer of a data center together – you pick the power train, the cooling, the racks and power distribution, and even the DCIM, and then put them all together. It works for a supercomputer because you are running a streamlined process in a controlled environment with a single purpose.
But data centers are different. Most data centers operate in a dynamic environment where the business changes fast and technical updates are constant. A supercomputer keeps chugging along without disruption, while a data center sees constant change in servers, storage, applications, and cloud services. A project team doesn’t “enjoy” assembling a data center from multiple components from multiple vendors that may not be designed to work together – but, until now, there has been no better alternative.
There is always a chance that things can go wrong in a traditional data center project:
Too many parties in a project – A traditional data center project involves many different professionals, such as electrical contractors, mechanical contractors, designers, end users, facility departments, IT departments, and executives, who together make thousands of decisions. The more decisions involved, the less likely the project runs seamlessly.
Complexity and duration of a project – A traditional data center project involves a tremendous amount of work, from planning, designing, and building the site to finally starting up the system. From a mathematical point of view, uncertainty grows with every step you add: the chance that everything goes to plan shrinks exponentially with the number of steps (see the short sketch below this list). Taking the construction period as an example: shipping may be delayed, transportation damage may happen, and heavy construction can be invasive and disruptive to normal facility operations.
Quality variance of individual equipment components – In a traditional data center, the equipment components are normally ordered from different vendors and manufacturers. Integrating these components is not simple; it takes considerable effort and talent to integrate, test, and commission them.
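To make that “exponential” point concrete, here is a minimal illustrative sketch. It is not from the article, and the 98% per-step on-time probability is purely an assumed figure; it simply shows how the chance of a fully on-time project falls as the number of independent steps grows.

```python
# Illustrative sketch only: assume each project step independently finishes
# on time with probability 0.98 (an assumed figure, not a measured one).
ON_TIME_PER_STEP = 0.98

for steps in (5, 20, 50, 100):
    # Chance that *every* step finishes on time: p raised to the number of steps
    p_all_on_time = ON_TIME_PER_STEP ** steps
    print(f"{steps:3d} steps -> {p_all_on_time:.0%} chance everything stays on schedule")
```

Even with a modest 2% risk per step, a 100-step project stays fully on schedule only about 13% of the time under these assumptions – which is the sense in which uncertainty compounds with project complexity.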
Prefabrication simplifies the entire data center project:
In a prefabricated data center project, uncertainty is reduced in every respect. The number of decisions to make is greatly reduced because planning and design are simplified by using higher-level pieces – integrated sub-system building block modules. Installation is far less complex because modules are easier to transport, install, connect, and start up. And because the modules are designed to work together, incompatibility risks are mitigated.
Uncertainty cannot be fully eliminated from any project, but there is an opportunity to reduce it by simplifying the processes using prefabricated data center building blocks.