Major market and technology trends such as the Internet of Things (IoT) and smart “everything” have driven exponential growth in the amount of big data that needs to be captured, stored, analyzed and connected. In fact, experts project that by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet. As the digital transformation of our society expands, data centers play the critical role of providing the technology backbone that supports our digital lifestyle. To accommodate these needs, data centers now occupy almost two billion square feet of facility floor space across the world.
While this digital transformation has been occurring, expectations of how data centers provide value have also evolved. Data centers need to operate on a continuous basis with very limited downtime – when they do go down, it’s now front-page news. If something goes wrong, the management system and service technicians armed with digital tools are expected to resolve the problem quickly and to provide insight into the root cause to help prevent another occurrence.
The shock to the system necessitates adjustments
This need for “high stakes” data center management became more apparent recently when more than one big-name airline experienced data center outages, with negative customer relations and financial consequences. These incidents demonstrate how the lines between IT and the IT-supported businesses are becoming blurred.
All industries are being affected. One recent article on the future of banking posed the question, “So, what will the bank of the future look like?” and the author answered it with an emphatic and unambiguous, “It will either cease to exist or will become an IT company.”
As a consequence of this new IT availability-driven reality, traditional approaches to managing data centers have begun to outlive their usefulness. A new, more agile and open model is required. The look and feel of what is now a “hybrid” data center environment involving the cloud, on-premise data centers and edge computing is much different from the typical data center management environment of even five years ago.
The new open architecture model
An important key to managing all three elements of the hybrid data center portfolio is to view the cloud, on-premise, and edge elements as parts of a larger whole built on an open, multi-layered architecture. When piecing together such an architecture, take a holistic view and plan development in the aggregate, with the application and the physical environment as central considerations.
Fortunately, there’s no need for data center managers to reinvent the wheel by architecting a hybrid data center portfolio from the ground up. Schneider Electric has recently launched EcoStruxure IT, an open but tailored stack of connected products, edge control level software, and cloud-based services for supporting applications and data analytics. This common platform aggregates IoT data from information technology (IT) and operations technology (OT) hardware devices into a cloud-based data repository.
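To make that aggregation pattern concrete, here is a minimal sketch in Python of how readings from IT and OT devices might be collected and forwarded to a cloud repository. The ingestion URL, device identifiers, and payload shape are illustrative assumptions for this sketch, not EcoStruxure IT’s actual API.

```python
# Minimal sketch of the aggregation pattern: poll IT/OT devices for
# sensor readings and forward them to a cloud data repository.
# The endpoint, device list, and payload shape are hypothetical.
import json
import time
import urllib.request

INGEST_URL = "https://example.com/ingest"  # hypothetical cloud endpoint


def read_device(device_id: str) -> dict:
    """Stand-in for a device poll (in practice, SNMP, Modbus, etc.)."""
    return {
        "device": device_id,
        "metric": "inlet_temp_c",
        "value": 24.5,
        "ts": time.time(),
    }


def push_to_cloud(reading: dict) -> None:
    """POST one reading to the cloud data repository as JSON."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


# Sample IT/OT devices: a UPS, a cooling unit, and a power strip
for device_id in ["ups-01", "crac-02", "pdu-03"]:
    push_to_cloud(read_device(device_id))
```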
The EcoStruxure IT architecture helps customers unlock the potential of sensors and data points within the data center to vastly improve performance and availability. Drawing on data from more than 800 customers, over 1,000 data centers, 88,000 devices and more than 2 million sensors, EcoStruxure IT provides predictive insights that generate advance notice before failures occur. This information can also be monitored in real time by experts in the Schneider Electric service bureau, who can explain the recommendations and dispatch service experts as needed. This dispatch function can also be automated if desired.
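To give a flavor of what advance notice ahead of a failure can look like, the toy sketch below fits a linear trend to recent sensor samples and projects when an alarm threshold would be crossed. It illustrates only the general idea of predictive alerting; it does not represent the platform’s actual analytics.

```python
# Toy illustration of predictive alerting: fit a linear trend to
# recent hourly samples and estimate when a threshold would be crossed.
from statistics import mean


def hours_until_threshold(samples: list[float], threshold: float) -> float | None:
    """Estimate hours until `threshold` is crossed, given hourly samples.

    Returns None if the trend is flat or moving away from the threshold.
    """
    xs = range(len(samples))
    x_bar, y_bar = mean(xs), mean(samples)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
    den = sum((x - x_bar) ** 2 for x in xs)
    slope = num / den  # estimated change per hour
    if slope <= 0:
        return None
    return (threshold - samples[-1]) / slope


# Example: a battery temperature rising roughly 0.5 °C per hour
readings = [30.0, 30.4, 31.1, 31.5, 32.0]
eta = hours_until_threshold(readings, threshold=40.0)
if eta is not None and eta < 24:
    print(f"Alert: threshold projected in ~{eta:.1f} h; dispatch service")
```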
To learn more about the agility and efficiency benefits of EcoStruxure IT, take a look at this recent press release or watch this video.