How Benchmarking Helps Improve Critical Data Center Infrastructure Reliability and Performance

A new tool is coming soon to help data center operators improve the performance of their critical infrastructure: Benchmarking. Made possible by cloud-based architectures and Internet of Things (IoT) technology, it promises to help data centers operate more proactively, efficiently and reliably.

Benchmarking has long been used as a tool to improve performance in all sorts of businesses and disciplines, from technology and finance to marketing and retail. The popular activity tracker Fitbit, for example, is used to track how well users sleep, measuring how long an individual spends in each stage of sleep (light, deep and REM). By collecting data from thousands of users, Fitbit can show customers how their sleep patterns compare to others of the same age and gender, and offer tips for improving sleep.

Benchmarking comes to data center infrastructure

Now a similar form of benchmarking is coming to critical data center infrastructure, including UPSs and cooling equipment. It’s enabled by newer equipment that is instrumented to collect performance data and communicate it to a centralized cloud-based platform, such as Schneider Electric EcoStruxure IT™. It’s essentially IoT technology applied on a large scale, creating a vast collection of data that can be mined for insight into when a particular piece of equipment is not functioning at peak level or is in need of service.
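
To make the data collection side concrete, here is a minimal sketch of how an instrumented UPS might package a reading and send it to a cloud ingestion endpoint. The field names, payload shape and URL are illustrative assumptions for the example, not the actual EcoStruxure IT interface.

```python
import json
import time

# Illustrative telemetry payload an instrumented UPS might report.
# Field names, units and the endpoint are hypothetical, not the
# actual EcoStruxure IT interface.
def build_telemetry(device_id: str) -> dict:
    return {
        "device_id": device_id,
        "timestamp": int(time.time()),   # Unix time of the reading
        "temperature_c": 27.4,           # internal temperature
        "load_percent": 62.0,            # average load on the UPS
        "on_bypass": False,              # whether bypass mode is active
        "battery_voltage": 54.1,
    }

def send_telemetry(payload: dict, endpoint: str) -> None:
    """POST the reading as JSON to a (placeholder) cloud ingestion endpoint."""
    import urllib.request
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()

if __name__ == "__main__":
    reading = build_telemetry("ups-0042")
    print(json.dumps(reading, indent=2))
    # send_telemetry(reading, "https://example.invalid/ingest")  # placeholder URL
```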

It will be delivered as part of a larger Digital Service offering, in which a service provider delivers real-time operational visibility, alarming and shortened resolution times without the costs associated with deploying an on-premises data center infrastructure management (DCIM) system.

The architecture used to store benchmarking data is an important piece of the puzzle. It requires a model that enables the data to be stored and managed in a way that lets artificial intelligence (AI) find trends that indicate performance issues. Crucial to the effort is the ability to compare what the data shows against the manufacturer’s recommendations, or tolerance ranges, for how the equipment should be performing.
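
As a rough illustration of that comparison step, the sketch below checks a reading against a set of manufacturer-style tolerance ranges. The specific parameters and limits are assumptions made for the example, not vendor specifications.

```python
from dataclasses import dataclass

# Hypothetical tolerance ranges a manufacturer might publish for a UPS model.
# The limits here are illustrative only, not actual vendor specifications.
@dataclass
class ToleranceRange:
    parameter: str
    low: float
    high: float

UPS_TOLERANCES = [
    ToleranceRange("temperature_c", 0.0, 40.0),
    ToleranceRange("load_percent", 0.0, 80.0),
    ToleranceRange("battery_voltage", 48.0, 58.0),
]

def out_of_tolerance(reading: dict) -> list[str]:
    """Return the names of parameters that fall outside their tolerance range."""
    flagged = []
    for tol in UPS_TOLERANCES:
        value = reading.get(tol.parameter)
        if value is not None and not (tol.low <= value <= tol.high):
            flagged.append(tol.parameter)
    return flagged

print(out_of_tolerance({"temperature_c": 44.2, "load_percent": 61.0, "battery_voltage": 53.8}))
# -> ['temperature_c']
```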

Consider a three-phase UPS, for example. Benchmarking lets you examine parameters such as temperature, average load and how frequently bypass mode kicks in – across thousands of installations. Applying AI to all that data enables the system to identify when operating conditions will cause the battery in a particular UPS to wear out faster than it should. The data center operator can then take corrective action, perhaps adjusting the cooling infrastructure or moving IT equipment to address hot spots. At worst, they are at least aware of the situation and can keep a close eye on the UPS battery and replace it before it fails.
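
A greatly simplified stand-in for that kind of analysis is sketched below: it estimates how much faster a battery is likely to age given its average temperature and load. The temperature rule of thumb (battery life roughly halves for every 10 °C above 25 °C) is a common industry approximation; the load adjustment is purely illustrative, and a real benchmarking engine would learn such relationships from fleet data rather than hard-code them.

```python
# Simplified, illustrative estimate of accelerated battery wear.
# Assumptions: battery life roughly halves per 10 °C above 25 °C
# (a common industry approximation); the load term is invented
# for this example.
def battery_wear_factor(avg_temp_c: float, avg_load_pct: float) -> float:
    temp_factor = 2 ** max(0.0, (avg_temp_c - 25.0) / 10.0)
    load_factor = 1.0 + max(0.0, (avg_load_pct - 70.0) / 100.0)
    return temp_factor * load_factor

# A factor well above 1.0 suggests the battery will wear out faster than
# expected, so the operator might adjust cooling or plan an early replacement.
print(f"wear factor: {battery_wear_factor(avg_temp_c=33.0, avg_load_pct=85.0):.2f}")
```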

That kind of actionable data helps to improve overall data center performance and reliability.

Data Center Benchmarking requires the right data – and lots of it

Success with benchmarking for critical data center infrastructure requires that you collect the right data from the appropriate equipment, then format it consistently so the AI engine can compare it accurately. That takes both a high level of data science expertise and lots of data.
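
As a rough illustration of that formatting step, the sketch below normalizes readings from two different device types into one consistent record shape so they can be compared like for like. The schema and field names are assumptions for the example, not how EcoStruxure Asset Advisor actually structures its data lake.

```python
from datetime import datetime, timezone

# Raw readings as different device types might report them: inconsistent
# field names and units. Both examples are invented for illustration.
raw_ups = {"dev": "ups-0042", "temp_f": 82.4, "load": 0.62}
raw_crac = {"unit_id": "crac-07", "supply_temp_c": 18.0, "fan_pct": 55}

def normalize(record: dict, device_type: str) -> dict:
    """Map a raw reading into one consistent schema before it lands in the data lake."""
    ts = datetime.now(timezone.utc).isoformat()
    if device_type == "ups":
        return {
            "device_id": record["dev"],
            "device_type": "ups",
            "timestamp": ts,
            "temperature_c": round((record["temp_f"] - 32) * 5 / 9, 1),
            "load_percent": record["load"] * 100,
        }
    if device_type == "crac":
        return {
            "device_id": record["unit_id"],
            "device_type": "crac",
            "timestamp": ts,
            "temperature_c": record["supply_temp_c"],
            "load_percent": float(record["fan_pct"]),
        }
    raise ValueError(f"unknown device type: {device_type}")

print(normalize(raw_ups, "ups"))
print(normalize(raw_crac, "crac"))
```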

For its EcoStruxure Asset Advisor platform, for example, Schneider Electric currently collects more than 283 million data points a day (and growing) from thousands of pieces of data center equipment, populating a highly structured data lake that enables the kind of detailed comparisons required for accurate benchmarking and predictive maintenance.

Explore what EcoStruxure IT can do for your data center operations

You’ll find customer testimonials, blog posts and a free research paper from IDC that clearly demonstrate its value.
