In any walk of life, you don’t want to take a sledgehammer to crack a nut, or as my colleagues in some parts of Europe say, shoot sparrows with a cannon. We’re wired not to take drastic approaches to solving simple problems, that is, until it comes to things like technology.
I say this in an office where most people have more tech in their watches than it took to land a man on the moon, and run applications on their PCs that are more than a little over-specified for the sorts of spreadsheets, documents and presentations they create in their daily work.
In the data center, things are a little different: generally speaking, there are strong incentives to make sure that assets in both the IT and physical layers achieve a high level of utilisation. That means we’re not powering and cooling ghost servers or equipment that could be switched off, and that the IT and the physical infrastructure which supports it are sized correctly to meet the business need and budget requirements.
You may or may not be surprised to know that in research* most data center managers state the same objectives: improve the speed of deploying IT services and workflow management; increase the availability of compute resources; improve asset management; reduce downtime; and increase the flexibility to move workloads as needed.
In addition, no matter what size of data center is being managed, there is an ongoing responsibility to ensure the complete system runs efficiently and reliably throughout its life cycle so that ROI can be maximised. And it goes without saying that uptime is key.
However, data center operations can be a complicated, expensive, and time-consuming affair for any size of facility in any size of organisation: power, cooling and IT capacity have to be juggled in just the right way to guarantee uptime, control cost and keep order in a dynamic, ever-changing environment.
It’s another way in which all sizes of data centers are linked in their need for management tools to keep all the balls in the air. It’s been said that in the absence of tools and visibility into the facility systems, the constant balancing act between availability and efficiency is a little like the data center manager being asked to walk a tightrope, blindfolded and without a net.
When it comes to moves, adds and changes, without a clear view of data center power, cooling and space capacity, managers are potentially putting data center operations at risk. Conversely, by modelling and assessing in advance the impact that potential changes will have on the existing load, risk is mitigated because decisions are based upon sound information and data.
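To make the idea concrete, the kind of pre-change assessment described above can be sketched in a few lines of code. This is only an illustration: every name, figure and safety margin below is a hypothetical assumption, not any vendor’s API or any real facility’s data.

```python
# Hypothetical sketch: before committing a move/add/change, model its impact
# on existing power, cooling and space capacity and flag any constraint it
# would breach. All classes, limits and loads here are invented for
# illustration only.

from dataclasses import dataclass

@dataclass
class Capacity:
    power_kw: float    # total rated power available
    cooling_kw: float  # total heat-rejection capacity
    rack_units: int    # total rack space

@dataclass
class Load:
    power_kw: float
    cooling_kw: float
    rack_units: int

def assess(current: Load, capacity: Capacity, proposed: Load,
           headroom: float = 0.9) -> list[str]:
    """Return the constraints the proposed change would breach,
    keeping a safety margin (here, 90% of rated power and cooling)."""
    issues = []
    if current.power_kw + proposed.power_kw > capacity.power_kw * headroom:
        issues.append("power")
    if current.cooling_kw + proposed.cooling_kw > capacity.cooling_kw * headroom:
        issues.append("cooling")
    if current.rack_units + proposed.rack_units > capacity.rack_units:
        issues.append("space")
    return issues

# Example: a planned server deployment against an already-loaded room.
site = Capacity(power_kw=200.0, cooling_kw=220.0, rack_units=400)
existing = Load(power_kw=150.0, cooling_kw=160.0, rack_units=300)
new_servers = Load(power_kw=40.0, cooling_kw=45.0, rack_units=20)

print(assess(existing, site, new_servers))  # -> ['power', 'cooling']
```

Trivial as it is, the sketch captures the point of the paragraph: the decision to proceed (or not) is made on data about the combined load, not on guesswork after the hardware is already racked.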
Over the past five years, the use of data center management tools has expanded beyond their early application for managing assets. Today, suites of applications enable granular control and management of not just the white space but the entire facility, from BMS to VM, including power, security and lighting. The thing is that these tools are accessible to, and appropriate for, all data centers. They are not exclusively for large facilities, and they are certainly not a sledgehammer to crack that server room nut.
By providing a holistic view across the data center and facility, data center management tools deliver the visibility needed to manage resources and to optimise and control performance. By getting the right information to the right people, decision making can be better informed, downtime can be prevented, infrastructure changes can be sped up and costs can be lowered.
In each of these respects, data center management software meets the challenges found in computing facilities of every size and shape, from a few racks in a server closet to a major data center installation. For more information, please watch this short video entitled “Full Data Center Visibility with StruxureWare for Data Centers.”
*Source: “State of the DCIM Market: Services Driving Growth,” Jennifer Cooke, IDC Web Conference, 19/05/2016.