Until recently, there’s been a numbing inevitability about survey results from research into the data center sector. Finding number one: all downtime is the result of human error. Finding number two: uptime is more important than efficiency – the penalty for not being able to keep the lights on is greater than the cost of keeping the lights on.
And then, earlier this year, IDC’s Research Director, Jennifer Cooke, shared with us her research covering around 400 enterprise and service provider data center operators, in a report titled “Datacenter Facilities Infrastructure Management and Operations Survey, January 2017.” In my opinion it’s a really insightful study, indicating, among other things, that the majority of downtime is now caused by system error.
But for me, the most interesting aspect of the research lies in unpacking the impact of a range of data center problems on the business itself. In particular, IDC reports a clear dividing line between data centers that use dynamic management tools to provide real-time visibility of infrastructure and those that rely on static methods. In short, the former experience fewer problems.
The main problems reported include slower equipment deployment times and an inability to meet deadlines. Underlying causes such as the lack of a holistic view of data center resources (e.g., power and cooling capacity) and a lack of coordination between IT and facilities organizations clearly have a negative impact. Unless addressed, this is likely to escalate as demand for IT services grows.
So it’s no great surprise that improving internal processes and investing in software are perceived to be the highest priorities for overcoming data center challenges. With process change pivotal, a top benefit of DCIM is improved workflow. The idea of using software for dynamic management of resources, and following this through with internal process improvements, is a key step forward.
I hope you’ll forgive this commercial message from our sponsor, but the recently introduced StruxureWare Data Center Operation 8.1 includes new features that address exactly this requirement. The new version enables data center operators to design, manage and execute workflow tasks more efficiently and to gain greater control over the data center environment.
New features include workflow template creation, which simplifies workflow management while providing more control over tracking moves, adds, and changes across all data center assets in order to improve system performance. Additionally, integration with third-party IT service management tools such as BMC Remedy enables the sharing of relevant information across systems from multiple vendors, giving a complete view of data center performance and availability to boost reliability and efficiency.
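To make the workflow idea more concrete, here is a minimal, purely illustrative sketch of how a move/add/change task might be tracked with an owner, a deadline, and a status history that could be shared with an external ticketing system. All class, field, and function names below are hypothetical and do not represent StruxureWare’s or BMC Remedy’s actual data models or APIs.

```python
# Illustrative sketch only: a hypothetical model of a move/add/change (MAC)
# workflow task with status tracking. Names do not reflect any vendor API.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class TaskStatus(Enum):
    PLANNED = "planned"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"


@dataclass
class MacTask:
    asset_id: str              # the rack, server, or PDU being moved/added/changed
    action: str                # e.g. "move", "add", "change"
    owner: str                 # person or team responsible for the task
    due: date                  # project deadline
    status: TaskStatus = TaskStatus.PLANNED
    notes: list[str] = field(default_factory=list)

    def advance(self, new_status: TaskStatus, note: str = "") -> None:
        """Record a status update so progress stays visible to IT and facilities."""
        self.status = new_status
        if note:
            self.notes.append(note)


# Example usage: track a server move and inspect its shared status.
task = MacTask(asset_id="rack-12/server-03", action="move",
               owner="facilities-team", due=date(2017, 6, 30))
task.advance(TaskStatus.IN_PROGRESS, "Decommissioned in row 12, awaiting transport")
print(task.status.value, task.notes)
```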
Naturally, we’ve considered the needs of users and provided an adaptive user interface so that the system can be accessed from any desktop, tablet or smartphone. Whether on the data center floor or in the NOC, this makes it simpler to manage everyday tasks such as tracking status updates, managing project timelines, and setting task owners.
Bridging the gap between IT and facilities management has long been seen as an elusive holy grail. However, by meeting the need to get equipment moves, adds and changes deployed in a transparent and accountable manner, the workflow capabilities integral to the DCIM package provide management with a sustainable solution to current and future data center challenges. For more details, please visit the dedicated webpages for Schneider Electric’s StruxureWare DCIM suite.