Is the Definition of the Software-defined Data Center Ambitious Enough?


My last blog about the software-defined data center focused very much on the hype surrounding the SDDC concept. My concern is that in the rush for new things, those with vested interests (vendors, analysts and journalists) can sometimes forget that market adoption of new ideas takes time.

At the same time, I think we can all agree that SDDC is a thing, although I stand by my point that we've yet to agree on an encompassing definition of SDDC: what the concept fully comprises, who it's targeted at, and the potential benefits it brings.

In my opinion, just as DCIM was intended to represent far more than a group of loosely connected applications, SDDC aims to be more than a convenient umbrella title for anything that can be software-defined in the data center.

At Schneider Electric, our experience with DCIM has been that intentional and open integration between applications has enabled far greater benefit to the user, as well as providing opportunities for bolt-on functionality as users’ experience with the software suite grows.

So, if the decision were up to me, what would I make the software-defined data center category about? At a recent briefing, I was interested to find my own views in complete alignment with industry analysts' ideas. Firstly, we agreed on the need for convergence between IT Service Management (ITSM) and DCIM in order to integrate everything in the white space.
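To make that convergence concrete, here is a minimal sketch in Python of what joining an ITSM service catalogue to DCIM asset records might look like. The record shapes and field names are my own assumptions for illustration; real ITSM and DCIM products expose far richer APIs:

```python
# Hypothetical record shapes; real ITSM/DCIM products expose richer APIs.
itsm_services = [
    {"service_id": "SVC-001", "name": "Online Banking", "priority": 1},
    {"service_id": "SVC-002", "name": "Batch Reporting", "priority": 3},
]

dcim_assets = [
    {"asset_id": "SRV-0101", "type": "server", "rack": "R12", "service_id": "SVC-001"},
    {"asset_id": "SRV-0102", "type": "server", "rack": "R12", "service_id": "SVC-002"},
]

def assets_for_service(service_id):
    """Return the physical assets underpinning one IT service."""
    return [a for a in dcim_assets if a["service_id"] == service_id]

for svc in itsm_services:
    assets = assets_for_service(svc["service_id"])
    print(svc["name"], "->", [a["asset_id"] for a in assets])
```

Once every asset in the white space carries a service reference like this, the IT view and the physical view stop being separate silos.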

Secondly, we need to use software to abstract away everything physical in the data center, ideally through DCIM. This would enable the allocation of racks or servers to a specific purpose. By recording the business importance of each purpose in DCIM, power and cooling can then be allocated according to that purpose's relative priority.
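As a rough illustration of priority-driven allocation (purely a sketch; the weighting scheme and figures are my own assumptions, not a DCIM feature), an available power budget could be shared out in proportion to each purpose's priority:

```python
# Share a rack's available power budget across purposes by priority weight.
# Assumption for illustration: lower priority number = more important.
purposes = [
    {"name": "trading platform", "priority": 1},
    {"name": "dev/test",         "priority": 3},
]

available_kw = 10.0  # hypothetical rack power budget

# Invert priority so priority 1 receives the largest weight.
weights = {p["name"]: 1.0 / p["priority"] for p in purposes}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {available_kw * w / total:.2f} kW")
```

With these numbers the trading platform would be budgeted 7.5 kW and dev/test 2.5 kW; the point is simply that allocation follows importance rather than first-come, first-served.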

Doing all this work will lead us towards the Holy Grail of data center management – understanding the exact cost of running specific applications. This is something we can't do today, and something we won't be able to do until we've established the relationships between application and server, server and rack, rack and cooling, and so on all the way down the stack.
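To show why those relationships matter, here is a simplified cost roll-up that walks one application's cost down the stack. All figures are invented, and a real facility would meter power and apportion cooling far more carefully; this is only a sketch of the shape of the calculation:

```python
# Walk one application's cost down the stack: app -> server -> rack -> cooling.
# All figures below are invented for illustration.

server_power_kw = 0.4          # measured draw of the server hosting the app
app_share_of_server = 0.25     # fraction of the server consumed by this app
pue = 1.6                      # facility PUE: total power / IT power
price_per_kwh = 0.12           # electricity tariff, USD

hours_per_month = 730
it_kwh = server_power_kw * app_share_of_server * hours_per_month
total_kwh = it_kwh * pue       # add cooling and other facility overhead via PUE

print(f"Monthly energy cost of the application: ${total_kwh * price_per_kwh:.2f}")
```

Every input in that chain comes from a different layer of the stack, which is exactly why the application-to-infrastructure relationships have to exist before the number can be trusted.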

I think we're close to establishing this link between the IT and the physical layer. Today we have all the components in place, but we need convergence, or at least better communication, to make it happen. In doing so, it could drive cost reductions and efficiency improvements, and DCIM could become the gateway to justifying any investment in the data center.

Convergence could also give rise to a more dynamic data center – the sort we've been speaking about for the last five years. Integrating IT and physical infrastructure can enable greater orchestration as well as automation, because to be effective as well as efficient, things will have to happen without human intervention.
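A trivial example of the kind of closed-loop action this implies might look like the sketch below. The threshold, function names and remediation step are all hypothetical; a real orchestration layer would be far more involved:

```python
# Closed-loop rule: react to a rack-level power alarm without human intervention.
# Threshold and actions are hypothetical, for illustration only.

RACK_POWER_LIMIT_KW = 8.0

def on_power_reading(rack_id, power_kw):
    """Called by monitoring whenever a new rack power reading arrives."""
    if power_kw > RACK_POWER_LIMIT_KW:
        # Orchestration step: ask the IT layer to move low-priority work away.
        migrate_low_priority_workloads(rack_id)

def migrate_low_priority_workloads(rack_id):
    # Placeholder: in practice this would call a VM/container orchestrator.
    print(f"Migrating low-priority workloads off rack {rack_id}")

on_power_reading("R12", 9.3)
```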

If you could hook up these relationships, you could also make more informed decisions, because you could look at cost implications holistically. Not assumed or estimated costs, but real costs. To me, that is a much bigger story than making physical resources available through software.



Conversation

  • Dushy Goonawardhane

    9 years ago

Great article. Makes complete sense to hook up the IT and physical layers to have a holistic view in decision making. The opposite being having to piece together bits and pieces but still ending up with an inaccurate outcome.
