How New Availability Expectations Change the Way Data Centers are Managed and Operated

According to The Atlantic’s trilogy on household spending, by 1990 at least 90% of US households had electricity, a stove, a car, a fridge, a clothes washer, air conditioning, a color TV, a microwave and a cell phone. With infrastructure and networks in place to support an increasing quality of life in the Western world, when we plugged a device into an electrical socket, we did so confident that energy would flow.

Technology has played an important role in improving lives, and today its consumption spreads faster than ever. A new tech-savvy generation has grown up with the same confidence in internet availability and latency that we have in electricity. Content in its many forms, from HD film and TV streaming to social media channels and gaming, should never be more than a fingertip away.

The bottom line is that digital traffic is expanding at an exponential rate – currently around 23% annually, enough to double roughly every three and a half years – as we travel into the connected age. Not so very long ago, the majority of data center traffic was generated and communicated by IT servers, desktops and smart mobile devices, much of it in the workplace. Today, consumers and, above all, machines are generating an ever larger share of that traffic. Take a look at my recent interview, in which I talk about the increasing availability expectations for data centers.

As enabling technology becomes ubiquitously embedded, the numbers tell the story: a third of the world’s population connected to the internet; 1.3 million internet video views every minute of every day; 30 billion IoT devices telling their stories by 2020; commercial airliners transmitting 40TB of information every hour in flight.

Data has become an important corporate resource. Consequently, the number of devices equipped with sensors and mobile connectivity to provide continual status updates about usage, environment, alarms and so on is on the rise. Frankly, the cloud was not designed to service and support this kind of requirement.

Further, the cloud and its centralized data center architecture will also prove inappropriate for emerging applications such as driverless cars. As these types of advances are made and begin to scale, the SLAs delivered by cloud service providers will not be able to support their latency and bandwidth requirements, let alone satisfy regulatory demands for safety and security.
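
To give a sense of scale on the latency point, the back-of-the-envelope sketch below estimates round-trip times from physical distance alone. The distances, fibre speed and processing budget are illustrative assumptions, not measured figures or vendor SLAs.

```python
# Back-of-the-envelope round-trip latency: centralized cloud vs. edge site.
# All figures are illustrative assumptions, not measurements.

SPEED_OF_LIGHT_IN_FIBRE_KM_S = 200_000  # roughly 2/3 of c in vacuum, a common rule of thumb

def round_trip_ms(distance_km: float, processing_ms: float = 5.0) -> float:
    """Estimate round-trip time: propagation both ways plus a fixed processing budget."""
    propagation_ms = (2 * distance_km / SPEED_OF_LIGHT_IN_FIBRE_KM_S) * 1000
    return propagation_ms + processing_ms

if __name__ == "__main__":
    scenarios = {
        "Regional cloud data center (~1,500 km away)": 1_500,
        "Edge data center (~20 km away)": 20,
    }
    for label, distance_km in scenarios.items():
        print(f"{label}: ~{round_trip_ms(distance_km):.1f} ms round trip")
```

Even before congestion, routing hops and retransmissions are considered, distance alone consumes a meaningful slice of the few tens of milliseconds a real-time application can tolerate, which is why processing has to move closer to the vehicle or the machine.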

In the enterprise, with more and more applications outsourced, a hybrid approach to data center architecture has emerged in which edge data centers and the resources they host have become increasingly business critical. Edge also plays a pivotal role in industrial settings, enabling machine-to-machine (M2M) data to be processed and actioned closer to the source.
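
As a minimal sketch of what processing and actioning data closer to the source can look like, the fragment below filters raw machine readings at a hypothetical edge node, acts on an out-of-range value immediately rather than waiting on a cloud round trip, and forwards only a compact summary upstream. The function names and threshold are illustrative, not drawn from any particular product.

```python
from statistics import mean

# Hypothetical edge-node loop: act on M2M readings locally, forward only summaries.
TEMP_ALARM_C = 85.0  # illustrative threshold, not a real product setting

def trigger_local_shutdown() -> None:
    """Immediate local action, taken without a cloud round trip."""
    print("Alarm: machine stopped locally")

def process_batch(readings_c: list[float]) -> dict:
    """Handle a batch of temperature samples at the edge."""
    alarms = [r for r in readings_c if r > TEMP_ALARM_C]
    if alarms:
        trigger_local_shutdown()
    # Only an aggregate leaves the site, cutting upstream bandwidth dramatically.
    return {"samples": len(readings_c), "avg_c": mean(readings_c), "alarms": len(alarms)}

if __name__ == "__main__":
    summary = process_batch([71.2, 73.5, 88.1, 72.9])
    print("Forward to central site:", summary)
```

The point of the design is simple: latency-critical actions stay local, while only aggregated data crosses the wide-area network.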

In the consumer context, data – in the form of content – is increasingly being stored and distributed nearer to the point of consumption via, for example, content distribution networks (CDNs). In each of these applications, a high level of resilience is as important as low latency and high bandwidth in ensuring customer satisfaction. For more details, please download White Paper 256, Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge.

But whether the consumer of services is sitting at a desk in front of a screen or on a sofa in front of the TV, the key to realizing these advantages will be the ability to manage an increasingly complex and eclectic hybrid data center environment with greater certainty. Inevitably, software and automation will be indispensable in meeting the availability expectations of the always-on generation.