The headline above poses a question the cloud computing industry is working out right now, and the answer is not what you might think.
When cloud computing and the Internet were initially developed, the primary applications were email and the very beginnings of social media – remember MySpace? Technologists dreamed of a day when voice could be transformed into digital packets and moved over the Internet. This technology – voice over IP (VoIP) – took decades to mature to the point of viability, but VoIP is now preferred over analog phone lines. The main inhibiting factors were latency and packet loss, which degraded voice quality.
Today we are attempting another huge technological leap – moving the business applications we all use every day off our PCs and servers and into the cloud. It’s not a monumental technical challenge to take IT applications, put them in a large, remote cloud data center, and call them “services.” But it is a huge challenge to match the speed at which these services run on local servers, considering that centralized cloud data centers can be hundreds or even thousands of miles away from their users.
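To put the distance problem in perspective, here is a rough, illustrative calculation (my own sketch, not a figure from any provider) of the latency floor that physics imposes: light in optical fiber travels at roughly two-thirds its vacuum speed, or about 200 km per millisecond, so distance alone sets a hard lower bound on round-trip time no matter how much bandwidth the network has.

```python
# Back-of-envelope estimate of the minimum round-trip time (RTT)
# imposed by distance alone, ignoring routing hops, queuing, and
# congestion. Signals in optical fiber travel at roughly 2/3 the
# speed of light in a vacuum, i.e. about 200 km per millisecond.

FIBER_SPEED_KM_PER_MS = 200.0  # approximate propagation speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time, in milliseconds, for a
    given one-way distance to the data center."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A data center 2,000 km (~1,250 miles) away adds at least 20 ms
# per round trip; a chatty application making 50 sequential round
# trips would spend a full second on propagation delay alone.
for km in (100, 500, 2000):
    print(f"{km:>5} km -> at least {min_rtt_ms(km):.1f} ms per round trip")
```

Real-world latency is of course higher than this floor, which is exactly why moving data closer to the user is the only way to drive it down.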
Long-haul networks are built with very high bandwidth, enabling them to transmit data at high speeds. The main problem is network congestion. The data you need to make a business or even life-or-death decision travels on the exact same network as the video of little Susie riding her bike for the first time without training wheels. If the video of Susie and others gets to the network transfer point or “hop” before your medical record does, yours has to wait. And it’s only going to get worse – Cisco estimates global IP traffic increased more than fivefold from 2010 to 2014 and will increase nearly threefold by 2019.
Customers increasingly expect good, reliable performance from their applications and cloud services. While they may understand that events such as severe weather can cause network downtime, if a new Game of Thrones episode premieres and causes massive network delays well into the next day – that’s unacceptable.
Avoiding such congestion is forcing service providers to move data closer to the user. I am calling this a move to the edge, and it’s starting with regional data centers. Regional data centers are not the physical edge of the network, but they move data closer to end users, reducing latency and transmission costs while increasing security and, in many cases, helping companies comply with data sovereignty regulations. That’s why the strategy of Microsoft, Google and other service providers is to cache heavily used subsets of data in regional data centers serving major markets or urban areas.
So these providers are building their own regional data centers, right? Nope – permitting timelines and the scarcity of data center design-and-build expertise make that prohibitive. Not to mention that they need the data centers now. The logical answer is to house their cloud infrastructure in regional data centers owned by colocation companies – the companies building medium-sized data centers in strategically located urban areas. This enables the Internet giants to address issues around service levels, transmission costs, security and data sovereignty regulations.
Clearly this is a significant opportunity for colocation providers. It’s just one of several outlined in our new, free report, “Opportunities and Threats to Colocation Providers from Around the Globe.” Click here to download your copy now.