In the late 1990s, most companies looking to realize the benefits of cloud computing struggled with application performance, security, interoperability, and management.
Fast forward 20 years, and almost all enterprises have implemented cloud-first strategies, hosting as many applications and services as possible in the cloud. But it’s not as simple as hosting those applications in centralized cloud data centers somewhere far away. Many applications are hosted in the core cloud, some in regional or metro clouds, some in private clouds, and many cloud services are distributed across remote sites, with local instances of the cloud architecture at the edge.
I think we can all agree that today’s cloud data center architectures are much more complicated. For a review of the increasingly complicated data center architectures now emerging, see my blog on distributed, hybrid, and tethered edge clouds, which focuses on the newest and fastest-growing data center location: the edge.
So, what are the main drivers of moving to the edge?
In a nutshell, they include:
- Scale IT Elastically – burst to the central core cloud when needed
- Reduce Latency – especially valuable for time-sensitive applications
- Raise Redundancy – “like-for-like” services that mirror those in the central cloud
- Address Geopolitical Issues – censorship, security, privacy, and data sovereignty
- Simplify – leverage the cloud provider to design, architect, distribute, manage, and update the services
And don’t forget that we are in the middle of a data center demand boom. Today’s automated digital lifestyles and changes to our work culture drive ever more need for data center capacity. On top of that, regulatory constraints must also be adhered to and corporate IT performance metrics met.
Cloud networking software that enables prioritization and routing
So in the future, which starts now, more processing and application delivery will be done closer to the user and the data. It all makes sense, but you may have asked yourself: how does it work? In theory, all of these applications and servers are duplicated across different sites, sharing data and adding redundancy. But which processing or content request goes to which server? The simple answer is cloud networking software that enables prioritization and routing within hybrid data center environments.
Organizations can deploy cloud networking software called application load balancers in one or multiple cloud environments, including public and private clouds and distributed edge locations. For example, if you are shopping on Amazon, the request goes through a load balancer, which directs it to the preferred data center and server; that server may be on a local edge “outpost” of a regional data center or in a central core data center.
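To make that concrete, here is a minimal sketch in Python of a tiered routing decision like the one described above. It is illustrative only, not any vendor’s actual API, and the tier and server names are all hypothetical:

```python
# A minimal sketch of tiered routing: try the nearest healthy tier first,
# then fall back toward the central core cloud. Illustrative only, not any
# vendor's API; all tier and server names are hypothetical.

TIERS = ["edge-outpost", "regional-metro", "central-core"]  # nearest first

def route_request(healthy_targets):
    """Return the first healthy server, preferring the nearest tier."""
    for tier in TIERS:
        for server in healthy_targets.get(tier, []):
            return server  # nearest healthy server wins
    raise RuntimeError("no healthy servers in any tier")

# Example: the local edge outpost has no healthy servers, so the
# request falls back to the regional/metro tier.
targets = {"edge-outpost": [], "regional-metro": ["metro-1"], "central-core": ["core-1"]}
print(route_request(targets))  # -> metro-1
```

The design point is the fallback order: the balancer prefers the nearest tier and only reaches back toward the central core cloud when closer capacity is unavailable.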
Elastic Load Balancing can instantly add scale to your applications and improve application reliability via health checks. It can also implement flexible network access rules for better security and integrate directly with virtual servers and cloud services.
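The health-check idea can be illustrated with a short sketch, under the assumption that health is determined by an HTTP probe; the “/healthz” path and server addresses below are hypothetical, not part of any specific product:

```python
import urllib.request

# Simplified illustration of load-balancer health checks: probe each
# server's health endpoint and keep only those answering 200 OK.
# The "/healthz" path and server list are hypothetical examples.

def healthy_servers(servers, path="/healthz", timeout=2.0):
    live = []
    for server in servers:
        try:
            with urllib.request.urlopen(f"http://{server}{path}", timeout=timeout) as resp:
                if resp.status == 200:
                    live.append(server)
        except OSError:
            pass  # unreachable or failing servers drop out of rotation
    return live

# New requests are routed only to the servers returned here, which is
# how periodic health checks translate into application reliability.
print(healthy_servers(["edge-1:8080", "edge-2:8080"]))
```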
The user has the power to set routing preferences within their network. While most people will default to the fastest response or shortest distance, those are not the only choices for routing algorithms. Users can choose to assign incoming requests to the server with the fewest active connections, specify server consistency (a client always connects to the same server), direct requests to the server handling the least traffic (bandwidth), or direct them to the server with the fewest pending requests.
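Here is a compact sketch of those four policies, assuming the balancer tracks simple per-server counters; the field names and numbers are hypothetical, and real load balancers expose these policies as configuration options rather than code:

```python
import hashlib

# Sketch of the routing policies described above, assuming the balancer
# tracks simple per-server counters. Field names and values are made up.

servers = [
    {"name": "edge-1", "active_conns": 12, "mbps": 80, "pending": 3},
    {"name": "edge-2", "active_conns": 7,  "mbps": 55, "pending": 5},
]

def least_connections(servers):   # fewest active connections
    return min(servers, key=lambda s: s["active_conns"])

def least_bandwidth(servers):     # least traffic (bandwidth)
    return min(servers, key=lambda s: s["mbps"])

def least_pending(servers):       # fewest pending requests
    return min(servers, key=lambda s: s["pending"])

def sticky(servers, client_ip):
    # Server consistency: hash the client so it always gets the same server.
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

print(least_connections(servers)["name"])      # -> edge-2
print(sticky(servers, "203.0.113.9")["name"])  # same answer on every call
```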
Drive more performance and an optimal user experience
This is becoming a very hot topic: client inquiries on the subject of cloud networking increased more than fivefold between the first quarter of 2019 and the first quarter of 2021, according to Gartner*. Companies are desperate to deliver the best customer experience, and driving processing and application delivery to the edge won’t work by just locating the data centers there.
Load balancers, and software that lets users define the optimum parameters to deliver the needed performance, are key. As edge data centers proliferate, they will run applications from multiple cloud providers, so the load balancers of the future will need multi-cloud capability. Summing it up, properly balanced loads in edge data centers will drive more performance and an optimal user experience.
For best practices on configuring and deploying at the edge, see White Paper 278, Three Types of Edge Computing Environments and their Impact on Physical Infrastructure Selection. Schneider Electric has invested heavily in making micro data center configuration, purchase, and deployment efficient at scale; see our Reference designs with our IT Alliance Partners.
* Gartner, Market Guide for Cloud Networking Software, published 17 May 2021, ID G00740639