As consumers, we create data all day long, whether or not we’re aware of it. Our connected cars, doorbell cameras, and smartwatches all generate data that needs to be processed somewhere. At the enterprise level, IoT sensors, smart factories, 5G cellular networks, and the overall digitization of manual business processes have created an explosion of data.
Deloitte predicts that worldwide data volume will reach 175 zettabytes by 2025, while IDC puts the number at 180 zettabytes. Recent projections also estimate data center energy consumption at 2,700 terawatt hours by 2040, with 60% of that consumption coming from distributed sites. That means more equipment, more power use, and more challenges to address. So how and where do organizations manage all this data?
The old model of sending all that data from the source to a centralized data center is less feasible due to latency and associated costs. As a result, network edge data centers are taking center stage. The “network edge” describes the interface point where the internet and computer networks connect. This model is a distributed system in which small, strategically placed, network edge data centers process data as close to where it is generated as possible. Placing this computing power near the data creation source reduces latency for real-time applications. It also cuts the bandwidth costs associated with sending large volumes of data to the cloud or a central data center for processing.
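To make the bandwidth point concrete, here is a minimal sketch, in Python, of the kind of pre-processing an edge site might perform: raw sensor readings are summarized locally, and only a compact summary is sent upstream. The sampling rate, record size, and aggregation window below are illustrative assumptions, not figures from this article.

```python
# Minimal sketch (illustrative only): why processing data at the network edge
# cuts the volume sent upstream. Sensor rates, window size, and record sizes
# are assumed values, not figures from the article.

import json
import random
import statistics

RAW_RECORD_BYTES = 64        # assumed size of one raw sensor reading on the wire
READINGS_PER_SECOND = 100    # assumed sampling rate of a single sensor
WINDOW_SECONDS = 60          # edge site aggregates one minute of data at a time


def collect_raw_readings() -> list[float]:
    """Simulate one aggregation window of raw readings from a sensor."""
    return [random.gauss(72.0, 1.5) for _ in range(READINGS_PER_SECOND * WINDOW_SECONDS)]


def aggregate_at_edge(readings: list[float]) -> dict:
    """Reduce a window of raw readings to a compact summary before upload."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": round(min(readings), 2),
        "max": round(max(readings), 2),
    }


if __name__ == "__main__":
    readings = collect_raw_readings()
    summary = aggregate_at_edge(readings)

    raw_bytes = len(readings) * RAW_RECORD_BYTES
    summary_bytes = len(json.dumps(summary).encode("utf-8"))

    print(f"Raw upload per window:     {raw_bytes:,} bytes")
    print(f"Summary upload per window: {summary_bytes:,} bytes")
    print(f"Reduction factor:          ~{raw_bytes // summary_bytes}x")
```

In practice the summarization logic depends entirely on the application, but the pattern is the same: filter, aggregate, or act on data where it is created, and send upstream only what the central site actually needs.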
Access the eguide, “Optimizing the Network Edge — For Scale, Sustainability and Resiliency.”
Who are the key players at the network edge?
No matter who builds or manages them, edge data centers share several key requirements: they should process high volumes of data on-site, offer high-speed, reliable connectivity, and be built on commodity rather than proprietary hardware and software. They should also be constructed and managed sustainably.
At Schneider Electric, we see a clear market trend: telcos, colocation providers, and hyperscale cloud and service providers are all evolving toward a new ecosystem and converged model that enables multi-access edge computing (MEC) and supports distributed cloud environments. Here’s how it might look.
Telcos
As telcos adopt 5G, their data centers will transition from legacy telecom hardware to standard IT servers and from proprietary software to open standards and software-defined networking (SDN). And as telco architectures start to resemble those of cloud service providers, it becomes possible to host cloud services and telco control functions in the same network edge data center.
Hyperscalers
Hyperscalers have recognized that enterprises are facing challenges associated with latency and bandwidth costs. They have responded by offering to manage network edge data centers for enterprises. For example, AWS Outposts is a fully managed service delivering Amazon Web Services infrastructure and services to any on-premises or network edge location.
Colocation services
Similarly, colocation providers offer managed services for enterprises that don’t have the physical space, skills, or time to build out their own distributed network of small data centers.
Digital transformation, along with new business models that support the cloudification of the market, is driving the convergence of cloud, colocation, and telco data center functionality. This model will relieve congestion in the core network, conserve capacity in backhaul transmission networks, and contribute to higher network efficiency and reduced operating costs.
Discover the network edge
To learn more about network edge data centers and how your organization can leverage them, we invite you to download the eguide, “Optimizing the Network Edge — For Scale, Sustainability and Resiliency.” This eguide, which we regularly update, covers key trends, lessons learned, and best practices at the network edge.