The massive migration of critical applications from traditional data centers to the cloud has garnered much attention from analysts, industry observers, and data center stakeholders. However, as the great cloud migration transforms the data center industry, a smaller, less noticed revolution has been taking place around the non-cloud applications left behind. These “edge” applications have remained on-premises and, because of the nature of the cloud, their criticality has increased significantly.
Let me explain. The centralized cloud was conceived for applications where timing wasn’t absolutely crucial. As critical applications shifted to the cloud, it became apparent that latency, bandwidth limitations, security, and other regulatory requirements were placing limits on what could be placed in the cloud. It was deemed, on a case-by-case basis, that certain existing applications (e.g. factory floor processing), and indeed some new emerging applications (like self-driving cars, smart traffic lights, and other “Internet of Things” high bandwidth apps), were more suited for remaining on the edge.
Considering the nature of these rapid changes, it is easy for some data center planners to misinterpret the cloud trend and equate the decreased footprint and capacity of the on-premise data center with a lower criticality. In fact, the opposite is true. Because of the need for a greater level of control, adherence to regulatory requirements, low latency, and connectivity, these new edge data centers need to be designed with criticality and high availability in mind.
The issue is that many downsized on-premise data centers are not properly designed to assume their new role as critical data outposts. Most are organized as one or two servers housed within a wiring closet. As such, these sites, as currently configured, are prone to system downtime and physical security risks, and therefore, require some rethinking.
Systems redundancy is also an issue. With most applications living in the cloud, employees cannot be productive when that access point is down. Edge systems that stay up and running during these downtime scenarios help bolster business continuity.
Steps that enhance edge resiliency
In order to enhance critical edge application availability, several best practices are recommended:
- Enhanced security – When you enter some of these server rooms and closets, you typically see unsecured entry doors and open racks (no doors). To enhance security, equipment should be moved to a locked room or placed within a locked enclosure. Biometric access control should be considered. For harsh environments, equipment should be secured in an enclosure that protects against dust, water, humidity, and vandalism. Deploy video surveillance and 24/7 environmental monitoring.
- Dedicated cooling – Traditional small rooms and closets often rely on the building’s comfort cooling system. This may no longer be enough to keep systems up and running. Reassess cooling to determine whether proper cooling and humidification requires a passive airflow, active airflow, or a dedicated cooling approach.
- DCIM management – These rooms are often left alone with no dedicated staff or software to manage the assets and to ensure downtime is avoided. Take inventory of the existing management methods and systems. Consolidate to a centralized monitoring platform for all assets across these remote sites. Deploy remote monitoring when human resources are constrained.
- Rack management – Cable management within racks in these remote locations is often an afterthought, causing cable clutter, obstructed airflow within the racks, and increased human error during adds/moves/changes. Modern racks equipped with easy cable-management options can lower unanticipated downtime risks.
- Redundancy – Power systems (UPS, distribution) are often 1N in traditional environments, which decreases availability and eliminates the ability to keep systems up and running while maintenance is performed. Consider redundant power paths for concurrent maintainability in critical sites. Ensure critical circuits are on an emergency generator. Consider adding a second network provider for critical sites. Organize network cables with cable-management devices (raceways, routing systems, and ties). Label and color-code network lines to avoid human error.
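The centralized monitoring recommended above can be sketched in a few lines. This is a minimal, illustrative example only: the site names, sensor readings, and thresholds are invented for the sketch, and a real DCIM platform would poll live sensors (over SNMP, Modbus, or a vendor API) rather than read from a dictionary.

```python
# Illustrative threshold monitoring across remote edge sites.
# Thresholds and readings here are assumptions, not product values.
THRESHOLDS = {"temp_c": (18.0, 27.0), "humidity_pct": (20.0, 80.0)}

def check_site(name, readings):
    """Return alert strings for readings outside thresholds or missing."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.items():
        value = readings.get(metric)
        if value is None:
            alerts.append(f"{name}: {metric} not reporting")
        elif not (low <= value <= high):
            alerts.append(f"{name}: {metric}={value} outside [{low}, {high}]")
    return alerts

# Hypothetical readings from two remote closets.
sites = {
    "branch-01": {"temp_c": 24.5, "humidity_pct": 55.0},
    "branch-02": {"temp_c": 31.2, "humidity_pct": 45.0},  # overheating closet
}
for site_name, site_readings in sites.items():
    for alert in check_site(site_name, site_readings):
        print(alert)
```

Consolidating checks like this into one platform gives a single view across all remote sites, which matters most when no staff are on-site to notice a failing closet.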
A systematic approach to evaluating small remote data centers is necessary to ensure the greatest return on edge investments. To learn more, download Schneider Electric White Paper 256, “Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge”. This paper reviews a simple method for organizing a scorecard that allows executives and managers to evaluate the resiliency of their edge environments.
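A scorecard of this kind can be as simple as rating each practice area and rolling the ratings up into one number per site. The sketch below is an assumption-laden illustration, not the method from White Paper 256: the category names mirror the best practices above, and the 0-5 scale with equal weighting is invented for the example.

```python
# Minimal edge-resiliency scorecard sketch (illustrative only).
CATEGORIES = ["security", "cooling", "management", "rack", "redundancy"]

def resiliency_score(ratings):
    """Roll per-category ratings (each 0-5) up into a 0-100 score."""
    missing = [c for c in CATEGORIES if c not in ratings]
    if missing:
        raise ValueError(f"unrated categories: {missing}")
    total = sum(ratings[c] for c in CATEGORIES)
    return round(100 * total / (5 * len(CATEGORIES)))

# Hypothetical assessment of one downsized server room.
site = {"security": 2, "cooling": 3, "management": 1, "rack": 4, "redundancy": 2}
print(resiliency_score(site))  # 12 of 25 points -> 48
```

Scoring every remote site the same way lets managers rank sites and direct upgrade budgets to the weakest ones first.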
Conversation
Andreas Rockenbauch
8 years ago
The edge will always be determined by the telcos and what bandwidth they will provide. Nobody will build a data center in Mecklenburg, where you have a speed of max. 16MB on your internet connection. Before edge becomes reality, a lot more fibre in the ground is needed. As we all know the business model of telcos, this won’t happen until there is someone paying for it.
Sanja Milovanovic
7 years ago
Correct, and clearly communicated. As I understand it, the article refers to a setup where a cloud is already engaged and some of the traffic needs to be ‘brought back’ or kept closer to the user: bandwidth-consuming applications, or applications that are critical from a security or latency point of view. In this setup, existing high-bandwidth links are an assumption.
Mike Kubicki
8 years ago
Also, users need to have a plan to get a “person” on-site to work with these Edge IT spaces when needed. The need may arise to get the correct personnel on-site for human interaction with the equipment, whether it be a server, UPS, cooling unit, or other device. SLAs and expectations need to be set with the business to ensure business continuity is maintained.