From Buzzwords to Reality: Cutting Through the Hype to Provide Edge Computing Clarity for Data Center Management


The hype around edge computing may be at its zenith – at the peak of Gartner’s “hype cycle”. A cursory Google search turns up statements like the following:

Edge Computing Buzzwords

The assertions are that edge will be the next multi-billion-dollar opportunity that will make industries massively more efficient.  It’s an amazing opportunity! 

The problem is we don’t seem to know exactly what it is. These articles – and plenty more – attempt to demystify the edge, clarify its meaning, and even state the characteristics of edge data centers. I witnessed the confusion in person at an event earlier this year when every single speaker presented a different definition of edge. And, I don’t find definitions like the ones below – edgy, edgier, and edgiest – to be exceptionally helpful:

Many views on hybrid IT architectures

Getting clarity on edge computing

Part of the problem is that people in the industry talk about edge computing as if it’s new.

It’s not.

We believe the first instances of the edge date back to the 90s, when Akamai introduced its content delivery networks. Then came the term cloud computing in 2006, and Cisco introduced fog computing in 2012. If we stipulate that 2012 was the beginning of edge, that implies we’ve been working on it for six or seven years. And during that time, the hype surrounding edge has ballooned and, in my opinion, kicked into high gear in 2018.

Regardless of this hype, what seems certain is the new hybrid computing architecture will require a more robust edge infrastructure. Users are now asking how fast an app will load on their device, not just if it will load, and they expect responsiveness.

At Schneider, our popular white paper Why Cloud Computing is Requiring us to Rethink Resiliency at the Edge questioned whether the local edge was becoming the weakest part of your ecosystem. The paper theorizes that the future will consist of three types of data centers: cloud; regional edge; and local edge, which is the first point of connection (for people and things) to the network. Think of it this way: yesterday’s server rooms and wiring closets are becoming tomorrow’s Micro Data Centers.

Since the white paper’s publication, Schneider research shows our resiliency theory is proving to be true. That’s why we, as an industry, are having so many conversations about how to keep the availability of the local edge as high as necessary.

So, we need to ask: what do we pragmatically have to do as an industry to overcome the challenges that the local edge presents?

For the hype around edge computing to become a reality and deliver on the promise that the edge holds, our industry needs to improve in three key areas. 

    1. The Integrated Ecosystem

      To deliver edge solutions, our industry must work together in ways that we haven’t had to in the past. It’s a transformation in how we operate to deliver a complete solution to the customer.

      And, transformation doesn’t happen overnight.

      Physical infrastructure vendors must join forces with system integrators, managed service providers, and the customer on innovative systems that are integrated and deployed on-site. All of it needs to come together with a thorough understanding of the customer application and be delivered at multiple locations worldwide, leveraging existing staff. This is part of the challenge.

      We believe the solution is a standardized and robust Micro Data Center that can be monitored and managed from any location. We’ve been doing a lot of innovative work with HPE, including integrating our supply chains. We’re also working with Scale Computing, StorMagic, and others. We just announced expanded Cisco certification of the entire NetShelter product line, certified to ship with Cisco’s Unified Computing System (UCS) inside. We are making progress as an industry.

    2. The Management Tools

      Imagine you’re on the links at St. Andrews, poised to play 18 holes, and in your bag instead of clubs you find a rake and a shovel – just like Kevin Costner’s washed-up golf pro Roy McAvoy in the classic 90s film Tin Cup. In the same way that Roy doesn’t have the right equipment, the management tools we have today are inadequate for the challenges at the edge.

      One data center operator may have 3,000 sites dotting the globe with multiple alarms per site per day and no on-site staff. It’s easy to understand how they would get overwhelmed very quickly. This can become an almost unmanageable problem.

      Management tools must move to a cloud-based architecture. This will allow thousands of geographically dispersed edge sites to have the same level of manageability that we provide for large data centers. With a cloud-based architecture, you can pay as you grow and start with what you need. It’s easy to scale, upgrades are automatic, and it has up-to-date cybersecurity. Most importantly, this approach enables access from anywhere, at any time, from any device.
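      To make that concrete, here is a minimal sketch of the idea behind a cloud-based, fleet-wide view: alarms from every site – whatever the device or vendor – are normalized into one shared schema that any stakeholder can query, instead of living in per-site tools. The record layout and names here are invented for illustration, not any particular product’s API.

```python
from dataclasses import dataclass

# Hypothetical normalized alarm record. In a cloud-based architecture,
# every edge site reports into one shared schema like this, so the
# operator, the service provider, and the integrator all see the same data.
@dataclass
class Alarm:
    site_id: str
    device: str    # e.g. "UPS", "cooling", "PDU"
    severity: str  # "info" | "warning" | "critical"
    message: str

def critical_sites(alarms):
    """Return the set of sites with at least one critical alarm --
    the kind of single shared view a cloud dashboard would render."""
    return {a.site_id for a in alarms if a.severity == "critical"}

alarms = [
    Alarm("store-0042", "UPS", "critical", "On battery power"),
    Alarm("store-0042", "UPS", "warning", "Battery at 40%"),
    Alarm("store-1187", "cooling", "info", "Filter check due"),
]

print(sorted(critical_sites(alarms)))  # → ['store-0042']
```

      The point is not the ten lines of code; it’s that with one normalized stream, a query like this works across 3,000 sites as easily as across three.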

      In a traditional world, the data center operator with thousands of sites is probably used to fielding calls from non-IT staffers that go like this:

      “I’ve got a problem. This thing is beeping!”

      “What’s the problem with your UPS?”

      “UPS? No, we use FedEx.”

      “The UPS is your battery backup. It has an alarm.”

       “I don’t know anything about that but this thing is beeping!” 

      And it can go on and on.

      In a cloud-based architecture, multiple players in the ecosystem can see the same data and work from the same exact dashboard at the same time. When everyone can see the same data at the same time, it eliminates this conversation. And, very soon we will be able to manage this process holistically as opposed to a collection of individual devices.

    3. Analytics and Artificial Intelligence (AI) to Augment Staff

      The promise of analytics and machine learning is fantastic, but we still have a lot to learn. The image of headlines below summarizes a cursory online search for how to deploy AI. Let’s see . . . we have 4 training steps, 5 baking steps, 6 implementation steps, 7 steps to successful AI, 8 easy steps to get started, then there’s 9 baby steps, 10 learning steps, 11 rules, and don’t forget the 12 steps to excellence.

    No rational human can make sense out of this.

    The 4-12(?) steps to deploy AI

    Of course, I will be the first to admit that we may not be helping, because we decided to introduce our own approach. We reject the idea that you need steps and rules; we believe you need four ingredients. We started work on this two years ago and we’re bringing it to market now – I think we’re on the right track. The four ingredients are:

    • A secure, scalable, robust cloud architecture. It may sound easy, but when you tell software developers to stop making on-premises software and instead make software architected for the cloud, it’s not just a technology change. It’s a change in the way they have to think.
    • A data lake with massive amounts of normalized data. (Of course, how you normalize the data and knowing what data to collect is another challenge.)
    • A talent pool of subject matter experts.
    • Data scientists to develop the algorithms.

    It is our experience that once you have these ingredients, which provide a solid foundation, you can start doing something interesting. You can become more predictive and help data center operators know when there’s a problem before it occurs.
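    As a toy illustration of what “predictive” means here – catching a problem before it occurs rather than reacting to an alarm – consider fitting a simple trend to a device’s telemetry and estimating when it will cross a service threshold. Real analytics built on a data lake would use far richer models and normalized fleet-wide data; the function and numbers below are invented for the sketch.

```python
def days_until_threshold(readings, threshold):
    """readings: daily telemetry samples, oldest first (e.g. UPS battery
    runtime in minutes). Fits a least-squares line and returns the
    estimated days until the value drops below `threshold`, or None if
    the trend is flat or improving (nothing to predict)."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope >= 0:
        return None  # no degradation trend
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # sample index at threshold
    return max(0.0, crossing - (n - 1))         # days from the last sample

# Five days of declining UPS battery runtime, service threshold 20 minutes:
runtime = [30.0, 29.2, 28.5, 27.9, 27.1]
print(round(days_until_threshold(runtime, 20.0), 1))  # → 10.0
```

    A service visit scheduled ten days out is cheap; a site down because the battery quietly aged past its limit is not. That is the operational payoff the four ingredients are meant to enable.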

    The state of edge computing

    This is the state of edge computing as I see it, presented with intermittent sarcasm but without hype and hopefully without confusion. For more insights on this topic, I encourage you to check out the Schneider Electric blog directory and our white papers and find out about EcoStruxure IT.
