Sharing best practices, discussing successes and failures, and figuring out what’s next are the reasons most professionals attend key industry events like the 2016 Datacenter Dynamics Converged enterprise conference. There, I hosted a panel of experts who are all dealing with the rapid emergence of edge computing. Here’s what to know about this trend.
Our session focused on deploying and managing compute at the edge of the network, and featured Todd Traver, VP of IT Optimization and Strategy, Uptime Institute; Ron Sacks, Co-Founder, CEO & Managing Partner, Provdotnet; and Leonard Francis, Associate Director of IT for Weill Cornell Medical College. Having a service provider, an industry consultant, and an enterprise user on the panel gave us a well-rounded view of the state of the edge.
What and where is the edge?
Ron from Provdotnet compared the onset of edge computing to a decade earlier, when everyone was trying to define the cloud. Looking back, the reality of cloud today is much different from how we thought about it then. The same will be said for edge computing, which will change as new technologies, capabilities, and use cases emerge.
Ron’s organization takes a “three knowns” view of the nascent edge, breaking up the arrival into: “What we know about it; what we know, we don’t know; and what we don’t know, we don’t know.”
We know the edge is being driven by the need for speed and bandwidth capacity, while the technical footprint is getting smaller. These demands are pushing compute power closer to end users. Still, Ron wondered about where the edge really is — is it a regional location? A local micro data center? Or in the trunk of an autonomous car?
As an industry, we need to determine what it means to be on the edge. “How do you figure out what level of edge [you] want to play on [as a business]? What level do we need to offer, as a service provider? These are perplexing questions,” Ron said.
According to Todd, determining your edge plan follows a more traditional approach than you might imagine. To be successful on the edge, “You should have the same thought process and thoroughness in planning as when you are building a major data center,” he said. “You must take a holistic view and plan in aggregate, with the application being the central consideration.”
Applications are the big focus for Leonard from Weill Cornell Medical College, as his customers push boundaries. “Because of the amount of data and required processing speed for research departments, for example, [the researchers] want to be in the same room as the equipment,” he explained. “I’d be happy still living in my own data center world, where I have the most control; but our users have pushed us out of our comfort zone to the edge.”
What to do when going to the edge
Overall, before you make the move, according to Todd, you must understand the entire stack — from the application and the network to the IT infrastructure and location. Everything used to be consolidated in one place; now the IT environment operates without borders. “The net takeaway is planning for reliability and resiliency in this new environment,” he said.
For me, the key takeaways from this discussion are:
- Understand the application first. Why does it need to be on the edge?
- Define the form the solution needs to take. Can it be hosted on a simple appliance or does it require several racks of compute?
- Figure out where the solution will physically go. Can it be placed or hosted in a regional service/colocation provider? Does it need to sit outdoors? Or, is it in the same room as the users?
- Design for criticality. How important is uptime for this application?
Once you understand these four aspects, in this order, you will be on your way to creating a well-laid-out plan for deploying compute on the edge.
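For illustration only, the four-step checklist above can be sketched as a simple planning record. This is a hypothetical structure under assumed names (`EdgeDeploymentPlan`, the example application, and the form-factor and location options are mine, drawn from the questions in the list), not a tool from any of the panelists:

```python
from dataclasses import dataclass
from enum import Enum

class FormFactor(Enum):
    # Step 2: what form does the solution take?
    APPLIANCE = "simple appliance"
    MICRO_DATA_CENTER = "micro data center"
    MULTI_RACK = "several racks of compute"

class Location(Enum):
    # Step 3: where will it physically go?
    COLOCATION = "regional service/colocation provider"
    OUTDOOR = "outdoor enclosure"
    ON_PREMISES = "same room as the users"

@dataclass
class EdgeDeploymentPlan:
    """Captures the four planning questions, in order."""
    application: str          # Step 1: why does it need to be on the edge?
    form_factor: FormFactor   # Step 2: form the solution takes
    location: Location        # Step 3: physical placement
    uptime_target: float      # Step 4: criticality, e.g. 0.999 ("three nines")

    def summary(self) -> str:
        return (f"{self.application}: {self.form_factor.value} "
                f"at {self.location.value}, "
                f"{self.uptime_target:.3%} uptime target")

# Example mirroring Leonard's research-department scenario
plan = EdgeDeploymentPlan(
    application="data-intensive research pipeline",
    form_factor=FormFactor.MULTI_RACK,
    location=Location.ON_PREMISES,
    uptime_target=0.999,
)
print(plan.summary())
```

Answering the questions in this order keeps the application, as Todd stressed, the central consideration from which the form, location, and criticality decisions follow.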
For more information, visit the Schneider Electric Resource Center.