Yesterday and today in San Jose, CA, 1,000 industry experts got together to exchange opinions on what is happening in the Web Scale data center market. This event was "invite only" and free to the most qualified data center experts. So what is Web Scale? DCD characterizes it as: open source, software defined, lights out, NFV, hyper-converged, network edge in a cloud architecture. That's a lot of buzzwords, but essentially Web Scale refers to centralized cloud data centers with the new edge portion of the architecture thrown in. The conference centered on the full ecosystem of how enterprise data centers are being redefined and how the economics of digital business, IT, and data center service delivery are being reshaped.
One area of focus is the IT zettabyte era and its impact on the network, data center, and cloud infrastructure. 1 ZB = 1,000^7 bytes = 10^21 bytes = 1,000,000,000,000,000,000,000 bytes = 1,000 exabytes = 1 million petabytes = 1 billion terabytes = 1 trillion gigabytes. We are there now: about 1.6 million petabytes will be transferred over the Internet this year (it would take roughly 213,000 DVDs at 4.7 GB each to hold 1 PB), with 4 million petabytes created. A yottabyte (10^24 bytes) is next, FYI. Whatever the designation, massive new reservoirs of data are being created. The key is to find patterns in the data while the information is still useful; the term "fresh data" is used to describe this. An easy-to-describe example of fresh data is sensor data from self-driving cars. The car senses which cars arrive at the intersection first and then waits for its turn to go; this data is then discarded. When it senses a pothole in the road, that data stays fresh until another car reports the issue is gone (after the hole has been filled in by a road crew). The time to create value is very limited in this example and many others, like the weather.
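The byte-scale arithmetic above is easy to sanity-check in a few lines of Python (using decimal SI units, where each prefix step is a factor of 1,000):

```python
# Sanity check of the zettabyte-era unit conversions (decimal SI units).
ZB = 1000 ** 7   # 1 zettabyte = 10^21 bytes
EB = 1000 ** 6   # exabyte
PB = 1000 ** 5   # petabyte
TB = 1000 ** 4   # terabyte
GB = 1000 ** 3   # gigabyte

assert ZB == 10 ** 21
assert ZB == 1000 * EB == 10 ** 6 * PB == 10 ** 9 * TB == 10 ** 12 * GB

# DVDs (4.7 GB each) needed to hold 1 PB:
dvds_per_pb = PB / (4.7 * GB)
print(round(dvds_per_pb))  # → 212766, i.e. roughly 213,000 DVDs
```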
Another interesting point is that human-generated data has context. For example, someone showing what they are eating for dinner on social media is easy to understand. I am not sure why people are compelled to do this, but at least there is context. Machine data, on the other hand, does not have context; the software application or analytics are responsible for contextualizing it. Take the weather example: if it's 85 degrees Fahrenheit, it helps to know the location, the time of day, and the time of year. If it's noon in August in Miami, then it's normal, but if it's Anchorage in the winter, then it's something else.
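The weather example can be sketched in a few lines: the same raw reading means different things once analytics attach a location and season to it. This is a minimal illustration only; the "normal high" values and the 10-degree tolerance below are placeholder assumptions, not real climate data.

```python
# A minimal sketch of contextualizing raw machine data: the same 85 °F
# reading is unremarkable or anomalous depending on where and when it was
# taken. The "normals" here are illustrative placeholders, not real data.
NORMAL_HIGH_F = {
    ("Miami", "Aug"): 90,
    ("Anchorage", "Jan"): 23,
}

def contextualize(temp_f: float, city: str, month: str) -> str:
    """Label a reading relative to a typical high for that place and month."""
    normal = NORMAL_HIGH_F[(city, month)]
    return "normal" if abs(temp_f - normal) <= 10 else "anomalous"

print(contextualize(85, "Miami", "Aug"))      # → normal
print(contextualize(85, "Anchorage", "Jan"))  # → anomalous
```

The point is only that the application, not the sensor, supplies the context that turns a number into information.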
Urbanization is another theme, and Schneider Electric has been talking about this for years. Data centers embedded in these urban environments are essential for smart cities to function. The convergence of the telecom network with the data network is also a topic I have been talking about for years. My session on the benefits of edge computing discussed both of these issues. I also explained the philosophy of the cloud computing architecture with edge computing supporting IoT. A lot of people have the impression that IoT only means wearables, which may be the future but is not the reality right now. I discussed where edge is being deployed: remote and branch office locations, network closets, server rooms, and industrial sites. It's more and more clear that Web Scale depends on edge computing, and this architecture will continue to evolve to support the ever-increasing world of smart connected devices and the benefits we derive from them.