As a child I was always fascinated with patterns and, in particular, their impact on people. Whether it’s the repetitive beat of a drum inspiring us to dance or our early-learned ability to tell a cat from a dog, patterns are clearly essential for human beings, both to socialize and to navigate a complex world. With its ability to recognize very complex patterns in unstructured or semi-structured data, artificial intelligence (AI) is poised to change the data center in major ways. I explored some of the possibilities and the overall potential impact of AI in the data center during a session at Schneider Electric’s recent Innovation Day event for cloud and service providers.
Artificial Intelligence: Then and Now
AI has certainly been around for a long time but, until now, its application was more theoretical. Unless you happened to be operating a supercomputer, processors were too slow to create meaningful results. That meant the early applications were often restricted to users with unusual budgets – the military were early adopters of image recognition technology to spot camouflaged vehicles in the huge quantities of aerial and satellite imagery that took off in the 1980s. GPUs changed all of that. Suddenly anyone could run complex machine learning models on consumer-grade PC components.
At its heart, AI enables us to solve, or at least find approximate answers to, hard problems that can’t easily be solved by traditional modeling or mathematical techniques. At Equinix, we’re already applying AI to our data centers.
1. Data-driven Models for Data Center Optimization
Optimizing data centers is hard. IT loads and external ambient conditions vary dynamically and sometimes quite unpredictably, and as operators and engineers we expend considerable effort during design to model, optimize and improve.
If you want to materially improve the efficiency of an operational data center, you have limited options. You can build a detailed physics model and run ‘what-if’ simulations; but with so many variables, and the usual limitations on the depth of the model, it’s hard to derive meaningful insight. Plus, what you build for one facility may not be particularly useful in helping to understand the performance of others.
By contrast, the AI approach – which in this context is really a data approach – takes the data and trains a network that models the PUE of your facility. The inputs to your model are the many data points that you collect from your infrastructure. Your ‘test function’ – what you use to prove whether the network is accurately modelling the real-world performance of your facility – is the delta between predicted and measured PUE. Once you have a model that reliably predicts your PUE, you can tweak the input variables to simulate ‘what-if’ scenarios. For example: What happens if a particular pump is turned on (or off)? What would the impact be if the volume or temperature of chilled water was increased? What if the set point for CRAC units is raised? Expert engineers then make decisions derived from the data model.
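As a minimal sketch of the idea, the example below fits a model to synthetic telemetry and then runs a ‘what-if’ on the CRAC set point. Everything here is an illustrative assumption – the variable names, the hidden relationship generating the data, and the use of a simple linear model in place of a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic telemetry standing in for real infrastructure data
# (illustrative assumptions, not real facility data):
# chilled-water temperature (deg C), CRAC set point (deg C), IT load (kW).
n = 500
chw_temp = rng.uniform(6, 14, n)
setpoint = rng.uniform(20, 27, n)
it_load = rng.uniform(500, 1500, n)

# Assume a hidden relationship producing the measured PUE, plus sensor noise.
pue = (1.2 + 0.01 * chw_temp - 0.005 * setpoint
       + 0.0001 * it_load + rng.normal(0, 0.01, n))

# Fit a simple linear model (a stand-in for the trained network).
X = np.column_stack([np.ones(n), chw_temp, setpoint, it_load])
coef, *_ = np.linalg.lstsq(X, pue, rcond=None)

# The 'test function': delta between predicted and measured PUE.
mae = np.mean(np.abs(X @ coef - pue))

# What-if scenario: raise the CRAC set point by 2 deg C at typical conditions.
base = np.array([1.0, 10.0, 23.0, 1000.0])
raised = base.copy()
raised[2] += 2.0
delta_pue = (raised - base) @ coef
print(f"mean abs error: {mae:.4f}, predicted PUE change: {delta_pue:+.4f}")
```

A real deployment would replace the linear fit with a trained network and the synthetic arrays with live sensor feeds, but the loop is the same: validate against measured PUE, then probe the model with hypothetical inputs.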
Beyond this, once there’s trust in the data and confidence in the models, the next step is taking humans out of the equation and letting AI dynamically optimize the data center in real time. Embedding AI for full data center automation will increase efficiency further, moving to a ‘fly-by-wire’ style of data center operation.
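Conceptually, a fly-by-wire loop just closes the circuit between the predictive model and the controls: instead of an engineer reading the what-if results, the system nudges set points toward lower predicted PUE. The sketch below uses a toy predicted-PUE curve and greedy hill-climbing; the function, its optimum at 24 °C, and the step size are all hypothetical stand-ins:

```python
# Assumed predicted-PUE curve (illustrative): a convex bowl whose
# best set point is 24 deg C. A real system would query the trained model.
def predicted_pue(setpoint_c):
    return 1.30 + 0.002 * (setpoint_c - 24.0) ** 2

def autotune(setpoint_c, steps=50, delta=0.5):
    """Greedy 'fly-by-wire' loop: at each step, try a small nudge in
    each direction and keep whichever candidate lowers predicted PUE."""
    for _ in range(steps):
        candidates = [setpoint_c - delta, setpoint_c, setpoint_c + delta]
        setpoint_c = min(candidates, key=predicted_pue)
    return setpoint_c

best = autotune(20.0)
print(f"converged set point: {best} deg C, PUE: {predicted_pue(best):.3f}")
```

In practice the loop would also enforce safety envelopes and rate limits on actuation – the trust-building step the paragraph above describes – before any set point is changed automatically.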
2. Pattern-based Learning to Effectively Monitor Equipment
AI will also help with effective equipment monitoring. Many experienced data center engineers will tell stories of equipment that they could tell was faulty (or would soon fail) because it sounded funny or smelled different.
We’re testing our ability to augment those human capabilities: taking data from our infrastructure, adding extra sensors where necessary, and looking for the patterns that ultimately translate into ‘that device sounds funny’ – well before a fault becomes detectable to people.
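One simple way to sketch this kind of pattern-based monitoring is a baseline-deviation check: learn what ‘healthy’ looks like from sensor data, then alarm when readings drift too far from it. The synthetic vibration signal, the fault onset time, and the 5-sigma threshold below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic vibration-amplitude readings (illustrative stand-in for
# real sensor telemetry); a slow fault begins to develop at t = 800.
t = np.arange(1000)
signal = rng.normal(1.0, 0.05, 1000)
signal[800:] += 0.002 * (t[800:] - 800)   # gradual drift, tiny at first

# Learn a baseline from a window of known-healthy operation.
baseline_mean = signal[:500].mean()
baseline_std = signal[:500].std()

# Flag readings more than 5 sigma from the healthy baseline.
z = (signal - baseline_mean) / baseline_std
alarm = int(np.argmax(z > 5)) if np.any(z > 5) else None
print("first alarm at t =", alarm)
```

The point of the toy example is the shape of the approach, not the method itself: production systems would use richer features (spectral content, correlations across sensors) and learned rather than fixed thresholds, but the goal is the same – catch the ‘funny sound’ while it is still far below human perception.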
The aerospace industry is a great template for how significant operational efficiencies can be made by replacing a planned service regime with a more flexible, data-led program, and Equinix is excited to develop our capabilities in this space alongside our key partners such as Schneider Electric.
Coming Soon to a Data Center Near You
AI is hard. There is a considerable skills gap, although it’s improving all the time. For many organizations, issues around data availability and quality may limit what they can do. But the key is the data: whether you plan on applying AI now or in the future, if you don’t have the data, or worse, if that data is no good, you’ll struggle to build accurate AI models.
AI is only going to get bigger — addressing more and more classes of “unsolvable” problems. In five years, data center equipment without some kind of embedded AI will be rare. For more on the future of data center efficiency, take a look at this session from Innovation Day. Kevin Brown, SVP of Innovation and CTO, IT Division, Schneider Electric talks about where the next 80% of data center performance improvements will come from.
About the Author:
David Hall, Senior Director of Technology Innovation, Equinix
David has had an extensive career in the data center industry, spanning commercial, technical and innovation roles.
At Telecity Group, he led the teams responsible for developing capacity and services for the hyperscalers – everything from edge networking to core compute deployments. He was also responsible for creating the world’s first private network to connect networks and enterprises to the public cloud.
Now at Equinix, David is working to develop the next generation of Equinix data center design. Inspired by some of the technologies and approaches pioneered by the hyperscalers, he has a particular focus on extending the availability of those solutions to smaller service providers and enterprises while developing new solutions for challenges such as the evolution at the edge.
David lives in London with his Old English Sheepdog, Oppenheimer.