Generative artificial intelligence (Gen AI) could profoundly transform business operations. Organizations are investing significantly in Gen AI, with many spending $5 million to $20 million on the technology. They expect AI algorithms and Large Language Models (LLMs) to deliver unprecedented gains in efficiency and productivity so they can better compete in their respective markets.
However, AI can create serious cybersecurity headaches for data center operators. Securing Gen AI workloads from cyberattacks is a major endeavor, requiring cooperation among clients, technology providers, and the colocation operators and hyperscalers responsible for the infrastructure.
AI models widen the attack surface. A recent Uptime Institute report based on a survey of data center operators highlighted concerns about the cybersecurity risks posed by additional network connections introduced by AI-based control mechanisms. These may result in vulnerabilities that bad actors can exploit to manipulate data. As such, colocation data centers and hyperscalers must add new layers of cybersecurity specific to AI workloads and data to protect themselves and their end users.
Cybersecurity risks to AI models and intellectual property
Model manipulation is a serious concern. Because AI models learn from interactions with their target users, they can be manipulated if they are not properly protected. Bad actors can tamper with that data, getting models to confidently deliver inaccurate and potentially dangerous results without users realizing it.
Another issue involves intellectual property. Advanced AI algorithms that organizations use to run their business processes must be protected against reverse engineering to prevent model replication. Threat actors can reconstruct model information and potentially access the private data of organizations and their users for harmful purposes.
Key steps to counter AI-related attacks
As Gen AI gains traction, it’s difficult to predict the extent of the cyber risks it will create – or how much it will cost organizations to defend against bad actors. But hyperscalers and colocation data centers must act now to implement robust defenses.
Protecting data, intellectual property, and customer infrastructure at colocation and hyperscale data centers is an ongoing effort that has to adapt as Gen AI workloads evolve. New risks will inevitably emerge; they will have to be studied and understood so data centers can properly architect their infrastructure and apply good governance. As in other areas of cybersecurity, a multilayered defense is necessary, including measures such as:
- Regular security assessments – It is crucial to conduct routine tests and assessments on a data center’s AI systems. These reviews reveal existing and potential vulnerabilities that otherwise might go unnoticed.
- Robust data encryption – Whether at rest or in flight, AI data should be encrypted, and that encryption kept up to date, to prevent bad actors from accessing and manipulating it (a minimal encryption sketch follows this list).
- Federated data models – Federated models enable data created in the cloud to be shared across multiple edge and on-premises sites, where it is augmented with site-specific information. If intercepted in transit, the data won’t make sense without the nuances specific to each location.
- Visibility and ongoing monitoring – In a case of AI helping to secure AI, deep-learning models can be applied to monitor and sift through data, spotting anomalies and signs of manipulation or malicious activity.
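To illustrate the encryption measure above, here is a minimal sketch, not a prescribed implementation, of protecting an AI dataset at rest with AES-256-GCM using Python’s widely used cryptography library. The file names and in-memory key handling are assumptions for illustration; in production, keys would live in a managed key store (KMS or HSM) and be rotated regularly.

```python
# Minimal sketch: protecting an AI dataset at rest with AES-256-GCM.
# Assumes the "cryptography" package is installed; file names and key handling
# are illustrative only; production keys belong in a KMS/HSM with rotation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(in_path: str, out_path: str, key: bytes) -> None:
    """Encrypt a file; the random nonce is stored alongside the ciphertext."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    with open(in_path, "rb") as f:
        plaintext = f.read()
    with open(out_path, "wb") as f:
        f.write(nonce + aesgcm.encrypt(nonce, plaintext, None))

def decrypt_file(in_path: str, key: bytes) -> bytes:
    """Decrypt a file written by encrypt_file; raises if the data was tampered with."""
    with open(in_path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    # Create a small placeholder dataset so the sketch runs end to end.
    with open("training_data.sample", "wb") as f:
        f.write(b"example training records")
    key = AESGCM.generate_key(bit_length=256)  # fetch from a key store in practice
    encrypt_file("training_data.sample", "training_data.sample.enc", key)
    assert decrypt_file("training_data.sample.enc", key) == b"example training records"
```

Because GCM is an authenticated mode, any tampering with the stored ciphertext is detected at decryption time, which addresses the manipulation risk as well as confidentiality.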
As colocation operators and hyperscalers add cybersecurity controls to protect AI workloads, they mustn’t forget the infrastructure. AI requires additional power and equipment, which must also be protected against attacks that could disrupt operations.
Schneider Electric works closely with colocation data centers and hyperscalers to secure data and infrastructure. One of the solutions we use to help mitigate AI-related risks is Intrusion Detection Systems (IDS), which monitor network traffic and look for anomalous behavior. An event such as a significant change in power consumption, for instance, would warrant further investigation to determine whether it was caused by malicious activity.
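To make that idea concrete, below is a minimal sketch, not Schneider Electric’s actual implementation, of flagging anomalous power-consumption readings with a simple rolling z-score. The window size, threshold, and simulated telemetry are assumptions for illustration; a real deployment would draw on richer telemetry and tuned detection models.

```python
# Minimal sketch: flag anomalous power readings with a rolling z-score.
# Window size, threshold, and sample data are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far outside the recent rolling window."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # candidate event worth investigating
        history.append(value)

if __name__ == "__main__":
    # Simulated kW draw for one rack: steady load with a sudden spike.
    telemetry = [42.0 + (i % 3) * 0.5 for i in range(60)]
    telemetry[45] = 95.0  # injected spike, e.g., an unexpected workload
    for idx, kw in detect_anomalies(telemetry):
        print(f"sample {idx}: {kw:.1f} kW deviates sharply from the recent baseline")
```

Any flagged sample would then be correlated with other signals, such as network traffic and access logs, before concluding that malicious activity is involved.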
Looking forward – Mitigating AI-related threats
A multilayered approach to data center cybersecurity is necessary for colocation operators and hyperscalers. It starts with identifying the location of all assets so they can be properly secured. In the age of Gen AI, when the potential for harm increases significantly, implementing and maintaining resilient cyber defenses has never been more important. Explore strategies to bolster security against AI-related and other cyber threats via our cybersecurity services site.