Most utilities are protective of customer data and always have been. That’s a good thing because these days they have no choice.
As utilities increasingly adopt smart technology – including meters, thermostats and the like – they gain access to more detailed customer data than ever before. At the same time, they must comply with a growing number of regulations around protecting that data.
The North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards now have multiple requirements around cyber security, with more on the way. Beyond that, any utility that accepts credit cards from its customers – which means all of them, probably – must comply with the Payment Card Industry Data Security Standard (PCI DSS) for protecting customer credit card data and other personal information.
PCI DSS isn’t just a standard, however. It’s a whole framework that spells out the tools and processes required to keep customer information secure. In that sense, it’s a helpful resource. But let’s not kid ourselves: achieving PCI DSS compliance takes a lot of work as well.
To help prime the pump, so to speak, I thought I’d offer up a few security best practices that will hopefully get you pointed in the right direction. These are four tried-and-true practices in IT circles, but they may be new to some utilities that have only recently begun to dig into cyber security issues.
1. To ensure security, partner wisely and build to industry standards
Building to industry standards such as PCI DSS is one way to ensure proper security. But these days the simplest way may be to outsource part of the security, in effect, by using a cloud-based platform to host your compute resources and databases. That’s the strategy the North American Wiser team at Schneider Electric is following. Rather than build all of our own IT infrastructure, we opted to run on top of the Microsoft Azure cloud platform.
In so doing, we can take advantage of Microsoft’s expertise in providing security, which is significant. I know some get skittish about security when it comes to cloud resources. But if you’re honest with yourself, you’ll have to admit that the likes of Microsoft or Amazon or other top cloud providers know more about how to provide effective cyber security than most companies, probably even yours. Given the clients they serve, they have business incentives to ensure tight security, and the resources to dedicate to the mission.
A post on the Microsoft Technet site, “10 Things to know about Azure Security,” nicely summarizes some of the security steps Microsoft has taken with Azure. It’s compelling stuff.
2. Test thoroughly and stay up to date with patches
If you go with a cloud provider, you don’t have to deal with this next step, but if not – be prepared to keep up with security patches. Application vendors routinely come out with patches to fix the bugs that are bound to crop up in any piece of software. While some bugs simply cause problems with the application, others open the door for intruders to break in.
That means utilities need a way to stay on top of patches as they come out and to assess the severity of the vulnerabilities those patches are intended to fix. Some vulnerabilities may need to be patched right away, while others can wait for a routine maintenance cycle. The key is to have a consistent process in place so you can make that call.
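The triage process described above can be sketched in a few lines. This is a hypothetical example, not any particular vendor’s scoring scheme: the function name, the CVSS-style thresholds, and the `internet_facing` flag are all illustrative assumptions.

```python
# Hypothetical patch-triage rule: thresholds and category names are
# illustrative assumptions, not taken from any specific advisory format.

def triage_patch(cvss_score: float, internet_facing: bool) -> str:
    """Decide how urgently a patch should be applied.

    cvss_score: severity from the vendor advisory (0.0 - 10.0 scale).
    internet_facing: True if the affected system is reachable from outside.
    """
    if cvss_score >= 9.0 or (cvss_score >= 7.0 and internet_facing):
        return "emergency"          # patch immediately, outside the normal cycle
    if cvss_score >= 4.0:
        return "next-maintenance"   # schedule for the routine maintenance window
    return "backlog"                # low risk; batch with other updates

# A critical bug gets patched right away even on an internal system:
print(triage_patch(9.8, internet_facing=False))  # emergency
# A moderate bug on an internal system can wait for the maintenance cycle:
print(triage_patch(5.0, internet_facing=False))  # next-maintenance
```

The point is not the specific thresholds but having a written-down, consistent rule, so the “patch now or wait?” call isn’t made ad hoc each time.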
It’s also a good idea to have a third-party firm regularly conduct penetration testing on your environment, to see if they can find any holes. No matter how good you think your own testing team is, third-party testing provides a good reality check.
3. Employ effective identity and access management tools
Whether you go with an on-premises or cloud solution, you need to be able to ensure that only authorized users are allowed access to applications and data. Even further, you need to limit access to only the apps and data each user needs to do their job. (This is known as the principle of least privilege, which is discussed in this blog post, accompanying a video on utility security.)
That takes an effective identity and access management (IAM) platform. Such a platform should be able to ensure you can determine who a user is, what their level of access should be, as well as where they are and what device they’re using. The idea is to ensure you don’t allow access to resources unless you’re confident that both the user is authentic and the connection is secure. For example, while you’d let a financial analyst access a customer record from a laptop at headquarters, if the same analyst was logging in from an iPad at a coffee shop over a public Wi-Fi connection you would probably deny access – because you can’t be sure that connection is secure.
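The decision logic in the analyst example can be sketched as a simple policy check. This is a minimal illustration, not any IAM product’s API: the role names, resource names, and attributes like `device_managed` and `network_trusted` are all hypothetical.

```python
# Minimal sketch of a context-aware, least-privilege access decision.
# All names (roles, resources, attributes) are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # e.g. "financial_analyst"
    resource: str           # e.g. "customer_record"
    device_managed: bool    # is this a corporate-managed device?
    network_trusted: bool   # trusted network (HQ/VPN), not public Wi-Fi?

# Least privilege: each role maps only to the resources it needs.
ROLE_PERMISSIONS = {
    "financial_analyst": {"customer_record", "billing_report"},
    "field_technician": {"work_order"},
}

def allow(req: AccessRequest) -> bool:
    # 1. Least privilege: is the resource in this role's allowed set?
    if req.resource not in ROLE_PERMISSIONS.get(req.user_role, set()):
        return False
    # 2. Context: sensitive data requires a managed device on a
    #    trusted connection, because who the user is isn't enough --
    #    the connection itself must also be trustworthy.
    return req.device_managed and req.network_trusted

# Analyst on a corporate laptop at headquarters: allowed.
print(allow(AccessRequest("financial_analyst", "customer_record", True, True)))   # True
# Same analyst on an iPad over coffee-shop Wi-Fi: denied.
print(allow(AccessRequest("financial_analyst", "customer_record", False, False))) # False
```

A real IAM platform evaluates far richer signals (authentication strength, device posture, geolocation), but the two-step shape – role entitlement first, connection context second – is the core idea.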
The IAM platform should also be able to extend to any third parties who may need access to your systems. And, by the way, you should vet those third parties to ensure their own security policies are up to snuff.
4. Follow the principle of defense in depth and segmentation
Finally, there’s an age-old adage that security needs to be layered, or what’s known as defense in depth. That is, you need multiple levels of security such that if an intruder manages to get through, say, an outside perimeter firewall, there’s another layer deeper inside the network – maybe an intrusion prevention system – to thwart the attack.
It’s also essential to segment the network, keeping the most sensitive and valuable resources away from portions of the network that have access to the public Internet. Customer credit card data is one good example; keeping it in its own secured area is essential to achieving PCI DSS compliance.
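A segmentation policy like the one above can be expressed as an explicit allow-list of zone-to-zone flows, with everything else denied by default. The zone names and rule table below are hypothetical, meant only to illustrate keeping cardholder data walled off from internet-facing segments.

```python
# Hypothetical segmentation policy: zone names and allowed flows are
# illustrative, not drawn from any specific network design.

# Which source zones may open connections to which destination zones.
# Anything not listed here is denied by default.
ALLOWED_FLOWS = {
    ("dmz", "app_zone"),             # public-facing web tier -> app tier
    ("app_zone", "cardholder_zone"), # app tier -> payment data, tightly controlled
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# The internet-facing DMZ can never reach cardholder data directly:
print(flow_permitted("dmz", "cardholder_zone"))      # False
print(flow_permitted("app_zone", "cardholder_zone")) # True
```

This default-deny structure is also what a PCI DSS assessor will look for: traffic reaches the cardholder data environment only through explicitly documented, controlled paths.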
One last thing to keep in mind: the job of providing proper security is never “finished.” Remaining vigilant in the face of security threats requires that you constantly follow sound processes and pay attention to detail. We’re dealing with an enemy that seemingly never sleeps, but if you stay on top of your game, hopefully you will.
How about storing data in the cloud? Utilities could store data in the cloud to reap benefits like higher security, data centralization, scalability, easy integration, etc. Cloud-based storage ensures backup and recovery of data are in place, and can even provide full data administration services that take care of indexing and normalizing your data as well. What are your views on this?
I am a big advocate of storing data in the cloud — despite the additional risk that utilities perceive exists with such action. It was somewhat veiled in this post but there are references to Azure (Microsoft Cloud) that somewhat demonstrate my lean. I think utilities have so much to gain by moving to the cloud not just for the benefits you mention, but also for the new opportunities that could be generated from having all their data in one place and appropriately linked.
At some point, I may dedicate a post to just the topic you suggest!