An interesting trend is occurring in the operational technology (OT) space, particularly in industrial environments, one that IT professionals would do well to take notice of: IT-related equipment and software is, more and more, directly interacting with traditional OT layers of the enterprise. While this brings many advantages that are often described in terms of trends like the Internet of Things (IoT), it also requires new thinking about how such systems are managed.
Once upon a time, the equipment used in industrial environments bore no resemblance to anything you might find in IT spaces such as a data center. We had programmable logic controllers (PLCs), distributed control systems (DCS) and proprietary networking technologies to communicate with and between plant control systems. The device/sensor layer was not part of any network and was completely isolated.
While those control systems still exist today, they are often augmented with clients and servers running on more traditional IT operating systems connected to open networks that support software meant to communicate with upstream management systems. This certainly offers new levels of control and reporting capabilities. Typically, these systems communicate over some form of Ethernet network that is similar, in many ways, to those found throughout the business environment.
Essentially, you have a hybrid of traditional OT and IT technologies working together in the same space as part of a flatter and more open architecture. That’s the technology evolution often referred to as IT/OT convergence.
This technology convergence also merits an organizational or even cultural convergence to effectively leverage the evolution and mitigate the potential risks it introduces.
Traditionally, OT equipment, networks and software were managed by the OT departments, while IT handled enterprise networks and equipment in the data center and business offices. The disciplines were separate and, to some extent, even avoided each other’s areas of responsibility.
That worked fine in the traditional environment because the OT equipment didn’t change much and only connected to the IT enterprise in a hierarchical way. Control systems could be installed and remain largely unchanged for years, modified only to suit changes to a process or operating conditions. It was nothing like data centers and other IT environments, which were constantly evolving. There was no need for IT involvement. Even where network and compute assets did exist in the OT space, they were often isolated from the enterprise. Considerations such as updates, patches, software deployment and cybersecurity were largely ignored, and IT standards were not implemented.
As traditional IT technology finds its way to industrial environments, that’s increasingly no longer acceptable. Much of the technology needs the same kind of care and protection as the IT gear in the IT space with the same kind of management requirements and business continuity objectives as much of the data center infrastructure.
In many ways, the trends we refer to as “edge compute” and “edge networking” represent the dispersion of assets traditionally located in the data center to compute applications closer to the point of use, often in the OT space. They also speak to the inclusion of formerly isolated and unmanaged compute assets (“IT sprawl”) into a distributed, remote data center-like infrastructure, managed and monitored much as if the assets were grouped together in a single environment.
We are now drawing data from just about every piece of equipment in an industrial environment and feeding it to reporting systems that help keep close tabs on what’s happening, providing data to improve operations. Typically, systems built on traditional IT servers and networks do the bulk of that work, and those systems need consistent attention.
From my perspective, there are two critical aspects to ensuring those systems remain functional on a 24×7 basis, or close to it.
First, they need clean and continuous power, which means they need protection from an uninterruptible power supply (UPS) system – just like data center IT gear. Now that doesn’t necessarily mean that every OT component needs UPS backup, but those supporting control and data reporting systems certainly do, the so-called “brains” of the environment (a concept we covered in this post).
Second, you need a managed infrastructure in place to house the IT assets. That can take several forms, including using traditional IT racks, servers, power distribution units (PDUs), firewalls and the like. It also includes having standards around software deployment and maintenance, including patches and upgrades. It may also involve virtualization technology, if your company supports it, as most today do. Having one or more reference architectures for such application environments is very useful in this respect.
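As a deliberately simplified illustration of what such shared standards enable, the sketch below compares an edge site’s software inventory against an agreed patch baseline so IT and OT can spot assets that need attention. All component names and version numbers here are hypothetical; a real environment would pull this data from configuration-management tooling rather than hard-coded dictionaries.

```python
# Hypothetical sketch: flag edge-site components whose installed version
# trails a jointly agreed IT/OT patch baseline. Names and versions are
# illustrative, not drawn from any real product.

REFERENCE_BASELINE = {
    "hypervisor": "7.0.3",
    "historian": "2.4.1",
    "firewall_fw": "9.1.0",
}

def out_of_date(asset_inventory, baseline=REFERENCE_BASELINE):
    """Return the components whose installed version is behind the baseline."""
    def as_tuple(version):
        # Compare versions numerically, e.g. "2.10.0" > "2.9.1".
        return tuple(int(part) for part in version.split("."))
    return {
        name: installed
        for name, installed in asset_inventory.items()
        if name in baseline and as_tuple(installed) < as_tuple(baseline[name])
    }

# Example: one edge site's inventory, as a shared IT/OT report might list it.
site_a = {"hypervisor": "7.0.3", "historian": "2.3.0", "firewall_fw": "9.1.0"}
print(out_of_date(site_a))
```

Running this against `site_a` flags only the historian, since its installed version trails the baseline. The point of the exercise is not the code itself but the agreement behind it: once IT and OT settle on a common baseline and inventory format, checks like this can run across every edge site the same way they would in a data center.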
This does not mean it falls on OT groups to take on those management tasks. Rather, it provides IT and OT groups the opportunity to work together to come up with standards for how this infrastructure will be deployed and managed. OT groups can then deploy their applications but do so on edge compute environments that serve as part of a distributed IT infrastructure that meets their strategic needs, and that IT is willing and able to support.
With this kind of cooperation in place, companies are far more likely to achieve the business continuity objectives they seek for their industrial environments and the IT systems that support them.
This is just one of the issues I see as we travel further into an IoT world. To find out what others are thinking, Schneider Electric surveyed more than 2,500 business decision-makers around the world to get their vision of the IoT and the opportunities it presents. In our “IoT 2020 Business Report,” we sum up the findings, detail predictions for the future of IoT, outline ways organizations can realize immediate IoT value and more. Click here to download a free copy.