UsingEnglish.com explains the “watching sausage being made” idiom in this way: “If something is like watching sausages getting made, unpleasant truths about it emerge that make it much less appealing. The idea is that if people watched sausages getting made, they would probably be less fond of them.”
LESS fond?! There’s a Discovery Channel “How It’s Made” episode on hot dogs. Normally this TV show tends to be soothing…mesmerizing…oddly satisfying as you watch simple raw materials rhythmically turn into beautiful finished goods like a well-orchestrated ballet. But this hot dog episode is horrifying. It will make the most ardent carnivores among us queasy. Some things are definitely better off unseen.
With 10-plus years of product development experience, I know that “sausage making” can be an apt analogy for what often happens when a product goes from concept to “store shelves”. It can be messy, wasteful, unduly iterative, and ugly. But, in the end, that can still be OK. For most products, regardless of how haphazard or inefficient the development process is, the end product can still be good enough for market success. For data center remote monitoring platforms, however, this is unacceptable. HOW the platform gets developed…how disciplined, controlled, and thorough the process is…is critically important to how safe the platform is from a cyber security perspective. So data center owners who are considering using a third party to help manage and monitor their facility infrastructure systems remotely should find out how these vendors develop their offers and manage their offer creation processes.
Digital remote monitoring platforms work by having connected data center infrastructure systems send a continuous stream of data about themselves to a gateway, which forwards it outside the network or to the cloud. This data is then monitored and analyzed by people and data analytics engines. Finally, there is a feedback loop from the monitoring team and systems back to the data center operators. The operators have access to monitoring dashboards from inside the network via the gateway, or to the platform’s cloud when outside the network via a mobile app or a computer in a remote NOC.
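As a rough illustration of that data flow, the sketch below models a gateway that buffers device readings and hands them off as a single payload. All names here (the class, device IDs, metric names) are hypothetical, invented for illustration rather than drawn from any specific vendor’s platform; transport details like TLS endpoints and authentication are deliberately left behind an injected forwarder.

```python
import json
import time
from typing import Callable, Dict, List


class MonitoringGateway:
    """Hypothetical sketch of a gateway that collects infrastructure
    telemetry and forwards it outside the network (e.g., to a cloud
    monitoring service)."""

    def __init__(self, forward: Callable[[str], None]) -> None:
        self._forward = forward          # injected transport (kept abstract)
        self._buffer: List[Dict] = []

    def ingest(self, device_id: str, metric: str, value: float) -> None:
        # Each connected system streams readings into the gateway.
        self._buffer.append({
            "device": device_id,
            "metric": metric,
            "value": value,
            "ts": time.time(),
        })

    def flush(self) -> int:
        # Forward buffered readings as one JSON payload, then clear the buffer.
        count = len(self._buffer)
        if count:
            self._forward(json.dumps(self._buffer))
            self._buffer.clear()
        return count


# Usage: capture forwarded payloads locally instead of sending them anywhere.
sent: List[str] = []
gw = MonitoringGateway(forward=sent.append)
gw.ingest("ups-01", "battery_temp_c", 27.4)
gw.ingest("crac-02", "supply_air_c", 18.1)
flushed = gw.flush()
```

In a real platform the forwarder would be an authenticated, encrypted channel; injecting it keeps the sketch testable without a network.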
There is understandably a concern that these externally connected monitoring platforms could be a successful avenue of attack for cyber criminals. Cyber security threats are always present, and the nature of cybercriminal attacks is constantly evolving. Preventing these attacks from causing theft, loss of data, and system downtime requires a secure monitoring platform and constant vigilance by a dedicated DevOps team. Before selecting and implementing a digital monitoring platform, evaluate vendor solutions not just on features and functions, but also on their ability to protect data and the system from cyber attacks. Knowing how secure a platform is requires an understanding of how it is developed, deployed, and operated.
A Secure Development Lifecycle (SDL) is a process by which security is considered and evaluated throughout the development lifecycle of products and solutions. It was originally developed and proposed by Microsoft. The use of an SDL process to govern development, deployment, and operation of a monitoring platform is good evidence that the vendor is taking appropriate measures to ensure security and regulatory compliance. The vendor should be using a process that is consistent with ISO/IEC 27034. There are eight key practices to look for in an SDL process. They are described briefly here…
There should be a continuous training program that prepares employees to design, develop, test, and deploy more secure solutions.
The cyber security features and customer security requirements to be included in product development should be enumerated clearly and in detail.
Security architecture documents should be produced that follow industry-accepted design practices to deliver the security features the customer requires. These documents are reviewed, and threat models are created to identify, quantify, and address the potential security risks.
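As a toy illustration of the “identify, quantify, and address” step, the sketch below ranks threats from a hypothetical threat model by a simple likelihood × impact score. The STRIDE-style categories and the scores are invented for illustration, not drawn from any real assessment; real threat-modeling methodologies use richer scoring.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Threat:
    name: str
    category: str      # e.g., "Tampering", "Information disclosure"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # Simple quantification: likelihood times impact.
        return self.likelihood * self.impact


def rank_threats(threats: List[Threat]) -> List[Threat]:
    """Order threats so the highest-risk items are addressed first."""
    return sorted(threats, key=lambda t: t.risk, reverse=True)


# Hypothetical threats against a remote monitoring platform.
model = [
    Threat("Stolen gateway credentials", "Spoofing", 3, 5),
    Threat("Unencrypted telemetry in transit", "Information disclosure", 4, 4),
    Threat("Tampered firmware update", "Tampering", 2, 5),
]
ranked = rank_threats(model)
```

The point of the exercise is the ordering: design and mitigation effort goes to the top of the ranked list first.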
Implementation of the security architecture design into the product follows the detailed design phase and is guided by documented best practices and coding standards. A variety of security tools should be used as part of the development process, including static, binary, and dynamic analysis of the code.
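To make the static analysis idea concrete, the snippet below uses Python’s standard ast module to flag calls that a coding standard might ban outright. The banned list is a made-up example of a coding-standard rule; commercial and open-source analyzers go far deeper than this, but the principle, inspecting code without running it, is the same.

```python
import ast
from typing import List

# Hypothetical coding-standard rule: these calls are banned outright.
BANNED_CALLS = {"eval", "exec"}


def find_banned_calls(source: str) -> List[str]:
    """Return 'name:line' for each banned call found in the source code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Match direct calls to a bare name, e.g. eval(...).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                findings.append(f"{node.func.id}:{node.lineno}")
    return findings


sample = "x = eval(user_input)\nprint(x)\n"
issues = find_banned_calls(sample)
```

A check like this would typically run automatically on every commit, so violations are caught during implementation rather than in testing.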
Security testing of the product implementation is performed from the perspective of the threat model and verifies the robustness of the implementation. Regulatory requirements, as well as the deployment strategy, are included as part of the testing.
Security documentation should be developed that defines how to more securely install, commission, maintain, manage, and decommission the product or solution. Security artifacts are reviewed against the original requirements and against the security level that was targeted or specified.
The project development team or its deployment leader should be available to train and advise service technicians on how best to install and optimize security features. Service teams should be able to help customers install, manage, and upgrade products and solutions throughout the life cycle.
There should be a product “Cyber Emergency Response Team” that manages vulnerabilities and supports customers in the event of a cyber incident. Ideally, this team is the same group of people that developed the application, so that everyone involved knows the product in detail.
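A core part of “managing vulnerabilities” is tracking reported issues through to remediation, worst first. The sketch below is a minimal, hypothetical triage queue of the kind such a response workflow might maintain; the severity levels, states, and example vulnerabilities are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

# Lower number = triaged earlier.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}


@dataclass
class Vulnerability:
    vuln_id: str
    summary: str
    severity: str          # "critical" | "high" | "medium" | "low"
    status: str = "open"   # "open" -> "resolved"


class ResponseQueue:
    """Minimal triage queue: worst open vulnerabilities come first."""

    def __init__(self) -> None:
        self._items: Dict[str, Vulnerability] = {}

    def report(self, vuln: Vulnerability) -> None:
        self._items[vuln.vuln_id] = vuln

    def resolve(self, vuln_id: str) -> None:
        self._items[vuln_id].status = "resolved"

    def open_by_severity(self) -> List[Vulnerability]:
        open_items = [v for v in self._items.values() if v.status != "resolved"]
        return sorted(open_items, key=lambda v: SEVERITY_ORDER[v.severity])


# Usage: two hypothetical reports; one gets fixed, the critical one remains.
queue = ResponseQueue()
queue.report(Vulnerability("V-1", "Weak TLS cipher accepted", "medium"))
queue.report(Vulnerability("V-2", "Auth bypass in dashboard", "critical"))
queue.resolve("V-1")
worst = queue.open_by_severity()
```

Keeping the queue with the team that built the product shortens the loop between a report and a fix, which is the argument the paragraph above is making.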
White Paper 239, “Addressing Cyber Security Concerns of Data Center Remote Monitoring Platforms”, gets into more details including covering key elements related to developer training and management, as well as describing recommended design attributes to look for in the monitoring platform itself. Being aware of how remote monitoring platforms are developed, deployed, and maintained can help you make a better decision as to which one to implement. If you start thinking about how sausage is made as you interview a vendor, take that as a clear signal to go elsewhere!