
Control System Attacks: The Power of Virtual Digital Twins & Configuration Files

When we look at operational technology (OT), system failures and outages are to be expected; they are often unavoidable in complex control system operations. However, these operational 'hiccups' may be indications of OT cybersecurity incidents.

One of the greatest challenges we face as OT security practitioners is distinguishing between a standard system issue and a targeted cyber-attack. This predicament creates operator and responder fatigue, and it echoes the boy who cried wolf: is it a false alarm or a genuine threat? Do we even know which of our assets are vulnerable? Answering these questions often requires looking beyond a single asset, class or condition. The fable finds a striking parallel in our OT environments, where the 'wolf' of a potential or actual cyber-attack often masquerades as the 'boy's cries' of typical operations and system failures, confusing and delaying effective attribution and response.

To fully articulate the extent of this issue, we need to explore the Purdue Model, which has remained the gold standard for Industrial Control System (ICS) architecture, despite rumblings that it is outdated. The Purdue Model for Control Hierarchy (an industrial reference model) provides a structured framework for process control, separating network functions and assets into multiple layers. From the physical process layer, where field devices like sensors and actuators are found, up through basic and supervisory control with Programmable Logic Controllers (PLCs) and Human-Machine Interfaces (HMIs), to the business planning and logistics systems at the top level, each layer represents a unique set of assets and communication protocols. While this segregation is beneficial for structured operations, it poses a challenge for comprehensive cybersecurity. Security tools, techniques and procedures (TTPs) effective at one layer might be nonexistent, inadequate, or even incompatible and detrimental at another. There will always be assets that cannot be "secured" from a technological standpoint.
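To make the layering concrete, here is a minimal sketch in Python that models the hierarchy as a simple lookup table. The level names follow the model; the asset lists and the PURDUE_LEVELS/level_of names are illustrative assumptions for this example, not an authoritative mapping.

```python
# A minimal sketch of the Purdue Model hierarchy as a lookup structure.
# Level numbers and asset examples are illustrative, not exhaustive.

PURDUE_LEVELS = {
    0: {"name": "Physical process", "assets": ["sensor", "actuator"]},
    1: {"name": "Basic control", "assets": ["PLC", "RTU"]},
    2: {"name": "Supervisory control", "assets": ["HMI", "SCADA server"]},
    3: {"name": "Site operations", "assets": ["historian", "MES"]},
    4: {"name": "Business planning & logistics", "assets": ["ERP"]},
}

def level_of(asset_type: str):
    """Return the Purdue level where an asset type typically lives."""
    for level, info in PURDUE_LEVELS.items():
        if asset_type in info["assets"]:
            return level
    return None

print(level_of("PLC"))  # -> 1
print(level_of("HMI"))  # -> 2
```

A security control written for Level 4 (IT-style endpoint agents, for instance) may have no equivalent at Levels 0-1, which is exactly the gap the model makes visible.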

Some of the OT asset discovery and management solutions on the market today use Deep Packet Inspection (DPI) to gather asset information. This technology emerged from the IT world as a tool for examining the data part (and not just the header) of a packet as it passes an inspection point. The process identifies the packet's content to determine whether it complies with predetermined rules. The wrinkle is that these rules typically rely on known conditions to search for, which does little against a zero-day exploit.
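Here is a minimal sketch of that signature-matching idea, with hypothetical byte patterns standing in for real rules (production DPI engines use far richer rule languages, such as Snort or Suricata signatures):

```python
# A minimal sketch of signature-based deep packet inspection: scan the
# payload (not just the header) against known-bad byte patterns.
# The signatures below are hypothetical placeholders for this example.

KNOWN_BAD_SIGNATURES = {
    b"\x5a\x00\x10": "suspicious function code",     # hypothetical pattern
    b"DELETE_CONFIG": "destructive command string",  # hypothetical pattern
}

def inspect_payload(payload: bytes):
    """Return a description of every known signature found in the payload."""
    return [desc for sig, desc in KNOWN_BAD_SIGNATURES.items() if sig in payload]

print(inspect_payload(b"...DELETE_CONFIG..."))  # ['destructive command string']
# Note: a zero-day with no matching signature produces no alert at all.
```

The last comment is the crux: anything absent from the rule set passes silently, which is why DPI alone cannot carry OT security.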

In the realm of operational technology, DPI tools safeguard and monitor the network traffic within and across the upper layers of the Purdue Model. These layers are being transformed into more open, IT-like hardware and software under the influence of Industry 4.0. However, the complexity and diversity of protocols used deeper down in OT, the latency sensitivity, and the physical differences between "programming IT" and "configuring OT" assets require more of a "CMDB for OT" approach to cybersecurity. Given the sheer nature of OT systems, the ability to safely and seamlessly access the configuration files of OT assets for a quick "drill down" examination is imperative for cybersecurity incident response and research, and it is extremely beneficial in supplementing operational efficiency. Relying solely on DPI-gathered asset information can fall short of providing the complete tapestry of risk, vulnerability and asset visibility. It is akin to reading every fourth page of a novel: vital plot developments, characters' motivations and subtle foreshadowing are obscured, all crucial to understanding the complete story.

The packet headers, serial numbers and network information gathered through DPI network analysis tools provide a breadth of insight into networked asset communications, adversarial movement and potential anomalies. For instance, they can help identify an unauthorized device attempting to communicate within the boundaries of the system or flag unusual data transmission patterns. However, they don't necessarily illuminate the complete operational context of these communications. Without an understanding of the interplay among all the devices in a control system (up and down the layers), along with the operational conditions at the time of an incident, these data points can leave analysts wanting, generating more questions than answers.

Configuration files provide a deeper source for understanding a system's setup, including the roles and behaviors of the various devices, their communication patterns, and the expected operational parameters and conditions. Configuration files - the DNA of our systems - hold a wealth of untapped data: they can reveal the system's design, architecture and potential vulnerabilities. Analyzing these files yields far more detailed insight, enabling a quicker, more effective response to potential issues or attacks. It's like having a comprehensive synopsis of the novel, including all characters, their motivations and the entire plot.
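As a rough illustration, the sketch below mines a configuration snapshot for exactly this kind of operational context. The JSON layout and every field name (device, io_points, alarm_high, fail_state) are hypothetical; real OT configuration formats are vendor-specific and often proprietary.

```python
# A minimal sketch of extracting operational context from a configuration
# file. The structure and field names are invented for this example.
import json

raw = """
{
  "device": {"name": "PLC-7", "firmware": "2.4.1", "role": "basic control"},
  "io_points": [
    {"tag": "TT-101", "type": "temperature", "alarm_high": 95.0},
    {"tag": "FV-202", "type": "valve", "fail_state": "closed"}
  ]
}
"""

config = json.loads(raw)

# Role and revision level come straight from the file - no traffic capture needed.
print(config["device"]["role"], config["device"]["firmware"])

# Expected operational parameters for each I/O point.
for point in config["io_points"]:
    print(point["tag"], point.get("alarm_high", "n/a"))
```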

By employing emerging virtual digital twin technology (a digital replica of the target system), we can go further and analyze these configuration files in a safe, non-disruptive environment. Using the configuration files, we can definitively tell what is connected, what applications are running and at what revision level, and we can trace the upstream and downstream effects of a change. Configuration file analysis also gives us a baseline: we can see which keys were changed, when, and in what sequence. Historically, a small adjustment to the tolerance gap of a process that produced unwanted production results could elude discovery; with the ability to drill down, examine and compare configuration files, it no longer can.
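Here is a minimal sketch of that baselining step under assumed, hypothetical key names (the tolerance_gap change mirrors the scenario above): flatten two configuration snapshots into key paths and report what changed.

```python
# A minimal sketch of configuration baselining: compare a known-good
# snapshot against the current one and report changed keys.

def flatten(cfg, prefix=""):
    """Flatten nested dicts into {'a.b.c': value} paths."""
    out = {}
    for key, value in cfg.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path + "."))
        else:
            out[path] = value
    return out

baseline = {"process": {"tolerance_gap": 0.05, "setpoint": 120}}
current  = {"process": {"tolerance_gap": 0.25, "setpoint": 120}}

old, new = flatten(baseline), flatten(current)
for path in sorted(old.keys() | new.keys()):
    if old.get(path) != new.get(path):
        print(f"{path}: {old.get(path)} -> {new.get(path)}")
# prints: process.tolerance_gap: 0.05 -> 0.25
```

A drift report like this, run against the digital twin rather than the live controller, is what turns a subtle production anomaly into a named, timestamped configuration change.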

When configuration file analysis is paired with a virtual twin of the system, researchers and analysts can safely investigate issues, test hypotheses and develop countermeasures without risking the live system. In the continuously evolving landscape of OT security, these techniques offer a promising way forward: an unprecedented opportunity for deep analysis, akin to a forensic examination without the usual confines. They propel us from being reactive to proactive, from battling uncertainty to commanding confidence.

Unfettered access to and analysis of configuration files illuminates the system's inherent structure and significantly shortens our mean time to respond (MTTR) in mitigating OT cyber events. In our constant pursuit of enhanced OT cybersecurity, the pairing of virtual twins with the last known "good" configuration files becomes a potent tool to unmask the 'wolf' and defend our systems effectively.

About the Author

Edward Liebig is the Global Director of Cyber Ecosystem in Hexagon's Asset Lifecycle Intelligence division. His career spans over four decades, with more than 30 of those years focused on cybersecurity. He has served as Chief Information Security Officer and cybersecurity captain for several multinational companies and has led Professional and Managed Security Services for the US critical infrastructure sector at two global system integrators. With this unique perspective, Edward leads the Cybersecurity Alliances for Hexagon PAS Cyber Integrity, leveraging his diverse experience to forge partnerships with service providers and technologies whose collective strengths best address clients' security needs. Mr. Liebig is an adjunct professor at Washington University in St. Louis, where he teaches in the Master of Cybersecurity Management degree program.
