# Working Paper

## Abstract

A substantial amount of research on the security of cyber-physical systems assumes that the physical system model is available to the adversary. In this paper, we argue that such an assumption can be relaxed, since the adversary may still be able to identify the system model by observing the control input and sensory data of the system. In this setup, an attack that aims to identify the system model from knowledge of the input-output data can be categorized as a Known-Plaintext Attack (KPA) in the information security literature. We first prove a necessary condition and a sufficient condition under which the adversary can successfully identify the transfer function of the physical system. We then provide a low-rank controller design that renders the system unidentifiable to the adversary, while trading off the LQG performance.

## Introduction

Cyber-Physical Systems (CPSs) refer to the embedding of widespread sensing, networking, computation, and control into physical spaces with the goal of making them safer, more efficient, and more reliable. Driven by the miniaturization and integration of sensing, communication, and computation in cost-effective devices, CPSs are bound to transform several industries such as aerospace, transportation, built environment, energy, health care, and manufacturing, to name a few. While the use of dedicated communication networks has so far sheltered systems from the outside world, the use of off-the-shelf networking and computing, combined with the unattended operation of a plethora of devices, provides several opportunities for malicious entities to launch attacks on CPSs. A wide variety of motivations exist for attacking a CPS, ranging from economic reasons, such as financial gain, all the way to terrorism. Any attack on a safety-critical CPS may significantly hamper the economy and lead to the loss of human lives. While the threat of attacks on CPSs tends to be underplayed at times, the Stuxnet worm provided a clear glimpse of what is to come (Chen 2010, Fidler 2011).

A substantial amount of research effort has been dedicated to identifying possible security vulnerabilities of CPSs and developing countermeasures. To this end, many attack models, such as the stealthy attack (also referred to as the false data injection attack or zero-dynamics attack in the literature) (Liu 2009, Sundaram 2011, Pasqualetti 2013, Fawzi 2014, Teixeira 2015), the replay attack (Mo 2014, Mo 2015) and the covert attack (Roy 2011), have been proposed by various researchers. Teixeira et al. (Teixeira 2015a) propose a characterization of different attack models based on the attacker’s resources, which are divided into three categories: knowledge of the system model, knowledge of the real-time control and sensory data (disclosure resources), and the capability to modify the control and sensory data (disruptive resources). Their results illustrate that many attack models proposed in the literature require the adversary to know the system model. For example, in the stealthy attack scenario (Pasqualetti 2013), the adversary injects an external control input into the physical system and then removes the physical system’s response to this malicious input from the sensors’ measurements. The system operator will not be able to detect the attack if the response to the malicious control input is removed perfectly. However, such an attack requires the adversary to know a perfect model of the physical system, which may be difficult to acquire in many practical scenarios, since the modeling information is usually stored inside the controller. On the other hand, we argue that in many situations the control and sensory data are much easier to acquire, because these data are typically not encrypted in many CPSs (Koscher 2010). Furthermore, even if the control and sensory data are encrypted, it may be easier to break the security of sensors and actuators due to their low computational capability. Thus, for the adversary, the disclosure resources may be more available than the model knowledge.
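The model requirement behind the stealthy attack can be made concrete with a toy scalar example (all quantities here, the plant parameters $$(a, b)$$, the feedback gain $$K$$, and the injected signal, are hypothetical): by linearity, an adversary who knows the exact plant model can simulate the response to its injected input and subtract it from the sensor readings, so the operator sees a measurement sequence identical to the attack-free one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar plant x[k+1] = a*x[k] + b*(u[k] + atk[k]) + w[k],
# measurement y[k] = x[k] + v[k], static feedback u[k] = -K*y[k].
a, b, K, N = 0.9, 0.5, 0.6, 200
w = 0.1 * rng.standard_normal(N)          # process noise
v = 0.1 * rng.standard_normal(N)          # measurement noise
atk = 0.3 * np.ones(N)                    # adversary's injected input

def run(inject):
    x = 0.0        # true plant state
    xa = 0.0       # adversary's simulated response to its own input
    reported = []
    for k in range(N):
        y = x + v[k]
        # Stealthy attack: subtract the model-predicted response so the
        # operator sees measurements consistent with no attack at all.
        y_rep = y - xa if inject else y
        reported.append(y_rep)
        u = -K * y_rep                    # operator acts on reported data
        x = a * x + b * (u + (atk[k] if inject else 0.0)) + w[k]
        if inject:
            xa = a * xa + b * atk[k]      # requires the exact model (a, b)
    return np.array(reported)

y_clean = run(inject=False)
y_attacked = run(inject=True)
```

By linearity, the attacked plant state equals the clean state plus the simulated response `xa`, so the two reported sequences coincide and any detector driven by the measurements is blind to the attack; an imperfect model would leave a residual the operator could detect.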

In this paper, we discuss whether the adversary can use its disclosure resources to gain the model knowledge by means of system identification. We model the CPS as a linear feedback control system, which is illustrated in Fig. \ref{fig:feedback}. The adversary is assumed to use only its disclosure resources. In other words, it can only passively observe the control input $$u$$ and the sensory data $$y$$ and cannot inject any disturbance into the system. The goal of the adversary is to learn the physical system model $$\mathcal{G}(z)$$, which would further enable it to launch other attacks, such as the stealthy attack and the covert attack. Such an attack model is very similar to the Known-Plaintext Attack (KPA) studied in information security, where the adversary has samples of both the plaintext and the corresponding ciphertext and wants to deduce the encryption key. In our case, one can view the system model, the control input $$u$$, and the sensory data $$y$$ as the encryption key, plaintext, and ciphertext, respectively.
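As a rough sketch of such a passive attack (a hypothetical first-order plant and controller, not the general identification procedure analyzed in this paper), the adversary can fit an ARX model to the observed input-output samples by ordinary least squares; a dynamic feedback law provides enough excitation for the fit to recover the plant parameters from the noise-driven closed loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order plant: y[k+1] = a*y[k] + b*u[k] + w[k]
a_true, b_true = 0.9, 0.5
N = 50_000
y = np.zeros(N)
u = np.zeros(N)
for k in range(1, N - 1):
    # Hypothetical dynamic feedback law; its memory (the y[k-1] term)
    # keeps the regressors from being collinear in closed loop.
    u[k] = -0.5 * y[k] + 0.4 * y[k - 1]
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.1 * rng.standard_normal()

# Passive adversary: least-squares ARX fit from the observed (u, y) only.
Phi = np.column_stack([y[1:-1], u[1:-1]])       # regressors [y[k], u[k]]
a_hat, b_hat = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
```

The process noise alone excites the loop here; no probing input is injected, which is exactly what makes the attack purely passive.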

As a result, we will focus on KPA in this paper. The main contributions of the paper are twofold:

1. We provide a necessary condition and a sufficient condition, under which the system is vulnerable to KPA, i.e., the adversary can successfully identify the system model $$\mathcal{G}(z)$$. The results can be viewed as an application of classical system identification (Ng 1977, Anderson 1967, Anderson 1982, Anderson 1969, Glover 1973) to the closed-loop system described in Section \ref{sec:sysid}.

2. We design a countermeasure to KPA by using a “low-rank” controller design strategy for $$\mathcal{K}(z)$$ while trading off the LQG control performance.
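The intuition behind the countermeasure can be previewed on a hypothetical scalar plant (the actual design in this paper concerns the controller transfer function $$\mathcal{K}(z)$$ and its LQG trade-off): under a static feedback law, the input the adversary observes is a fixed linear function of the output, so the regressor matrix of any least-squares fit loses rank and the plant parameters cannot be separated from the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order plant: y[k+1] = a*y[k] + b*u[k] + w[k]
a_true, b_true = 0.9, 0.5
N = 2000
y = np.zeros(N)
u = np.zeros(N)
for k in range(N - 1):
    u[k] = -0.8 * y[k]   # static feedback: u is a fixed multiple of y
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.1 * rng.standard_normal()

# The adversary's regressor matrix has two exactly proportional columns
# (u = -0.8 y), so it is rank deficient and (a, b) are unidentifiable:
# any pair with a + b*(-0.8) = a_true + b_true*(-0.8) fits equally well.
Phi = np.column_stack([y[:-1], u[:-1]])
rank = np.linalg.matrix_rank(Phi)
```

Only the closed-loop combination of the plant and controller parameters is visible to a passive observer in this case, which is the flavor of unidentifiability the low-rank design aims for.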

The rest of the paper is organized as follows: In Section \ref{sec:model}, we model the system as a linear feedback control system subject to Gaussian process and measurement noise. In Section \ref{sec:sysid}, we provide a necessary condition and a sufficient condition under which the adversary can identify the system model $$\mathcal{G}(z)$$. We further provide a numerical algorithm for the adversary to compute $$\mathcal{G}(z)$$. In Section \ref{sec:countermeasure}, we present a controller design that is resilient to KPA while incurring only minimal control performance loss.