Control and game-theoretic methods for secure cyber-physical-human systems
This work focuses on systems comprising tightly interconnected physical and digital components. These aptly named cyber-physical systems will form the core of the Fourth Industrial Revolution. As such, cyber-physical systems will be called upon to interact with humans, either in a cooperative fashion, or as adversaries to malicious human agents that seek to corrupt their operation. In this work, we will present methods that enable an autonomous system to operate safely among human agents and to gain an advantage in cyber-physical security scenarios by employing tools from control, game, and learning theory. Our work revolves around three main axes: unpredictability-based defense, operation among agents with bounded rationality, and verification of safety properties for autonomous systems. Taking advantage of the complex nature of cyber-physical systems, our unpredictability-based defense work will focus both on attacks on actuating and sensing components, which will be addressed via a novel switching-based Moving Target Defense framework, and on Denial-of-Service attacks on the underlying network, which will be addressed via a zero-sum game that exploits redundant communication channels. Subsequently, we will take a more abstract view of complex-system security by exploring the principles of bounded rationality. We will show how attackers of bounded rationality can coordinate to induce erroneous decisions in a system while remaining stealthy. Methods of cognitive hierarchy will be employed for decision prediction, while closed-form solutions of the underlying optimization problem and the conditions of convergence to the Nash equilibrium will be investigated. The principles of bounded rationality will be brought to control systems via the use of policy iteration algorithms, enabling data-driven attack prediction in a more realistic fashion than what can be offered by game-equilibrium solutions.
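To make the zero-sum channel-selection idea concrete, the following is a minimal sketch, not the thesis's actual formulation: a defender transmits on one of several redundant channels while a jammer blocks one, and the defender's optimal mixed strategy is computed by the standard linear-programming reduction of a zero-sum matrix game. The payoff matrix here (payoff 1 when the transmission gets through) is a hypothetical choice for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and game value for the row (maximizing)
    player of a zero-sum matrix game, via linear programming."""
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy) and v (game value); maximize v.
    c = np.concatenate([np.zeros(m), [-1.0]])   # linprog minimizes, so use -v
    # For every attacker column j: v <= sum_i x_i A[i, j]
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], -res.fun

# Hypothetical setting: three redundant channels, the jammer blocks one,
# and the defender scores 1 whenever it picks a different channel.
A = np.ones((3, 3)) - np.eye(3)
strategy, value = solve_zero_sum(A)
```

Under this payoff structure the equilibrium strategy randomizes uniformly over the channels, which is exactly the unpredictability the defense relies on: any deterministic channel choice would be jammed with certainty.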
The issue of intelligence in security scenarios will be further considered via concepts of learning manipulation, through a proposed framework in which bounded rationality is understood as a hierarchy in learning, rather than optimizing, capability. This viewpoint will allow us to propose methods of exploiting the learning process of an imperfect opponent in order to affect their cognitive state via the use of tools from optimal control theory. Finally, in the context of safety, we will explore verification and compositionality properties of linear systems that are designed to be added to a cascade network of similar systems. To obviate the need for knowledge of the system's dynamics, we will state decentralized conditions that guarantee a specific dissipativity property for the system, and we will show that these conditions can be verified by reinforcement learning techniques. Subsequently, we will propose a framework that employs a hierarchical solution of temporal logic specifications and reinforcement learning problems for optimal tracking.
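As a toy illustration of the policy-iteration machinery invoked above for learning-based control, the following sketch runs policy iteration on a scalar discrete-time LQR problem; the dynamics, weights, and initial gain are illustrative assumptions, not the systems studied in the thesis. Each pass alternates policy evaluation (a scalar Lyapunov equation) with policy improvement (minimizing the one-step Q-function), converging to the optimal Riccati gain.

```python
def policy_iteration_lqr(a, b, q, r, k0, iters=50):
    """Policy iteration for the scalar discrete-time LQR problem
    x_{k+1} = a*x_k + b*u_k with stage cost q*x^2 + r*u^2,
    searching over linear policies u = -K*x."""
    K = k0
    for _ in range(iters):
        ac = a - b * K                      # closed-loop dynamics
        assert abs(ac) < 1, "current policy must be stabilizing"
        # Policy evaluation: cost-to-go P solves P = q + r*K^2 + ac^2 * P
        P = (q + r * K**2) / (1 - ac**2)
        # Policy improvement: gain minimizing the one-step Q-function
        K = a * b * P / (r + b**2 * P)
    return K, P

# Illustrative unstable plant (a > 1) with a stabilizing initial gain.
K, P = policy_iteration_lqr(a=1.2, b=1.0, q=1.0, r=1.0, k0=0.5)
```

The same evaluation/improvement loop underlies the data-driven variants referenced in the abstract, where the Lyapunov step is replaced by estimates obtained from measured trajectories rather than from the model `(a, b)`.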