
Let’s not Trust Experience Blindly: Formal Monitoring of Humans and other CPS

Summary

The control logic of complex systems is based on experience: trained experts first steer a machine directly and later help develop an automated controller. Recently, this process has been further improved by incorporating machine learning techniques, where the controller is learned from vast amounts of empirical data. The resulting controller excels most of the time, especially in situations similar to those occurring in the training data. In a safety-critical context, however, this is not enough, and formal guarantees about the behavior of the controller become crucial. When a full static analysis and subsequent verification are infeasible due to the complexity of the system, runtime monitoring remains applicable. It acts as a connecting link between the efficiency of trained controllers and formally verifiable guarantees. A runtime monitor assesses the system's health based on sensor readings, using a specification that describes desired system states and their expected evolution over time. When the monitor detects a violation of the specification, it raises an alarm. For complex systems, characterizing the desired behavior requires an expressive language. Moreover, provably correct behavior requires formal semantics and an evaluation algorithm with static guarantees on resource consumption to prevent crashes at runtime. This thesis presents formal semantics for the specification language RTLola and shows that it satisfies the aforementioned criteria by introducing an evaluation algorithm with static time and space bounds. The approach is evaluated on examples from health monitoring and aircraft controllers.
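To illustrate the kind of specification such a monitor evaluates, the following is a minimal sketch in RTLola-style syntax, modeled on publicly documented examples of the language rather than taken from the thesis itself; the stream names, window length, and threshold are illustrative. It declares an input stream of altitude readings, periodically computes a sliding average over them, and raises an alarm when the average drops below a safe bound:

    // Hypothetical aircraft example; names and constants are illustrative.
    input altitude: Float64                     // sensor reading, e.g. in meters

    // Periodic stream: once per second, average the altitude readings
    // received over the last five seconds (0.0 until data arrives).
    output avg_altitude @ 1Hz :=
        altitude.aggregate(over: 5s, using: avg).defaults(to: 0.0)

    // Alarm when the averaged altitude violates the safe threshold.
    trigger avg_altitude < 200.0 "Altitude below safe threshold"

Fixed evaluation rates and bounded aggregation windows of this kind are what allow the memory and per-event running time of the monitor to be bounded statically, before deployment.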

Date published

2020-09

Date last modified

2020-09-14 07:29:21