- Sponsorship Award by the European Association for Computer Science Logic for the ESSLLI 2016 course "Model Counting for Logical Theories"
- The 2011/2012 Best Paper Award of the DFG priority programme Reliably Secure Software Systems
- Microsoft Research European Ph.D. scholarship 2008-2011
I am a tenure-track faculty member at the Helmholtz Center for Information Security (CISPA) in Saarbrücken, Germany. Before joining CISPA, I was a Lecturer (Assistant Professor) at the University of Sheffield, UK, and before that at the University of Leicester, UK. Prior to that, I held postdoctoral positions at the University of Texas at Austin and at the Max Planck Institute for Software Systems in Germany. I received my PhD from Saarland University in Germany in 2014. My research is in the area of formal methods, focusing on the specification, verification, and synthesis of reactive systems. I investigate primarily quantitative versions of these questions, centered on the aspect of uncertainty in system and environment models. I am particularly interested in applications of formal methods to autonomous systems, where my work addresses the limitations faced by autonomous control due to imperfect sensing and stochastic disturbances.
Proc. 5th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation (ISoLA)
Formal Methods for AI Safety
The rapid progress in artificial intelligence and machine learning has led to the deployment of AI-based systems in many areas of modern life, such as manufacturing, transportation, and healthcare. However, serious concerns about the safety and trustworthiness of such systems remain, due to the lack of assurance regarding their behavior. To address this problem, significant effort in the area of formal methods has been dedicated in recent years to developing rigorous techniques for the design of safe AI-based systems.
In this seminar, we will read and discuss research papers that present the latest results in this area. We will cover a range of topics, including the formal specification and verification of correctness properties of AI components of autonomous systems, and the design of reinforcement learning agents that respect safety constraints.
Each participant will give a presentation of an assigned paper, followed by a group discussion. All students are expected to read each paper carefully and to actively participate in the discussions.
Participation in all meetings is mandatory (exceptions require an official document, such as a doctor's certificate).
Each review must be submitted before the meeting at which the corresponding paper is presented and discussed.
In addition to your review, you will have to submit two questions that you will ask the presenter of the paper.
Each of the three reviews contributes 10% to your final grade.
The slides and presentation make up 40% of the final grade.
The summary makes up 30% of the final grade.