With the advance of machine learning techniques, this technology has been rapidly adopted across a broad range of application scenarios. As machine learning becomes more widely deployed, these approaches become part of the attack surface of modern IT systems and infrastructures. Hence, we research attack vectors and defenses for today's and future intelligent systems built on AI and machine learning technology.
Evasion attacks. While the performance of machine learning systems has leaped forward in the past decade, many unsolved issues hinder the deployment of such models in critical systems that require robustness guarantees. In particular, Deep Learning techniques show strong performance on a wide range of tasks, yet at the same time are highly susceptible to adversarial manipulation of the input data. Successful attacks that change the output and behavior of an intelligent system can have severe consequences, ranging from accidents in autonomous driving systems to bypassing malware or intrusion detection. We research the robustness of machine learning models both in benign conditions and under adversarial manipulation, and we develop new models and defenses that protect against entire classes of such attacks.
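To make the idea of adversarial input manipulation concrete, the following is a minimal sketch in the spirit of the Fast Gradient Sign Method (FGSM), shown on a toy linear classifier rather than a deep network; all weights, inputs, and the budget `eps` are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # toy classifier weights
b = 0.0
x = rng.normal(size=8)               # a benign input
y = 1.0 if w @ x + b > 0 else -1.0   # label the model currently assigns

def fgsm(w, b, x, y, eps):
    # Perturb x in the direction that decreases the margin y * (w @ x + b).
    # For a linear model, the gradient of the loss w.r.t. x points along -y * w.
    grad = -y * w
    return x + eps * np.sign(grad)

eps = 0.5                            # per-feature (L-infinity) budget
x_adv = fgsm(w, b, x, y, eps)

# For this linear model the margin drops by exactly eps * sum(|w_i|),
# so a small, bounded per-feature change can flip the decision.
margin_clean = y * (w @ x + b)
margin_adv = y * (w @ x_adv + b)
print(f"margin: {margin_clean:.3f} -> {margin_adv:.3f}")
```

The same sign-of-the-gradient step, computed by backpropagation instead of in closed form, is what makes deep networks so easy to fool with visually imperceptible perturbations.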
Inference attacks. Machine learning services are offered by a range of providers that make it easy for clients to, e.g., add intelligent services to their business. A machine learning model is trained on a dataset and can then be accessed, e.g., via an online API. The data and the machine learning model itself are important assets and often constitute intellectual property. Our recent research has revealed that such assets can leak to customers who use the service. Hence, an adversary can exploit the leaked information to gain access to the data and/or the machine learning model merely by using the service. We seek a fundamental understanding of these novel inference attacks on machine learning models and propose defenses that allow the secure and protected deployment of machine learning models.
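A stylized example of such leakage is model extraction through a prediction API. The sketch below assumes a hypothetical provider serving an exactly linear model; the attacker sees only input/output pairs, never the weights, yet recovers the model with D + 1 queries. The `api` function and all values are illustrative assumptions, not any real service.

```python
import numpy as np

D = 5
rng = np.random.default_rng(1)
W_secret = rng.normal(size=D)   # the provider's asset: trained weights
b_secret = rng.normal()

def api(x):
    # Black-box prediction endpoint: returns only the model's score.
    return float(W_secret @ x + b_secret)

# Extraction attack: for a linear model, D + 1 well-chosen queries suffice.
b_stolen = api(np.zeros(D))                        # querying the origin yields b
W_stolen = np.array([api(np.eye(D)[i]) - b_stolen  # unit-vector queries yield w_i
                     for i in range(D)])
```

Real services expose richer, nonlinear models, so practical extraction attacks need many more queries and approximate the target rather than recover it exactly, but the economics are the same: the asset leaks through the interface that monetizes it.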
Privacy. The success of today’s machine learning algorithms is largely fueled by large datasets. Many domains of practical interest are human-centric and target operation under real-world conditions. Gathering real-world data is therefore often key to the success of such methods, and is frequently achieved by leveraging user data or crowdsourcing efforts. We research privacy-preserving machine learning techniques that prevent the leakage of private information or linking attacks, e.g., in distributed and collaborative learning setups. We seek fundamental approaches that provide strong privacy guarantees for state-of-the-art machine learning models while preserving high utility of the overall system.
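One widely used building block for such guarantees is differentially private training in the style of DP-SGD: clip each example's gradient so no single user's data dominates an update, then add calibrated Gaussian noise. The sketch below is a hedged illustration; the function name, shapes, and constants are assumptions for the example, not a specific library API.

```python
import numpy as np

def private_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-example gradient to L2 norm <= clip_norm, bounding
    # any individual's influence on the model update.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound masks any single
    # individual's contribution to the averaged gradient.
    sigma = noise_multiplier * clip_norm / len(clipped)
    return avg + rng.normal(0.0, sigma, size=avg.shape), clipped

rng = np.random.default_rng(2)
grads = [rng.normal(scale=3.0, size=10) for _ in range(4)]  # toy gradients
update, clipped = private_gradient(grads, clip_norm=1.0,
                                   noise_multiplier=1.1, rng=rng)
```

The privacy/utility tension mentioned above is visible directly in the two knobs: a tighter `clip_norm` and larger `noise_multiplier` strengthen the privacy guarantee but distort the gradient signal the model learns from.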