A multitude of research results has shown that slightly changing the inputs given to an ML algorithm can trick it into producing "wrong" outputs. Such research typically assumes that an attacker has complete control over the input but wants to change it as little as possible. In this talk I'll argue that practical threat models are different: attackers work under constraints and toward goals that most research doesn't consider. Using face recognition and malware detection as examples, I'll show that defeating ML under these more realistic constraints requires creating new attack methods. I'll also show that even assessing the risk of real-world uses of ML may require new definitions of robustness, which in turn enable better defenses but also more efficient attacks.
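(For readers unfamiliar with the setting: the "slightly changing the inputs" above refers to adversarial examples. As an illustrative sketch only, not something drawn from the talk, the snippet below shows the classic fast gradient sign method (FGSM) in PyTorch; the model, inputs, and epsilon budget are all assumed placeholders.)

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # Fast gradient sign method: move every input feature a small
        # step (at most epsilon) in the direction that increases the
        # classifier's loss -- the unconstrained "change the input as
        # little as possible" threat model the abstract contrasts against.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values valid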
Short Bio:
Lujo Bauer is a Professor of Electrical and Computer Engineering and of Computer Science at Carnegie Mellon University. He is also a member of CyLab, Carnegie Mellon's computer security and privacy institute. He received his Ph.D. in Computer Science from Princeton University in 2003. His research examines many aspects of computer security and privacy, including building systems in which usability and security coexist and designing practical tools for identifying software vulnerabilities. His recent work focuses on developing tools and guidance to help users stay safer online and on examining how advances in machine learning can (or might not) lead to a more secure future. Lujo served as program (co-)chair for the flagship computer security conferences of the IEEE (S&P 2015) and the Internet Society (NDSS 2014), and is looking forward to doing so for USENIX in 2025.