"I'm looking forward to meeting new people with different backgrounds and experiences at CISPA and working together on projects that combine different expertise," says Schönherr, who mainly works on so-called adversarial machine learning.
What is it all about? A well-known problem in machine learning (ML), especially in deep learning with artificial neural networks, is so-called adversarial examples: minimally manipulated inputs that deliberately mislead a model into making an incorrect decision. ML systems such as those used in autonomous driving, for example, can be tricked by barely noticeable changes to the pixels of an image into recognizing a speed limit sign instead of a stop sign. Attacks on voice-based digital assistants such as Siri or Alexa are also possible: seemingly harmless audio recordings, embedded for instance in television or radio commercials, can trigger unauthorized commands.
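The core idea can be illustrated with a small sketch (not from the article): for a toy linear classifier, an attacker nudges each input feature by a tiny amount in the direction that most changes the model's score, the same principle behind the well-known fast gradient sign method. All numbers here are hypothetical and chosen only for illustration.

```python
import numpy as np

# Toy linear classifier: score = w @ x; predict class 1 if score > 0, else 0.
# Weights and input are made-up values for illustration.
w = np.array([0.5, -0.3, 0.8])
x = np.array([0.2, 0.4, 0.1])  # clean input; score = 0.06 > 0 -> class 1

def predict(v):
    return int(w @ v > 0)

# FGSM-style perturbation: move each feature by eps in the direction that
# lowers the score. For a linear model, the gradient of the score w.r.t.
# the input is simply w, so the attack steps against sign(w).
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1: clean input is classified correctly
print(predict(x_adv))  # 0: a barely changed input flips the decision
```

Each feature changes by at most 0.1, yet the classification flips; in high-dimensional inputs such as images or audio, the same effect can be achieved with perturbations far below the threshold of human perception.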
In her research, Lea Schönherr investigates machine learning from an attacker's point of view to uncover potential vulnerabilities, better understand the current limitations of intelligent systems, and, in a second step, develop more secure models. "In the case of speech recognition, for example, we can use a human's perception of audio signals to develop systems that are more robust against adversarial examples," explains Schönherr, whose doctoral research also focused on robust speech recognition. In addition, she works on detecting manipulated audio files and images, so-called deepfakes.
Starting Oct. 1, the Würzburg native will join CISPA as a faculty member. "CISPA is a great place for security research in the middle of Europe, where many researchers with different backgrounds come together."