Know your enemy
The potential of machine learning (ML) for society is huge. Consider, for example, the area of mobility. According to a strategy paper by the German Federal Ministry of Transport, self-driving cars are expected to make our roads safer, significantly cut traffic volume and emissions, and ease congestion in the future. For vehicles to recognize obstacles and road signs without any human assistance, they must use machine learning algorithms to evaluate sensor data and camera images while driving. How well they are able to do this can mean the difference between life and death. If autonomous vehicles are to be used on a truly large scale, it must be possible to guarantee their safety. "A major problem is so-called adversarial examples. Attackers can manipulate input data in a minimal way to cause models to make incorrect predictions. For example, ML systems such as those used in autonomous driving can be manipulated by tiny changes to the pixels of an image so that they recognize a speed limit instead of a stop sign," explains Lea Schönherr.
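To make the idea concrete: the best-known recipe for crafting such adversarial examples is the fast gradient sign method, in which each pixel is nudged by a tiny amount in the direction that most increases the classifier's error. The sketch below is a generic illustration of that principle, not the specific attack from Schönherr's work; the model, the image tensor, and the perturbation budget `epsilon` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft an adversarial image with the fast gradient sign method (FGSM).

    A tiny perturbation (at most epsilon per pixel) is added in the direction
    that increases the classifier's loss, which can be enough to flip the
    predicted class while the change remains invisible to a human observer.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by at most epsilon in the loss-increasing direction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

For a traffic-sign classifier, `epsilon` would be chosen small enough that the perturbed image still looks like a stop sign to a human while the model reports a speed limit sign.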
If researchers want to stay one step ahead of attackers, they have to switch sides, so to speak, and test machine learning models for every conceivable vulnerability. While fully autonomous vehicles are still a vision of the future, machine learning methods have long been in use elsewhere: For example, they are used for speech recognition and are incorporated into widely used voice assistants such as Alexa and Siri. This is what Lea Schönherr was working on during her doctoral studies and also during her time as a postdoc at Ruhr University Bochum in the DFG Cluster of Excellence "Cyber Security in the Age of Large-Scale Adversaries" (CASA). Together with fellow researchers, Schönherr was able to show that attackers can use manipulated audio signals to issue commands to the Kaldi speech recognition system, which is used, for example, in Amazon's Alexa, without humans noticing. "We took advantage of the fact that human hearing can be deceived: For example, if you play two sounds at a similar frequency at the same time, a louder sound can mask a quieter one," Schönherr explains. In this way, the necessary changes in the audio signal can be shifted into frequency ranges where they are imperceptible to humans.
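The masking idea can be sketched in a few lines: wherever the adversarial perturbation would rise above a threshold derived from the louder carrier signal, it is scaled back down so the change stays hidden behind what is already playing. The snippet below is a deliberately crude illustration that uses a fixed decibel margin in place of a real psychoacoustic model; the actual attack on Kaldi computes the hearing thresholds far more carefully, and the function name and margin value here are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def mask_perturbation(carrier, perturbation, fs=16000, margin_db=20.0):
    """Keep an adversarial audio perturbation below a crude masking threshold.

    The threshold is simply the carrier's magnitude spectrum lowered by
    `margin_db`; wherever the perturbation would exceed it, it is scaled down
    so the change stays (roughly) hidden behind the louder carrier.
    Both signals are assumed to have the same length and sample rate.
    """
    _, _, C = stft(carrier, fs=fs)
    _, _, P = stft(perturbation, fs=fs)
    threshold = np.abs(C) * 10 ** (-margin_db / 20.0)
    # Scale down time-frequency bins that stick out above the threshold,
    # keeping their phase unchanged.
    scale = np.minimum(1.0, threshold / (np.abs(P) + 1e-12))
    _, masked = istft(P * scale, fs=fs)
    n = min(len(carrier), len(masked))
    return carrier[:n] + masked[:n]
```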
According to Schönherr, the attacks on voice assistants that are possible in this way have not been a major security problem so far. "We fed our audio files directly into Kaldi at the beginning. Anyone who can get that close to the devices usually has completely different ways of manipulating them." If, on the other hand, attackers were to try to reach the speech recognition system via a loudspeaker, such as a radio, the transmission through the room would alter the signals and the machine would no longer recognize the commands. The researchers have nevertheless succeeded in carrying out such an attack. "To create an audio file that could be used to fool the machine in this way, we simulated the transmission in the room and optimized the necessary signal changes taking this transmission into account." Since online purchases via Alexa and Co. are now secured with a second factor, the effort involved is probably too high for attackers in practice. "The potential for abuse is nevertheless there."
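Schematically, such an over-the-air attack folds the simulated transmission into the optimization: the perturbed audio is first convolved with a room impulse response, and only that transmitted result is fed to the recognizer when computing the loss. The sketch below shows a single, heavily simplified optimization step under those assumptions; `model`, the single-label loss, and the `rir` tensor are placeholders, since a real speech recognizer would be optimized against an entire target transcription rather than one class label.

```python
import torch
import torch.nn.functional as F

def over_the_air_step(model, audio, delta, target, rir, lr=1e-3):
    """One optimization step for an over-the-air adversarial audio example.

    Instead of feeding `audio + delta` to the model directly, the perturbed
    signal is first convolved with a simulated room impulse response (rir),
    so the perturbation is optimized to survive playback over a loudspeaker.
    """
    delta = delta.clone().detach().requires_grad_(True)
    perturbed = (audio + delta).view(1, 1, -1)
    kernel = rir.flip(0).view(1, 1, -1)  # flip: conv1d computes correlation
    transmitted = F.conv1d(perturbed, kernel, padding=rir.numel() - 1)
    loss = F.cross_entropy(model(transmitted.view(1, -1)), target)
    loss.backward()
    # Gradient step on the perturbation so that the *transmitted* signal,
    # not the raw file, is recognized as the attacker's target command.
    return (delta - lr * delta.grad).detach()
```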
Having arrived at CISPA and in her new role as a senior scientist, Schönherr plans to expand her research on adversarial attacks to other areas together with her newly founded team. "Right now ChatGPT is very popular. The language model can produce text on demand. Such models also already exist for the automatic generation of code. We want to investigate which inputs cause these models to produce vulnerable code."
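A classic example of what "vulnerable code" means in this context, hypothetical and not taken from Schönherr's work, is an SQL query assembled by string formatting instead of a parameterized query; an audit of code-generating models would look for prompts that make them emit the first pattern rather than the second.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, name: str):
    # Vulnerable: the user-controlled `name` is pasted into the SQL string,
    # so an input like "x' OR '1'='1" changes the meaning of the query.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Safe: a parameterized query keeps the input as pure data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```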
Another research topic her team plans to tackle is what's known as continual learning of models. "Actually, it's more about the catastrophic forgetting of models," Schönherr says, having to laugh at the term herself. "Deep-learning models can learn on their own without human supervision. When they get new data, they sometimes seem to forget old data in the process. That's what is meant by 'catastrophic forgetting.'" Her team plans to look at how this could become a problem in application areas such as malware detection or spam filters in email programs. "In such areas, attackers are very active. They try to trick the models so that something that was previously detected as spam or malware will slip through in the future. Constantly retraining the models to prevent that is very expensive and time-consuming."
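Catastrophic forgetting is easy to reproduce in a toy setting. In the sketch below, which is only an illustration and unrelated to the malware and spam systems mentioned above, a linear classifier is first trained on "old" data and then updated only on conflicting "new" data; its accuracy on the old data collapses even though the old task was never explicitly unlearned. The data layout and the scikit-learn setup are assumptions made for the demonstration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs around -center and +center, labelled 0 and 1."""
    X = np.vstack([rng.normal(-center, 1.0, (500, 2)),
                   rng.normal(center, 1.0, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)
    return X, y

X_old, y_old = make_task(np.array([3.0, 3.0]))    # "old" data distribution
X_new, y_new = make_task(np.array([3.0, -3.0]))   # conflicting "new" distribution

clf = SGDClassifier(random_state=0)
for _ in range(5):
    clf.partial_fit(X_old, y_old, classes=[0, 1])
print("old-task accuracy after training on old data:", clf.score(X_old, y_old))

# Keep updating on the new data only: accuracy on the old task collapses.
for _ in range(20):
    clf.partial_fit(X_new, y_new)
print("old-task accuracy after training on new data:", clf.score(X_old, y_old))
```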
As a senior scientist, Schönherr's daily work has changed quite a bit. "I don't do much programming myself now; before, that was one of my main jobs. I now supervise my team in their research and help my collaborators write up their research papers. And, of course, I've added some administrative tasks." The native of Würzburg has already settled in well in Saarbrücken. So far, she does not count homesickness among the adversaries she has to face.