© CISPA/David Rohner


2026-03-24
Annabelle Theobald

Our New Faculty Dr. Thorsten Eisenhofer Introduces Himself: “In Practice, ML Models Are Rarely the Direct Target of Attacks.”

Machine learning—the data-driven training of statistical models—is a core building block of modern artificial intelligence and has long since become a constant companion in our everyday lives. ML algorithms are embedded in our email programs, for example, to classify messages as wanted or spam. The operating systems of major providers such as Google, Apple, and Microsoft have likewise been AI-supported for years, integrating voice assistants, writing aids, and other services into everyday applications. It is precisely where ML models are embedded in real systems that Thorsten Eisenhofer sees his main challenge: “In security research, the attack surfaces and weaknesses of ML models are often examined in isolation. In reality, however, the models are part of a larger system—and that system is what attackers target.” Making entire systems more secure is what drives his work.


Eisenhofer’s perspective is strongly shaped by his previous research background. He originally comes from the field of classical systems security research, an environment focused on software used in everyday practice and on real-world attack vectors. “Colleagues working in adversarial machine learning spend a lot of time thinking about where and how an ML model can be attacked. However, they often examine the models in isolation rather than as part of a larger system—which is how they actually exist in practice,” Eisenhofer explains. “As a result, the real goal of attackers is often not to ‘take down’ the model itself. Instead, they aim at the bigger picture.”

Theoretical Threats Versus Real Attacks

To understand where machine learning systems are vulnerable, researchers work with threat models: explicit assumptions about attackers’ goals, the knowledge they possess, and the capabilities available to them. This is precisely where Eisenhofer sees a problem. “In many research papers, for example, it is assumed that attackers can fully control the input to a model. In real systems, however, that is rarely the case, because the model input is usually derived from real data through several processing steps.”

This becomes particularly clear in the detection of malicious software. In practice, this involves real programs that run on a computer. Before an ML model can decide whether a program is malicious, the program must first be translated into a form the model can understand. “In the real world, malware is a program—not a feature vector,” Eisenhofer says. “So you need a component that translates that program into a representation the model can work with.” That translation step is crucial. “I’m interested in how to incorporate this processing step into the attack model—so that an attack works not only in the model space, but also in the real world.”
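The translation step Eisenhofer describes can be sketched in a few lines. The following is a deliberately simplified illustration, not a real malware detector: the byte-histogram features and the sample bytes are hypothetical stand-ins for whatever representation an actual pipeline would compute.

```python
# Toy "feature extraction" step: turn raw program bytes into a
# fixed-length vector an ML model could consume. The byte-histogram
# features used here are an illustrative simplification.

def extract_features(program_bytes: bytes) -> list[float]:
    """Map a raw program to a 256-dimensional normalized byte histogram."""
    counts = [0] * 256
    for b in program_bytes:
        counts[b] += 1
    total = max(len(program_bytes), 1)
    return [c / total for c in counts]

# An attacker who can only modify the program itself (e.g., by appending
# bytes) influences the model input only indirectly, through this step.
sample = b"\x4d\x5a\x90\x00example-program"  # hypothetical sample bytes
vector = extract_features(sample)
print(len(vector))  # the model sees a 256-dimensional vector, not the program
```

An attack that perturbs the feature vector directly may have no counterpart in a real program, which is exactly why Eisenhofer argues the processing step belongs inside the attack model.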

Where Attacks Target Real Systems

From this systemic perspective, additional vulnerabilities emerge that have often been overlooked in research. Attacks can occur during preprocessing or post-processing, for example when data is prepared for the model or processed further afterwards. The technical environment can also play a role. Differences between computer systems, for instance, can be deliberately exploited to trigger attacks only on a specific system.
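A minimal sketch of such a preprocessing vulnerability, under a toy assumption: suppose a pipeline downscales its input by naive subsampling (keeping every k-th value) before handing it to the model. The example below is illustrative only and not drawn from any specific system.

```python
# Toy preprocessing attack: if downscaling keeps only every k-th value,
# an attacker can plant a payload at exactly the sampled positions, so
# the model "sees" content that is nearly invisible in the full input.

def naive_downsample(signal: list[int], k: int) -> list[int]:
    """Simplistic downscaling: keep every k-th sample."""
    return signal[::k]

k = 4
benign = [0] * 16           # what a human inspecting the full input sees
crafted = benign.copy()
for i in range(0, len(crafted), k):
    crafted[i] = 255        # payload placed only where sampling will look

print(naive_downsample(crafted, k))  # [255, 255, 255, 255]: the model input differs
```

Real systems use more sophisticated resampling, but the structural point stands: the attack exploits the surrounding processing step, not a weakness of the model itself.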

For Eisenhofer, identifying such vulnerabilities is not an end in itself. “The best outcome is when you discover a previously overlooked problem and it turns out there’s a simple, effective solution.” Often, he says, the key is not to look for security exclusively within the model itself, but in the design of the surrounding system. This becomes especially relevant as large language models are increasingly integrated into everyday software and operating systems. “We are not able to make these models 100 percent reliable,” he says. That makes it all the more important to design defenses in such a way that even successful attacks result in limited damage. “We’re going to see a real wave of attacks once models are deeply integrated and can interact with other systems.”

The Best of Two Worlds

Eisenhofer is particularly drawn to research at the intersection of machine learning and systems security because it combines these two worlds. “I enjoy working deep inside systems, but I also appreciate a clean formalism. Machine learning combines both: formal constructions—and then you see what actually happens in real systems.” At the same time, he tries not to be driven by the current hype surrounding AI. “The field is evolving incredibly fast right now. We’re especially interested in questions that remain relevant beyond individual technological developments.”

It was exactly this combination that brought Eisenhofer to CISPA. “CISPA is one of the few research centers where I don’t have to compromise on either side. There are experts here in both machine learning and systems security.” For his work, that is essential, because his goal is to bring both perspectives together in the long term. “What interests me most is this systems-level perspective on machine learning.”