The Network and Distributed System Security (NDSS) Symposium fosters information exchange among researchers and practitioners of network and distributed system security. Through the collaborative sharing of top-tier research on systems security, the NDSS Symposium helps the Internet community make the Internet more secure.
The paper addresses a key challenge in software security: many serious vulnerabilities do not stem from technical programming errors, but from flaws in business logic. These occur when an application behaves as programmed, yet allows actions that violate its intended rules. Together with Liwei Guo, a researcher at the University of Electronic Science and Technology of China, and Thorsten Holz from the Max Planck Institute for Security and Privacy, CISPA researchers Meng Wang, Philipp Görz, Joschua Schilling, Keno Hassler, and Ali Abbasi explain that widely used testing tools struggle to detect such issues because they lack insight into application-specific semantics. This is problematic, as a large share of real-world vulnerabilities falls into this category.
They introduce ANOTA, a framework that explicitly incorporates human domain knowledge into security testing. Developers can use lightweight annotations to describe the intended behavior of their applications. During execution, a monitoring component checks whether the observed behavior deviates from these specifications. Such deviations are treated as indicators of potential vulnerabilities. In an evaluation combined with a state-of-the-art fuzzer, ANOTA was able to reproduce more known vulnerabilities and uncover additional previously unknown ones compared to other compatible approaches.
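The annotation idea can be pictured with a minimal sketch. The decorator name, the rule format, and the violation log below are invented for illustration and do not reflect ANOTA's actual interface.

```python
# Hypothetical sketch of annotation-based runtime checking; the names
# (`invariant`, `violations`) are illustrative, not ANOTA's actual API.
violations = []

def invariant(predicate, description):
    """Annotate a function with an intended-behavior rule.

    At runtime, the monitor records a violation whenever the function
    returns a result that breaks the stated predicate."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if not predicate(result, *args, **kwargs):
                violations.append((func.__name__, description, args))
            return result
        return wrapper
    return decorator

# Business-logic rule: a discount may never push the price below zero.
@invariant(lambda result, *a, **k: result >= 0,
           "final price must be non-negative")
def apply_discount(price, discount):
    return price - discount  # logic bug: discount is not capped

apply_discount(10, 25)  # runs "as programmed", no crash ...
print(violations)       # ... but the monitor flags the deviation
```

Run under a fuzzer, each recorded violation points the tester at an input that made the program break its own business rules even though no crash occurred.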
The study suggests that combining automated testing with explicitly captured human knowledge can close an important gap in current security practices. From a societal perspective, this work contributes to improving the reliability of software systems, particularly in domains where logical errors can have significant practical consequences, such as finance, public services, or healthcare.
The paper systematically examines the methodological challenges and pitfalls that the use of large language models introduces into research on IT security and software engineering. In an extensive collaboration between CISPA, the Max Planck Institute for Security and Privacy, the Karlsruhe Institute of Technology, Ruhr University Bochum, TU Wien, Sapienza University of Rome, and _fbeta, the researchers argue that the increasing prevalence of large language models in security research introduces challenges and risks that threaten to undermine established paradigms of reproducibility, rigor, and evaluation.
They identify nine recurring pitfalls that affect the entire research process—from data collection, pretraining, and fine-tuning to prompting and evaluation. To assess how prevalent these issues are, they analyze 72 peer-reviewed publications from leading conferences in security and software engineering published in 2023 and 2024. The findings are clear: every paper exhibits at least one of these pitfalls, and all nine occur repeatedly. At the same time, only about 16 percent of the identified issues are explicitly discussed in the papers themselves.
Through four case studies, the authors show how individual methodological weaknesses can lead to biased evaluations, overstated performance claims, or limited reproducibility. Based on these insights, they propose concrete recommendations for making future work more robust. From a societal perspective, the study contributes to scientific self-reflection by helping to contextualize research on large language models more critically and, in the long term, to provide more reliable foundations for security-relevant applications.
The paper examines increasing security risks affecting satellites, which underpin essential services of modern societies such as navigation systems. For a long time, satellites were assumed to be secure due to proprietary architectures and limited accessibility. Technological advances have weakened these assumptions, while reliable data on real-world attack techniques has remained limited.
Together with Efrén López-Morales from New Mexico State University, Carlos Gonzalez-Cortes from Universidad de Santiago de Chile, Jacob Hopkins and Carlos Rubio-Medrano from Texas A&M University – Corpus Christi, and Elías Obreque from Universidad de Chile, the CISPA researchers Ulysse Planta, Gabriele Marra, Majid Garoosi, and Ali Abbasi demonstrate how this gap can be addressed. They introduce HoneySat, the first high-interaction satellite honeypot framework capable of realistically simulating a CubeSat, a type of small satellite. The goal is to attract real attackers and systematically observe their behavior.
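The basic honeypot loop behind such a system can be illustrated in a few lines. The banner string and the probe command below are made up for the demo and say nothing about HoneySat's actual CubeSat emulation.

```python
# Toy honeypot sketch (unrelated to HoneySat's implementation): accept a
# connection, present a plausible service banner, and log every
# interaction for later analysis of attacker behavior.
import socket, threading

interactions = []  # log of observed probes

def start_honeypot():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # ephemeral local port for the demo
    srv.listen(1)

    def serve():
        conn, addr = srv.accept()
        conn.sendall(b"CUBESAT-TTC v1.0 READY\n")   # believable banner
        interactions.append((addr[0], conn.recv(1024)))  # record attempt
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return srv.getsockname()[1], t

# A simulated probe standing in for a real attacker:
port, server = start_honeypot()
probe = socket.create_connection(("127.0.0.1", port))
probe.recv(64)                     # read the banner
probe.sendall(b"GET_TELEMETRY\n")  # "attacker" command, logged above
probe.close()
server.join(timeout=2)
print(interactions)
```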
To assess realism and effectiveness, they surveyed small satellite operators and deployed HoneySat on the public Internet. About 90% of respondents considered the simulation realistic. HoneySat also successfully deceived adversaries in the wild, collecting 22 real-world attack interactions. In an additional experiment, the system communicated successfully with an operational small satellite already in orbit.
The study shows that realistic simulation environments can improve understanding of satellite threats. From a societal perspective, this research supports efforts to better protect space-based infrastructure that many critical technologies rely on.
The paper focuses on distributed systems that must remain reliable even when some components fail or behave unpredictably. Such systems are used in areas like blockchains and other critical digital infrastructures. Modern asynchronous Byzantine Fault Tolerance protocols aim to achieve high performance under favorable conditions while remaining robust under unfavorable ones. Existing approaches, however, typically sacrifice either throughput or latency when conditions deteriorate.
Together with Xiaohai Dai, Chaozheng Ding, and Hai Jin from Huazhong University of Science and Technology, and Ling Ren from the University of Illinois at Urbana-Champaign, the CISPA researcher Julian Loss demonstrates how this trade-off can be reduced. They introduce Ipotane, a new protocol designed to handle favorable and unfavorable network conditions simultaneously. Ipotane combines a fast optimistic execution path with a newly designed fully asynchronous mechanism that remains effective when problems arise.
The key idea is to run both paths in parallel and quickly determine which one performs better at any given time. When the optimistic path becomes inefficient, the system can promptly switch to the more robust path without incurring long delays or a major loss in throughput. Experimental results show that Ipotane achieves both high throughput and low latency across a wide range of conditions.
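Stripped of all consensus details, the parallel-path idea can be sketched as a race between two workers. The functions and timings below are invented stand-ins, not Ipotane's actual execution paths.

```python
# Illustrative sketch only: run a fast optimistic path and a robust
# fallback path in parallel and commit whichever finishes first
# under the current network conditions.
import concurrent.futures, time

def optimistic_path(payload, network_ok):
    if not network_ok:
        time.sleep(0.2)          # stalls under unfavorable conditions
    return ("optimistic", payload)

def robust_path(payload):
    time.sleep(0.05)             # slower, but always makes progress
    return ("robust", payload)

def commit(payload, network_ok):
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(optimistic_path, payload, network_ok),
                   pool.submit(robust_path, payload)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(commit("tx-1", network_ok=True))   # fast path wins
print(commit("tx-2", network_ok=False))  # robust path takes over
```

Because both paths run at all times, the switch costs no extra round trips: the slower path's result is simply discarded.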
From a societal perspective, this research supports the development of more dependable distributed systems. Improving the reliability of such infrastructures is essential for digital services that require strong guarantees of correctness and availability.
The paper examines misconfigurations in cloud services, which remain a major cause of security and privacy incidents. A key contributing factor is the complexity of cloud platforms, which makes it difficult for operators and developers to configure systems securely. To gain insight into these challenges, the study draws on real-world discussions from practitioner communities.
Together with Shafay Kashif from the University of Auckland, and Lea Gröber and Mobin Javed from Lahore University of Management Sciences, the CISPA researchers Sumair Ijaz Hashmi and Katharina Krombholz analyze approximately 251,900 security- and privacy-related posts on Stack Overflow published between 2008 and 2024. Using topic modeling and qualitative analysis, they map common cloud use cases to the configuration challenges associated with them.
The analysis reveals a wide range of issues that span both technical and human factors. In addition to concrete configuration problems, insufficient documentation and the lack of context-aware tools emerge as recurring obstacles. Authentication and access control challenges appear across all identified use cases, affecting nearly every stage of cloud deployment, integration, and maintenance.
The study highlights that secure cloud operation depends not only on technical safeguards but also on usable support structures. From a societal perspective, this research helps identify where developers need better guidance and tools, contributing to more secure and privacy-aware cloud infrastructures that underpin many digital services today.
The paper examines security risks in embedded systems that integrate multiple CPUs into a single system-on-a-chip (SoC). Such architectures promise better performance and a clean separation of tasks, for example by running the application logic on one processor and the network logic on another. However, the security implications of this close integration have not been fully understood.
The CISPA researchers Simeon Hoffmann and Nils Ole Tippenhauer systematically analyze vulnerabilities that arise in these multi-CPU designs. They show that security mechanisms originally developed for single-CPU systems, such as memory protection units, are often reused without fully accounting for the new architectural context. As a result, additional attack surfaces can emerge.
They identify four major attack vectors that, under certain conditions, allow an attacker controlling one CPU to read from or write to protected memory of another CPU, potentially leading to arbitrary code execution. Their analysis suggests that a significant number of commercially available systems may be affected. They also find that a commonly used communication mechanism in the open-source real-time operating system FreeRTOS can introduce further code execution vulnerabilities in multi-CPU scenarios. The theoretical findings are validated through practical demonstrations of the attacks. In one case, the identified weaknesses could compromise a custom trusted execution environment implementation. The vulnerabilities were responsibly disclosed to vendors, leading to a security advisory and a fix.
From a societal perspective, the research contributes to strengthening the security of embedded devices that are widely deployed in industrial and consumer technologies, helping to make connected systems more resilient and trustworthy.
The paper revisits code reuse attacks, a fundamental technique in modern exploits targeting memory corruption vulnerabilities. In such attacks, small code fragments known as gadgets are combined to achieve malicious behavior. Although many tools have been proposed to automate this process, most are rarely used in practice due to poor performance, limited architectural support, or scalability issues.
Together with Kyle Zeng, Adam Doupé, Ruoyu Wang, Yan Shoshitaishvili, and Tiffany Bao from Arizona State University, and Christopher Salls from the University of California, Santa Barbara, the CISPA researcher Moritz Schloegel demonstrates how these limitations can be addressed. They introduce a new abstraction called ROPBlock. Unlike traditional gadgets, ROPBlocks are designed to be inherently chainable, which enables a fundamentally different approach to constructing attack chains.
Building on this concept, they propose a graph-based search strategy that replaces the common generate-and-test paradigm. This reduces the computational complexity of setting registers from exponential to linear time, leading to substantial speed-ups in practice. The approach also supports more complex code constructs and is independent of processor architecture, making it applicable across diverse systems.
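The complexity claim can be made concrete with a hedged sketch. The fragment table and chain format below are invented for illustration and do not reflect ropbot's internals.

```python
# Each hypothetical fragment sets one register; the "inherently
# chainable" property means appending it never undoes earlier steps,
# as long as it clobbers no register we still care about.
FRAGMENTS = {
    "pop rdi; ret":          {"sets": "rdi", "clobbers": set()},
    "pop rsi; pop r15; ret": {"sets": "rsi", "clobbers": {"r15"}},
    "pop rdx; ret":          {"sets": "rdx", "clobbers": set()},
}

def chain_registers(targets):
    """Build a chain with one linear pass per register, instead of
    generate-and-test over exponentially many gadget orderings."""
    chain = []
    for reg in targets:
        candidates = [asm for asm, f in FRAGMENTS.items()
                      if f["sets"] == reg
                      and not (f["clobbers"] & set(targets))]
        if not candidates:
            raise ValueError(f"no chainable fragment sets {reg}")
        chain.append(candidates[0])
    return chain

print(chain_registers(["rdi", "rsi"]))
```

With chainability guaranteed up front, the search never has to backtrack over orderings, which is where the exponential cost of generate-and-test approaches comes from.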
Their prototype, ropbot, generates complex real-world attack chains significantly faster and for more binaries than existing tools, across multiple architectures. From a societal perspective, this work deepens the understanding of how powerful exploitation techniques can be automated, providing valuable insights for both offensive security research and the development of more effective defensive measures.
The paper examines confidential virtual machines based on trusted execution environments, which aim to enable privacy-preserving applications in cloud settings. While these technologies protect data from direct access, they largely exclude side-channel attacks from their threat model. As a result, developers are left without practical tools to assess and mitigate information leakage, especially since existing defenses are often too specialized or inefficient for real-world use.
Together with Albert Cheu, Adria Gascon, Daniel Moghimi, Phillipp Schoppmann, and Octavian Suciu from Google, the CISPA researchers Ruiyi Zhang and Michael Schwarz demonstrate how this gap can be addressed. They introduce SNPeek, an open-source toolkit that enables configurable side-channel measurements on production AMD SEV-SNP hardware. The toolkit combines these measurements with statistical and machine-learning-based analysis to automatically estimate and compare leakage.
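The statistical side of such an analysis can be illustrated with a toy distinguishing experiment. The simulated traces below stand in for real measurements and imply nothing about SNPeek's actual pipeline.

```python
# Toy leakage estimate: if a simple classifier tells traces for secret A
# apart from traces for secret B better than chance, the channel leaks
# information about the secret.
import random, statistics

random.seed(0)
# Simulated side-channel traces: secret B shifts the timing distribution.
traces_a = [random.gauss(100, 5) for _ in range(500)]
traces_b = [random.gauss(104, 5) for _ in range(500)]

# Threshold classifier at the midpoint of the two sample means.
threshold = (statistics.mean(traces_a) + statistics.mean(traces_b)) / 2
correct = (sum(t < threshold for t in traces_a)
           + sum(t >= threshold for t in traces_b))
accuracy = correct / (len(traces_a) + len(traces_b))
print(f"distinguishing accuracy: {accuracy:.2f}")  # well above 0.5 => leakage
```

An accuracy near 0.5 would mean the traces carry no usable signal; the further it climbs toward 1.0, the more the channel reveals about the secret.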
They apply SNPeek to three representative workloads deployed on confidential virtual machines, including privacy-preserving data queries and user-defined WebAssembly functions. The analysis reveals previously unnoticed information leaks, among them a covert channel capable of exfiltrating data at high rates. At the same time, the results show how the tool can guide practical, low-overhead mitigations.
From a societal perspective, this work contributes to making privacy guarantees in modern cloud infrastructures more transparent and measurable. By enabling systematic evaluation of side-channel risks, it supports more trustworthy deployment of technologies intended to protect sensitive data.
The paper examines the increasing use of large language models in automated code analysis, for example for debugging or revising software. Such models generalize heavily from familiar programming patterns, which makes them efficient but also creates new risks. The study shows that this reliance on patterns can lead to small but functionally relevant code modifications being overlooked.
Together with Shir Bernstein, Daniel Ayzenshteyn, and Yisroel Mirsky from Ben Gurion University of the Negev, CISPA researchers David Beste and Lea Schönherr demonstrate how this weakness can be exploited. They describe a "familiar pattern attack", in which code is minimally altered in ways that change its runtime behavior. To human developers, the code remains functionally comprehensible, while the language model is misled by familiar patterns and overlooks the relevant changes.
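A tiny, self-made example (not taken from the paper) shows the flavor of such an edit: both functions match the familiar clamping idiom, but the second silently swaps min and max, changing the runtime behavior while preserving the pattern.

```python
# Canonical clamping idiom: constrain a value to [low, high].
def clamp(value, low, high):
    return max(low, min(value, high))

# Minimally altered variant: same familiar shape, wrong semantics.
def clamp_tampered(value, low, high):
    return min(low, max(value, high))

print(clamp(150, 0, 100))           # 100: value capped as intended
print(clamp_tampered(150, 0, 100))  # 0: the upper limit is bypassed
```

A reviewer (human or model) skimming for the well-known `max(low, min(...))` shape can easily wave the second function through, even though its behavior is entirely different.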
They develop a fully automated attack that requires no insight into the model's internals and systematically inserts such manipulations into target code. Extensive testing shows that these attacks are effective across different model types, transfer across providers, and work independently of the programming language. Even explicit warnings to the models do not reliably prevent misinterpretations. In addition, the authors discuss defensive applications of the technique, such as preventing plagiarism.
The work makes it clear that the use of large language models in security-critical development processes opens up new areas of vulnerability. This research is important for society because it helps to better understand the limitations of automated code analysis and to develop more realistic expectations of such systems.