
11 CISPA PAPERS AT NDSS 2025

The Network and Distributed System Security (NDSS) Symposium fosters information exchange among researchers and practitioners of network and distributed system security. Through the collaborative sharing of top-tier research on systems security, the NDSS Symposium helps the Internet community make the Internet more secure.

Modern computers depend on a critical software component called the bootloader, which bridges the low-level firmware and the operating system. When a computer powers up, the bootloader takes over from the firmware, setting up the early boot environment and then launching the operating system. Because it verifies the operating system—often using secure boot to block unauthorized code—the bootloader is essential for system security. However, as bootloaders have evolved to offer more features for end users, their growing code bases have expanded potential attack surfaces. Recent research presents the first comprehensive memory safety analysis of bootloaders, revealing that malicious inputs from peripherals like storage devices and networks can be exploited to compromise these systems. Using a custom fuzzing framework, researchers uncovered 39 vulnerabilities in nine different bootloaders, including 14 in the widely used Linux bootloader GRUB—some of which could allow attackers to bypass secure boot protections. With five of these vulnerabilities already assigned CVEs, the study highlights significant risks that could impact everything from personal computers to critical infrastructure. The societal implications are profound, underscoring the urgent need for improved security measures in the early boot process to protect sensitive data and maintain public trust in our digital systems.

Open-source software is a vital pillar of today’s digital infrastructure, serving as the backbone of the software supply chain. Its success depends not only on robust code but also on trustworthy contributions from developers. However, the very process that allows for widespread collaboration also creates vulnerabilities. Platforms like GitHub generate user profiles and project histories based on Git metadata—details such as names and email addresses that can be freely configured. This ease of manipulation enables attackers to forge commit authorship and misrepresent contributions through techniques like contributor spoofing, reputation hijacking, and contribution hijacking. In a comprehensive study of over 50,000 critical open-source projects and more than 26 million commits, researchers demonstrated that these manipulations are not isolated incidents but widespread issues. While technical countermeasures such as commit signing exist to authenticate contributions, the research shows that a large majority of commits remain unsigned, leaving many projects exposed. Furthermore, an analysis of online security advice indicates that while basic spoofing is acknowledged, the risks associated with untrustworthy Git metadata are often overlooked. The societal impact of these findings is clear: ensuring the authenticity of open-source contributions is essential for the reliability and security of software systems that underpin critical sectors. By addressing these vulnerabilities, stakeholders can strengthen the software supply chain, protect sensitive digital infrastructure, and maintain public trust in the digital ecosystem without overstating the threat.
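
To make the underlying issue concrete: Git records author names and email addresses exactly as the committer declares them, and commit signing is the main cryptographic countermeasure mentioned above. The following TypeScript sketch for Node.js (the file name and invocation are illustrative assumptions) uses Git's standard log format codes to flag commits in a local checkout whose authorship rests on nothing more than that self-declared metadata.

```typescript
// audit-signatures.ts
// Minimal sketch: list commits whose authorship rests only on self-declared
// Git metadata (no cryptographic signature). Run inside a Git checkout,
// e.g. with `npx tsx audit-signatures.ts` (assumed tooling).

import { execFileSync } from "node:child_process";

// %H = commit hash, %G? = signature status ("N" = unsigned), %an/%ae = author
const log = execFileSync(
  "git",
  ["log", "--pretty=format:%H%x09%G?%x09%an%x09%ae"],
  { encoding: "utf8" },
);

let unsigned = 0;
for (const line of log.split("\n").filter(Boolean)) {
  const [hash, sigStatus, name, email] = line.split("\t");
  if (sigStatus === "N") {
    // Nothing cryptographic ties this commit to the claimed author:
    // `git -c user.name=... -c user.email=...` lets anyone set these fields.
    unsigned++;
    console.log(`unsigned ${hash.slice(0, 12)}  ${name} <${email}>`);
  }
}
console.log(`${unsigned} commit(s) rely solely on unverified metadata.`);
```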

Leon Trampert et al. have identified new methods that allow internet users to be tracked without their knowledge, even if they take privacy measures such as blocking cookies or disabling JavaScript. The techniques exploit Cascading Style Sheets (CSS), a fundamental web technology used for styling websites. By cleverly using certain CSS features, a browser or email application can be analyzed in a way that reveals unique characteristics of the device, allowing users to be identified without their consent. 
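
The paper's exact techniques are more involved, but the general class of attack can be sketched with a minimal example: CSS rules that apply only under certain device conditions fetch distinguishable URLs, so the server learns about the device even when JavaScript is blocked. The TypeScript sketch below (a small Node.js server; the probes and endpoint names are illustrative assumptions, not the authors' implementation) shows the idea.

```typescript
// css-probe-server.ts
// Sketch of the general technique: CSS rules that only apply on certain
// devices fetch distinguishable URLs, so the server learns device traits
// without any JavaScript. Endpoint names and probes are illustrative.

import { createServer } from "node:http";

// The browser only downloads the background images that actually apply,
// so the set of /probe requests reveals device characteristics.
const probeCss = `
  body { background-image: url("/probe?feature=baseline"); }
  @media (min-resolution: 2dppx) {
    body { background-image: url("/probe?feature=hidpi"); }
  }
  @media (prefers-color-scheme: dark) {
    h1 { background-image: url("/probe?feature=dark-mode"); }
  }
`;

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/probe") {
    // Which probes fire (and in what combination) narrows down the device.
    console.log(`probe hit: ${url.searchParams.get("feature")}`);
    res.writeHead(204);
    res.end();
  } else if (url.pathname === "/style.css") {
    res.writeHead(200, { "Content-Type": "text/css" });
    res.end(probeCss);
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end('<link rel="stylesheet" href="/style.css"><h1>Hello</h1>');
  }
}).listen(8080);
```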

The methods were able to distinguish 97.95% of all tested browser and operating system combinations. Even in email communication, it was found that 8 out of 21 tested email clients were susceptible to state-of-the-art tracking techniques. This means that companies, advertisers, or even state actors could gather identifying information via emails.

This technique poses a threat to digital privacy, as it works without users' consent or awareness. It also contributes to the erosion of online anonymity, which is particularly problematic for journalists, activists, and people living under authoritarian regimes. If even basic web technologies can be used for surveillance, it is not only a challenge for developers and browser manufacturers but also for policymakers, who must consider how to regulate such methods. The researchers propose technical countermeasures, such as preloading CSS resources or using specialized email proxy services to prevent tracking. However, in the long term, new privacy regulations and web standards may be necessary to offer better protection for users. This study serves as a stark reminder that privacy in the digital age is not just a matter of individual caution but also of political responsibility.

Open redirects are one of the oldest threats to web applications, allowing attackers to reroute users to malicious websites by exploiting a web application’s redirection mechanism. The recent shift towards client-side task offloading has introduced JavaScript-based redirections, formerly handled server-side, thereby adding new attack surface for open redirects. In this paper, we re-assess the significance of open redirect vulnerabilities by focusing on client-side redirections, which, despite their importance, have been largely understudied by the community due to open redirect’s long-standing low impact. To address this gap, we introduce a static-dynamic system, STORK, designed to extract vulnerability indicators for open redirects. Applying STORK to the Tranco top 10K sites, we conduct a large-scale measurement, uncovering 20.8K open redirect vulnerabilities across 623 sites and compiling a catalog of 184 vulnerability indicators. Afterwards, we use our indicators to mine vulnerabilities from snapshots of live webpages, Google Search, and the Internet Archive, identifying an additional 326 vulnerable sites, including Google WebLight and DoubleClick. Then, we explore the extent to which their exploitation can lead to more critical threats, quantifying the impact of client-side open redirections in the wild. Our study finds that over 11.5% of the open redirect vulnerabilities across 38% of the affected sites could be escalated to XSS, CSRF, and information leakage, affecting popular sites like Adobe, WebNovel, TP-Link, and UDN. Finally, we review and evaluate the adoption of mitigation techniques against open redirections.
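
For readers unfamiliar with the vulnerability class, the TypeScript sketch below shows what a client-side open redirect typically looks like (the `next` parameter name is an illustrative assumption): the page forwards the user to whatever URL an attacker placed in the query string, and a `javascript:` target can escalate the redirect to XSS. A simple same-origin check is included for contrast; it is one possible mitigation, not the paper's.

```typescript
// Illustrative client-side open redirect (parameter name is made up).
// A link like https://example.com/login?next=https://evil.example
// sends the victim wherever the attacker chooses after "login";
// next=javascript:... can even escalate the flaw to XSS.
const next = new URLSearchParams(window.location.search).get("next");
if (next) {
  window.location.href = next; // vulnerable: target is never validated
}

// One possible hardening: only follow targets that resolve to the same origin.
function safeRedirect(target: string | null): void {
  if (!target) return;
  const resolved = new URL(target, window.location.origin);
  // Absolute URLs to other sites and javascript: URLs fail this check.
  if (resolved.origin === window.location.origin) {
    window.location.href = resolved.href;
  }
}
```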

Many mobile apps require permissions to access personal data or device functions. To help users understand why these permissions are needed, app developers can provide rationales—explanations given to users when apps request permissions. How these rationales are presented significantly impacts user decisions. This study examines the phrasing and design of permission rationales. The researchers analyzed 720 text samples and 428 screenshots from top Google Play apps to understand how developers phrase and design these rationales. They then conducted a user study with 960 participants to measure how different rationales influence users’ willingness to grant permissions, their satisfaction with their choices, and their sense of control.
 
The study found that certain phrasing choices make users more likely to approve permissions. Specifically, users feel more informed and satisfied when rationales clearly explain why a permission is needed. Negative phrasing, such as “Without this permission, you cannot use this feature,” increases satisfaction more than positive framing. Adding reassurance, like stating that no personal data is collected, increases trust and the likelihood of approval. Informing users that they can change their decision later also fosters a sense of control. However, the study also revealed that many users trust the mere mention of a privacy policy, even without verifying its content.

Permission requests are a critical gateway between user privacy and app functionality. Poorly designed rationales can lead to uninformed decisions, while manipulative wording could push users into granting permissions they might otherwise refuse. This research highlights the responsibility of app developers to craft transparent, user-friendly explanations while emphasizing the need for stronger app store policies to ensure these assurances are truthful. As mobile privacy concerns grow, designing effective and ethical permission requests will be crucial in balancing security, usability, and consumer trust.

WordPress is the most widely used Content Management System (CMS) on the internet, powering millions of websites. However, many site owners fail to keep their CMS up to date, leaving vulnerabilities that can be exploited by cybercriminals. Despite efforts to notify website owners about these risks, outdated installations remain widespread. This study explores why website owners neglect updates, identifying key reasons beyond simple lack of awareness.
 
Through in-depth interviews with website owners and website professionals from the industry, the researchers uncovered new factors influencing non-update behavior. While some owners hesitate due to fear of technical failures or lack of time and money, two major insights stand out: First, many website owners assign low personal value to their sites, reducing their motivation to update. Second, website maintenance is often delegated to third parties, leading to misunderstandings about responsibility—resulting in neither party taking action. Additionally, owners tend to underestimate the broader risks of an outdated site, seeing vulnerabilities as personal risks rather than threats to the wider web ecosystem.
 
The societal impact is significant. Unpatched websites are not just a problem for their owners; they can be hijacked for phishing, malware distribution, and botnet operations, endangering internet users globally. The study suggests that traditional vulnerability notifications might be ineffective for owners who do not value their websites. Instead, alternative solutions are needed, such as better risk communication and more secure, low-maintenance web hosting solutions. Regulators, industry leaders, and policymakers should consider strategies to enforce security standards while supporting small website owners in keeping their systems secure.

Video generation models have advanced to the point where they can create coherent, high-quality videos on a wide range of themes. However, with these improvements comes a growing concern that such models may also produce unsafe content, including violent, sexual, or otherwise disturbing imagery. The authors examined this issue by collecting prompts from online communities and generating a large dataset of videos. Through careful analysis and human evaluation, nearly 1,000 videos were consistently identified as unsafe and classified into categories such as distorted, terrifying, explicit, violent, and politically charged. Recognizing the challenge of detecting unsafe content in videos—given their complex spatial and temporal information—the authors developed a new defense approach called Latent Variable Defense (LVD). Unlike traditional methods that only assess the final video output or require extensive modifications to the model, LVD monitors intermediate stages of the video generation process. By analyzing the evolving content within the model’s diffusion process, LVD can detect potentially unsafe outputs early, thereby reducing computational time by up to ten times while maintaining a detection accuracy of around 92%. Tests on three state-of-the-art video generation models showed that this method can effectively prevent the generation of unsafe content without interfering with the overall creative process. This research offers significant benefits for society by enhancing the safety and trustworthiness of emerging video generation technologies. It supports responsible innovation in digital media, helps protect audiences from potentially harmful content, and contributes to the development of stronger safeguards in the field of artificial intelligence.

Hardware accelerators are widely used to improve performance and efficiency for various computing tasks in different environments, from consumer devices to supercomputers. These specialized components can run specific algorithms more effectively than general-purpose CPUs, but outsourcing tasks to hardware introduces security challenges. Limited visibility into the operations of hardware accelerators, along with their multi-layer technology stack, makes flaw detection difficult. Although fuzzing has been effective in testing software, applying this technique to hardware acceleration has remained underexplored, mainly due to the complexity of hardware stacks. To address this gap, this study introduces a novel differential testing approach and a prototype called TWINFUZZ to detect vulnerabilities within hardware-accelerated video decoding stacks. The method leverages dynamic testing to identify discrepancies between software and hardware decoding processes—providing a way to pinpoint faults even without introspecting the hardware layers directly. This approach was tested on different hardware platforms, revealing five security-relevant vulnerabilities such as buffer overflows and wild pointer dereferencing. The significance of this research lies in its new method for identifying security and functional flaws in hardware-accelerated systems using a proxy approach for indirect fuzz testing. By uncovering new vulnerabilities in widely used software decoders, this work emphasizes the need for continued attention to security in hardware acceleration. The findings underscore the societal stakes of cybersecurity risks in performance-enhancing technologies, as unmonitored or poorly secured hardware components could undermine the trustworthiness of crucial systems. Through the disclosure of findings to vendors, this research not only contributes to improving system integrity but also promotes ongoing collaboration to safeguard the future of secure hardware development.
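
To illustrate the differential idea in isolation, the TypeScript sketch below (not the TWINFUZZ implementation; it assumes a machine with ffmpeg and a working hardware-acceleration backend such as VA-API) decodes the same input once in software and once through the hardware path and compares per-frame hashes. A crash or mismatch in only one path marks the input as worth investigating.

```typescript
// diff-decode.ts
// Sketch of the differential-testing idea: decode the same input with and
// without hardware acceleration and compare per-frame hashes.
// Assumes ffmpeg is installed and the chosen -hwaccel backend is available.

import { execFileSync } from "node:child_process";

// Returns per-frame MD5 hashes, or null if the decoder errors out.
function frameHashes(extraArgs: string[], input: string): string[] | null {
  try {
    const out = execFileSync(
      "ffmpeg",
      [...extraArgs, "-i", input, "-f", "framemd5", "-"],
      { encoding: "utf8", stdio: ["ignore", "pipe", "ignore"] },
    );
    // framemd5 output: comment lines start with '#', data lines end with the hash.
    return out
      .split("\n")
      .filter((l) => l && !l.startsWith("#"))
      .map((l) => l.trim().split(/\s+/).pop() ?? "");
  } catch {
    return null; // a decode failure in only one path is itself a signal
  }
}

const input = "testcase.mp4"; // e.g. a fuzzer-generated input (placeholder name)
const software = frameHashes([], input);
const hardware = frameHashes(["-hwaccel", "vaapi"], input);

const mismatch =
  (software === null) !== (hardware === null) ||
  (software !== null && hardware !== null &&
    (software.length !== hardware.length ||
     software.some((h, i) => h !== hardware[i])));

if (mismatch) {
  console.log(`divergence on ${input}: software and hardware decoding disagree`);
}
```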

This paper shows that the very data that powers machine learning models may also be their Achilles’ heel. The study finds that the data samples which most enhance a model’s performance—its high importance data—are surprisingly more vulnerable to machine learning attacks such as membership inference, model stealing, and backdoor attacks. This means that while high-quality data is essential for creating effective artificial intelligence, it also presents new security challenges. In sensitive fields like medical diagnostics, for example, patient records with rare yet critical information could be more easily exploited, leading to privacy breaches, discrimination, or unfair insurance practices. Moreover, the research reveals that the “privacy onion effect” also holds for the distribution of sample importance: removing highly important data unexpectedly elevates the significance of previously overlooked samples, further complicating defense strategies and opening a new attack surface for more advanced attacks. By demonstrating that not all data is equally secure, the study calls for innovative measures that balance technological progress with robust privacy protection. Ultimately, these findings have profound societal implications: as our reliance on digital systems grows, safeguarding data, particularly the most valuable, is key to maintaining public trust and ensuring fairness in sectors ranging from healthcare to finance. Institutions like CISPA are crucial in guiding policymakers and industry leaders toward implementing these essential security measures.

Deno is a new JavaScript runtime that aims to provide a more secure alternative to Node.js, the widely used platform for running JavaScript outside of web browsers. Unlike Node.js, which has faced many security issues, Deno was designed with a permission system that requires developers to explicitly grant access to sensitive features such as the file system, network connections, and environment variables. This study evaluates whether Deno truly delivers on its promise of increased security and finds that while Deno has a smaller attack surface than Node.js, it still has significant vulnerabilities.
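
A small example makes the permission model tangible. In the TypeScript sketch below (file and path names are illustrative assumptions), the same file read that Node.js would perform silently is refused by Deno unless the script is launched with an explicit, scoped grant.

```typescript
// read_config.ts (file and path names are illustrative)
//
//   deno run read_config.ts                          -> access denied or prompted
//   deno run --allow-read=config.json read_config.ts -> read succeeds
//
// Node.js would perform the same read without asking; Deno requires the
// capability to be granted explicitly, and the grant can be scoped to a path.
try {
  const config = await Deno.readTextFile("config.json");
  console.log(`loaded ${config.length} bytes of configuration`);
} catch (err) {
  // Raised (or surfaced after an interactive prompt is declined) when the
  // script was started without read permission for this path.
  console.error("file-system access was not granted:", String(err));
}
```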
 
The research highlights three major concerns. First, some permissions in Deno are too broad, allowing attackers to exploit them for unauthorized access. Second, Deno’s method of importing third-party code via URLs introduces risks such as outdated dependencies and security flaws due to domain takeovers. Third, the permission system does not fully protect against supply chain attacks, where malicious code is injected into widely used software components. The study led to two security advisories for Deno and prompted changes to improve its security model.
 
Many online services depend on JavaScript runtimes like Deno and Node.js, meaning that security flaws can affect millions of users and organizations. Supply chain attacks, for example, have been used to distribute malware through trusted software. This study underscores the need for better security mechanisms, such as more fine-grained permissions and improved dependency management. Policymakers and industry leaders should prioritize security standards for software ecosystems to reduce the risks associated with open-source development and ensure the resilience of digital infrastructure.

Modern web applications are increasingly complex, making them difficult to fully test for security vulnerabilities. Traditional web scanners, which automate vulnerability detection, struggle to navigate deep application states due to their limited understanding of workflows. This paper introduces YuraScanner, a new AI-driven web application scanner that leverages Large Language Models (LLMs) to autonomously execute workflows and uncover deeper security vulnerabilities.
 
 Unlike conventional scanners that rely on predefined navigation patterns, YuraScanner interprets webpage structures and predicts the correct sequence of user actions, enabling it to navigate web applications more effectively. The scanner was tested on 20 real-world web applications and significantly outperformed existing tools in discovering vulnerabilities. While a conventional scanner identified only three zero-day vulnerabilities, YuraScanner uncovered 12 zero-day cross-site scripting (XSS) vulnerabilities, demonstrating its ability to detect security flaws that would otherwise remain hidden.
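
YuraScanner's internal prompting and action interface are not described here, so the following TypeScript sketch is only a hypothetical illustration of the general pattern: condense the current page, ask a language model for the next step toward a stated task, apply that step in a headless browser, and repeat. The names `Action`, `askModel`, `executeWorkflow`, and the step bound are all stand-ins.

```typescript
// Hypothetical sketch of an LLM-guided workflow loop; every identifier here
// is a stand-in, not YuraScanner's actual interface.

interface Action {
  kind: "click" | "fill" | "submit";
  selector: string;   // CSS selector of the target element
  value?: string;     // text to type for "fill" actions
}

// Stub: a real system would send the task description plus a condensed view
// of the current page (forms, buttons, links) to an LLM and parse its reply.
async function askModel(task: string, pageOutline: string): Promise<Action | null> {
  void task;
  void pageOutline;
  return null; // placeholder: null means "workflow finished"
}

// Drive a workflow such as "add a product to the cart and check out":
// observe the page, ask the model for the next step, apply it, repeat.
export async function executeWorkflow(
  task: string,
  getOutline: () => Promise<string>,          // e.g. extracted via a headless browser
  apply: (action: Action) => Promise<void>,   // e.g. performed via the same browser
): Promise<void> {
  for (let step = 0; step < 20; step++) {     // bound the episode length
    const action = await askModel(task, await getOutline());
    if (!action) break;
    await apply(action);
  }
}
```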
 
 The societal impact of this research is substantial. As web applications handle sensitive data, security vulnerabilities can lead to data breaches, financial fraud, and identity theft. By improving vulnerability detection, YuraScanner enhances cybersecurity for businesses, governments, and users. However, such powerful tools must be responsibly managed to prevent misuse, such as automated fake account creation and scraping. This research highlights the need for ethical guidelines for AI-driven security tools while advocating for their adoption to strengthen the resilience of digital infrastructure.