The IEEE Symposium on Security and Privacy is the premier forum for presenting developments in computer security and electronic privacy, and for bringing together researchers and practitioners in the field. The 46th IEEE Symposium on Security and Privacy will be held on May 12-15, 2025 at the Hyatt Regency San Francisco.
The researchers conducted 21 interviews with experienced developers of cryptographic libraries to understand how cryptographic API design decisions are made and what challenges are involved. They found that design choices are influenced by cryptographic standards, existing libraries, legacy code, and developers' personal intuition. Developers face major challenges in balancing security, usability, and flexibility. Often, there is no systematic approach to defining or evaluating usability; instead, developers rely on personal experience, user feedback, and informal testing. Although cryptographic standards provide guidance, they often leave critical design decisions open, requiring developers to make subjective choices. The researchers also observed that pressures to maintain backward compatibility can lock developers into outdated or less secure design patterns. There is a lack of empirical, practical research that directly supports developers in designing usable cryptographic APIs.
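The usability principles the interviewed developers wrestle with can be made concrete. Below is a minimal, hypothetical sketch (not taken from any library discussed in the study) of a message-authentication API designed around safe defaults: the key length and hash algorithm are fixed so callers cannot pick weak ones, and verification is constant-time so callers are not tempted to compare tags with `==`.

```python
import hashlib
import hmac
import secrets

def new_key() -> bytes:
    """Generate a MAC key with a safe default length (32 bytes)."""
    return secrets.token_bytes(32)

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA-256 tag; the algorithm is fixed by the API,
    so callers cannot accidentally select a weak one."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, candidate: bytes) -> bool:
    """Constant-time comparison, removing a common misuse (naive ==)."""
    return hmac.compare_digest(tag(key, message), candidate)
```

The design choice illustrated here mirrors the tension the interviewees describe: fixing the algorithm improves security and usability but sacrifices the flexibility that expert users sometimes demand.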
This study contributes to society by identifying concrete ways to improve the development of cryptographic libraries—an essential building block for secure software. By highlighting the need for better usability guidance and suggesting the integration of usability considerations into standardization efforts, the research points toward a future where cryptographic tools are both safer and easier for developers to use, ultimately leading to more secure digital systems for everyone.
This study investigates how digital security and privacy advice can be more effectively communicated to everyday users. Recognizing that people often feel overwhelmed by security advice, researchers designed a mobile app called the "Security App" that delivers one short, actionable task per day over a 30-day period. These tasks are based on expert-reviewed advice and aim to build secure habits through repetition and simplicity.
A controlled study with 74 participants showed that the app format was well received. Most users found the tasks understandable, manageable, and relevant. They also reported increased confidence and security awareness. Participants who used the app were significantly more likely to adopt secure behaviors—such as backing up data, updating software, or using two-factor authentication—compared to a control group. In some cases, these behaviors persisted even 30 days after the study ended.
However, not all advice was adopted equally. Tasks involving password managers, for example, were more often rejected or rated less helpful, suggesting that trust and usability issues still affect adoption of some security tools. Additionally, some participants found parts of the app too rigid or wished for more personalization.
Overall, the findings suggest that breaking down digital security advice into small, concrete steps and delivering them in a familiar habit-building format can help users develop lasting security habits. For society, this approach offers a practical way to strengthen everyday digital resilience without requiring technical expertise, thereby addressing one of the common barriers to better personal cybersecurity.
This research examines the iOS local network permission introduced in iOS 14, which aims to protect devices within a user’s local network from unauthorized app access. The study evaluates both the technical security of this permission and how effectively users understand and respond to it.
From a technical perspective, the analysis reveals several shortcomings. The permission can be bypassed through certain app components like webviews, and it does not cover all relevant network configurations—especially in more complex local networks or when devices are connected via VPN. This means apps may access networked devices without triggering user consent.
To assess how frequently apps use local network access, over 10,000 apps on both the iOS and Android platforms were analyzed. The results show that 1–1.4% of apps on each platform perform local network communications. On iOS, more apps deferred this access until after user interaction than on Android, possibly because of the visible permission prompt; Android currently lacks a comparable permission mechanism.
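One way such a measurement might decide whether an app's traffic counts as "local network" communication is to classify destination addresses; the paper's actual methodology is not detailed in this summary, so the heuristic below is only an illustrative sketch using Python's standard `ipaddress` module.

```python
import ipaddress

def is_local_destination(ip: str) -> bool:
    """Rough heuristic: treat private-range (RFC 1918), link-local,
    and multicast addresses (e.g., mDNS) as local-network traffic."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_link_local or addr.is_multicast

is_local_destination("192.168.1.20")  # private range -> True
is_local_destination("8.8.8.8")       # public address -> False
```

Note that such address-based heuristics share the paper's observed blind spots: VPNs and unusual network configurations can make "local" traffic look remote, or vice versa.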
The study also examined the permission prompts shown to users. These messages often include vague or misleading language, such as “your network,” and some incorrectly suggest that permission is needed for basic internet access. A user survey involving 150 iOS users showed that while many recognize potential privacy threats, misconceptions are widespread. For instance, many participants incorrectly believed the permission is necessary for Bluetooth or Internet usage.
This research underscores the need to improve both the technical enforcement of privacy controls and the clarity of user communication. A permission mechanism can only be effective if it reliably restricts access and enables users to make informed decisions. For society, this study highlights the importance of aligning privacy protections with users’ understanding—helping to ensure that control over digital environments is meaningful and not merely procedural.
This research introduces TokenWeaver, a novel protocol designed to enhance the security and privacy of trusted hardware environments, especially in the face of compromises. Trusted Execution Environments (TEEs), used in devices like smartphones or secure cloud services, help protect sensitive information. However, they can become targets for attackers, and once a TEE is compromised, it is very difficult to restore trust in its security.
The key idea behind TokenWeaver is to allow a compromised device to “heal” itself—regain its secure status—without sacrificing the user's privacy. Traditionally, these two goals have been hard to achieve together: detecting a breach often requires identifying the device, which compromises privacy.
TokenWeaver solves this dilemma by using two interconnected systems: a linkable chain, which allows the provider (e.g., Intel or Google) to detect and respond to attacks, and an unlinkable chain, which allows the user to interact with services anonymously. The innovation lies in how these two chains work together to detect compromises and restore security, all while ensuring that no one—including the provider—can track the user's activity across services.
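To make the linkable/unlinkable distinction concrete, here is a toy hash-chain illustration; it is not the actual TokenWeaver construction, only a sketch of why one chain can be followed by the provider while the other cannot: deterministic derivation makes consecutive tokens linkable, while mixing in fresh secret randomness breaks that link.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the parts."""
    return hashlib.sha256(b"".join(parts)).digest()

def next_linkable(token: bytes) -> bytes:
    """Deterministic step: anyone holding token i can recompute
    token i+1, so the provider can follow this chain."""
    return h(b"linkable", token)

def next_unlinkable(token: bytes) -> tuple[bytes, bytes]:
    """Randomized step: without the fresh nonce, consecutive tokens
    cannot be connected, so services see only unlinkable values."""
    nonce = secrets.token_bytes(16)
    return h(b"unlinkable", token, nonce), nonce
```

In TokenWeaver, the two chains are cryptographically tied together so that compromise detection on the linkable side can revoke tokens on the unlinkable side, without revealing which user's tokens they are.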
The protocol was not only designed but also formally verified, proving that it works as intended even under attack. It has been implemented in a working prototype, showing that it is practical in terms of speed and storage.
For society, this work contributes to making digital systems more resilient and privacy-friendly. It provides a blueprint for how devices can regain trust after being hacked, without exposing users to surveillance. This is especially valuable as more critical services—from banking to healthcare—rely on secure digital interactions.
This study explores how researchers in the field of usable privacy and security (UPS) perceive and practice transparency in their work. Transparency, in this context, refers to sharing all relevant research details—such as methods, data, and materials—so that others can understand, evaluate, and attempt to replicate the research.
The authors conducted in-depth interviews with 24 UPS researchers from different backgrounds and levels of experience. The results reveal that while researchers generally value transparency and consider it a hallmark of good scientific practice, there are significant barriers to its consistent implementation. These barriers include a lack of clear, formal guidelines, time and resource constraints, and concerns about peer review vulnerability—some researchers fear that being overly transparent may expose flaws and lead to harsher critique.
The study found that many transparency efforts currently rely on individual motivation and unwritten community norms, rather than standardized expectations. Researchers do employ various transparency practices—like sharing study instruments or code—but they often do so without external incentives or formal support structures. Ethical concerns, especially around publishing data involving human participants, further complicate matters.
Participants called for better support, clearer guidance, and incentives to encourage transparent reporting. Suggestions included formal transparency guidelines, concrete incentives, and improved review processes that recognize and fairly evaluate transparent research. There was also support for adapting artifact evaluation processes to better fit the nature of UPS research.
In a broader societal context, this research highlights how transparency practices can strengthen the credibility and reproducibility of scientific knowledge in privacy and security. Encouraging more transparent research helps ensure that findings are trustworthy and accessible, fostering progress in both academic and applied domains. However, achieving this requires thoughtful adjustments to community norms, publication processes, and institutional support.
In this research, the authors tackle a practical problem faced by many blockchain-based services that operate using a "threshold" model: how to fairly and securely pay only the subset of servers that contribute to completing a requested task. While blockchains like Ethereum allow this through smart contracts, Bitcoin and similar systems pose a challenge due to their simpler scripting capabilities.
To address this, the researchers introduce VITĀRIT, a novel protocol that enables secure, fair payments for threshold services directly on Bitcoin. These services might include, for instance, generating random numbers using verifiable random functions (VRFs), where at least t+1 out of n servers must respond to fulfill the request. The key goal is to ensure that only the contributing servers get paid—and only once—without any central authority or smart contract.
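The payment condition itself, pay exactly t+1 contributing servers and each at most once, can be sketched in plain logic. This toy version stands in for, but does not implement, the paper's cryptographic machinery: the `verify_partial` callback is a placeholder for the actual VRF share verification.

```python
def settle(responses: dict[str, bytes], t: int,
           verify_partial=lambda server, share: True) -> set[str]:
    """Toy settlement rule: once t+1 valid partial results arrive,
    mark exactly t+1 contributors as payable. Returning a set makes
    double payment to the same server impossible by construction."""
    valid = {s for s, share in responses.items() if verify_partial(s, share)}
    if len(valid) < t + 1:
        return set()  # not enough shares: the request fails, nobody is paid
    # Deterministically select t+1 contributors (sorted for reproducibility).
    return set(sorted(valid)[: t + 1])
```

In VITĀRIT this "only contributors, only once" guarantee is enforced on-chain by standard Bitcoin scripts rather than by trusted settlement code like the sketch above.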
The core innovation is a lightweight transaction mechanism using only standard Bitcoin scripts. The protocol introduces new cryptographic tools, such as verifiable non-committing encryption and adaptor signatures, to ensure that clients and servers can securely exchange partial results and payments. VITĀRIT prevents dishonest behavior, such as a server trying to claim payment multiple times or without doing the work.
A prototype of VITĀRIT shows that it is efficient and deployable, with performance results demonstrating low processing times for both clients and servers. Importantly, compared to traditional smart contract methods, VITĀRIT significantly reduces computational and monetary costs (e.g., Ethereum gas usage).
From a societal perspective, this work makes decentralized services more accessible and secure on a wider range of blockchain platforms, particularly those, like Bitcoin, that lack advanced scripting. It promotes privacy, efficiency, and trust without requiring central authorities—principles that underpin the broader vision of decentralized digital infrastructures.