In this paper, we present and extensively evaluate a set of novel, generic obfuscation techniques that, in combination, succeed in thwarting automated deobfuscation attacks. We propose four obfuscation techniques, including a novel, generic approach to synthesize and formally verify MBAs of arbitrary complexity and a new approach to increase an obfuscation's semantic complexity, based on an investigation of the limits of program synthesis. In conclusion, we show that comprehensive and effective intellectual property protection can be achieved without excessive overhead.
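For background (this is a textbook mixed Boolean-arithmetic identity, not the paper's synthesis or verification procedure): an MBA rewrite replaces a simple operation with a semantically equivalent mix of arithmetic and bitwise terms, such as x + y == (x ^ y) + 2*(x & y). A minimal sketch spot-checking that identity over random 32-bit operands:

```python
import random

def add_plain(x, y, mask=0xFFFFFFFF):
    # Straightforward 32-bit addition.
    return (x + y) & mask

def add_mba(x, y, mask=0xFFFFFFFF):
    # Classic MBA rewrite of addition: x + y == (x ^ y) + 2*(x & y).
    # The XOR captures the bitwise sum without carries; the AND term
    # reintroduces the carries, shifted left by one (the factor of 2).
    return ((x ^ y) + 2 * (x & y)) & mask

# Spot-check the identity on random 32-bit operands.
for _ in range(10_000):
    x, y = random.getrandbits(32), random.getrandbits(32)
    assert add_plain(x, y) == add_mba(x, y)
```

Such identities can be nested to arbitrary depth, which is what makes verifying synthesized MBAs (as the paper does formally) nontrivial.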
Moritz Schloegel, Tim Blazytko, Moritz Contag, Cornelius Aschermann, and Julius Basler, Ruhr-Universität Bochum; Thorsten Holz, CISPA Helmholtz Center for Information Security; Ali Abbasi, Ruhr-Universität Bochum
We propose a lightweight mitigation focused on a special variant of Load Value Injection, LVI-NULL in SGX. Our systematic analysis reveals that previously proposed mitigations are ineffective and, with up to 1220% overhead, also inefficient. Our novel mitigation repurposes segmentation, a fast legacy hardware mechanism that x86 already uses for every memory operation, leading to an effective mitigation of LVI-NULL with a worst-case overhead below 10%.
Lukas Giner, Andreas Kogler, and Claudio Canella, Graz University of Technology; Michael Schwarz, CISPA Helmholtz Center for Information Security; Daniel Gruss, Graz University of Technology
In this work, we present a novel method for synthesizing binary input structures with nested pointers that enables coverage-driven fuzz testing of SGX enclaves without access to source code. To obtain coverage feedback from otherwise non-introspectable enclaves, we present enclave extraction methods and an enclave runner for running enclaves in user space at native speed. We tested our prototype implementation, SGXFuzz, on 30 open- and closed-source enclaves and found a total of 79 new bugs and vulnerabilities.
Tobias Cloosters, University of Duisburg-Essen; Johannes Willbold, Ruhr-Universität Bochum; Thorsten Holz, CISPA Helmholtz Center for Information Security; Lucas Davi, University of Duisburg-Essen
In this paper, we present a first explorative study of eleven experts' and seven non-experts' mental models in the context of corporate VPNs. We found that experts have a deeper technical understanding of VPN technology, although they sometimes hold false beliefs about security aspects of VPNs. Our study lays important foundations for developing recommendations for secure use of VPN technology (through training interventions, better communication, and system design changes in terms of device management).
Veroniek Binkhorst, Technical University of Delft; Tobias Fiebig, Max-Planck-Institut für Informatik and Technical University of Delft; Katharina Krombholz, CISPA Helmholtz Center for Information Security; Wolter Pieters, Radboud University; Katsiaryna Labunets, Utrecht University
We investigated the model privacy exposure problem in the transfer learning paradigm. To this end, we proposed a teacher model fingerprinting attack, which can infer which teacher model has been adopted by the victim model. With extensive evaluations, we showed that our attack could still achieve a high inference accuracy even when the victim model only returned top-1 prediction labels. We also showed that our attack could help to facilitate further adversarial attacks like model stealing.
Yufei Chen, Xi'an Jiaotong University & City University of Hong Kong; Chao Shen, Xi'an Jiaotong University; Cong Wang, City University of Hong Kong; Yang Zhang, CISPA Helmholtz Center for Information Security
Catherine Easdon, Dynatrace Research and Graz University of Technology; Michael Schwarz, CISPA Helmholtz Center for Information Security; Martin Schwarzl and Daniel Gruss, Graz University of Technology
We systematically analyze existing CPU vulnerabilities, showing that CPUs suffer from vulnerabilities whose root causes match those in complex software. This structural approach led to our discovery of [NAME-UNDER-EMBARGO], a new architectural CPU bug which can leak data without using a side channel. [NAME-UNDER-EMBARGO] works on the latest CPUs from one of the major CPU vendors, and allows attacks against Trusted Execution Environments, leaking data in use, e.g., register values and memory loads, as well as data at rest, e.g., data pages.
Pietro Borrello, Sapienza University of Rome; Andreas Kogler and Martin Schwarzl, Graz University of Technology; Moritz Lipp, Amazon Web Services; Daniel Gruss, Graz University of Technology; Michael Schwarz, CISPA Helmholtz Center for Information Security
Protocol verification tools like ProVerif, Tamarin, and DeepSec have been highly successful, finding attacks on and proving the correctness of many protocols in our daily lives, like TLS (used when browsers communicate confidentially with websites), 5G (the newest mobile phone standard), or SSH (used to control remote machines). The problem is that they have different strengths and weaknesses, and it is hard (sometimes impossible) to know which is the right one for which job. With SAPIC+, we developed a language in which we can state a protocol once and then translate it for use with any or all of these tools – and we proved mathematically that this translation is correct.
Vincent Cheval, Inria Paris; Charlie Jacomme, CISPA Helmholtz Center for Information Security; Steve Kremer, Université de Lorraine LORIA & Inria Nancy; Robert Künnemann, CISPA Helmholtz Center for Information Security
In our work, we aim to understand the opportunities and challenges of different platforms for recruiting participants with development experience, in order to help other researchers choose a fitting recruitment platform for their security user studies. We analyzed 59 papers reporting security expert studies; identified common recruitment channels, recruitment requirements, and survey topics of interest; and then conducted a survey with 706 participants on six platforms. We report our findings regarding differences and similarities between platforms, and finally give recommendations for fellow researchers on where to recruit for security development studies.
Harjot Kaur, Leibniz University Hannover; Sabrina Amft, CISPA Helmholtz Center for Information Security; Daniel Votipka, Tufts University; Yasemin Acar, Max Planck Institute for Security and Privacy and George Washington University; Sascha Fahl, CISPA Helmholtz Center for Information Security and Leibniz University Hannover
Prior work showed that manipulating CPU voltage or frequency can fault instructions, breaking the confidentiality and integrity of trusted-execution environments, such as Intel SGX. We propose Minefield, the first software-level defense against such DVFS attacks, with a tunable security parameter between 0% and almost 100%, yielding a fine-grained security-performance trade-off. The idea of Minefield is not to prevent DVFS faults but to deflect faults to trap instructions and handle them before they lead to harmful behavior, effectively protecting SGX enclaves without slowing down the remaining system.
Andreas Kogler and Daniel Gruss, Graz University of Technology; Michael Schwarz, CISPA Helmholtz Center for Information Security
We integrated several inference attacks and defenses into a reusable, modular software framework called ML-Doctor, which offers a platform to holistically analyze the privacy risks that inference attacks pose to machine learning models. We found that the complexity of the training dataset plays an essential role in an attack's performance, while the effectiveness of model stealing and membership inference attacks is negatively correlated. We also showed that defenses such as DP-SGD and knowledge distillation can mitigate only some inference attacks.
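For context on one of the evaluated defenses (a generic textbook sketch, not ML-Doctor's implementation): DP-SGD clips each per-example gradient to an L2 norm bound, sums the clipped gradients, and adds Gaussian noise before averaging. The bound and noise multiplier below are illustrative values, not parameters from the paper:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm `clip_norm`, sum, add Gaussian noise, and average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Gaussian noise scaled to the clipping bound masks any single example.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, -0.4])]
noisy_mean = dp_sgd_step(grads)
```

The clipping bounds each example's influence on the update, which is what limits membership inference; the finding above is that this protection does not extend to all inference attacks.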
Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, and Michael Backes, CISPA Helmholtz Center for Information Security; Emiliano De Cristofaro, UCL and Alan Turing Institute; Mario Fritz and Yang Zhang, CISPA Helmholtz Center for Information Security
Graph embeddings are not safe! Zhikun Zhang, Min Chen, Yun Shen, Michael Backes, and Yang Zhang reveal, via three novel inference attacks, that the graph embeddings generated by graph neural networks can leak a great deal of private information about the original graphs.
Zhikun Zhang, Min Chen, and Michael Backes, CISPA Helmholtz Center for Information Security; Yun Shen, Norton Research Group; Yang Zhang, CISPA Helmholtz Center for Information Security