The relevance of CISPA's research activities is reflected in a large number of externally funded projects in which our scientists explore specific questions of information security. CISPA operates in a network of excellent national and international partners, which forms the basis for interdisciplinary innovation projects. Our global collaborations also focus on the mobility of researchers at all career levels: during their guest stays, visiting researchers are actively and directly involved in current projects.
Part of CISPA's research is covered by third-party funding. Our researchers acquire these funds in competitive procedures, alone or in cooperation with other applicants.
Along with releases of new web standards in browsers (WebAssembly, WebGPU, WebUSB, etc.), more and more features of connected devices are directly usable from the web. While these specifications hold great promise from a performance perspective, they raise significant security concerns. In this project, we aim to analyze the security implications of new features that provide direct or indirect access to low-level hardware features. Building on our previous research, we will (1) investigate the impact of directly mounting native side-channel attacks from the web, (2) develop new methods to efficiently port attacks to browsers to facilitate a faster risk assessment for novel attacks, (3) explore how side-channel attacks can leak secrets or enable user tracking via hardware fingerprints, and (4) lay the foundations for secure low-level web standards by exploring the effectiveness of existing and novel countermeasures (e.g., sandboxing) through the lens of hardware/software contracts.
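As a toy illustration of objective (2), risk assessment for side channels often asks whether two sets of timing measurements are statistically distinguishable. The sketch below (with simulated timings, not an actual browser attack) applies Welch's t-test, a statistic commonly used in leakage assessment; all numbers are illustrative.

```python
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t-statistic between two timing sample sets."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

random.seed(0)
# Simulated access times: the "secret-dependent" accesses are slightly slower.
fast = [random.gauss(100, 5) for _ in range(2000)]
slow = [random.gauss(103, 5) for _ in range(2000)]

t = welch_t(slow, fast)
# |t| > 4.5 is the conventional threshold for "leakage suspected".
print(f"t = {t:.1f}, leakage suspected: {abs(t) > 4.5}")
```

A large |t| means an observer can tell the two timing distributions apart, i.e., the timing channel carries information about the secret.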
Management
Duration
01.09.2022 – 31.08.2025
Members
Funding Code
RO 5251/1-1; SCHW 2104/2-1
ILLUMINATION develops a toolbox of technical privacy methods and interdisciplinary recommendations for the privacy-preserving use of centralized LLMs in the healthcare sector. The technical methods enable LLM operators to implement appropriate data protection for their users in the sense of privacy by design. The recommendations, which draw on technical, legal, human-centered, and application-specific perspectives, help strengthen responsible and privacy-compliant LLM practices, support navigation of the complex landscape of LLM deployment, and lay the groundwork for legally compliant privacy methods in LLM-based applications.
While there is a long-standing tradition of training various machine learning models for different application tasks on visual data, only the most recent advances in the domain of foundation models have managed to unify those endeavors into highly powerful multi-purpose deep learning models. These models, such as DINOv2 or SAM, are pre-trained on large amounts of public data, which turns them into efficient feature extractors. Using only small amounts of sensitive downstream data and reduced compute resources compared to training from scratch, these feature extractors can then be adapted (through full or partial fine-tuning, transfer learning, or prompting approaches) to solve a wide range of downstream applications. Our goal is to bring foundation models and the new and powerful learning paradigms for their adaptation to the complex and sensitive medical domain, with a focus on CT and MRI data.
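A toy-scale sketch of the adaptation paradigm: a random feature matrix stands in for embeddings produced by a frozen pre-trained encoder, and only a linear head is fitted on a small labeled dataset ("linear probing"). All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen foundation-model features; in practice these would come
# from a pre-trained encoder (e.g., a DINOv2 or SAM backbone) run on the images.
n, d = 200, 32
labels = rng.integers(0, 2, n)
features = rng.normal(size=(n, d)) + labels[:, None] * 0.8  # class-dependent shift

# Linear probing: the encoder stays frozen, only a linear head is trained.
X = np.hstack([features, np.ones((n, 1))])            # add a bias column
w = np.linalg.lstsq(X, labels.astype(float), rcond=None)[0]
acc = ((X @ w > 0.5) == labels).mean()
print(f"probe accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: almost all capacity lives in the frozen extractor, so the sensitive downstream data only has to fit a tiny head.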
Management
Founded
2024
Duration
01.01.2024-31.12.2027
Members
Funding Code
ZT-I-PF-5-227
The project aims to give software developers the ability to quickly gather information about code snippets they reuse in their codebase. This includes notifications about changes to the sources of the code, warnings about security issues and bugs, and summaries of discussions about such code snippets. The Field Study Fellowship aims to adapt the software to developers' needs and to improve the effectiveness of code reuse.
The Aletheia project aims at innovative technical and interactive detection of deepfakes in images, videos, and audio.
The purpose of such detection is to combat disinformation and preserve authenticity. Based on machine learning, anomalies are found that do not occur in authentic content. The results are prepared for the user in forensic detail. So that the fake-or-genuine decision is comprehensible to the user, the result is made understandable by means of explainable artificial intelligence. Innovative interpretation models are used that highlight anomalies and irregularities in order to enable a novel forensic analysis by the end user. The outcome is a precise, detailed, and interpretable result that explains why a given piece of content was classified as a deepfake. This creates a user-centered environment of trust and transparency.
A further focus is on multimodal and scalable analysis: videos are first analyzed separately with respect to audio and image, and then for their coherence. The motivation behind the StartUp Secure funding program is the rapid deployment of market-relevant solutions. The overall goal of this project is therefore the development of a technology demonstrator, so that the knowledge gained in research and development can be offered on the market. The project is ultimately intended to lead to the founding of a company that offers the technologies developed here.
Communication efficiency is one of the central challenges for cryptography. Modern distributed computing techniques work on large quantities of data, and critically depend on keeping the amount of information exchanged between parties as low as possible. However, classical cryptographic protocols for secure distributed computation cause a prohibitive blow-up of communication in this setting. Laconic cryptography is an emerging paradigm in cryptography aiming to realize protocols for complex tasks with a minimal amount of interaction and a sub-linear overall communication complexity. If we manage to construct truly efficient laconic protocols, we could add a cryptographic layer of protection to modern data-driven techniques in computing. My initial results in laconic cryptography did not just demonstrate the potential of this paradigm, but proved to be a game-changer in solving several long-standing open problems in cryptography, e.g., enabling me to construct identity-based encryption from weak assumptions. However, the field faces two major challenges: (a) Current constructions employ techniques that are inherently inefficient. (b) The most advanced notions in laconic cryptography are only known from very specific combinations of assumptions, and are therefore just one cryptanalytic breakthrough away from becoming void. This project will make a leap forward in both challenges. I will systematically address these challenges in a work program which pursues the following objectives: (i) Develop new tools and mechanisms to realize crucial cryptographic primitives in a compact way. (ii) Design efficient protocols for advanced laconic functionalities which sidestep the need for inherently inefficient low-level techniques and widen the foundation of underlying assumptions. (iii) Strengthen the conceptual bridge between laconic cryptography and cryptographically secure obfuscation, transferring new techniques and ideas between these domains.
Management
Founded
2022
Duration
01.07.2022-30.06.2027
Funding Code
HORIZON-ERC (ERC-2021-StG)
Research Area
Cyber-physical systems (CPS), which use digital networks to couple the physical environment and numerous embedded computer systems into a tightly integrated whole, are the key technology behind the growing number of smart environments. Most of these applications are highly safety-critical, since malfunctions of the cyber-physical systems pose immediate risks to life, limb, the environment, or property. Continuous runtime monitoring of system functions by suitable monitor processes is an important element for ensuring reliable, predictable, and safe system behavior. The requirements on the monitor processes observing a CPS are extremely high: missed detection of exceptional situations induces the aforementioned hazards, while spurious alarms can massively degrade system performance. By bridging formal methods from computer science and error models from metrology, the PreCePT project provides essential fundamental research towards correspondingly reliable, provably correct monitor processes: it automatically synthesizes runtime monitors from formal specifications, taking into account the measurement inaccuracies and partial observability that are unavoidable when sensing the environment. The generated monitoring algorithms combine maximal exactness with hard real-time guarantees and, owing to their rigorous derivation from formal semantic models of CPS and the use of advanced arithmetic constraint-solving techniques, are provably optimal in this respect.
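The tension between missed detections and spurious alarms under measurement uncertainty can be illustrated with a toy three-valued monitor (an illustrative simplification, not one of PreCePT's synthesized monitors): given a sensor reading with a known error bound, the monitor only reports a definite verdict when the whole uncertainty interval agrees.

```python
def monitor(reading, threshold, err):
    """Check 'value <= threshold' for a sensor reading with error bound err.

    The true value lies in [reading - err, reading + err]; the verdict is
    three-valued so that measurement uncertainty is never silently ignored.
    """
    if reading + err <= threshold:
        return "safe"        # even the worst case satisfies the bound
    if reading - err > threshold:
        return "violation"   # even the best case exceeds the bound
    return "unknown"         # the interval straddles the threshold

print(monitor(97.0, 100.0, 2.0))   # safe
print(monitor(103.5, 100.0, 2.0))  # violation
print(monitor(99.5, 100.0, 2.0))   # unknown
```

Reporting "unknown" instead of guessing is what avoids both failure modes named above: no hazard is hidden, and no spurious alarm is raised.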
Management
Founded
2023
Duration
01.08.2024 - 31.07.2026
Funding Code
FI 936/7-1
Research Area
The quest for a science of perspicuous computing continues. With the results that were achieved in the first funding period, we are spearheading a larger movement towards building and reasoning about software-based systems that are understandable and predictable. As a result, CPEC is gradually enlarging its scope in its second funding period. This pertains to three interrelated facets of our research:
· Deepening investigation of various feedback loops within the system lifecycle which are required to feed system analysis insights – in particular, insights from inspection-time justification – back into the design-time engineering of perspicuous systems.
· Emphasising human-centred and psychological research regarding the human-in-the-loop, reflecting the need to investigate the interaction of perspicuous systems with various groups of human stakeholders.
· Interfacing to the societal dimension of perspicuity – society-in-the-loop – echoing the increasing number of regulatory requirements regarding perspicuity put forward in recent years.
CPEC joins the pertinent forces at Saarbrücken and Dresden that are apt to master the challenge of putting perspicuous computing research into the loop. It comprises computer science experts and links to psychology and juridical expertise at Universität des Saarlandes and Technische Universität Dresden as well as the Max Planck Institute for Software Systems and the CISPA Helmholtz Center for Information Security. The participating institutions have developed a joint research agenda to deepen the transregional network of experts in perspicuous systems. It will serve our society in its need to stay in well-informed control over the computerised systems we all interact with. It enables comprehension and control in a cyber-physical world.
Source: https://www.perspicuous-computing.science/research/ (03.05.2023)
Management
Duration
01.01.2023 – 31.12.2026
Funding Code
TRR248/2
Research Area
The central role of information technology in all aspects of our private and professional lives has led to a fundamental change in the type of program properties we care about. Up to now, the focus has been on functional correctness; in the future, requirements that reflect our societal values, like privacy, fairness, and explainability, will be far more important. These properties belong to the class of hyperproperties, which represent sets of sets of execution traces and can therefore specify the relationship between different computations of a reactive system. Previous work has focussed on individual hyperproperties like noninterference or restricted classes such as k-hypersafety; this project sets out to develop a unified theory for general hyperproperties. We will develop a formal specification language and effective algorithms for logical reasoning, verification, and program synthesis. The central idea is to use the type and alternation structure of the logical quantifiers, ranging from classic first-order and second-order quantification to quantifiers over rich data domains and quantitative operators for statistical analysis, as the fundamental structure that partitions the broad concept of hyperproperties into specific property classes; each particular class is then supported by algorithms that provide a uniform solution for all the properties within the class. The project will bring the analysis of hyperproperties to the level of traditional notions of safety and reliability, and provide a rigorous foundation for the debate about standards for privacy, fairness, and explainability that future software-based systems will be measured against.
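For illustration, noninterference, the classic example of a hyperproperty, can be written with explicit quantification over traces π and π′ in the style of HyperLTL:

```latex
\forall \pi.\, \forall \pi'.\;
  \mathbf{G}\,\bigl(i^{\mathit{low}}_{\pi} = i^{\mathit{low}}_{\pi'}\bigr)
  \;\rightarrow\;
  \mathbf{G}\,\bigl(o^{\mathit{low}}_{\pi} = o^{\mathit{low}}_{\pi'}\bigr)
```

Whenever two executions always agree on their low-security inputs, they must always agree on their low-security outputs. The ∀∀ quantifier prefix is exactly the kind of alternation structure the project uses to partition hyperproperties into classes; an ordinary trace property, by contrast, can never relate two executions to each other.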
Management
Duration
01.11.2022-31.10.2027
Funding Code
HORIZON-ERC (ERC-2021-ADG)
Research Area
This project will address major challenges hampering the deployment of AI technology. These grand challenges are fundamental in nature. Addressing them in a sustainable manner requires a lighthouse rooted in scientific excellence and rigorous methods. We will develop a strategic research agenda supported by research programmes that focus on “technical robustness and safety”, “privacy-preserving techniques and infrastructures”, and “human agency and oversight”. Furthermore, we focus our efforts to detect, prevent, and mitigate threats and to enable recovery from harm through three grand challenges: “Robustness guarantees and certification”, “Private and robust collaborative learning at scale”, and “Human-in-the-loop decision making: integrated governance to ensure meaningful oversight”, which cut across six use cases: health, autonomous driving, robotics, cybersecurity, multimedia, and document intelligence. Throughout the project, we seek to integrate robust technical approaches with legal and ethical principles, supported by meaningful and effective governance architectures, to nurture and sustain the development and deployment of AI technology that serves and promotes foundational European values. Our initiative builds on and expands the internationally recognized, highly successful and fully operational network of excellence ELLIS. We build on its three pillars: research programmes, a set of research units, and a PhD/PostDoc programme, thereby connecting a network of over 100 organizations and more than 337 ELLIS Fellows and Scholars (113 ERC grants) committed to shared standards of excellence. Not only will we establish a virtual center of excellence; all our activities will also be inclusive and open to input, interaction, and collaboration with AI researchers and industrial partners in order to drive the entire field forward.
Management
Founded
2022
Duration
01.09.2022-31.08.2025
Funding Code
HORIZON-CL4-2021-HUMAN-01-03
Ever since the last coronavirus epidemic caused by SARS-CoV-1, plans and tools for the containment of epidemics have been developed. However, an appropriate early warning system that addresses this need for local health authorities at a regional, targeted level is not available. In the current SARS-CoV-2 pandemic, the need for such a system has become increasingly obvious. The heterogeneity of different regions and localized outbreaks requires locally adapted monitoring and evaluation of infection dynamics.
Early recognition of an emerging epidemic is a crucial part of a successful intervention. The comparison of Germany with other European nations illustrates how decisive the timely implementation of non-pharmaceutical interventions is for the containment of an epidemic. Hence, continuous monitoring of infection processes is indispensable. For the strategic planning of political interventions, epidemiological modelling and scenario calculations have proven important for forecasting and for the evaluation of interventions and scenarios. The accuracy of such forecasts largely depends on the robustness and breadth of the underlying data. Further, there is a need for an intelligible presentation of often complex simulation results without oversimplifying their interpretation and inherent uncertainty.
In this proposal, we develop a platform that integrates data streams from various sources in a privacy-preserving manner. For their analysis, a variety of methods from machine learning to epidemiological modeling are employed to detect local outbreaks early on and to enable an evaluation under different assumptions and on different scales. These models will be integrated into automated workflows and presented in an interactive web application with custom scenario simulations. The platform will be based on insights gained by retrospective and prospective evaluation of the COVID-19 pandemic, using SARS-CoV-2 as a blueprint for the prevention and containment of future respiratory virus epidemics. The platform will be transferred to the Academy for Public Health Services and optimized in pilot projects with selected local health authorities under real-world conditions.
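How scenario calculations compare interventions can be sketched with a minimal SIR compartment model (an illustrative textbook model with made-up parameters, not the platform's actual models):

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR model (s, i, r are population fractions)."""
    new_inf = beta * s * i * dt      # susceptible -> infected
    new_rec = gamma * i * dt         # infected -> recovered
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Scenario comparison: a non-pharmaceutical intervention halves the contact rate.
results = {}
for beta, label in [(0.3, "no intervention"), (0.15, "contact reduction")]:
    s, i, r = 0.999, 0.001, 0.0
    peak = 0.0
    for _ in range(365):             # simulate one year in daily steps
        s, i, r = sir_step(s, i, r, beta, gamma=0.1)
        peak = max(peak, i)
    results[label] = peak
    print(f"{label}: peak infected fraction = {peak:.3f}")
```

Even this toy comparison shows the qualitative effect forecasting platforms quantify: halving the contact rate flattens the epidemic peak substantially.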
The goal of PrivateAIM is to develop a federated platform for machine learning (ML) and data analysis within the Medical Informatics Initiative (MII) that brings the analyses to the data rather than the data to the analyses. Methods enabling distributed data processing in the data integration centers established by the MII are important for several reasons: 1) patient data may only be used without consent if anonymity is guaranteed; 2) federated technologies can help connect the MII with other health data networks. However, the mechanisms currently established in the MII have significant limitations and are, for example, not suited to complex ML and data science tasks. Moreover, federated platforms developed in other contexts are 1) complicated to set up and operate, 2) support only a limited number of analysis or ML methods, 3) do not implement modern privacy-preserving technologies, and 4) are not scalable or mature. The PrivateAIM consortium, which is supported by all MII consortia, brings together experts to develop the next generation of the federated analysis and ML platform for the MII. The platform for federated learning and analysis methods (FLAME) will combine state-of-the-art federation methods with innovative privacy models for multimodal data. Using components for monitoring and controlling the level of protection, these will be integrated into a distributed infrastructure that can easily be adopted by the data integration centers.
Translation into practice will be supported by addressing challenges at the interface of technology and law, developing concepts for operation by hospital IT departments, and coordinating with ethics committees and data protection officers.
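The federated principle of bringing the analysis to the data can be sketched with a toy federated-averaging loop: each site runs a few local training steps on its private data, and only model parameters travel to the server. This is a minimal illustration with synthetic data; FLAME's actual methods are more sophisticated and add explicit privacy protection on top.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=20):
    """A few local gradient steps of linear regression; raw data stays on site."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Three hospitals with private datasets drawn from the same underlying model.
true_w = np.array([1.0, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(10):                         # federated rounds
    local = [local_update(w, X, y) for X, y in sites]
    w = np.mean(local, axis=0)              # server averages parameters only

print("estimated:", w.round(2), "true:", true_w)
```

Note that exchanging parameters instead of records is necessary but not sufficient for privacy; that is why the project combines federation with dedicated privacy models.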
Genetic data is highly privacy-sensitive information and is therefore protected under stringent legal regulations, making it burdensome to share. However, leveraging genetic information bears great potential for the diagnosis and treatment of diseases and is essential for personalized medicine to become a reality. While privacy-preserving mechanisms have been introduced, they either impose significant overheads or fail to fully protect the privacy of sensitive patient data. This reduces the ability to share data with the research community, which hinders scientific discovery as well as reproducibility of results. Hence, we propose a different approach using synthetic data sets that share the properties of patient data sets while respecting privacy. We achieve this by leveraging the latest advances in generative modeling to synthesize virtual cohorts. Such synthetic data can be analyzed with established tool chains, repeated access does not affect the privacy budget, and the data can even be shared openly with the research community. While generative modeling of high-dimensional data like genetic data used to be prohibitive, the latest developments in deep generative models have shown a series of success stories across a wide range of domains. The project will provide tools for generative modeling of genetic data as well as insights into the long-term perspective of this technology to address open domain problems. The approaches will be validated against existing analyses that are not privacy-preserving. We will closely collaborate with the scientific community and propose guidelines on how to deploy and experiment with approaches that are practical in the overall process of scientific discovery. This unique project will be the first to allow the generation of synthetic high-dimensional genomic information to boost privacy-compliant data sharing in the medical community.
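A minimal sketch of the synthetic-cohort idea, with a multivariate Gaussian standing in for the deep generative models the project targets (all data here is simulated; note that plain maximum-likelihood sampling alone does not yet provide formal privacy guarantees):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "real cohort": two correlated continuous features per patient.
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
real = rng.multivariate_normal(mean=[0, 0], cov=cov, size=500)

# Fit a simple generative model (here a multivariate Gaussian) and sample a
# synthetic cohort from it instead of sharing the patient records themselves.
mu_hat = real.mean(axis=0)
cov_hat = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean=mu_hat, cov=cov_hat, size=500)

# Downstream analyses on the synthetic cohort reproduce aggregate structure.
r_real = np.corrcoef(real, rowvar=False)[0, 1]
r_syn = np.corrcoef(synthetic, rowvar=False)[0, 1]
print(f"real corr: {r_real:.2f}, synthetic corr: {r_syn:.2f}")
```

The synthetic cohort preserves the statistical relationship between the features while containing no actual patient record, which is the property that makes repeated analysis and open sharing attractive.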
Management
Duration
01.08.2020-31.07.2023
Members
Funding Code
ZT-1-PF-5-23
Website
The "AlgenCY" project pursues the goal of thoroughly examining and evaluating the diverse opportunities and challenges that generative AI methods bring to cybersecurity. We want to find out how these technologies can be used effectively to defend against cyber threats, but also which potential vulnerabilities and risks they may pose themselves. As a result of this research, we aim to make well-founded predictions about the future influence of generative AI on cybersecurity. On this basis, targeted strategies and solutions are to be developed to strengthen digital security. Within this subproject, CISPA will in particular investigate the security aspects of large language models (LLMs) as well as other generative AI methods. The focus is both on securing these methods themselves and on the ways they can be employed in IT security.
Management
Founded
2023
Duration
01.11.2023 - 31.10.2026
Funding Code
16KIS2012
KMU-Fuzz investigates and implements new concepts to decisively improve fuzz testing, a particularly promising form of automated software testing. The focus is on efficiently testing the network interfaces of applications, since existing fuzzing tools currently offer insufficient test coverage in this area. By exploring novel methods, e.g., stateful network fuzzing, fuzz testing based on efficient checkpointing mechanisms, and efficient protocol fuzzing, new techniques are developed to test complex software systems automatically and efficiently. This subproject focuses on the efficient use of various program analysis methods to make fuzzing more effective.
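The feedback loop at the heart of coverage-guided fuzz testing can be sketched on a toy target (illustrative only; the project targets stateful network interfaces of real applications): inputs that reach new code are kept as seeds, which lets the fuzzer discover a deep condition step by step.

```python
import random

def program(data: bytes):
    """Toy target: returns covered branch ids, crashes only on input b'FUZZ'."""
    cov = set()
    if len(data) > 0 and data[0] == ord("F"):
        cov.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            cov.add(2)
            if len(data) > 2 and data[2] == ord("Z"):
                cov.add(3)
                if len(data) > 3 and data[3] == ord("Z"):
                    raise RuntimeError("crash")
    return cov

random.seed(0)
corpus, seen = [b"AAAA"], set()
for _ in range(2_000_000):
    parent = bytearray(random.choice(corpus))
    parent[random.randrange(len(parent))] = random.randrange(256)  # mutate one byte
    data = bytes(parent)
    try:
        cov = program(data)
    except RuntimeError:
        found = data
        print("crash found:", data)
        break
    if not cov <= seen:        # new branch reached: keep this input as a seed
        seen |= cov
        corpus.append(data)
```

The corpus-feedback line is what separates this from blind random testing, which would need on the order of 256^4 attempts; stateful network fuzzing extends the same idea so that feedback also tracks protocol state.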
The vision of GAIA-X is to create a secure, interconnected, federated data infrastructure in order to establish data sovereignty in data ecosystems. The TELLUS project extends this data infrastructure of the various cloud ecosystems with powerful connectivity and integration of heterogeneous network infrastructure. There are various use cases that place high demands not only on cloud services but in particular also on networks with respect to latency, bandwidth, security, resilience, and dynamics.
Based on such use cases, TELLUS develops an overlay spanning cascades of cloud providers, network service providers, and cloud users in order to enable end-to-end connectivity with guarantees for hybrid-cloud scenarios while taking critical requirements into account. Following the GAIA-X approach, integration based on standards, interfaces, and systems bridges domain boundaries and ensures interoperability and portability, thus creating dynamic networks with variable bandwidths, lower latencies, increased security, and control over the data flow in the network.
Within this subproject, CISPA primarily investigates security and compliance aspects of the overall system. To this end, a comprehensive risk and threat analysis is carried out, and based on its results, corresponding protection concepts are developed and implemented. In addition, CISPA is involved in several other work packages, contributing mainly security expertise.
Computer systems in banks and insurance companies, but also in autonomous vehicles or satellites, are attractive targets for cyberattacks and must be protected so that they withstand such attacks, or tolerate them and still function safely. Unfortunately, as these systems grow more complex, attackers gain ever more opportunities, which is why their defense must assume that some attacks may succeed. Fortunately, resilience techniques already exist, such as triplicating the computer system together with its protocol and then determining the correct result by voting, even if one of the instances delivers a faulty result due to a successful cyberattack. For these techniques to be applicable, however, they must be tailored to the structure of the system, in particular to how the system's components interact with each other. The resilience techniques developed so far are restricted to certain forms of interaction, and the development of new techniques for more elaborate forms of interaction is still a difficult and error-prone task. In particular, tools that support the development of such protocols through correctness checks can usually only be applied to fully completed protocols, and their use additionally requires rare expert knowledge. In the FM-CReST project, researchers from CISPA in Germany and from SnT at the University of Luxembourg have joined forces to develop a new class of highly automated, easy-to-use tools that support the design of provably correct resilience protocols.
To achieve this, we rely on the co-design of protocols with complex forms of interaction, and we base the development of our tools on observations made during protocol development, with the goal of simplifying similar tasks in the future.
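The triplication-with-voting technique mentioned above can be sketched as follows (a toy illustration, not an FM-CReST tool): three replicas compute the same function, and a majority vote masks one faulty replica.

```python
from collections import Counter

def vote(results):
    """Majority vote over replica outputs; tolerates one faulty replica of three."""
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority - too many faulty replicas")

def compute(x):
    return x * x                                  # the replicated service logic

replicas = [compute, compute, lambda x: 42]       # one replica is compromised
print(vote([r(7) for r in replicas]))             # -> 49 despite the faulty replica
```

The hard part in practice, and the focus of the project, is not this voting step itself but proving such protocols correct for richer forms of interaction between components.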
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 513487900.
Management
Founded
2023
Duration
01.12.2023-30.11.2026
Funding Code
JA 2357/4-1; Project number 513487900
In this project we want to establish Global Synchronization Protocols (GSPs) as a new computational model for concurrent systems with a parametric number of components. Besides local updates of a component, GSPs support synchronous global updates of the system, which may be guarded with global conditions on the state of the system. With this combination, they can be used to model applications that depend on global synchronization between components, e.g., by consensus or leader election, at an abstraction level that hides the internal implementation of the agreement protocol while faithfully preserving its pre- and postconditions. We will identify conditions under which parameterized safety verification of GSPs remains decidable, even though this problem is in general undecidable for the combination of communication primitives that GSPs support. A preliminary version of GSPs already supports both global synchronization and global transition guards, and we plan to further extend the system model to include asynchronous message-passing and different extensions for fault tolerance, while preserving decidability of parameterized verification. Moreover, we will identify conditions for small cutoffs for safety verification, i.e., small bounds on the number of components that need to be considered to provide parameterized correctness guarantees. Based on these cutoffs, we will also develop an approach to automatically synthesize GSPs that satisfy given properties by construction. Finally, we will investigate a refinement-based synthesis approach for GSPs and compare its properties to the cutoff-based approach. Our research into decidable fragments of GSPs will be guided by applications from different areas, such as sensor networks, robot swarms, and blockchain-based applications.
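A hypothetical mini-model may help illustrate the GSP idea (all names are made up for illustration): components take local steps on their own, while a global update is guarded by a condition on the entire system state and, when enabled, updates every component synchronously, abstracting e.g. a consensus round by its pre- and postcondition.

```python
# Toy sketch of guarded global updates: n identical components, each either
# "idle" or "ready". The global transition "commit" is guarded by the global
# condition "all components are ready" and fires synchronously for everyone.
def local_update(state, i):
    s = list(state)
    s[i] = "ready"                            # a component gets ready on its own
    return tuple(s)

def global_commit(state):
    if all(c == "ready" for c in state):      # global guard over the whole state
        return tuple("done" for _ in state)   # synchronous global update
    return state                              # guard fails: no transition

state = ("idle",) * 4                         # parametric in n; here n = 4
state = global_commit(state)                  # guard not satisfied yet
for i in range(4):
    state = local_update(state, i)
state = global_commit(state)
print(state)                                  # all components switch in one step
```

Parameterized verification asks whether a safety property of such a model holds for every n, not just for the n = 4 instance simulated here; cutoff results reduce the former question to finitely many instances of the latter.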
Management
Duration
01.07.2023-30.06.2026
Funding Code
JA 2357/3-1; Project number 497132954
Research Area
Novel body-worn devices offer new, scalable user interfaces that are more intuitive and direct to operate. However, on-body input and output entails serious new risks for users' privacy: the large hand and finger gestures typically used for input are considerably more susceptible to observation by third parties than established forms of touch input. This applies even more to visual output on the body. This is particularly problematic since body-worn devices are typically used during mobile activities in non-private environments. The primary goal of this project is to contribute to the scalability of on-body computing in public environments by developing interaction techniques for the input and output of private information that offer improved resistance to privacy violations. At the core of our approach is the goal of exploiting the unique interaction properties of the human body: high manual dexterity, high tactile sensitivity, and a large available surface for input and output, paired with the possibility of flexibly shielding input and output through variable body posture. These properties can form the basis for new body-based input and output techniques that are scalable and (practically) unobservable. This goal is largely unexplored so far. It is highly challenging due to the new and highly diverse forms and scales of body-worn devices as well as the novel forms of multimodal input and output. These are further complicated by the inherent complexity of social settings, the respective proxemics, and the attention of users and bystanders.
To create a design space for these interactions, we will empirically investigate the privacy of tactile input and of visual and haptic output at different body locations, depending on body posture and proxemic configurations. We will then systematically design and implement body-based input gestures as well as scalable techniques for multimodal interaction that preserve privacy in social settings with respect to a generalized threat model, using attention models that incorporate the human body. The new interaction techniques will be evaluated empirically with users in realistic scenarios and in the lab in order to assess how their properties affect usability, privacy, and scalability. Both will help us understand the internal and external validity of our approach. We expect the results of this project to contribute substantially to laying the foundations for scalable, privacy-preserving body-based interactions.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 521601028
Management
Founded
2023
Duration
01.04.2024-31.03.2027
Members
Funding Code
KR 5384/2-1; Projektnummer 521601028
Many of today’s critical infrastructures, such as power grids or cellular networks, are distributed systems composed of autonomous nodes connected over a network. To avoid single points of failure, a central goal in distributed systems is to implement such systems in a fault-tolerant manner. Fault-tolerant systems remain secure and available even if some minority of nodes crash or spread incorrect information. To improve robustness and scalability, cutting-edge systems frequently rely on cryptography. In spite of its benefits, careless use of cryptography can incur performance penalties or lead to vulnerabilities. Both of these aspects significantly complicate its use in practice. As a result, many real-world systems use cryptography sparingly and therefore lack both robustness and scalability. To improve this unsatisfying state of affairs, the objectives of CRYPTOSYSTEMS are as follows:
• New Formal Models. Established formal models from cryptography and distributed systems pursue independent security goals and therefore lack compatibility. CRYPTOSYSTEMS will develop new, more appropriate formal models for analyzing the security of cryptographic distributed systems.
• Efficient and Robust Distributed Algorithms. The use of cryptography is currently underexplored in distributed systems. CRYPTOSYSTEMS will present exciting new applications of cryptography that will lead to the development of more robust and scalable distributed systems.
• Cryptography for Distributed Algorithms. Cryptography is seldom developed with distributed algorithms as its primary use-case in mind. This results in inefficient or unwieldy cryptography which hampers the efficiency of distributed algorithms. To counteract this, CRYPTOSYSTEMS will develop cutting-edge cryptography such as compact signatures and communication efficient distributed randomness generation routines. Importantly, these tools will be specifically geared toward use in distributed algorithms.
Management
Duration
01.09.2023-31.08.2028
Funding Code
HORIZON-ERC (ERC-2023-StG)
Research Area
Digital signatures are a fundamental and versatile cryptographic tool. In a digital signature scheme, a signer holding a secret key can sign a message in such a way that anyone can efficiently verify the signature using a corresponding public key. On the other hand, it should be impossible to create a signature in the signer's name (so long as its secret key indeed remains secret). An important variant is the multi-signer setting, where multiple signers can jointly create a compact signature on a message. The resulting signature can later be verified efficiently against (an aggregate of) all of their public keys. This makes it possible to create a storage-efficient proof that a certain number of parties in the system, say half, have signed a message. This intriguing aspect of multi-signer signatures has recently received an enormous amount of attention in the context of blockchain and consensus protocols.
The aim of this project is to improve the security and understanding of multi-signer signatures in three directions. (1) Develop modular frameworks for building multi-signer signatures from weaker primitives such as identification schemes. This design approach has seen much success in the construction of basic signatures and will lead to multi-signer signature schemes from a wider array of mathematical hardness assumptions. (2) Revisit and improve the security guarantees of existing multi-signer schemes used in practice. Our main aim is to prove existing constructions secure with respect to a powerful adversary that can dynamically corrupt signers over time. This type of security guarantee is often required in practical applications, e.g., consensus protocols, yet it is not satisfied by most efficient schemes. (3) Improve the robustness of distributed key generation (DKG) protocols. Many multi-signer schemes rely on a trusted dealer to set up correlated keys among the signers. This is problematic for many natural applications, such as blockchain protocols, where such a dealer might not be available. Instead, parties can use a DKG protocol to jointly set up such a correlated set of keys. This makes DKG protocols a crucial tool for running multi-signer protocols in a trust-free manner. Unfortunately, existing DKG protocols rely on unrealistic network assumptions or tolerate only a small number of corruptions. The goal of this project is to improve their robustness in both of these regards.
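The core idea of compact multi-signer verification can be illustrated with a toy Schnorr-style scheme, in which partial signatures add up and one check against the product of all public keys suffices. This is a minimal sketch with illustrative small parameters, not any scheme studied in the project; real constructions use elliptic curves and additionally defend against rogue-key attacks.

```python
import hashlib
import random

# Toy parameters: a safe prime P = 2*Q + 1 and a generator G of the
# order-Q subgroup. Far too small for real security.
P = 2039
Q = 1019
G = 4

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)

def h(*parts):
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def aggregate(public_keys):
    agg = 1
    for pk in public_keys:
        agg = (agg * pk) % P
    return agg

def multi_sign(secret_keys, public_keys, msg):
    # Each signer contributes a nonce; the commitments combine into R.
    nonces = [random.randrange(1, Q) for _ in secret_keys]
    R = 1
    for r in nonces:
        R = (R * pow(G, r, P)) % P
    c = h(R, aggregate(public_keys), msg)
    # Partial responses simply add up into one compact signature.
    s = sum(r + c * sk for r, sk in zip(nonces, secret_keys)) % Q
    return R, s

def verify(public_keys, msg, sig):
    R, s = sig
    c = h(R, aggregate(public_keys), msg)
    # One check against the aggregate of all public keys:
    # g^s == R * (prod pk_i)^c
    return pow(G, s, P) == (R * pow(aggregate(public_keys), c, P)) % P

keys = [keygen() for _ in range(3)]
sks = [sk for sk, _ in keys]
pks = [pk for _, pk in keys]
sig = multi_sign(sks, pks, "transfer 10 coins")
assert verify(pks, "transfer 10 coins", sig)
assert not verify(pks, "transfer 99 coins", sig)
```

The verification equation holds because g^s = g^(Σr_i + c·Σsk_i) = R · (Πpk_i)^c; note that this naive key aggregation is exactly what makes rogue-key attacks a concern in practice.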
Management
Duration
01.09.2022 – 31.08.2025
Members
Funding Code
LO 3021/1-1
Detecting vulnerabilities in web applications is a daunting problem that does not have a general solution yet. Existing ad-hoc solutions can only identify simple forms of vulnerabilities that are present on the web application surface. In this project, we propose Yuri, a goal-oriented security testing agent that can synthesize semantic models and program representations closer to the way humans perceive and understand program behaviors. Yuri can use these models to drive attack surface exploration and execute security testing tasks, greatly expanding the coverage of modern web-based application software.
Developing reliable and secure software systems requires systematic and comprehensive testing. Since structured and standardized data formats are used across many software applications to exchange data, the systems involved must be able to handle manipulated or faulty data sets robustly and securely. For structured formats, such as those used in electronic invoices, the test data needed for such testing could so far only be created manually, which limits its availability and makes it correspondingly expensive.
The researchers in the "InputLab" project are developing methods for automatically generating test data for data formats for which a data schema is available. Such schemas are defined as part of standardization processes for digital formats and contribute to the interoperability of different software systems. The test data generated in the project can be used to trigger, diagnose, and repair misbehavior in applications. This also makes it possible to detect subtle errors that do not manifest in drastic behavior such as software crashes, avoiding serious problems and costs. To make it easier for development teams to use the data sets for their own purposes, the data sets are designed to be flexibly adaptable to the properties of sample data sets.
The project's developments address the diverse demand for high-quality test data for software systems with structured formats. The aim is for the smallest possible number of data sets to cover a wide range of faulty or manipulated data points, enabling effective and cost-efficient testing. At the same time, the generated test data can be used to examine a wide variety of software applications for vulnerabilities and thus make them sustainably more secure.
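The generation idea can be sketched as follows: given a schema, produce valid instances, then derive invalid variants by violating exactly one constraint at a time. The schema format and the invoice fields below are invented for illustration; the formats actually targeted by InputLab (e.g., e-invoicing standards) are far richer.

```python
import random

# Hypothetical, loosely JSON-Schema-like description of an invoice record.
INVOICE_SCHEMA = {
    "invoice_id": {"type": "string", "chars": "0123456789", "length": 8},
    "amount":     {"type": "number", "min": 0.01, "max": 10000.0},
    "currency":   {"type": "enum",   "values": ["EUR", "USD", "GBP"]},
}

def generate_valid(schema):
    """Produce one record that satisfies every field constraint."""
    record = {}
    for field, spec in schema.items():
        if spec["type"] == "string":
            record[field] = "".join(random.choice(spec["chars"])
                                    for _ in range(spec["length"]))
        elif spec["type"] == "number":
            record[field] = round(random.uniform(spec["min"], spec["max"]), 2)
        elif spec["type"] == "enum":
            record[field] = random.choice(spec["values"])
    return record

def mutate(record, schema):
    """Derive an invalid variant by breaking one randomly chosen constraint."""
    bad = dict(record)
    field = random.choice(list(schema))
    spec = schema[field]
    if spec["type"] == "string":
        bad[field] = record[field] + "X"   # violates the length constraint
    elif spec["type"] == "number":
        bad[field] = spec["max"] * 10      # out of the allowed range
    elif spec["type"] == "enum":
        bad[field] = "???"                 # not in the value set
    return bad

valid = generate_valid(INVOICE_SCHEMA)
invalid = mutate(valid, INVOICE_SCHEMA)
```

Feeding both kinds of records to an application under test exercises its acceptance of valid data and its rejection of near-valid, subtly broken data.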
The project "Automated Security Analysis of RTOS and MCU-based IoT Firmware (ASRIOT)" aims at exploring automated security analyses of firmware that is based on real-time operating systems (RTOS) and single-chip computer systems, so-called microcontrollers (MCU), in order to create trustworthy control systems. Such control systems are used, for example, to monitor manufacturing processes or to control vehicles. The platform we intend to create will be able to automatically analyze proprietary firmware in order to identify manufacturer-specific components and libraries. By analyzing so-called binary files, which will be structured according to a predefined scheme, the additional or adapted components are to be detected automatically. Furthermore, the platform will automatically detect typical security vulnerabilities in communication, encryption and memory usage and summarize the findings in detailed reports. The measures will be tested using an integrated demonstrator, which will enable us to present directly applicable technologies at the end of the project.
Management
Founded
2023
Duration
01.04.2023 - 31.03.2026
Funding Code
16KIS1807K
On the road, autonomous vehicles make control decisions autonomously based on AI-based processing of diverse sensor data. Malicious attacks can lead to accidents caused by incorrect maneuvers of autonomous vehicles and must therefore, together with countermeasures, be systematically researched when discussing the reliability of such systems.
In this project, the effects of manipulations on current sensors and sensor-processing pipelines are analyzed. In addition, secure platforms for sensor data processing are designed and implemented, and their effectiveness in reliably defending against manipulation and compromise attempts is demonstrated. The resulting design of a security-oriented platform will serve as a reference for future research and product development in this area.
We approach the problem from three complementary research directions: defense against physical sensor manipulation, detection and prevention of manipulation in the sensor-fusion pipeline, and trustworthy processing platforms for the automotive industry. Solutions to individual topics are first analyzed theoretically and then tested for effectiveness in simulations. In parallel with the threat analysis and the development of countermeasures, we will design and implement our practical demonstrator platform.
Management
Founded
2024
Duration
02.10.2024 - 31.12.2027
Funding Code
45AVF5A011
The goal of ProSeCA is to research and implement modern cybersecurity architectures for vehicles. To ensure the greatest possible functional safety and data security, ProSeCA takes a holistic approach that prevents typical security problems, such as memory errors, from arising in the first place at a fundamental level. The cybersecurity of connected autonomous driving (e.g., secure internal/external communication) is critical for the safety of passengers and other road users. The diversity of heterogeneous elements and functions in today's automotive architectures, combined with rising requirements, creates large attack surfaces for cyberattacks.
Although the UNECE R155 regulation requires cybersecurity management for future new vehicle registrations, experience with suitable architectures and security building blocks is still lacking. ProSeCA targets this gap: following the guiding principle "a system is only as secure as its weakest link" and aligned with ISO/SAE 21434, it pursues a security concept in the form of a modular, standardizable, trustworthy hardware/software architecture for vehicle control units. Its security building blocks include hardware-based protection measures, the Rust programming language, and solutions for automated testing of software components. A demonstrator shows the feasibility of such new architectures as an OEM-open solution. The consortium of eight partners represents a targeted cross-section of the automotive value chain.
Management
Founded
2023
Duration
01.09.2023-30.06.2026
Funding
PDIR
Funding Code
19A23009G
Antimicrobial Resistance (AMR) is perhaps the most urgent threat to human health. Since their discovery over a century ago, antibiotics have greatly improved human life expectancy and quality: many diseases went from life-threatening to mild inconveniences. Misuse and overuse of these drugs, however, have caused microbes to develop resistance to even the most advanced drugs; diseases once considered conquered are becoming devastating again. While individual resistance mutations are well-researched, knowing which new mutations can cause antimicrobial resistance is key to developing drugs that reliably sidestep microbial defenses. In this project we propose to gain this knowledge via explainable artificial intelligence, by developing and applying novel methods for discovering easily interpretable local patterns that are significant with regard to one or multiple classes of resistance. That is, we propose to learn a small set of easily interpretable models that together explain the resistance mechanisms in the data, using statistically robust methods for discovering significant subgroups, as well as information theoretic approaches to discovering succinct sets of noise-robust rules. Key to our success will be the tight integration of domain expertise into the development of the new algorithms, early evaluation on real-world data, and the potential available in the host institute to evaluate particularly promising results in the lab.
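The flavor of interpretable subgroup discovery can be sketched with a tiny example: score single-feature conditions by weighted relative accuracy (WRAcc), a standard subgroup quality measure that rewards subgroups that are both large and unusually resistant. The data and mutation names below are synthetic placeholders, not project results.

```python
# Synthetic isolates: binary mutation features plus a resistance label.
records = [
    {"mut_gyrA": 1, "mut_rpoB": 0, "resistant": 1},
    {"mut_gyrA": 1, "mut_rpoB": 1, "resistant": 1},
    {"mut_gyrA": 1, "mut_rpoB": 0, "resistant": 1},
    {"mut_gyrA": 0, "mut_rpoB": 1, "resistant": 0},
    {"mut_gyrA": 0, "mut_rpoB": 0, "resistant": 0},
    {"mut_gyrA": 0, "mut_rpoB": 1, "resistant": 1},
]

def wracc(records, feature, label="resistant"):
    """WRAcc = coverage * (label rate in subgroup - overall label rate)."""
    n = len(records)
    subgroup = [r for r in records if r[feature] == 1]
    if not subgroup:
        return 0.0
    coverage = len(subgroup) / n
    p_subgroup = sum(r[label] for r in subgroup) / len(subgroup)
    p_overall = sum(r[label] for r in records) / n
    return coverage * (p_subgroup - p_overall)

scores = {f: wracc(records, f) for f in ("mut_gyrA", "mut_rpoB")}
best = max(scores, key=scores.get)
# Here mut_gyrA scores highest: every carrier is resistant, while
# mut_rpoB carriers are no more resistant than the population average.
```

Real subgroup discovery additionally searches over conjunctions of conditions and applies statistical significance corrections; this sketch only shows the scoring step that makes the resulting patterns interpretable.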
All program behavior is triggered by some program input. Which parts of the input trigger program behaviors, and how? In the EMPEROR project, we aim to automatically produce explanations for program behaviors—notably program failures. To this end, we (1) use grammars that separate inputs into individual elements; (2) learn statistical relations between features of input elements and program behavior; and (3) use systematic tests to strengthen or refute inferred associations, including internal features of the execution. As a result, we obtain an approach that (1) automatically infers the (input) conditions under which a specific behavior occurs: “The program fails whenever the mail address contains a quote character”; (2) automatically (re)produces behaviors of interest via generated test inputs: “andr'e@foo.com”; and (3) refines and produces cause-effect relationships via generated test cases, involving execution features: “The input ''''''''@bar.com causes a recursion depth of more than 128, leading to a crash”. EMPEROR is the successor to EMPRESS, in which we showed that statistical relations between input elements and program behavior exist, and how prototypical implementations would exploit them for testing and debugging. EMPEROR takes the approaches from EMPRESS and unifies and extends them in a single modular approach, going way beyond simple statistical relations. By learning and refining predictive and generative models, EMPEROR will be able to infer and refine relationships involving arbitrary input features and thus boost our understanding of how and why software behaves as it does.
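Steps (1) and (2) above can be sketched on a toy scale: generate inputs from a grammar, run the program under test, and associate an input feature with the failure outcome. The grammar, the buggy program, and the "contains a quote" feature are all invented for illustration.

```python
import random

# Step (1): a tiny grammar that decomposes an email address into elements.
GRAMMAR = {
    "<email>":  [["<local>", "@", "<domain>"]],
    "<local>":  [["alice"], ["bob"], ["andr'e"], ["o'hara"]],
    "<domain>": [["foo.com"], ["bar.org"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a concrete string."""
    if symbol not in GRAMMAR:
        return symbol
    return "".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

def program_under_test(addr):
    # Hypothetical bug: the program fails whenever the address
    # contains a quote character.
    if "'" in addr:
        raise ValueError("unescaped quote")

# Step (2): correlate the feature "contains a quote" with failure.
failed_with = trials_with = failed_without = trials_without = 0
for _ in range(200):
    addr = expand("<email>")
    has_quote = "'" in addr
    try:
        program_under_test(addr)
        failed = False
    except ValueError:
        failed = True
    if has_quote:
        trials_with += 1
        failed_with += failed
    else:
        trials_without += 1
        failed_without += failed

rate_with = failed_with / max(trials_with, 1)
rate_without = failed_without / max(trials_without, 1)
# The inferred association: quotes perfectly predict failure here,
# yielding the explanation "fails whenever the input contains a quote".
```

Step (3), omitted here, would then generate targeted tests (inputs with and without quotes) to confirm or refute the inferred association and tie it to execution features.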
Management
Duration
01.10.2021-30.09.2024
Members
Funding Code
ZE 509/7-2
Research Area
Programs use character sequences (strings) to represent all kinds of textual data: names, credit card numbers, email addresses, URLs, bank accounts, color codes, and much more.
However, programming languages offer little support for checking whether the contents of these strings actually meet expectations. This can lead not only to functional errors but also to common attacks such as script or SQL injection. In this proposal, we introduce string types: a means of expressing the valid values of strings using formal languages such as regular expressions and grammars. We introduce techniques for specifying which sets of strings are acceptable as values, and for checking dynamically and statically whether a program is correct with respect to the specified string types. Since these are formal languages, string types also make it possible to generate instances from these specifications.
This enables massive automated testing of string-processing functions with valid inputs, with string types in turn checking string results for lexical, syntactic, and semantic correctness. Finally, we introduce means to learn such specifications from code and its executions, making string types easy to adopt. The consortium brings extensive experience in static analysis of parsing code, generation of unit tests and oracles, and language-based specification and testing. Their combined expertise will make this endeavor a success.
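The dual role of a string type, checking membership and generating instances, can be sketched as follows. The `StringType` class, the `HexColor` type, and its pattern are illustrative assumptions, not the project's notation; real string types would cover full grammars, not just one regular expression.

```python
import random
import re

class StringType:
    """A named set of valid strings: a regex for checking membership
    plus a generator that produces instances of the language."""

    def __init__(self, name, pattern, generate):
        self.name = name
        self.regex = re.compile(pattern)
        self.generate = generate

    def check(self, value):
        # fullmatch: the entire string must belong to the language.
        return self.regex.fullmatch(value) is not None

# Example type: a lowercase hex color code such as "#ff00aa".
HexColor = StringType(
    name="HexColor",
    pattern=r"#[0-9a-f]{6}",
    generate=lambda: "#" + "".join(random.choice("0123456789abcdef")
                                   for _ in range(6)),
)

assert HexColor.check("#ff00aa")
assert not HexColor.check("red")              # not in the language
assert HexColor.check(HexColor.generate())    # generated values type-check
```

The last assertion is the testing loop in miniature: the generator feeds valid inputs to a function under test, and the same type later serves as an oracle for the function's string results.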
Management
Founded
2024
Duration
01.09.2024 - 31.08.2027
Members
Funding Code
ZE 509/10-1
Hepatitis D is by far the most severe form of chronic viral hepatitis, frequently leading to liver failure, hepatocellular carcinoma, and death. Hepatitis D is caused by coinfection of hepatitis B patients with the hepatitis D virus (HDV). Up to 20 million individuals are infected with HDV worldwide, including about 250,000 patients in the European Union. There is very limited knowledge on disease pathophysiology and host-virus interactions explaining the large interindividual variability in the course of hepatitis D. In particular, it is unknown why 20-50% of patients are spontaneously able to control HDV replication, why the majority but not all patients progress to advanced stages of liver disease, and why only some patients show off-treatment responses to antiviral treatment with either pegylated interferon alpha or the novel HBV/HDV entry inhibitor bulevirtide. As HDV causes an orphan disease, no multicenter cohorts of HDV-infected patients with appropriate biobanking are available. There is also no reliable animal model that allows studying host responses. Thus, there is an urgent clinical, social, and economic need to better understand the individual factors determining the outcome of infection and to identify subjects benefitting from currently available treatments. Hepatitis D is a prototype infection which could hugely benefit from a novel individualized infectious medicine approach. We aim to perform an unbiased screening of a large multicenter cohort of well-defined HDV-infected patients, followed by mechanistic studies to determine the functional role of distinct molecules. The identified parameters could have an immediate impact on personalized surveillance strategies and antiviral treatment approaches. D-SOLVE aims to reduce disease burden, improve patients' quality of life, and save direct and indirect costs caused by HDV infection by combining exceptional clinical, immunological, bioinformatical and virological expertise from leading centers in Europe.
Management
Duration
01.10.2022-30.09.2026
Members
Funding Code
HORIZON-HLTH-2021-DISEASE-04-07
Research Area
Researchers at CISPA contribute their knowledge and resources to alliances and networks with national and international partners in order to achieve mutual goals with shared resources. In these long-term partnerships CISPA often engages in a coordinating role.
Helmholtz Medical Security, Privacy, and AI Research Center (HMSP)
The Helmholtz Medical Security, Privacy and AI Research Center (HMSP) constitutes a joint initiative of six Helmholtz Centers – CISPA, DZNE, DKFZ, HMGU, HZI, and MDC – that brings together leading experts from the field of IT-security, privacy and AI/machine learning as well as the medical domain to enable secure and privacy-preserving processing of medical data using next-generation technologies. It aims for scientific breakthroughs at the intersection of security, privacy and AI/machine learning with medicine, and it aims to develop enabling technology that provides new forms of efficient medical analytics while offering trustworthy security and privacy guarantees to patients as well as compliance with today’s and future legislative regulations.
Management
Founded
2019
Members
Research Area
Website
The CISPA-Stanford Center for Cybersecurity is a joint center for cybersecurity research between CISPA Helmholtz Center for Information Security and Stanford University. It recognizes the high potential of a mutually beneficial cooperation between CISPA and Stanford in the field of cybersecurity and follows from the desire of the two institutions' scientists to conduct joint research.
The collaboration has the following two intertwined goals:
To achieve these goals, CISPA and Stanford University established this collaboration to form the CISPA-Stanford Center for Cybersecurity, with corresponding research activities at CISPA and at Stanford University. The center addresses the particular career-development needs of young German scientists in cybersecurity. It fosters the professional development of a small number of selected, outstanding individuals by providing them with the opportunity to work at Stanford University as Visiting Assistant Professors in the area of cybersecurity for two years and to then return to Germany to continue their research as senior researchers at CISPA and ultimately as professors at a German university or research leaders in industry.
Website
This initiative combines the strengths of two of the largest and most renowned institutions in the field: CISPA Helmholtz Center for Information Security and INRIA/Loria in Nancy will jointly strengthen cybersecurity research and corresponding transfer and innovation activities between France and Germany. Along the strong Franco-German axis, the center will enable focused research on disruptive innovations for digital sovereignty. The central topics include self-determination in data profiling, European internet and cryptography standards, operating systems developed and thoroughly evaluated in Europe to secure critical infrastructures, the protection of privacy and security guarantees in automated AI processes, and secure networking in Industry 4.0 and autonomous systems. The center will establish bilateral Franco-German research groups. In addition to transnational cooperation, the promotion of young researchers will be a core element: young scientists will be given the opportunity to advance their research interests independently early in their academic career through individual mentoring.
With their MoU, CISPA, KAIST and KIST Europe aim to initiate joint research projects in the field of cybersecurity technologies. The partners support scientific collaboration and the exchange between the academic members of the institutions. The main fields of collaboration are: System Security, Web Security, and Cryptography.
The partners have already acquired third-party funding in order to be able to focus on specific research topics of joint interest. In annual meetings, the researchers from Germany and Korea discuss ongoing projects among each other and with researchers from other German or Korean institutions.
With the signing of the Letter of Intent in October 2018, Dcypher and CISPA identified the need for joint research projects and for creating synergies, particularly in the field of:
System and service security by design
Defense of newly developed and legacy systems
Decision making in socio-economic environments
Protection against misuse and abuse of personal data
With the discontinuation of DCYPHER in October 2020, the cooperation with the Netherlands will be continued in a different format.
Management
Founded
2018
Duration
October 2018 – October 2020
Members
CISPA Helmholtz Center for Information Security (CISPA) and Leibniz University Hannover joined forces to advance research in the areas of cybersecurity and privacy. The initiative, with an initial focus on Usable Security, Privacy, and Industrial Security, is financially supported by the state of Lower Saxony. The cooperation includes the joint appointment of a professor by CISPA and LUH, the establishment of a junior research group as well as of a dependent CISPA research branch in Hannover. To further develop the collaboration, the partners founded a committee with representatives of Lower Saxony's Ministry of Science and the Ministry of Economics. In addition to research, the transfer of research findings, e.g. through industry cooperations, as well as the education of future cybersecurity experts take on a key role in the cooperation.
The cooperation between the CISPA Helmholtz Center for Information Security in Saarbrücken and Inria, the French National Institute for Research in Digital Science and Technology, aims to strengthen French-German scientific cooperation, with both centers contributing their respective national perspectives and reflecting them in the European environment. The collaboration includes, among other activities, joint research projects, the exchange of researchers and academic staff, as well as workshops and conferences.
Research Areas:
Algorithmic Foundations and Cryptography, Trustworthy Information Processing, Reliable Security Guarantees, Threat Detection and Defenses, Secure Connected and Mobile Systems, Empirical and Behavioral Security
The Internet has evolved from a mere communication network used by tens of millions of people two decades ago, to a global multimedia platform for communication, social networking, entertainment, education, trade and political activism with more than two billion users. This transformation has brought tremendous benefits to society, but has also created entirely new threats to privacy, safety, law enforcement, freedom of information and freedom of speech. In today’s Internet, principals are amorphous, identities can be fluid, users participate and exchange information as peers, and data is processed on global third-party platforms. Existing models and techniques for security and privacy, which assume trusted infrastructure and well-defined policies, principals and roles, fail to fully address this challenge.
The imPACT project addresses the challenge of providing privacy, accountability, compliance and trust (PACT) in tomorrow’s Internet, using a cross-disciplinary and synergistic approach to understanding and mastering the different roles, interactions and relationships of users and their joint effect on the four PACT properties. The focus is on principles and methodologies that are relevant to the needs of individual Internet users, have a strong potential to lead to practical solutions and address the fundamental long-term needs of the future Internet. We take on this challenge with a team of researchers from relevant subdisciplines within computer science, and with input from outside experts in law, social sciences, economics and business. The team of PIs consists of international leaders in privacy and security, experimental distributed systems, formal methods, program analysis and verification, and database systems. By teaming up and committing ourselves to this joint research, we are in a unique position to meet the grand challenge of unifying the PACT properties and laying a new foundation for their holistic treatment.
Management
Duration
01.02.2015-31.01.2021
Funding Code
Grant agreement ID: 610150
Research Area
CISPA's research topics contain an enormous potential for technology transfer into industrial application. CISPA has already been in active exchange with partners from industry and business for several years. Under the assumption that utilization in newly founded companies is the most direct form of knowledge and technology transfer, the expansion of the start-up incubator provides the opportunity to expand specialized structures for the explicit support of spin-offs.
The aim of the project is therefore to expand these initiatives conceptually and anchor them structurally in order to create a highly creative environment in the immediate vicinity of CISPA and in the vicinity of the Saarland Informatics Campus.
The BMBF's funding of measures primarily provides for the following areas: Raising awareness, project initiation, project funding, scaling and the overall management of the incubator.
We propose to bring together two historically disjoint lines of research: the epistemic analysis of distributed systems on the one hand, which aims at understanding the evolution of the knowledge of the components of a distributed system; and reactive synthesis, which aims at constructing such systems automatically from a formal specification given as a formula of a temporal logic.
Reactive synthesis has the potential to revolutionize the development of distributed systems. From a given logical specification, the synthesis algorithm automatically constructs an implementation that is correct-by-design. This allows the developer to focus on “what” the system should do instead of “how” it should be done. There has been a lot of success in the last years in synthesizing individual components of a distributed system. However, the complete synthesis of distributed protocols is, with currently available methods, far too expensive for practical applications.
Recent advances in the study of knowledge in distributed systems, such as the Knowledge of Preconditions principle, offer a path to significantly improve the situation. Our vision is a new class of synthesis algorithms that gainfully use this potential by constructing the distributed protocol in terms of the evolving knowledge of the components rather than the low-level evolution of the states.
We bring to the project complementing skills and expertise in the two respective fields. The proposed project will begin by carrying out a study on epistemic arguments for the correctness of existing distributed protocols. The goal is to develop a formalization of these arguments in the form of a diagrammatic proof that can be verified automatically. We will then develop systematic methods for the construction of such proofs, based on insights like the Knowledge of Preconditions principle. Finally, we will integrate our formalization of epistemic proofs into a synthesis algorithm that automatically constructs such a proof for a given specification, and then translates the proof into an actual implementation.
Management
Duration
01.04.2020-01.03.2023
Members
Funding Code
I-1513-407./2019
Research Area
Reactive synthesis has the potential to revolutionize the development of distributed embedded systems. From a given logical specification, the synthesis algorithm automatically constructs an implementation that is correct-by-design. The vision is that a designer analyzes the design objectives with a synthesis tool, automatically identifies competing or contradictory requirements and obtains an error-free prototype implementation. Coding and testing, the most expensive stages of development, are eliminated from the development process. Recent case studies from robotic control and from hardware design, such as the automatic synthesis of the AMBA AHB bus controller, demonstrate that this vision is in principle feasible. So far, however, synthesis does not scale to large systems. Even if successful, it produces code that is much larger and much more complicated than the code produced by human programmers for the same specification. Our goal is to address both of these fundamental shortcomings at the same time. We will develop output-sensitive synthesis algorithms, i.e. algorithms that, in addition to optimal performance in the size of the specification, also perform optimally in the size and structural complexity of the implementation. Target applications for our algorithms come from both the classic areas of reactive synthesis, such as hardware circuits, and from new and much more challenging application areas such as the distributed control and coordination of autonomous vehicles and manufacturing robots, which are far beyond the reach of the currently available synthesis algorithms.
The focus of the project is the development of a monitoring system for the highly critical VTOL operation. Advances in electromobility and automation technology enable the commercial use of highly automated aircraft with distributed electric propulsion systems.
Safety is an important success factor for such aircraft. To achieve this, the inherent complexity of the overall system must be identified in the form of precise requirements and consistently monitored during operation. In addition, development, operating and maintenance costs must be kept low in order to ensure economical operation of increasingly automated aircraft. The aim of the project is the automatic monitoring of parameters that are important for the safe commercial operation of an autonomous system. To increase the confidence in safety monitoring, the executable monitor is automatically generated from a formal specification of the desired behavior. The resulting transparency promises advantages for certification and economical operation. Analysis of the feedback for certification by secure, independent monitoring components is an essential topic.
The formal specification is kept separate from the control code and is easier to understand, which saves development and maintenance costs. Furthermore, conventional centralized monitoring procedures require all relevant data to be available in one place. In highly distributed avionics such as the Volocopter's, the monitoring process must instead be executed at different system nodes, which requires new distributed monitoring algorithms. In the project, this monitoring approach is integrated into a Volocopter on the basis of a formal specification. This promises substantial improvements both in terms of safety and from an economic point of view.
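The idea of generating an executable monitor from a formal specification can be sketched as follows. This is a deliberately minimal Python illustration with made-up parameter names and thresholds; real systems use dedicated stream-based specification languages and code generators.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    altitude_m: float
    vertical_speed_ms: float  # negative = descending

def make_monitor(max_descent_ms: float):
    """Derive an executable monitor from the (toy) specification:
    'the descent rate must never exceed max_descent_ms'."""
    def monitor(sample: Sample) -> bool:
        # True = the specification holds for this sample
        return sample.vertical_speed_ms >= -max_descent_ms
    return monitor

monitor = make_monitor(max_descent_ms=5.0)
trace = [Sample(100.0, -2.0), Sample(95.0, -6.5), Sample(90.0, -3.0)]
violations = [i for i, s in enumerate(trace) if not monitor(s)]
# violations == [1]: the second sample descends too fast
```

Because the monitor is derived mechanically, the specification itself remains the single human-readable artifact to review, which is the transparency argument made above.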
To solve future grand challenges, data, computational power and analytics expertise need to be brought together at unprecedented scale. The need for data has become even larger in the context of recent advances in machine learning. Therefore, data-centric digital systems commonly exhibit a strong tendency towards centralized structures. While data centralization can greatly facilitate analysis, it also comes with several intrinsic disadvantages and threats, not only from a technical but, more importantly, also from a legal, political and ethical perspective. Rooted in sophisticated security and trust requirements, these issues are cumbersome and time-consuming to overcome. As a consequence, many research projects are substantially hindered, fail, or are simply not attempted. In this interdisciplinary project we aim at facilitating the implementation of decentralized, cooperative data analytics architectures within and beyond Helmholtz by addressing the most relevant issues in such scenarios.
Trustworthy Federated Data Analytics (TFDA) will facilitate bringing the algorithms to the data in a trustworthy and regulatory compliant way, instead of centralizing the data. TFDA will address the technical, methodical and legal aspects of ensuring the trustworthiness of an analysis and transparency regarding its inputs and outputs without violating privacy constraints. To demonstrate applicability and to ensure the adaptability of the methodological concepts, we will validate our developments in the use case “Federated radiation therapy study” (Health) before disseminating the results.
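The federated principle of bringing the algorithm to the data can be illustrated with a toy computation (hypothetical data and site names, not the project's actual architecture): each site runs the analysis locally and shares only aggregates, never raw records, yet the combined result equals the centralized one.

```python
# Raw measurements never leave their site of origin.
site_a = [4.2, 5.1, 3.9]
site_b = [6.0, 5.5]

def local_aggregate(records):
    # Only (sum, count) is shared with the coordinator.
    return sum(records), len(records)

total, n = map(sum, zip(local_aggregate(site_a), local_aggregate(site_b)))
federated_mean = total / n  # equals the mean over the pooled data
```

Real deployments add safeguards this sketch omits, such as secure aggregation and audit trails, which is exactly where the trustworthiness and compliance questions addressed by TFDA arise.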
Management
Duration
01.12.2019–30.11.2022
Members
Funding Code
ZT-I-0014
Research Area
Website
Cryptology is a foundation of information security in the digital world. Today's internet is protected by a form of cryptography based on complexity-theoretic hardness assumptions. Ideally, these assumptions should be strong, to ensure security, and versatile, to offer a wide range of functionalities and allow efficient implementations. However, they are largely untested, and internet security could thus be built on sand. The main ambition of Almacrypt is to remedy this issue by challenging the assumptions through advanced algorithmic analysis.
In particular, this proposal questions the two pillars of public-key encryption: factoring and discrete logarithms. Recently, the PI contributed to showing that in some cases the discrete logarithm problem is considerably weaker than previously assumed. A main objective is to assess the security of the other cases of the discrete logarithm problem, including elliptic curves, and of factoring. We will study the generalization of the recent techniques and search for new algorithmic options with comparable or better efficiency. We will also study hardness assumptions based on codes and subset sums, two candidates for post-quantum cryptography. We will consider the applicability of recent algorithmic and mathematical techniques to the resolution of the corresponding putative hard problems, refine the analysis of the algorithms, and design new algorithmic tools. Cryptology is not limited to the above assumptions: other hard problems have been proposed to aim at post-quantum security and/or to offer extra functionalities. Should the security of these other assumptions become critical, they would be added to Almacrypt's scope. They could also serve to demonstrate other applications of our algorithmic progress. In addition to its scientific goal, Almacrypt also aims at seeding a strengthened research community dedicated to algorithmic and mathematical cryptology.
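For a flavor of the kind of algorithmic analysis involved: the classic baby-step giant-step method solves a discrete logarithm in any group in roughly square-root time, and security parameters are calibrated against the best known attack. This is a standard textbook sketch for a prime modulus, not one of the project's advanced algorithms.

```python
from math import isqrt

def bsgs(g, h, p):
    """Find x with g**x % p == h, if it exists, in O(sqrt(p)) steps."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j -> j
    giant = pow(g, -m, p)                       # g^(-m) mod p (Python >= 3.8)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]          # x = i*m + j
        gamma = (gamma * giant) % p
    return None

x = bsgs(3, 13, 17)  # 3^4 = 81 = 13 (mod 17), so x == 4
```

The results referenced above show that for some instances the discrete logarithm can be computed far faster than such generic square-root bounds suggest, which is precisely why parameter choices based on them need re-examination.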
Management
Duration
01.01.2016-31.12.2021
Funding Code
ERC Advanced Grants 669891
Research Area
The goal of the SYSTEMATICGRAPH project is to put the search for tractable algorithmic graph problems into a systematic and methodological framework: instead of focusing on specific sporadic problems, we intend to obtain a unified algorithmic understanding by mapping the entire complexity landscape of a particular problem domain. A dichotomy theorem is a complete classification result that characterizes the complexity of each member of a family of problems: it identifies all the cases that admit efficient algorithms and proves that all the other cases are computationally hard. The project will demonstrate that such a complete classification is feasible for a wide range of graph problems coming from areas such as finding patterns, routing, and survivable network design, and that novel algorithmic results and new levels of algorithmic understanding can be achieved even for classic and well-studied problems.
Management
Duration
01.07.2017-30.06.2022
Funding Code
Grant agreement ID: 725978
Research Area
TESTABLE addresses the grand challenge of building and maintaining modern web-based and AI-powered application software that is secure and privacy-friendly. TESTABLE intends to lay the foundations for a new integration of security and privacy into the software development lifecycle (SDLC) by proposing a novel combination of two metrics to quantify the security and privacy risks of a program, i.e. code testability and vulnerable behavior indicators. Based on the novel concept of "testability patterns," TESTABLE will empower the SDLC actors (e.g. software/AI developers, managers, testers, and auditors) to reduce the risk by building better security and privacy testing techniques for classical and AI-powered web applications, and by removing or mitigating the impact of the patterns causing the high risk levels.
To achieve these goals, TESTABLE will develop new algorithms, techniques, and tools to analyze, test, and study web-based application software. First, TESTABLE will deliver algorithms and techniques to calculate the risk levels of a web application's code. Second, TESTABLE will provide new testing techniques to improve software testability. It will do so with novel static and dynamic program analysis techniques, tackling the shortcomings of existing approaches to detect complex and hard-to-detect web vulnerabilities, and combining ideas from the security testing and adversarial machine learning fields. TESTABLE will also pioneer a new generation of techniques tailored to testing and studying privacy problems in web applications. Finally, TESTABLE will deliver novel techniques to assist software/AI developers, managers, testers, and auditors in removing or mitigating the patterns associated with high risk.
TESTABLE relies on a long-standing team of nine European partners with strong expertise in security testing, privacy testing, machine learning security, and program analysis, who strive for excellence and have a proven track record and impact in the security communities.
This project has received funding from the European Union's H2020-SU-DS-2020 Grant Agreement No. 101019206.
Management
Duration
01.09.2021-31.08.2024
Funding
4 835 135 €, of which 721 138,75 € for CISPA
Funding Code
101019206
Traditionally, cybersecurity has been viewed as a technical problem for which software and hardware solutions were key. In recent years, however, the focus has moved from the technical to the human aspect of cybersecurity. People are increasingly considered ‘the weakest link’, or light-heartedly referred to as PEBCAK (problem exists between chair and keyboard). With human error and cyber-attacks aimed at individuals rather than machines becoming everyday occurrences, there is a strong need to solve cybersecurity issues on this level.

Coming from a programming background, computer scientists usually aim to fix these weaknesses in the architecture of software. But a piece of software can demand a strong password, and if the employee who has to create it writes it down on a post-it left on their desk, the ‘improved’ software security is easily rendered obsolete. Instead of trying to solve human problems with technological solutions, or reinventing the wheel, a better approach is to draw on existing scientific knowledge and work with experts on human behaviour.

Knowledge from the field of psychology can make awareness campaigns more effective, improve compliance with security policies through tried and tested behavioural change interventions, and train people to detect social cyber-attacks using existing knowledge from the cognitive psychology domain. These collaborations lead to improved individual cybersecurity, safer organisations, and a better functioning (international) society. To achieve this, working with psychologists is key, as they are trained to describe, understand and solve issues of human behaviour. By bringing psychologists into the cybersecurity field, they can apply existing psychological theories and best practices to cybersecurity problems, as well as develop new psychological theories on the specifics of cyberattacks and cyber resilience.
Management
Founded
2020
Duration
01.09.2020-31.08.2023
Funding Code
ID: 2020-1-DE01-KA203-005726
Kamaeleon deals with the adaptation of light electric vehicles through software, so that a vehicle can automatically adapt to the traffic area it is currently travelling on or to the means of transport. The various, currently still rigidly prescribed approval requirements have to be met through this adaptation. Safety is ensured primarily by the speed driven in relation to the respective traffic area, but also by the proximity to other road users. The maximum speed and continuous power output are, however, also criteria for approving vehicles for a specific place of use (e.g. pedelecs on sidewalks, e-bikes on streets). Technically, it is first and foremost the maximum drivable speed that is regulated. This creates a completely new class of vehicles that are not defined by fixed characteristics such as power, maximum speed or equipment, but whose functions are controllable by software.
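The core mechanism, a software governor that adapts the permitted maximum speed to the current traffic area, can be sketched minimally as follows. All area names and limit values here are hypothetical and are not taken from the project or from any regulation.

```python
# Hypothetical per-area speed caps in km/h (illustrative values only).
AREA_LIMITS_KMH = {
    "sidewalk": 6,
    "bike_path": 25,
    "road": 45,
}

def governed_speed(requested_kmh: float, area: str) -> float:
    """Clamp the rider's requested speed to the cap of the current area."""
    limit = AREA_LIMITS_KMH.get(area, 6)  # unknown area: most restrictive cap
    return min(requested_kmh, limit)

# Crossing from a road onto a bike path lowers the cap automatically,
# without any fixed vehicle-class characteristic being involved.
```

The point of the sketch is the design choice the paragraph describes: the vehicle class is defined by this adaptable software policy rather than by fixed hardware characteristics.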
Management
Founded
2019
Duration
01.04.2019 – 31.12.2022
Members
Funding Code
16SV8210
Fuzzing – testing software through randomly generated inputs – is one of the premier methods to discover software vulnerabilities. Fuzzing is easy to deploy; once set up, it can be left running for days and weeks, continuously testing the system with one input after another. Fuzzing also has no false positives: any input that crashes the program triggers a real vulnerability that can be exploited by attackers, if only for a denial-of-service attack.
Fuzzing is slow, though. The overwhelming majority of randomly generated inputs are invalid and will thus be rejected by the program under test. Such inputs can still uncover errors, notably in the routines for parsing and rejecting inputs. In order to reach the deeper functionality behind input processing, though, it is necessary to have inputs that are syntactically valid.
The traditional means to produce valid inputs is to formally specify the input language using formal languages such as regular expressions and grammars – well-established and well-understood formalisms with a sound and detailed theoretical foundation and plenty of applications in practice. Specifying an input language, however, is a huge manual effort, ranging from days for simple data formats to months for complex input languages.
In the past years, the group of PI Zeller has developed a number of techniques that can automatically extract grammars from a given program and a set of sample inputs, and has shown how to construct extremely efficient fuzzers from these grammars. These techniques are so mature that they are even available as open source in a recently published textbook. Yet, the grammar learners still depend on a comprehensive set of samples that cover every feature of the input space.
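A minimal grammar-based fuzzer along these lines can be sketched in a few lines of Python. The grammar is a toy for arithmetic expressions, and the depth handling is a simplification of the textbook approach.

```python
import random
import re

# Toy grammar. The last alternative of each rule is the least recursive,
# so the fuzzer can force termination once maximum depth is reached.
GRAMMAR = {
    "<expr>": ["<term> + <expr>", "<term>"],
    "<term>": ["(<expr>)", "<digit>"],
    "<digit>": [str(d) for d in range(10)],
}

def tokenize(expansion):
    # Split an expansion into nonterminals like <expr> and terminal chars.
    return re.findall(r"<[^<>]+>|[^<>]", expansion)

def fuzz(symbol="<expr>", depth=0, max_depth=8):
    if symbol not in GRAMMAR:
        return symbol  # terminal character
    alternatives = GRAMMAR[symbol]
    if depth >= max_depth:
        alternatives = [alternatives[-1]]  # force the terminating choice
    expansion = random.choice(alternatives)
    return "".join(fuzz(token, depth + 1, max_depth)
                   for token in tokenize(expansion))

random.seed(0)
sample = fuzz()  # a syntactically valid arithmetic expression
```

Every output is valid by construction, so such a fuzzer passes input processing and exercises the deeper program logic, which is exactly why a grammar (written or learned) pays off.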
The aim of the project is therefore to create test generators that specifically target input processors – that is, lexers (tools that compose input characters into words) as well as parsers (tools that compose sequences of words into syntactic structures, such as sentences in natural language). The approach is to feed some trivial invalid input (say, the string “x”) into a program and then to dynamically track the comparisons the program performs before rejecting the input as invalid.
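The comparison-tracking idea can be illustrated with a small Python sketch. This is a deliberate simplification with a toy parser; the actual project instruments real programs.

```python
class TrackedChar(str):
    """A character that records everything it is compared against."""
    observed = []  # shared log of comparison partners

    def __eq__(self, other):
        TrackedChar.observed.append(other)
        return str.__eq__(self, other)

    __hash__ = str.__hash__

def toy_parser(chars):
    # Toy input processor: accepts only inputs starting with '{' or '['.
    return chars[0] == "{" or chars[0] == "["

inp = [TrackedChar(c) for c in "x"]
accepted = toy_parser(inp)
# accepted is False, but TrackedChar.observed now reads ["{", "["]:
# exactly the characters a valid input could start with.
```

Each rejection thus reveals what the program expected next, and repeating the process character by character lets a test generator steer an initially invalid input toward validity.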
Management
Duration
01.06.2019 – 30.11.2021
Research Area