RESEARCH GROUPS
Our Faculty are organized into six dedicated research areas that correspond to their research interests and foci. Under this umbrella, each CISPA Faculty member leads their own research group, bringing together experts and innovators to investigate, develop, and share new insights. Click on each Faculty member to learn about their research group, its specific focus areas, and its ongoing projects.
Many of our research projects are supported by prestigious third-party funding. Our researchers apply for these grants in competitive procedures – either individually or in cooperation with other applicants.
In ILLUMINATION, a toolbox of technical privacy methods and interdisciplinary recommendations for privacy-preserving use of centralized LLMs in the healthcare sector is being developed. The technical methods enable LLM users to implement appropriate data protection for their users in the spirit of “Privacy by Design.” The recommendations—based on technical, legal, human-centered, and application-specific perspectives—contribute to strengthening responsible and privacy-compliant LLM practices, support navigation through the complex landscape of LLM implementation, and lay the foundation for legally compliant privacy methods in LLM-based applications.
While there is a long-standing tradition of training various machine learning models for different application tasks on visual data, only the most recent advances in the domain of foundation models have managed to unify those endeavors into highly powerful multi-purpose deep learning models. These models, such as DINOv2 or SAM, are pre-trained on large amounts of public data, which turns them into efficient feature extractors. Using only small amounts of sensitive downstream data and reduced compute resources in comparison to training from scratch, these feature extractors can then be adapted (through full or partial fine-tuning, transfer learning, or prompting approaches) to solve a wide range of downstream applications. Our goal is to bring foundation models and the new and powerful learning paradigms for their adaptation to the complex and sensitive medical domain, with a focus on CT and MRI data.
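The adaptation paradigm described above, a frozen pre-trained feature extractor plus a small head trained on little downstream data, can be pictured in a few lines. This is a toy sketch only: the "extractor" here is a fixed hand-written map standing in for a backbone like DINOv2, the data is invented, and the head is a simple perceptron.

```python
# Toy sketch of adapting a frozen, pre-trained feature extractor with a
# small trained head. The extractor and data are invented placeholders.

def frozen_extractor(x):
    # Stand-in for a pre-trained backbone: a fixed nonlinear map that is
    # never updated during downstream training.
    return [x[0] + x[1], x[0] * x[1], abs(x[0] - x[1])]

# Tiny labelled downstream set (e.g. two hypothetical imaging classes).
data = [([1.0, 2.0], 0), ([2.0, 3.0], 0), ([5.0, 1.0], 1), ([6.0, 2.0], 1)]

feats = [(frozen_extractor(x), y) for x, y in data]
w = [0.0, 0.0, 0.0]
b = 0.0

def predict(f):
    s = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 if s > 0 else 0

# Train only the small linear head (perceptron updates); the extractor
# stays frozen, mirroring partial fine-tuning / linear probing.
for _ in range(50):
    for f, y in feats:
        err = y - predict(f)
        if err:
            w = [wi + 0.1 * err * fi for wi, fi in zip(w, f)]
            b += 0.1 * err

print([predict(f) for f, _ in feats])
```

Only the three head weights are learned here; in practice the same pattern lets a handful of sensitive samples adapt a backbone with millions of frozen parameters.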
Management
Founded
2024
Duration
01.01.2024-31.12.2027
Members
Funding Code
ZT-I-PF-5-227
The project aims to enable software developers to quickly gather information about code snippets they reuse in their codebase. This includes notifications about changes to the code sources, alerts about security issues and bugs, or summaries of discussions related to such code snippets. The Field Study Fellowship is intended to adapt the software to the needs of developers and to improve the effectiveness of code reuse.
The Aletheia project aims to develop innovative technical and interactive methods for detecting deepfakes in images, videos, and audio recordings.
The goal of detection is to combat disinformation and preserve authenticity. Based on machine learning, anomalies are identified that do not occur in authentic content. The results are prepared in a forensically detailed manner for the user. To ensure that users can understand the decision of whether something is fake or not, the outcome is presented in an accessible way using explainable artificial intelligence. Innovative interpretation models are used to highlight anomalies and irregularities, enabling a novel form of forensic analysis by the end user. This results in a precise, detailed, and interpretable output that explains why a particular piece of content was classified as a deepfake. This fosters a user-centered environment of trust and transparency.
In addition, the project focuses on multimodal and scalable analysis. Videos are first analyzed separately in terms of audio and visuals, and then assessed for coherence. The motivation behind the StartUp Secure funding program is the rapid implementation of market-relevant solutions. Therefore, the overall objective of this project is the development of a technology demonstrator in order to bring the knowledge from research and development to market. Ultimately, the project aims to lead to the founding of a company that offers the technologies developed here.
Communication efficiency is one of the central challenges for cryptography. Modern distributed computing techniques work on large quantities of data and critically depend on keeping the amount of information exchanged between parties as low as possible. However, classical cryptographic protocols for secure distributed computation cause a prohibitive blow-up of communication in this setting. Laconic cryptography is an emerging paradigm aiming to realize protocols for complex tasks with a minimal amount of interaction and sub-linear overall communication complexity. If we manage to construct truly efficient laconic protocols, we could add a cryptographic layer of protection to modern data-driven techniques in computing. My initial results in laconic cryptography did not just demonstrate the potential of this paradigm, but proved to be a game-changer in solving several long-standing open problems in cryptography, e.g., enabling me to construct identity-based encryption from weak assumptions. However, the field faces two major challenges: (a) current constructions employ techniques that are inherently inefficient, and (b) the most advanced notions in laconic cryptography are only known from very specific combinations of assumptions, and are therefore just one cryptanalytic breakthrough away from becoming void. This project will make a leap forward on both challenges. I will systematically address them in a work program that pursues the following objectives: (i) develop new tools and mechanisms to realize crucial cryptographic primitives in a compact way; (ii) design efficient protocols for advanced laconic functionalities that sidestep the need for inherently inefficient low-level techniques and widen the foundation of underlying assumptions; (iii) strengthen the conceptual bridge between laconic cryptography and cryptographically secure obfuscation, transferring new techniques and ideas between these domains.
Management
Founded
2022
Duration
01.07.2022-30.06.2027
Funding Code
HORIZON-ERC (ERC-2021-StG)
Research Area
Cyber-physical systems (CPS), which integrate the physical environment with numerous embedded computing systems via digital networks into a tightly coupled overall system, are the key technology behind the growing number of smart environments. Most of these applications are highly safety-critical, as malfunctions in cyber-physical systems can pose immediate risks to human life, the environment, or valuable assets. Continuous runtime monitoring of system functions through suitable monitoring processes is a crucial element in ensuring reliable, predictable, and safe system behavior.
The requirements for the monitoring processes that oversee a CPS are extremely high: failure to detect exceptional situations can lead to the aforementioned hazards, while excessive signaling of false alarms can significantly degrade system performance. The PreCePT project contributes essential foundational research to meet the demand for reliable, provably correct monitoring processes by bridging formal methods from computer science with fault models from measurement engineering. It does so by automatically synthesizing runtime monitors from formal specifications, taking into account the inevitable measurement inaccuracies and partial observability that come with sensor-based environmental monitoring.
The resulting monitoring algorithms combine maximum precision with hard real-time guarantees. Owing to their rigorous derivation from formal semantic models of CPS and the use of advanced arithmetic constraint-solving techniques, they are provably optimal in this regard.
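The core difficulty, monitoring a property of the true system state through imprecise sensors, can be illustrated with a minimal sketch. This is our hypothetical toy, not the PreCePT synthesis algorithms: the property, threshold, and error bound are invented, and the monitor returns a three-valued verdict so that it neither misses hazards nor raises avoidable false alarms.

```python
# Toy runtime monitor for the safety property "true temperature <= LIMIT",
# evaluated on noisy readings with a guaranteed accuracy bound EPS
# (|reading - true value| <= EPS). Values are illustrative assumptions.

LIMIT = 100.0   # safety threshold on the (unobservable) true value
EPS = 2.0       # guaranteed sensor accuracy

def verdict(reading):
    if reading + EPS <= LIMIT:   # even the worst-case true value is safe
        return "safe"
    if reading - EPS > LIMIT:    # even the best-case true value violates the limit
        return "alarm"
    return "unknown"             # the measurement is too coarse to decide

print([verdict(r) for r in [95.0, 99.0, 103.0]])
```

The "unknown" verdict makes the unavoidable imprecision explicit instead of silently converting it into missed detections or false alarms.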
Management
Founded
2023
Duration
01.08.2024-31.07.2026
Funding Code
FI 936/7-1
Research Area
The quest for a science of perspicuous computing continues. With the results that were achieved in the first funding period, we are spearheading a larger movement towards building and reasoning about software-based systems that are understandable and predictable. As a result, CPEC is gradually enlarging its scope in its second funding period. This pertains to three interrelated facets of our research:
· Deepening investigation of various feedback loops within the system lifecycle which are required to feed system analysis insights – in particular, insights from inspection-time justification – back into the design-time engineering of perspicuous systems.
· Emphasising human-centred and psychological research regarding the human-in-the-loop, reflecting the need to investigate the interaction of perspicuous systems with various groups of human stakeholders.
· Interfacing to the societal dimension of perspicuity – society-in-the-loop – echoing the increasing number of regulatory requirements regarding perspicuity put forward in recent years.
CPEC joins the pertinent forces at Saarbrücken and Dresden that are apt to master the challenge of putting perspicuous computing research into the loop. It comprises computer science experts and links to psychology and juridical expertise at Universität des Saarlandes and Technische Universität Dresden as well as the Max Planck Institute for Software Systems and the CISPA Helmholtz Center for Information Security. The participating institutions have developed a joint research agenda to deepen the transregional network of experts in perspicuous systems. It will serve our society in its need to stay in well-informed control over the computerised systems we all interact with. It enables comprehension and control in a cyber-physical world.
Source: https://www.perspicuous-computing.science/research/ (03.05.2023)
Management
Duration
01.01.2023-31.12.2026
Funding Code
TRR248/2
Research Area
The central role of information technology in all aspects of our private and professional lives has led to a fundamental change in the type of program properties we care about. Up to now, the focus has been on functional correctness; in the future, requirements that reflect our societal values, like privacy, fairness, and explainability, will be far more important. These properties belong to the class of hyperproperties, which represent sets of sets of execution traces and can therefore specify the relationship between different computations of a reactive system. Previous work has focussed on individual hyperproperties like noninterference or restricted classes such as k-hypersafety; this project sets out to develop a unified theory for general hyperproperties. We will develop a formal specification language and effective algorithms for logical reasoning, verification, and program synthesis. The central idea is to use the type and alternation structure of the logical quantifiers, ranging from classic first-order and second-order quantification to quantifiers over rich data domains and quantitative operators for statistical analysis, as the fundamental structure that partitions the broad concept of hyperproperties into specific property classes; each particular class is then supported by algorithms that provide a uniform solution for all the properties within the class. The project will bring the analysis of hyperproperties to the level of traditional notions of safety and reliability, and provide a rigorous foundation for the debate about standards for privacy, fairness, and explainability that future software-based systems will be measured against.
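To see why hyperproperties are properties of sets of traces rather than of single traces, consider noninterference, which relates pairs of traces. The sketch below is our toy illustration (trace format and values are invented): a system is noninterferent if any two traces that agree on the public (low) input also agree on the public output, regardless of the secret (high) input.

```python
from itertools import combinations

# Each trace is a tuple (high_input, low_input, low_output).
# No single trace can violate noninterference; only a pair can.

def noninterferent(traces):
    return all(t1[2] == t2[2]
               for t1, t2 in combinations(traces, 2)
               if t1[1] == t2[1])  # same public input

secure = {(0, "req", "ans"), (1, "req", "ans")}     # output independent of secret
leaky = {(0, "req", "ans0"), (1, "req", "ans1")}    # output reveals the secret

print(noninterferent(secure), noninterferent(leaky))
```

Because the check quantifies over pairs of traces, its natural specification needs two trace quantifiers, which is exactly the quantifier structure the project uses to classify hyperproperties.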
Management
Duration
01.11.2022-31.10.2027
Funding Code
HORIZON-ERC (ERC-2021-ADG)
Research Area
For improving the capabilities of general-purpose AI models and for extending their applicability to domains where the temporal dimension – among several others – is of importance, we will target the development of the next generation of Multimodal Space-Time Foundation Models (MSTFMs). These will combine spatio-temporal understanding, which is important even for modalities such as the visual one that have already been introduced in large generative models, with the effective management of new time-relevant modalities that are yet to be supported in foundation models, such as industrial time series data, remote sensing data, and health-related measurements. Real and synthetic data, to mitigate data scarcity, will be leveraged for training general-purpose MSTFMs and for further adapting them to specific downstream tasks. Real data used for training will include data directly provided by members of the consortium as well as data from relevant European Data Spaces, while complementary synthetic data will be generated by exploiting existing generative AI capabilities as well as new ones developed in the project. European HPC infrastructure is directly included in the consortium to ensure the availability of the necessary computing resources.
28 partners including CISPA (UT, FZJ, UvA, TU/e, UNITN, CVC-CERCA, JSI, LOBA, BSC, CSC, CINECA, LMU, UVEG, UNIMORE, AALTO, ELLIS Alicante, ELLIS Tübingen, CTU, KUL, Voxist, VALEO, Robotwin, OPENCHIP, DEIMOS, GENCAT, VRT). Associated partners: EPFL, ETH Zürich.
Management
Founded
2025
Duration
01.07.2025-30.06.2027
Funding Code
HORIZON-CL4-2024-HUMAN-03-01
This project will address major challenges hampering the deployment of AI technology. These grand challenges are fundamental in nature. Addressing them in a sustainable manner requires a lighthouse rooted in scientific excellence and rigorous methods. We will develop a strategic research agenda which is supported by research programmes that focus on “technical robustness and safety”, “privacy-preserving techniques and infrastructures”, and “human agency and oversight”. Furthermore, we focus our efforts to detect, prevent, and mitigate threats and enable recovery from harm through three grand challenges: “Robustness guarantees and certification”, “Private and robust collaborative learning at scale”, and “Human-in-the-loop decision making: Integrated governance to ensure meaningful oversight”, which cut across six use cases: health, autonomous driving, robotics, cybersecurity, multimedia, and document intelligence. Throughout the project, we seek to integrate robust technical approaches with legal and ethical principles, supported by meaningful and effective governance architectures, to nurture and sustain the development and deployment of AI technology that serves and promotes foundational European values. Our initiative builds on and expands the internationally recognized, highly successful, and fully operational network of excellence ELLIS. We build on its three pillars: research programmes, a set of research units, and a PhD/PostDoc programme, thereby connecting a network of over 100 organizations and more than 337 ELLIS Fellows and Scholars (113 ERC grants) committed to shared standards of excellence. Not only will we establish a virtual center of excellence; all our activities will also be inclusive and open to input, interactions, and collaboration with AI researchers and industrial partners in order to drive the entire field forward.
Management
Founded
2022
Duration
01.09.2022-31.08.2025
Funding Code
HORIZON-CL4-2021-HUMAN-01-03
Ever since the last coronavirus epidemic caused by SARS-CoV-1, plans and tools for the containment of epidemics have been developed. However, an appropriate early warning system for local health authorities that addresses this need on a regional, targeted level is not available. In the current SARS-CoV-2 pandemic, the need for such a system has become increasingly obvious. The heterogeneity of different regions and localized outbreaks requires locally adapted monitoring and evaluation of infection dynamics.
Early recognition of an emerging epidemic is a crucial part of a successful intervention. The comparison of Germany to other European nations illustrates how crucial a timely implementation of non-pharmaceutical interventions is for the containment of an epidemic. Hence, continuous monitoring of infection processes is indispensable. For strategic planning of political interventions, epidemiological modelling and scenario calculations for forecasting and evaluation of interventions and scenarios have shown their importance. The accuracy of such forecasts largely depends on the robustness and breadth of the underlying data. Further, there is a need for an intelligible presentation of often complex simulation results without oversimplifying their interpretation and inherent uncertainty.
In this proposal, we develop a platform that integrates data streams from various sources in a privacy-preserving manner. For their analysis, a variety of methods, from machine learning to epidemiological modeling, are employed to detect local outbreaks early on and enable an evaluation under different assumptions and on different scales. These models will be integrated into automated workflows and presented in an interactive web application with custom scenario simulations. The platform will be based on insights gained by retrospective and prospective evaluation of the COVID-19 pandemic, using SARS-CoV-2 as a blueprint for the prevention and containment of future respiratory virus epidemics. The platform will be transferred to the Academy for Public Health Services and optimized in pilot projects with selected local health authorities under real-world conditions.
Subproject: Federated Analytics and Machine Learning, Privacy Models, and Benchmarking: CISPA
The goal of PrivateAIM is to develop a federated platform for machine learning (ML) and data analysis within the framework of the Medical Informatics Initiative (MII), in which analyses are brought to the data rather than the data being brought to the analyses. Methods that enable distributed data processing in the Data Integration Centers (DIZ) established by the MII are important for several reasons:
· Patient data may only be used without consent if anonymity is ensured.
· Federated technologies can help connect the MII with other health data networks.
However, the mechanisms currently established within the MII have significant limitations and, for example, are not suitable for complex ML and data science tasks. Moreover, federated platforms developed in other contexts are:
· complicated to set up and operate,
· support only a limited number of analysis or ML methods,
· do not implement modern privacy-preserving technologies, and
· are not scalable or mature.
The PrivateAIM consortium, supported by all MII consortia, brings together experts to develop the next generation of federated analysis and ML platforms for the MII. The Federated Learning and Analysis Methods Platform (FLAME) will combine state-of-the-art federated methods with innovative privacy models for multimodal data. Using components for monitoring and controlling the level of protection, these will be integrated into a distributed infrastructure that can be easily adopted by the Data Integration Centers.
The practical implementation will be supported by addressing challenges at the intersection of technology and law, developing concepts for operation by hospital IT departments, and coordinating with ethics committees and data protection officers.
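The guiding idea, bringing the analysis to the data rather than the data to the analysis, can be made concrete with a minimal sketch. This is our own toy illustration, not the FLAME platform: the site names and values are invented, and real federated systems add privacy models such as differential privacy on top of the aggregates.

```python
# Federated computation of a global mean: each site runs the analysis
# locally and only shares aggregate statistics, never raw records.

sites = {  # hypothetical data integration centers with local measurements
    "site_a": [4.1, 5.0, 6.2],
    "site_b": [5.5, 4.9],
    "site_c": [6.0, 5.1, 4.8, 5.3],
}

def local_aggregate(records):
    # Runs inside the site; only (sum, count) leaves the site.
    return (sum(records), len(records))

aggregates = [local_aggregate(r) for r in sites.values()]
global_mean = sum(s for s, _ in aggregates) / sum(n for _, n in aggregates)
print(round(global_mean, 3))
```

The coordinator reconstructs the exact global statistic while each patient-level record stays inside its site, which is the precondition for using the data without individual consent.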
The "AlgenCY" project aims to thoroughly explore and evaluate the diverse possibilities and challenges that generative AI methods bring to the field of cybersecurity. We seek to understand how these technologies can be effectively used to defend against cyber threats, while also identifying the potential vulnerabilities and risks they may pose themselves. The goal of this research is to make well-founded predictions about the future impact of generative AI on cybersecurity. Based on these insights, targeted strategies and solutions will be developed to strengthen digital security. As part of this subproject, CISPA will specifically investigate the security aspects of large language models (LLMs) and other generative AI methods. The focus will be both on securing these technologies themselves and on exploring how they can be applied within the context of IT security.
Management
Founded
2023
Duration
01.11.2023 - 31.10.2026
Funding Code
16KIS2012
Computer systems in banks and insurance companies, as well as in autonomous vehicles or satellites, are promising targets for cyberattacks and must be protected to withstand such attacks or to endure them while continuing to operate safely. Unfortunately, attackers gain increasing opportunities as these systems grow more complex, which means that in defending them, we must assume that some attacks could be successful. Fortunately, resilience techniques already exist, such as triple replication of the computer system along with the protocol, allowing the correct result to be determined through consensus even if one instance delivers a faulty result due to a successful cyberattack. However, for these techniques to be applied, they must be adapted to the system’s structure, especially how the components of the system interact with each other. The resilience techniques developed so far are limited to certain forms of interaction, and developing new techniques for more complex forms of interaction remains a difficult and error-prone task. In particular, tools that support the development of such protocols through correctness checks can usually only be applied to fully completed protocols, and their use typically requires rare expert knowledge. In the FM-CReST project, researchers from CISPA, Germany, and the SnT at the University of Luxembourg have joined forces to develop a new class of highly automated and easy-to-use tools that assist in designing provably correct resilience protocols. To achieve this, we rely on co-design of protocols with complex interaction patterns and base the development of our tools on observations made during protocol design, aiming to simplify similar tasks in the future.
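The triple-replication idea mentioned above can be sketched in a few lines. This is our toy illustration of the principle, not the FM-CReST tooling: the replicas here are simple functions, and the "attack" is a replica returning a wrong result.

```python
from collections import Counter

# Execute the same request on three replicas and take the majority result,
# so one compromised or crashed replica cannot change the outcome.

def majority(results):
    value, votes = Counter(results).most_common(1)[0]
    if votes >= 2:          # 2-of-3 agreement masks one faulty replica
        return value
    raise RuntimeError("no majority: more than one replica faulty?")

def honest(x):
    return x * x

def compromised(x):
    return x * x + 1        # a successful attack yields a wrong result

replicas = [honest, honest, compromised]
print(majority([r(7) for r in replicas]))
```

The hard part that the project addresses is exactly what this sketch hides: adapting such protocols to the real interaction structure of the system and proving them correct.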
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 513487900.
Management
Founded
2023
Duration
01.12.2023-30.11.2026
Funding Code
JA 2357/4-1; Project number 513487900
In this project, we want to establish Global Synchronization Protocols (GSPs) as a new computational model for concurrent systems with a parametric number of components. Besides local updates of a component, GSPs support synchronous global updates of the system, which may be guarded with global conditions on the state of the system. With this combination, they can be used to model applications that depend on global synchronization between components, e.g., by consensus or leader election, at an abstraction level that hides the internal implementation of the agreement protocol while faithfully preserving its pre- and postconditions. We will identify conditions under which parameterized safety verification of GSPs remains decidable, even though this problem is in general undecidable for the combination of communication primitives that GSPs support. A preliminary version of GSPs already supports both global synchronization and global transition guards, and we plan to further extend the system model to include asynchronous message-passing and different extensions for fault tolerance, while preserving decidability of parameterized verification. Moreover, we will identify conditions for small cutoffs for safety verification, i.e., small bounds on the number of components that need to be considered to provide parameterized correctness guarantees. Based on these cutoffs, we will also develop an approach to automatically synthesize GSPs that satisfy given properties by construction. Finally, we will investigate a refinement-based synthesis approach for GSPs and compare its properties to the cutoff-based approach. Our research into decidable fragments of GSPs will be guided by applications from different areas, such as sensor networks, robot swarms, or blockchain-based applications.
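A guarded global update can be pictured as follows. This is our toy illustration, not the formal GSP model: the component states and the guard are invented, and the point is only that the transition fires synchronously for all components, and only when a condition on the whole system state holds.

```python
# Toy guarded global update: n identical components, one global transition.

def global_step(states, guard, update):
    # Fire the global transition only if the guard holds on the full
    # system state; then update every component synchronously.
    return [update(s) for s in states] if guard(states) else states

def guard(states):
    return all(s == "ready" for s in states)   # global condition

def update(state):
    return "active"                            # synchronous update

print(global_step(["ready", "ready", "idle"], guard, update))
print(global_step(["ready", "ready", "ready"], guard, update))
```

Note that the model is parametric: the same guard and update work for any number of components, which is what makes parameterized verification (correctness for every n) the relevant question.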
Management
Duration
01.07.2023-30.06.2026
Funding Code
JA 2357/3-1; Project number 497132954
Research Area
Novel on-body devices offer new, scalable user interfaces that are more intuitive and direct to use. However, the close proximity of input and output to the body introduces serious new privacy risks for users: the large hand and finger gestures typically used for input are significantly more susceptible to observation by third parties than established forms of touch input. This is even more true for visual output on the body. This is particularly problematic since on-body devices are typically used during mobile activities in non-private environments. The primary goal of this project is to contribute to the scalability of on-body computing in public spaces by developing interaction techniques for input and output of private information that provide improved resistance to privacy violations. Our approach focuses on leveraging the unique interaction properties of the human body: high manual dexterity, high tactile sensitivity, and a large available surface for input and output, combined with the ability to flexibly shield input and output through variable body posture. These properties can form the basis for new body-based input and output techniques that are scalable and (practically) unobservable. This goal is largely unexplored so far. It is very challenging due to the new and highly diverse forms and scales of on-body devices as well as novel forms of multimodal input and output. These challenges are further complicated by the inherent complexity of social environments, respective proxemics, and the attention of users and bystanders. To create a design space for these interactions, we will empirically investigate the privacy of tactile input, visual, and haptic output at various body locations, depending on body posture and proxemic configurations. 
Subsequently, we will systematically design and implement body-related input gestures and scalable techniques for multimodal interaction that preserve privacy in social environments according to a generalized threat model. We will use attention models that incorporate the human body. The new interaction techniques will be empirically evaluated with users in realistic scenarios and in the laboratory to assess how their properties affect usability, privacy, and scalability. Both will help us understand the internal and external validity of our approach. We expect the results of this project to make a significant contribution to establishing the foundations for scalable body-based interactions that preserve privacy.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 521601028
Management
Founded
2023
Duration
01.04.2024-31.03.2027
Members
Funding Code
KR 5384/2-1; Project number 521601028
Many of today’s critical infrastructures, such as power grids or cellular networks, are distributed systems composed of autonomous nodes connected over a network. To avoid single points of failure, a central goal in distributed systems is to implement such systems in a fault-tolerant manner. Fault-tolerant systems remain secure and available even if some minority of nodes crash or spread incorrect information. To improve robustness and scalability, cutting-edge systems frequently rely on cryptography. In spite of its benefits, careless use of cryptography can incur performance penalties or lead to vulnerabilities. Both of these aspects significantly complicate its use in practice. As a result, many real-world systems use cryptography sparingly and therefore lack both robustness and scalability. To improve this unsatisfying state of affairs, the objectives of CRYPTOSYSTEMS are as follows:
• New Formal Models. Established formal models from cryptography and distributed systems pursue independent security goals and therefore lack compatibility. CRYPTOSYSTEMS will develop new, more appropriate formal models for analyzing the security of cryptographic distributed systems.
• Efficient and Robust Distributed Algorithms. The use of cryptography is currently underexplored in distributed systems. CRYPTOSYSTEMS will present exciting new applications of cryptography that will lead to the development of more robust and scalable distributed systems.
• Cryptography for Distributed Algorithms. Cryptography is seldom developed with distributed algorithms as its primary use-case in mind. This results in inefficient or unwieldy cryptography which hampers the efficiency of distributed algorithms. To counteract this, CRYPTOSYSTEMS will develop cutting-edge cryptography such as compact signatures and communication efficient distributed randomness generation routines. Importantly, these tools will be specifically geared toward use in distributed algorithms.
Management
Duration
01.09.2023-31.08.2028
Funding Code
HORIZON-ERC (ERC-2023-StG)
Research Area
Digital signatures are a fundamental and versatile cryptographic tool. In a digital signature scheme, a signer holding a secret key can sign a message in such a way that anyone can efficiently verify the signature using a corresponding public key. On the other hand, it should be impossible to create a signature in the signer’s name (as long as its secret key indeed remains secret). An important variant of signature schemes is the multi-signer version, where multiple signers can jointly create a compact signature on a message. Later on, the resulting signature can be efficiently verified against (an aggregate of) all of their public keys. This makes it possible to create a storage-efficient proof that a certain number of parties, say half of all parties in the system, has signed a message. This intriguing aspect of multi-signer signatures has recently received an enormous amount of attention in the context of blockchain and consensus protocols.
The aim of this project is to improve the security and understanding of multi-signer signatures, namely: 1) to develop modular frameworks for building multi-signer signatures from weaker primitives such as identification schemes; this type of design approach has seen much success in the construction of basic signatures and will lead to multi-signer signature schemes from a wider array of mathematical hardness assumptions. 2) To revisit and improve the security guarantees of existing multi-signer schemes used in practice; our main aim is to prove existing constructions secure with respect to a powerful adversary that can dynamically corrupt signers over the course of time. This type of security guarantee is often required in practical applications, e.g., consensus protocols, yet it is not satisfied by most efficient schemes. And 3) to improve the robustness of distributed key generation (DKG) protocols. Many multi-signer schemes rely on a trusted dealer to set up correlated keys among the signers. This is problematic for many natural applications, such as blockchain protocols, where such a dealer might not be available. Instead, parties can use a DKG protocol to jointly set up such a correlated set of keys. This makes DKG protocols a crucial tool for running multi-signer protocols in a trust-free manner. Unfortunately, existing DKG protocols rely on unrealistic network assumptions or tolerate only a small number of corruptions. The goal of this project is to improve their robustness in both of these regards.
Management
Duration
01.09.2022-31.08.2025
Members
Funding Code
LO 3021/1-1
Along with releases of new web standards in browsers (WebAssembly, WebGPU, WebUSB, etc.), more and more features of connected devices are directly usable from the web. While these specifications hold great promise from a performance perspective, they raise significant security concerns. In this project, we aim to analyze the security implications of new features that provide direct or indirect access to low-level hardware features. Building on our previous research, we will (1) investigate the impact of directly mounting native side-channel attacks from the web, (2) develop new methods to efficiently port attacks to browsers to facilitate a faster risk assessment for novel attacks, (3) explore how side-channel attacks can leak secrets or enable user tracking via hardware fingerprints, and (4) lay the foundations for secure low-level web standards by exploring the effectiveness of existing and novel countermeasures (e.g., sandboxing) through the lens of hardware/software contracts.
Management
Duration
01.09.2022 – 31.08.2025
Members
Funding Code
RO 5251/1-1; SCHW 2104/2-1
Stable critical infrastructures are the foundation of a functioning society. Embedded systems—such as those used in vehicles, energy grids, and medical devices—play a central role in this context. Their increasing interconnectivity offers many advantages but also introduces new vulnerabilities. Existing security testing techniques cannot yet be effectively applied to embedded systems. This is mainly due to the limited computational power of embedded devices, which prevents thorough testing.
The research project Fuzzware aims to enable scalable, automated security testing of embedded systems. To achieve this, it employs a technique known as rehosting, which allows firmware to be efficiently executed on servers without the need for the original hardware. This setup enables the injection of thousands of unexpected inputs per second into the embedded system. As a result, Fuzzware can systematically and at scale identify vulnerabilities in embedded software.
This allows developers and integrators to detect and fix security issues early in the development process. The project has the potential to revolutionize how embedded systems are tested for robustness and security. The core innovation lies in the scalability of the testing approach, paving the way for systematic and large-scale hardening of embedded systems.
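The core loop of such a fuzzing campaign is simple to sketch. Below, a Python function stands in for a rehosted firmware packet handler (the handler, its planted bug, and all names are hypothetical, not Fuzzware's actual interface); the fuzzer feeds it random inputs and records any fault that is not a graceful rejection.

```python
import random

def firmware_handler(packet: bytes) -> int:
    """Stand-in for rehosted firmware logic: a length-prefixed checksum."""
    if len(packet) < 2:
        raise ValueError("short packet")        # graceful rejection
    length = packet[0]
    checksum = 0
    # Planted bug: trusts the untrusted length byte, so it reads past
    # the end of the packet (IndexError) when length > len(packet) - 1.
    for i in range(length):
        checksum ^= packet[1 + i]
    return checksum

def fuzz(target, iterations=2000, seed=0):
    """Throw random inputs at the target; rehosting lets this loop run
    on a server at thousands of executions per second."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except ValueError:
            pass                                # expected rejection, not a bug
        except Exception:
            crashes.append(data)                # unexpected fault: triage this
    return crashes

crashing_inputs = fuzz(firmware_handler)
assert crashing_inputs  # the planted out-of-bounds read is found quickly
```

Real firmware fuzzers additionally model peripherals and use coverage feedback to steer input generation, but the detect-and-collect skeleton is the same.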
The development of reliable and secure software systems requires systematic and comprehensive testing. Since structured and standardized data formats are used across many software applications to exchange data, the systems involved must be robust and secure enough to handle manipulated or faulty datasets. So far, the test data needed for testing structured formats, such as those used in electronic invoices, could only be generated manually; it is therefore scarce and correspondingly costly.
The researchers involved in the “InputLab” project are developing methods for the automatic generation of test data for data formats for which a data schema exists. Such schemas are defined as part of standardization processes for digital formats and contribute to the interoperability of different software systems. The test data generated in this project can be used to trigger, diagnose, and repair malfunctions in applications. This includes detecting subtle errors that do not manifest in drastic behaviors such as software crashes. Thus, serious problems and costs can be avoided. To make it easier for development teams to use the datasets for their purposes, these datasets should be flexibly adaptable to the characteristics of example datasets.
Through the developments in this project, the diverse demand for high-quality test data for software systems with structured formats will be addressed. A minimal number of datasets should cover a wide range of faulty or manipulated data points to enable effective and cost-efficient testing. At the same time, the generated test data can be used to examine a variety of software applications for vulnerabilities and thus sustainably improve their security.
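The idea of deriving both valid and deliberately faulty test records from a schema can be illustrated with a toy example. The mini-schema below (field names and rules are invented for illustration, loosely in the spirit of an electronic invoice) drives a generator for conforming records and a mutator that plants a single schema violation per record.

```python
import random

# Hypothetical mini-schema; a real project would consume a published
# standard schema (e.g., an XML or JSON Schema) instead.
SCHEMA = {
    "invoice_id": {"length": 8},
    "amount":     {"min": 0.0, "max": 10_000.0},
    "currency":   {"enum": ["EUR", "USD", "GBP"]},
}

def generate_valid(rng):
    """Derive a schema-conforming record from the schema."""
    return {
        "invoice_id": "".join(rng.choice("ABCDEF0123456789") for _ in range(8)),
        "amount": round(rng.uniform(0.0, 10_000.0), 2),
        "currency": rng.choice(SCHEMA["currency"]["enum"]),
    }

def mutate(record, rng):
    """Plant one schema violation to probe error handling, including
    subtle misbehavior that never surfaces as a crash."""
    bad = dict(record)
    field = rng.choice(list(bad))
    bad[field] = {"invoice_id": "", "amount": -1.0, "currency": "???"}[field]
    return bad

def is_valid(record):
    return (len(record["invoice_id"]) == SCHEMA["invoice_id"]["length"]
            and SCHEMA["amount"]["min"] <= record["amount"] <= SCHEMA["amount"]["max"]
            and record["currency"] in SCHEMA["currency"]["enum"])

rng = random.Random(0)
record = generate_valid(rng)
assert is_valid(record) and not is_valid(mutate(record, rng))
```

A small pool of such mutated records can cover many distinct violation classes, which is exactly the cost-efficiency goal stated above.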
Joint Project: Supply Chain Security - Identification of (Intentional and Unintentional) Security Vulnerabilities - ASRIOT
The project "Automated Security Analysis of RTOS and MCU-based IoT Firmware (ASRIOT)" aims to explore automated security analyses of firmware based on real-time operating systems (RTOS) and single-chip computer systems, so-called microcontrollers (MCUs), in order to create trustworthy control systems. Such control systems are used, for example, to monitor manufacturing processes or to control vehicles. The platform we intend to create will automatically analyze proprietary firmware in order to identify manufacturer-specific components and libraries. By analyzing binary files structured according to a predefined scheme, the platform will automatically detect added or adapted components. Furthermore, it will automatically detect typical security vulnerabilities in communication, encryption, and memory usage, and summarize the findings in detailed reports. The measures will be tested using an integrated demonstrator, which will enable us to present directly applicable technologies at the end of the project.
Management
Founded
2023
Duration
01.04.2023 - 31.03.2026
Funding Code
16KIS1807K
Autonomous vehicles make steering decisions on the road independently, based on AI-driven processing of various sensor data. Malicious attacks on these systems can cause incorrect maneuvers and thus accidents; any assessment of the reliability of such systems must therefore systematically address these attacks and their countermeasures.
This project analyzes the effects of manipulations on current sensors and sensor processing pipelines. Furthermore, secure platforms for sensor data processing will be designed and implemented, and their effectiveness in reliably defending against manipulation and compromise attempts will be demonstrated. The resulting design of a security-focused platform will serve as a reference for future research and product development in this area.
We approach the problem from three complementary research directions: defense against physical sensor manipulations, detection and prevention of manipulations in the sensor fusion pipeline, and trustworthy processing platforms for the automotive industry. Solutions for each topic will first be theoretically analyzed and then tested for effectiveness in simulations. In parallel with threat analysis and the development of countermeasures, we will develop and implement our practical demonstrator platform.
Management
Founded
2024
Duration
02.10.2024 - 31.12.2027
Funding Code
45AVF5A011
Subproject: Research on Security-Relevant Cyberattacks, Threat Scenarios, and Attack Detectors
The goal of ProSeCA is to research and implement modern cybersecurity architectures for vehicles. To ensure the highest possible functional and data security, ProSeCA adopts a holistic approach that prevents typical security issues, such as memory errors, at a fundamental level. Cybersecurity in connected autonomous driving (e.g., secure internal and external communication) is critical for the safety of passengers and other road users. As demands on these systems grow, the variety of heterogeneous components and functions in today’s automotive architectures creates ever larger attack surfaces for cyberattacks.
While the UNECE R155 directive mandates cybersecurity management for future new vehicle approvals, experience with suitable architectures and security components is still lacking. ProSeCA aims to fill this gap: Following the principle "A system is only as secure as its weakest link," and aligned with ISO/SAE 21434, it focuses on developing a security concept as a modular and standardizable trusted hardware/software architecture for vehicle control units. This includes hardware-based protection measures, the Rust programming language, and solutions for automated testing of software components as security building blocks. A demonstrator will showcase the feasibility of such new architectures as an OEM-independent solution. The consortium of eight partners represents a targeted cross-section of the automotive value chain.
Management
Founded
2023
Duration
01.09.2023 - 30.06.2026
Funding
PDIR
Funding Code
19A23009G
Programs use strings to represent all kinds of textual data: names, credit card numbers, email addresses, URLs, bank accounts, color codes, and much more.
However, programming languages offer only limited support for verifying whether the contents of these strings actually meet expectations. This can lead not only to functional errors but also to frequent attacks such as script or SQL injections. In this proposal, we introduce string types—a means to express the valid values of strings using formal languages such as regular expressions and grammars. We develop methods to specify which sets of strings are acceptable as values and to dynamically and statically check whether the program is correct with respect to the specified string types. Since these are formal languages, string types also enable the generation of instances from these specifications.
This makes massive automated testing of string-processing functions with valid inputs possible, while string types, in turn, verify string outputs for lexical, syntactic, and semantic correctness. Finally, we introduce means to learn such specifications from code and its executions so that string types can be introduced easily. The consortium brings extensive experience in static analysis of parsing code, unit test and oracle generation, as well as language-based specification and testing. Their combined expertise will ensure the success of this project.
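A minimal sketch of the string-type idea, assuming a regular expression as the formal language (the `StringType` class, the `HexColor` type, and the sampler below are illustrative inventions, not the project's actual design): the same specification drives both the dynamic check and the generation of valid instances.

```python
import random
import re

class StringType:
    """A string type: the set of valid values is a formal language,
    here given by a regular expression."""

    def __init__(self, name, pattern):
        self.name = name
        self.regex = re.compile(pattern)

    def check(self, value):
        """Dynamic check: reject any value outside the formal language."""
        if not self.regex.fullmatch(value):
            raise TypeError(f"{value!r} is not a valid {self.name}")
        return value

# Illustrative string type: CSS-style hex color codes.
HexColor = StringType("HexColor", r"#[0-9a-f]{6}")

def sample_hex_color(rng):
    """Instance generation from the same specification, enabling massive
    automated testing of string-processing functions with valid inputs."""
    return "#" + "".join(rng.choice("0123456789abcdef") for _ in range(6))

rng = random.Random(42)
HexColor.check(sample_hex_color(rng))   # generated instances always pass
HexColor.check("#ff00aa")               # valid literal passes
```

Passing a string such as `"not-a-color"` to `HexColor.check` raises a `TypeError`, turning what would otherwise be a silent functional error or injection vector into an immediate, diagnosable failure.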
Management
Founded
2024
Duration
01.09.2024 - 31.08.2027
Members
Funding Code
ZE 509/10-1
Hepatitis D is by far the most severe form of chronic viral hepatitis, frequently leading to liver failure, hepatocellular carcinoma, and death. It is caused by coinfection of hepatitis B patients with the hepatitis D virus (HDV). Up to 20 million individuals are infected with HDV worldwide, including about 250,000 patients in the European Union. Knowledge of the disease pathophysiology and host-virus interactions that could explain the large interindividual variability in the course of hepatitis D is very limited. In particular, it is unknown why 20-50% of patients are able to spontaneously control HDV replication, why the majority, but not all, patients progress to advanced stages of liver disease, and why only some patients show off-treatment responses to antiviral treatment with either pegylated interferon alpha or the novel HBV/HDV entry inhibitor bulevirtide. As hepatitis D is an orphan disease, no multicenter cohorts of HDV-infected patients with appropriate biobanking are available. There is also no reliable animal model that would allow the study of host responses. Thus, there is an urgent clinical, social, and economic need to better understand the individual factors determining the outcome of infection and to identify subjects who benefit from currently available treatments. Hepatitis D is a prototype infection that could hugely benefit from a novel individualized infectious-medicine approach. We aim to perform an unbiased screening of a large multicenter cohort of well-defined HDV-infected patients, followed by mechanistic studies to determine the functional role of distinct molecules. The specific parameters identified could have an immediate impact on personalized surveillance strategies and antiviral treatment approaches. By combining exceptional clinical, immunological, bioinformatic, and virological expertise from leading centers in Europe, D-SOLVE aims to reduce disease burden, improve patients' quality of life, and save direct and indirect costs caused by HDV infection.
Management
Duration
01.10.2022 - 30.09.2026
Members
Funding Code
HORIZON-HLTH-2021-DISEASE-04-07
Research Area