
DIGITAL CISPA Summer School 2020

A digital event of 1.5 weeks featuring public talks, workshops, discussion and breakout sessions, informal sessions, and poster presentation slots.

Due to Covid-19, we combined our Young Researcher Security Convention (SeCon, initially scheduled for spring 2020) and our upcoming Summer School into a larger digital event of 1.5 weeks featuring public talks, workshops, discussion and breakout sessions, informal sessions, and poster presentation slots that give participants the opportunity to present and discuss their own work during the event. You can apply either for the whole school or for one or several modules, and you will receive a certificate of completion after the event.

When

August 19-28, 2020

Where

Online

Fee

Free of charge

Program as PDF: Download
Questions? Contact our Summer School team at: summer-school@cispa.saarland

Application Deadline: August 19, 2020

Registration closed

TOPICS

PRIVACY AND MACHINE LEARNING

Prof. Dr. Mario Fritz,
CISPA

With the advance of machine learning techniques, we have seen a quick adoption of this technology in a broad range of application scenarios. As machine learning becomes more broadly deployed, such approaches become part of the attack surface of modern IT systems and infrastructures. The class will cover several attack vectors and defenses of today's and future intelligent systems built on AI and machine learning technology.

Machine Learning Overview: Machine learning is a quickly advancing research area that has led to several breakthroughs in the past years. We will give a short introduction to some of the most relevant concepts -- including Deep Learning techniques.

Evasion attacks: While there has been a leap in the performance of machine learning systems in the past decade, many open issues remain before such models can be deployed in critical systems with robustness guarantees. In particular, Deep Learning techniques have shown strong performance on a wide range of tasks, but are equally highly susceptible to adversarial manipulation of the input data. Successful attacks that change the output and behavior of an intelligent system can have severe consequences, ranging from accidents of autonomous driving systems to bypassing malware or intrusion detection. We cover techniques in the domain of adversarial machine learning that aim at manipulating the predictions of machine learning models, and we show defenses that protect against such attacks.
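To make the idea concrete, here is a minimal Python/PyTorch sketch of the fast gradient sign method (FGSM), one classic evasion attack; the model, inputs, and epsilon budget are placeholders, and the class covers many more attack variants:

    import torch

    def fgsm_attack(model, x, y, epsilon=0.03):
        # One-step white-box evasion attack: nudge every input dimension
        # by +/- epsilon along the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # stay a valid image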

Inference attacks: Machine learning services are offered by a range of providers, making it easy for clients to, e.g., add intelligent services to their business. A machine learning model is trained on a dataset and can then be accessed, e.g., via an online API. The data and the machine learning model itself are important assets and often constitute intellectual property. Our recent research has revealed that such assets leak to customers that use the service. Hence, an adversary can exploit the leaked information to gain access to the data and/or the machine learning model by merely using the service. We will cover novel inference attacks on machine learning models and show defenses that allow the secure and protected deployment of machine learning models.
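For a flavor of such attacks, the sketch below shows a simple confidence-based membership inference baseline; it assumes the adversary can query the service for per-class confidence scores (the hypothetical api_query below), and it illustrates the idea only, not the specific attacks covered in class:

    import numpy as np

    def infer_membership(probs, threshold=0.9):
        # probs: (n, num_classes) confidence scores the queried service
        # returned for n candidate records. Models are typically more
        # confident on their own training data, so a high top-class
        # confidence is (weak) evidence of training-set membership.
        return probs.max(axis=1) > threshold

    # Hypothetical usage, where api_query(r) returns the service's
    # confidence vector for record r:
    # members = infer_membership(np.stack([api_query(r) for r in records]))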

Privacy: The success of today’s machine learning algorithms is largely fueled by large datasets. Many domains of practical interest are human-centric and targeted at operating under real-world conditions. Therefore, gathering real-world data is often key to the success of such methods, which is frequently achieved by leveraging user data or crowdsourcing efforts. We will present privacy-preserving machine learning techniques that prevent leakage of private information or linking attacks.
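One widely used building block in this space is differentially private SGD, which clips each example's gradient and adds calibrated noise. The sketch below illustrates the idea only; production implementations vectorize per-example gradients and track the privacy budget, and the model, loss, and hyperparameters here are placeholders:

    import torch

    def dp_sgd_step(model, loss_fn, batch, lr=0.1, clip=1.0, noise_mult=1.1):
        # One step of differentially private SGD: clip each example's
        # gradient, then add Gaussian noise before the parameter update.
        grads = [torch.zeros_like(p) for p in model.parameters()]
        for x, y in batch:
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            norm = torch.sqrt(sum(p.grad.norm() ** 2
                                  for p in model.parameters()))
            scale = min(1.0, clip / (norm.item() + 1e-6))  # bound influence
            for g, p in zip(grads, model.parameters()):
                g.add_(p.grad, alpha=scale)
        with torch.no_grad():
            for g, p in zip(grads, model.parameters()):
                g += torch.randn_like(g) * noise_mult * clip  # calibrated noise
                p -= lr * g / len(batch)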

Aug 20, 11.00–12.30 pm + Aug 21, 1.30–3.00 pm

Dr. Yang Zhang,
CISPA

The past decade has witnessed the fast development of machine learning techniques, and the key factor driving the current progress is the unprecedented availability of large-scale data. On the one hand, machine learning and big data can help improve people's quality of life in various domains. On the other hand, they can also pose severe risks to people's privacy. In this talk, I will present our research at the intersection of data privacy and machine learning. First, I will show how to use machine learning techniques to assess and mitigate privacy risks for various types of data, including social network data, location data, and biomedical data. Then, I will present our research on quantifying privacy risks caused by machine learning models. In particular, I will discuss our newest results on membership inference and data reconstruction.

Aug 21, 11.00–12.30 pm

Dr. Mathias Humbert,
Cyber-Defence Campus

Introduction to Privacy and Machine Learning
Machine Learning for Privacy, Privacy for Machine Learning

Aug 19, 1.00–2.30 pm + 3.30–5.00 pm

Prof. Dr. Vitaly Shmatikov,
Cornell Tech

This lecture will describe how machine learning models memorize training data and learn sensitive properties unrelated to the training objective.  I will also explain how to infer the presence of individual data points in the training data and how to use this to audit the provenance of the model’s training dataset.

This lecture will focus on privacy issues in the emerging paradigm of federated learning, which aims to avoid centralized collection and processing of training data by distributing the learning process to individual users and devices.
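For intuition, federated learning replaces centralized training with local updates that a server aggregates, e.g. by federated averaging as in the minimal sketch below; the client states are placeholders, and, as the lecture discusses, the exchanged updates can themselves leak information about the underlying data:

    import torch

    def federated_average(client_states):
        # Average the clients' model parameters (state_dicts) into a new
        # global model; the server never sees the raw training data.
        avg = {}
        for name in client_states[0]:
            avg[name] = torch.stack(
                [state[name].float() for state in client_states]).mean(dim=0)
        return avg

    # Hypothetical round: each client trains locally and returns
    # model.state_dict(); the server then averages the returned states.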

Aug 20, 3.30–5.00 pm + Aug 21, 3.30–5.00 pm

CYBERPHYSICAL SECURITY

Dr. Nils Ole Tippenhauer,
CISPA

Cyber-Physical Systems consist of networked embedded devices that measure and actuate physical processes. Examples of such systems include industrial control systems, drones, and cars. Such systems face novel security challenges through physical-layer attacks, e.g., attacks that aim to damage the system. At the same time, the physical layer can also be leveraged to detect attacks and anomalies.

We will take a deep dive into recent research in the area of physical-layer ICS and wireless security, and show exciting offensive and defensive research to bring security into these engineering domains. In particular, challenges related to industrial protocols, veracity of sensor readings, and host security are considered. In addition, options for (complementary) security countermeasures to address those challenges are proposed and compared.
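As one illustrative building block for physical-layer detection, a physics-based detector compares sensor readings against the values a process model predicts and accumulates the residuals, e.g. with a CUSUM statistic as sketched below; the drift and threshold parameters are placeholders that would be tuned per process:

    def cusum_alarms(residuals, drift=0.05, threshold=1.0):
        # residuals: measured sensor value minus the value a physical
        # process model predicts. Persistent deviations accumulate and
        # cross the threshold (possible attack); ordinary noise decays.
        s, alarms = 0.0, []
        for t, r in enumerate(residuals):
            s = max(0.0, s + abs(r) - drift)
            if s > threshold:
                alarms.append(t)
                s = 0.0  # reset after raising an alarm
        return alarms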

Aug 24, 11.00–12.30 pm + 1.30–3.00 pm

USABLE SECURITY

Dr. Elissa Redmiles,
Microsoft Research

The security field increasingly leverages measurements of both technical and human problems. From evaluating the usability of a new security tool to understanding why people refuse to enable two-factor authentication, human problems are intertwined with security issues. In this seminar, we will cover a crash course in approaches to designing careful human-subjects measurements, such as how to construct and test a validated survey and how to design economic experiments to evaluate security behavior.

Aug 25, 4.00–5.30 pm + Aug 26, 4.00–5.30 pm

Prof. Dr. Sascha Fahl,
Leibniz University Hannover

The history of information security and privacy has taught us that it takes more than technological innovation to develop functional and practical security and privacy mechanisms. Many aspects of information security and privacy depend on both technical and human factors. As a result, in the age of digitalization, we see persistent gaps: between theoretical security and privacy and real-world vulnerabilities, between strong authentication mechanisms and the use of weak passwords, and between data breaches and possible attacks. Human factor challenges in security and privacy affect all actors involved in creating and using security and privacy-preserving technology, ranging from system designers and administrators to software developers and end users.
In this seminar, we will identify and discuss opportunities and challenges when studying usable security for software developers and learn from previous work.

Aug 25, 11.00–12.30 pm + 1.30–3.00 pm

SECURITY TESTING

Prof. Dr. Andreas Zeller,
CISPA

Fuzzing, or testing systems with myriads of randomly generated inputs, is one of the most cost-effective ways to find bugs and vulnerabilities in real-world programs: set up the fuzzer, start it, and let it do its job for hours and days. To be effective, though, fuzzers need to produce inputs that reach and test functionality beyond mere input processing.

In this course, we explore how to build effective fuzzers that make use of language specifications, coverage feedback, program semantics, and more – using the fuzzingbook.org toolkit, which allows you to create, integrate, and extend such fuzzers within minutes. When you’re done with the course, you will be able to (a) use fuzzers effectively for security testing and (b) build your own innovative or domain-specific fuzzer. Includes lots of live coding!
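For a taste of the starting point, the sketch below implements the simplest possible fuzzer, random ASCII inputs fed to a program under test; the target binary is a placeholder, and the course shows how specifications, coverage feedback, and semantics turn this baseline into an effective fuzzer:

    import random
    import subprocess

    def fuzz(max_len=100):
        # The simplest possible fuzzer: a random printable-ASCII string.
        return "".join(chr(random.randrange(32, 127))
                       for _ in range(random.randrange(max_len + 1)))

    # "./program_under_test" is a placeholder target; inputs that kill it
    # with a signal (e.g. SIGSEGV) are recorded as crashing inputs.
    crashes = []
    for _ in range(1000):
        data = fuzz()
        result = subprocess.run(["./program_under_test"], input=data,
                                text=True, capture_output=True)
        if result.returncode < 0:
            crashes.append(data)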

Aug 27, 11.00–12.30 pm + 1.30–3.00 pm

Dr. Hamed Nemati,
CISPA

Modern computer architectures include complex features that make it infeasible to analyze their effects on channels that may compromise program security. Abstract side-channel models have been proposed to approximate these information flows in terms of system-state observations, thus making the analysis tractable. However, using these models to verify security properties relies on the assumption that states with equivalent observations are indistinguishable to an attacker on real hardware. In this talk, we explore a methodology and tool to validate side-channel models: we automatically generate test programs, find inputs that lead to equivalent observations under the model, and measure the actual channels on the hardware. We will also see how this methodology led us to discover previously unknown vulnerabilities in the ARMv8 architecture.
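As a simplified illustration of the measurement side, one can time a target operation on inputs that the model deems observation-equivalent and check that the measurements are statistically indistinguishable; the function and inputs below are placeholders for the hardware-level channels measured in the actual work:

    import statistics
    import time

    def median_runtime(f, x, runs=1000):
        # Median wall-clock time of f(x); a coarse stand-in for the
        # hardware measurements (e.g. cache or timing probes) used in
        # the actual methodology.
        samples = []
        for _ in range(runs):
            start = time.perf_counter_ns()
            f(x)
            samples.append(time.perf_counter_ns() - start)
        return statistics.median(samples)

    # If the model says inputs x1 and x2 yield equivalent observations,
    # median_runtime(target, x1) and median_runtime(target, x2) should
    # be indistinguishable; a consistent gap exposes a missed channel.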

Aug 28, 11.00–12.30 pm

IMPRESSIONS OF OUR SUMMER SCHOOL 2018