CISPA Summer School on Trustworthy Artificial Intelligence
We are inviting applications from undergraduate and graduate students as well as researchers in the area of cybersecurity and related fields.
During our annual scientific event, students will have the opportunity to attend one week of scientific talks and workshops, present their own work during poster sessions, and discuss relevant topics in trustworthy AI with fellow researchers and expert speakers. The program will be complemented by social activities.
The Summer School is a full-week event, kicking off with check-in from 10 to 11 am on Monday, September 5. The program will run daily from 9 am to 6 pm, providing enough room for sessions, discussions, and socializing during coffee and lunch breaks. The event will finish on Friday, September 9, around 2:30 pm.
Application Process: Please apply by filling in our application form and uploading your CV and a short Motivation Letter.
Application Deadline: August 26, 2022 (CEST).
Notification of Acceptance: There will be several rounds of acceptance; the last round will notify participants by August 29, 2022, at the latest.
Fee: €200 (includes the full program, food and beverages during the week, and social activities)
Travel Grant: We are offering travel grants. With your application, you can apply for a grant of up to €500 per person (actual travel costs incurred; economy flight or 2nd-class train ticket). Please do not book any travel arrangements before you have been selected by our jury and accepted to our Summer School. After acceptance, please send your travel arrangements to science-outreach@cispa.de for approval prior to booking anything. We will confirm whether your expenses will be covered; you will be reimbursed after successfully completing the Summer School.
Presentations: There will be two sessions during which participants can present their own work / scientific poster. The presentation is not mandatory to receive a certificate of attendance, but we highly encourage you to contribute your work, as it will provide you with valuable feedback from an expert audience and might kindle interesting discussions.
Antti Honkela, University of Helsinki
Short bio: Until December 2018, he was an Assistant Professor of Statistics at the Department of Mathematics and Statistics and the Department of Public Health, University of Helsinki. Before that, he was an Academy Research Fellow at the Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki.
His research interests include Bayesian machine learning and probabilistic modelling, privacy-preserving machine learning and differential privacy, as well as computational systems biology. He is interested in the modelling and analysis of quantitative genomic sequencing data, especially genomic time series data.
Sessions:
Deep learning with differential privacy
Differential privacy and Bayesian inference
Catuscia Palamidessi, INRIA
Short bio: Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the team COMETE. She was a Full Professor at the University of Genova, Italy (1994-1997) and at Penn State University, USA (1998-2002). Palamidessi's research interests include privacy, machine learning, fairness, secure information flow, formal methods, and concurrency. In 2019 she obtained an ERC Advanced Grant to conduct research on privacy and machine learning. She has been PC chair of various conferences, including LICS and ICALP, and a PC member of more than 120 international conferences. She is on the editorial boards of several journals, including IEEE Transactions on Dependable and Secure Computing, Mathematical Structures in Computer Science, Theoretics, the Journal of Logical and Algebraic Methods in Programming, and Acta Informatica. She is President of ACM SIGLOG and a member of the steering committees of CONCUR and CSL.
Sessions:
Differential privacy
Fairness notions in ML
Martin Jaggi, EPFL
Short bio: Martin Jaggi is a Tenure Track Assistant Professor at EPFL, heading the Machine Learning and Optimization Laboratory. Before that, he was a post-doctoral researcher at ETH Zurich, at the Simons Institute in Berkeley, and at École Polytechnique in Paris. He earned his PhD in Machine Learning and Optimization from ETH Zurich in 2011 and an MSc in Mathematics, also from ETH Zurich.
Session:
Training with adversaries, from Federated Learning to decentralized learning
Battista Biggio, PRA Lab
Short bio: Battista Biggio received the MSc degree in Electronic Engineering (with honors) in 2006 and the PhD degree in Electronic Engineering and Computer Science in 2010, both from the University of Cagliari, Italy. Since 2007 he has been working at the Department of Electrical and Electronic Engineering of the same university, where he is currently an Assistant Professor. From May 12, 2011 to November 12, 2011, he visited the University of Tuebingen, Germany, and worked on the security of machine learning algorithms against contamination of training data.
His research interests currently include secure/robust machine learning and pattern recognition methods (adversarial learning), multiple classifier systems, and kernel methods, with applications in biometric recognition, spam filtering, malware detection, and intrusion detection in computer networks.
Sessions:
Machine Learning Security: Adversarial Attacks and Defenses (Session I and II)
Emiliano De Cristofaro, UCL
Short bio: He is a (Full) Professor of Security and Privacy Enhancing Technologies at University College London (UCL). He is affiliated with the Computer Science Department and currently serves as Head of the Information Security Research Group and Director of the Academic Center of Excellence in Cyber Security Research (ACE-CSR). Before joining UCL in 2013, he was a Research Scientist at Xerox PARC. He is also a Faculty Fellow at the Alan Turing Institute and on the Technology Advisory Panel at the UK Information Commissioner’s Office (ICO). In 2016, he co-founded the International Data-driven Research for Advanced Modeling and Analysis Lab (iDRAMA Lab).
Sessions:
What is Privacy in Machine Learning Anyway?
In this lecture, I will provide a formal treatment of privacy leakage from machine learning models. I will discuss how we should reason about privacy in ML and focus on (relatively) understudied contexts, such as 1) generative models and 2) federated learning. For the former, I will present the first membership inference attacks against DCGAN, BEGAN, and VAE models, showing that inferring membership here is more challenging than for discriminative models. For the latter, I will show how model updates leak unintended information about the participants’ training data and leave the door open to several attacks. More precisely, I will formalize and present property inference attacks against federated learning, showing that an adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture.
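To give a flavour of the simplest form of this kind of leakage, the Python sketch below runs a loss-threshold membership inference attack against a small discriminative model: because the model overfits, training ("member") points tend to incur lower loss than unseen points. The model, data, and threshold rule are invented purely for illustration and are not the attacks from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A deliberately overfit logistic regression: few points, many features.
d, n = 20, 30
members = rng.normal(size=(n, d))        # training ("member") points
non_members = rng.normal(size=(n, d))    # fresh ("non-member") points
y_mem = (members[:, 0] > 0).astype(float)
y_non = (non_members[:, 0] > 0).astype(float)

w = np.zeros(d)
for _ in range(5000):
    w -= 0.5 * members.T @ (sigmoid(members @ w) - y_mem) / n

def per_example_loss(X, y, w):
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Attack: guess "member" whenever the loss falls below a threshold
# (here simply the median over all queried points).
losses = np.concatenate([per_example_loss(members, y_mem, w),
                         per_example_loss(non_members, y_non, w)])
truth = np.concatenate([np.ones(n), np.zeros(n)])
guess = (losses < np.median(losses)).astype(float)
print("attack accuracy:", (guess == truth).mean())  # > 0.5 means leakage
```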
Privacy in ML Case Studies: Synthetic Data and Genomics
In this lecture, I will discuss privacy in synthetic data. I will start by providing a simple solution for privately training generative neural networks via differentially private mixtures, aiming to generate images and location traces. I will then examine synthetic data approaches for genomics and present a (re-usable) framework supporting extensive evaluations of their utility and privacy guarantees. Finally, I will discuss the disparate effect of training generative models with differential privacy vis-à-vis underrepresented classes and subgroups of data.
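As a minimal, self-contained illustration of the general recipe (not the mixture-based method discussed in the lecture), one can generate differentially private synthetic data from a noisy histogram: perturb the counts with the Laplace mechanism, renormalize, and sample. All parameter values below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_synthetic(data, n_bins, epsilon, n_synth):
    """epsilon-DP synthetic data via a noisy histogram (Laplace mechanism).

    Adding or removing one record changes a single count by 1, so the
    L1 sensitivity is 1 and Laplace noise of scale 1/epsilon suffices;
    everything after the noise addition is DP-preserving post-processing.
    """
    counts, edges = np.histogram(data, bins=n_bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=n_bins)
    probs = np.clip(noisy, 0, None)
    probs = probs / probs.sum()
    bins = rng.choice(n_bins, size=n_synth, p=probs)
    # Sample uniformly inside each chosen bin.
    return rng.uniform(edges[bins], edges[bins + 1])

real = rng.normal(loc=5.0, scale=2.0, size=10_000)
synth = dp_synthetic(real, n_bins=50, epsilon=1.0, n_synth=10_000)
print("real mean/std:     ", real.mean(), real.std())
print("synthetic mean/std:", synth.mean(), synth.std())
```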
Session: Welcome and Introduction
Session:
Introduction to Federated Learning and an Overview of Federated Optimization Techniques
Session:
Title: Computer Security and ML - Challenges and Opportunities Ahead
Abstract: In recent years, machine learning has progressed tremendously. Tasks such as image classification, machine translation, or automatic speech recognition can now be handled by algorithms on par with human performance. There are also some promising applications in the area of computer security, particularly in the detection of intrusions and malware. Unfortunately, there are also many drawbacks, and adversarial examples show how fragile these algorithms are in practice. In this talk, I will give an overview of recent work in this area and identify opportunities for future research.
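As a concrete illustration of that fragility, here is a minimal sketch of the fast gradient sign method (FGSM) against a plain logistic-regression classifier; the synthetic data and all parameters are invented for illustration and are not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two Gaussian blobs, shifted apart by 0.3 along every dimension.
d, n = 100, 500
y = (rng.random(n) < 0.5).astype(float)
X = rng.normal(size=(n, d)) + (2 * y[:, None] - 1) * 0.3

# Train logistic regression by gradient descent.
w = np.zeros(d)
for _ in range(2000):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / n

def accuracy(X, y, w):
    return ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()

# FGSM: move each input by eps (in the max-norm sense) in the
# direction that increases its loss; for logistic regression the
# input gradient of the loss is (p - y) * w.
eps = 0.25
grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

print("clean accuracy:      ", accuracy(X, y, w))
print("adversarial accuracy:", accuracy(X_adv, y, w))
```

Even though each coordinate moves by at most 0.25, the per-dimension changes add up across 100 dimensions, which is why the accuracy drops sharply.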
Session:
Title: The Lecture of Why
Abstract: Machine learning is a great tool to discover and exploit associations in data. What about causation? Murder rates do go hand in hand with ice cream sales, but we wouldn’t want an AI to falsely recommend prohibiting ice cream in an effort to reduce crime. How do we achieve AI that can learn that it is the hot temperature during summer that drives both ice cream sales and murder rates? In this talk, I'll give a quick introduction to the basics of causality, explain why data alone is not enough to determine causation, and discuss under what conditions we can discover causal relationships from data.
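The ice cream example is easy to reproduce in simulation. In the sketch below, temperature causally drives both ice cream sales and murder rates while there is no causal link between the two; the two variables are nonetheless strongly correlated, and the correlation disappears once the confounder is adjusted for by regressing it out. All coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth causal graph: temperature -> ice cream sales and
# temperature -> murder rate; NO arrow between sales and murders.
n = 10_000
temperature = rng.normal(20, 8, n)                    # degrees Celsius
ice_cream = 3.0 * temperature + rng.normal(0, 10, n)  # daily sales
murders = 0.2 * temperature + rng.normal(0, 2, n)     # murder rate

print("corr(ice cream, murders):",
      np.corrcoef(ice_cream, murders)[0, 1])          # strongly positive

def residualize(y, x):
    """Remove the least-squares fit of y on x (adjust for x)."""
    xc = x - x.mean()
    slope = (xc * (y - y.mean())).mean() / (xc ** 2).mean()
    return y - y.mean() - slope * xc

# Partial correlation given temperature: close to zero, revealing
# that the association was driven entirely by the confounder.
print("corr given temperature:  ",
      np.corrcoef(residualize(ice_cream, temperature),
                  residualize(murders, temperature))[0, 1])
```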
More information on topics, sessions and a detailed schedule will be published during the course of July.
For any questions or support, please send an e-mail to science-outreach@cispa.de.