Laura Jane Jahke

First year anniversary: ELSA gathers for first General Assembly Meeting

The European Lighthouse on Secure and Safe AI (ELSA), a recently established network of excellence consisting of top European AI researchers, met in Sestri Levante from September 25 to 27 for its first three-day General Assembly Meeting. The large and growing network aims to promote the development and deployment of cutting-edge AI solutions and to make Europe a beacon of trustworthy AI. The goal of the meeting, which brought together the 26 partners, was to present their latest research results and to plan the future of the network.

On the first day of the meeting, the participants received a project update and reviewed the results achieved so far. During a poster session, ELSA partners presented their work on artificial intelligence and machine learning and discussed new collaboration opportunities. ELSA project coordinator and CISPA Faculty, Professor Dr. Mario Fritz, commented: “In ELSA, we are working on innovative methodology and solutions for pressing challenges in secure and safe AI. I am excited about the progress and momentum that we have already achieved in the first year.”

On the second day, a workshop on Technical Robustness and Safety of AI was held by Battista Biggio, Associate Professor at the University of Cagliari and co-founder of the cybersecurity company Pluribus One. Professor Dr. Biggio said: “This workshop shows that ELSA’s work on secure and safe AI is progressing at a fast pace, especially concerning testing, verification, and certifiable robustness of AI, with a focus also on large language models. We discussed some of the most recent contributions from the University of Cagliari, the University of Oxford, CISPA, and the University of Genoa, and identified several interesting and challenging research directions to further improve and strengthen our consortium as well as the ELSA European network of excellence.”

On the third and final day, a workshop focusing on privacy-preserving collaborative learning was led by Professor Dr. Antti Honkela from the Department of Computer Science at the University of Helsinki, who is also the coordinating professor of the Research Programme in Privacy-preserving and Secure AI at the Finnish Center for Artificial Intelligence (FCAI). Professor Dr. Honkela summarized: “We had fascinating discussions on research conducted by the project partners, focusing especially on privacy in the context of large AI models called foundation models, such as GPT-4 and other large language models.”

For more information on the project, please visit: