
2023-07-05
Annabelle Theobald

Security of Large Language Models with Dr Lea Schönherr

A new episode of our CISPA-TL;DR podcast is online! Episode 22 is part of our special series #WomenInCybersecurity. In this episode, we discuss the security of large language models such as ChatGPT with CISPA Faculty Dr. Lea Schönherr. Lea tells us how she goes about uncovering vulnerabilities in these models. As somebody trained in Electrical Engineering and Information Technology, she also reveals what it is like to work among computer scientists. This episode was recorded in German and is now available wherever you get your podcasts.

Hardly any other digital topic has been as popular this year as ChatGPT. Technically speaking, ChatGPT is a so-called large language model (LLM): a large generative model that uses artificial intelligence to understand and process inputs in natural language. LLMs can often handle complex questions and instructions, and they produce text of unprecedented quality. However, the neural networks and deep learning algorithms underlying these models can be used for much more than just chatting. There are code-generating models, for example, which - as the name suggests - can generate code and thus entire computer programs.

But no matter whether these models generate language or code, they all have vulnerabilities. Detecting these vulnerabilities and improving the applications is what CISPA Faculty Dr. Lea Schönherr has set her mind on. "When there are new systems such as large language models, the first thing you do is try to find out where the vulnerabilities might be," says Schönherr. "If you go about this from the attackers' point of view, you can also look directly at a worst-case scenario." This approach is called "adversarial machine learning." Especially in the area of large language models, this is still a fairly young branch of research, Schönherr tells us. "It's really part of the research to figure out what the best practices might be," she says.

To fill this and other research gaps, Lea Schönherr joined CISPA in 2022 as Tenure-Track Faculty. Previously, she was at Ruhr University Bochum, first as a doctoral student and then as a postdoc. At CISPA and in Saarbruecken, she says she "quickly felt well settled." A diverse team is important to Schönherr, as she explains: "It's simply much more fun if you're not the only woman." She thinks that more commitment is needed to promote women in IT professions: "Creating interest as early as possible, that would be a very good thing." She uses applications like ChatGPT in her private life too, for example when looking for a definition or a short birthday poem. Her focus, however, is on satisfying her curiosity about these systems. This is where her personal and professional interests coincide.

TL;DR, short for "Too long; didn't read," is the CISPA podcast; #WomenInCybersecurity is a special series within this format. TL;DR has been on air since 2022 and is available on all major podcast platforms. Every month, we talk to CISPA researchers about their research in cybersecurity and artificial intelligence, trying to ask them all the questions that our listeners might ask themselves. Our goal is to explain complex topics in simple language. At CISPA, we have colleagues from 43 nations, which is why the podcast is sometimes recorded in German and sometimes in English.