Large language models can prioritize patients more quickly: ILLUMINATION research project is launched today
Medical care requires extensive speech and text processing: recording patient data and symptoms, inquiring about medical histories, and classifying complaints – medical staff have to document and assess all this information, often under time pressure. In theory, large language models (LLMs) are already capable of performing these tasks far more efficiently than humans. In practice, however, their use poses considerable risks to data protection and privacy. To reliably protect sensitive patient data while harnessing the advantages of LLMs for medical care, ILLUMINATION is developing legally compliant privacy methods for LLM-based applications.
LLMs can support medical staff in emergency units
ILLUMINATION focuses on developing a privacy-friendly LLM-based application that supports medical staff in pre-triaging patients in emergency units. The idea is for patients to describe their complaints to the LLM in an interactive chat. The LLM then analyzes the symptoms and suggests how to prioritize the patients. Building on this preliminary assessment, medical staff record additional vital signs and decide on further treatment. Franziska Boenisch, project coordinator of ILLUMINATION and faculty member at CISPA, explains the project’s overall objective: “We want to use LLMs to relieve the burden on doctors in the time-critical environment of emergency units. Due to privacy risks, the use of LLMs in pre-triage has not been possible to date.”
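To make the envisioned workflow concrete, the minimal Python sketch below shows one plausible shape of such a pre-triage chat: complaints are collected turn by turn, and an LLM is asked for a priority suggestion that staff then verify. The ask_llm callable, the prompt wording, and the five-level priority scale are illustrative assumptions, not the project’s actual design.

```python
# Hypothetical sketch of a pre-triage chat loop; `ask_llm`, the prompt,
# and the 1-5 priority scale are assumptions for illustration only.
def pre_triage(ask_llm):
    """Collect a patient's complaints via chat and return the LLM's priority suggestion."""
    symptoms = []
    while True:
        reply = input("Describe a complaint (or type 'done'): ").strip()
        if reply.lower() == "done":
            break
        symptoms.append(reply)
    prompt = (
        "Suggest an emergency-department priority from 1 (immediate) to 5 "
        "(non-urgent) for a patient reporting: " + "; ".join(symptoms)
    )
    # The suggestion is only a starting point: medical staff review it,
    # record vital signs, and make the actual treatment decision.
    return ask_llm(prompt)
```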
Combining prediction quality with privacy-friendliness
In very specific contexts such as emergency care, large language models can only deliver high-quality predictions if they have been specifically trained for the respective area of application. For this reason, the project team will train its application on real patient data from previous triage processes. To prevent this sensitive data from leaking to either the LLM operators or the application’s subsequent users, the project team is developing privacy methods based on Differential Privacy. Franziska Boenisch explains: “Differential Privacy is a mathematical framework that can guarantee that an individual’s data remains private. This means that the LLM can learn from the population, i.e. from the sum of all the training data it is fed, but not from the individual data of a single patient. Conversely, this also guarantees that no individual has too great an influence on the system and its predictions.”
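One common way to enforce such a bound during training is the DP-SGD recipe: each individual’s gradient is clipped to a fixed norm, and calibrated noise is added to the aggregate. The sketch below illustrates this core step; the function name and parameter values are illustrative assumptions, not the project’s implementation.

```python
# Minimal sketch of per-example clipping and noising in the style of DP-SGD,
# a standard technique for training models under Differential Privacy.
# Names and parameter values are illustrative, not the project's code.
import numpy as np

def private_gradient_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Bound each patient's influence (clipping), then mask it (noise)."""
    rng = np.random.default_rng()
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound hides any single record.
    noisy = total + rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return noisy / len(per_example_grads)

# Three patients' gradients: even the outlier's influence is capped at clip_norm.
grads = [np.array([0.2, -0.5]), np.array([3.0, 4.0]), np.array([-0.1, 0.3])]
print(private_gradient_step(grads))
```

The clipping step is what realizes the guarantee Boenisch describes: no single patient’s record can shift the model by more than a fixed amount, while the noise prevents that bounded contribution from being reconstructed.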
Interdisciplinarity offers a holistic perspective
The German Federal Ministry of Education and Research is funding ILLUMINATION with a total of around 1.7 million euros over a period of almost three years. To achieve a holistic perspective on the use of large language models in the medical field, ILLUMINATION brings together Charité – Universitätsmedizin Berlin, Heidelberg University, Freie Universität Berlin, and the startup algonaut, coordinated by CISPA. The interdisciplinary project team unites experts and perspectives from the fields of medicine, law, human-centered computing, LLM-based prototype development, and machine learning.
Scientific contact:
Dr. Franziska Boenisch, Dr. Adam Dziedzic
CISPA Helmholtz Center for Information Security
Stuhlsatzenhaus 5, 66123 Saarbrücken
boenisch@cispa.de, dziedzic@cispa.de
ILLUMINATION is funded by the German Federal Ministry of Education and Research.