ELSA meeting in London yields important results on the control of AI systems
How much can we trust the predictions of artificial intelligence (AI), and how do we measure their reliability? Where and how should humans intervene to control and regulate widely deployed AI systems such as chatbots? How can we create a sensible legal framework for the use of AI systems? How can deepfakes be recognized and flagged? These are just some of the questions that ELSA partners from renowned institutions such as the Alan Turing Institute, the University of Birmingham and Lancaster University tackle on a daily basis. Plamen Angelov, Professor at Lancaster University, leads ELSA's work package three, which, together with five other work packages, ensures that the ELSA network can focus on the major challenges of safe and secure AI across a wide range of application areas.
Trust, but verify
Work package three is dedicated to "Human Agency and Oversight", specifically the question of how to ensure that human autonomy and decision-making power are not undermined by AI systems despite the rapid pace of development. New developments such as the planned EU AI Act, which will define the legal framework for the use of AI in the EU, and the rapid proliferation of AI systems in everyday life and creative work, for example in the form of chatbots such as ChatGPT, make this question all the more pressing. Angelov explains the impact of these developments on his work: "Important questions are now arising, for example: How can we share knowledge and benefit from it while guaranteeing the security and trustworthiness of the systems? How do we address the legitimate concerns about data protection when sharing the large amounts of data currently used by generative AI? And what requirements does this place on copyright law? Current developments have not fundamentally changed our research, but they reinforce the paramount importance of human agency and oversight in the use of AI."
ELSA creates lasting results
The major challenges of safe AI cannot be tackled in individual research projects alone. They need to be understood in their wider context and properly defined so that experts across Europe can collaborate on them. To this end, ELSA researchers have organized various workshops in which researchers share their knowledge with one another and with political decision-makers. They have also run challenges and competitions that call on the AI research community to contribute its ideas and expertise. In addition, the researchers in work package three have presented a range of methods for developing AI systems whose results are interpretable "by design", focusing on the application areas of robotics and multimedia. "By linking the question of how we can involve humans in the decision-making of artificial intelligence with these specific use cases, we can precisely define the 'Grand Challenge', i.e. the complex problem of human oversight of these systems," says Angelov.

CISPA Faculty and ELSA coordinator Professor Mario Fritz describes how the work of the ELSA network of excellence thus makes a lasting contribution not only to naming the major problems of safe artificial intelligence, but also to solving them: "With our work, we are laying the foundation for researchers throughout Europe to address specific problems of safe AI and use their results to help shape Europe's digital future. Ultimately, it is about making the capabilities of these systems usable for us while designing them in such a way that we can also trust them when they are used in critical areas. Not an easy task. But we are on the right track with ELSA."