
The science and practice of proportionality in AI risk evaluations

Summary

A global challenge in artificial intelligence (AI) regulation lies in achieving effective risk management without compromising innovation and technical progress (1). The European Union (EU) Artificial Intelligence Act (2) represents the first attempt worldwide to navigate this tension through a binding, risk-based regulatory framework. In August 2025, the obligations for providers of general-purpose AI (GPAI) models under the EU AI Act entered into application. They require providers of the most advanced GPAI models to evaluate possible systemic risks stemming from their models (3). This raises the regulatory challenge of ensuring that such evaluations yield meaningful risk information without imposing an excessive burden on providers. The principle of proportionality, a binding requirement under EU law, obliges the regulator to calibrate its actions to their intended objectives. Applying proportionality to model evaluations for AI risk opens opportunities to develop scientific methods that operationalize this calibration within concrete evaluation practices.

Article

Date published

2026-02-19

Date last modified

2026-03-10