Abstract
In the spirit of the European Commission’s (EC) risk-based approach to artificial intelligence (AI), the AI Act (COM(2021) 206 final) contains a four-level taxonomy of AI-related risks, ranging from non-high to unacceptable. For so-called high-risk AI, it sets out a priori technical standards, the observance of which is meant to prevent the occurrence of various types of harm. However, based on a quantitative and qualitative analysis of the results of two public consultations conducted by the EC, this study shows that the views gathered by the EC are not reflected in the AI Act’s provisions. Although in ‘standard’ EU risk regulation the objective of attaining a desired level of protection can justify regulatory action, evidence remains required in order to avoid misrepresenting risks. Bearing in mind the requirement of evidence-based policy expressed in the 2015 Better Regulation Agenda, this study argues that the AI Act, as it currently stands, is not based on the evidence gathered and analysed by the EC; rather, a pre-existing policy strategy on AI seems primarily, if not exclusively, to constitute the grounds on which the EC based the regulatory framework that took shape in the AI Act.
European Journal of Law Reform
Article | Of Hypothesis and Facts: The Curious Origins of the EU’s Regulation of High-Risk AI
Keywords | AI Act, artificial intelligence, evidence, regulation, risk, risk-based approach
Authors | Ljupcho Grozdanovski and Jérôme De Cooman
DOI | 10.5553/EJLR/138723702022024001008 |