Some thoughts on definitions
Additional comments to the European Parliament's compromise on the AI Act
Dear readers, today’s issue is a little shorter than usual. Because of some deadlines in the doctoral programme, I basically dropped everything in the last few weeks to put the finishing touches on a few draft chapters in my thesis. I should be free(er) by the end of the month, but until then, I will need to spend a bit less energy on this newsletter. Still, I wanted to continue my brief AI Act comments from last week, if only to think aloud with y’all.
I write this issue a few hours after the AI Act has been approved at the committee level in the European Parliament. The text approved by the committee members has a few differences from the leaked version I shared last week, which reflect a few additional compromises made by the co-rapporteurs. Now, the plenary of the Parliament is expected to vote on this approved text at some point in June, and it seems likely that the informal position of the Parliament will be largely along the lines approved today.
A simplified overview of the EU legislative procedure
For those unfamiliar with the EU legislative procedure, approval in the plenary does not mean the AI Act will become law immediately. Since the Act follows the ordinary legislative procedure, it must be approved by the co-legislators: the European Parliament (formed by directly elected representatives) and the Council of the European Union (formed by ministers from the Member States).1 Since the text under discussion in the Parliament does not match the general position adopted by the Council in late 2022, these bodies must find a common position if the Act is to become law at the EU level.
Officially, the ordinary legislative procedure takes place over up to three readings. Once the procedure is initiated,2 the Parliament can reject the submitted proposal or adopt it at first reading, with or without amendments. The Council then votes on the text from the Parliament: if it approves the Parliament's position, the act is adopted; if not, the Council adopts its own position, and the amended text goes back to the Parliament for a second reading. And, if the second reading is not enough to produce an agreement, there is a third reading, in which each institution decides whether to approve a joint text produced by a conciliation committee formed by an equal number of Council and Parliament representatives.
You might have noticed that the Council general position was approved before the Parliament voted on the AI Act at the committee level, let alone in the plenary. This is because the EU legislative procedure increasingly relies on informal procedures to find agreement between the institutions. Instead of going through three rounds of legislative ping-pong, the Parliament and the Council can sort out their disagreements through informal negotiations. During these so-called trilogues, the co-legislators and the European Commission try to develop a provisional agreement that both the Parliament and the Council can accept.3 Such trilogues can happen at any point in the legislative procedure, but they often take place before the co-legislators adopt a formal position on a piece of legislation.
The AI Act is likely to follow the same procedure. Shortly after the Parliament votes on the text approved at the committee level, the trilogue machinery gets going. Some estimates suggest that the trilogues will start by the end of June, and I have even seen some guesses that negotiations might be over by the end of the year. If and when the trilogues produce an agreement,4 the provisional agreement will be voted on by each co-legislator according to the formal procedure described above. Barring any last-minute shenanigans, the text produced by the trilogues will then be approved at first reading and become law. But I am not placing any bets on when that might happen.
What is an AI system?
After this necessarily simplified digression, let’s go back to the substance of the AI Act. I am mostly happy with the Parliament’s proposal, which, as I argued last week, tries to make the best of the constitutional constraints of EU law. Yet these advances are hamstrung by how the compromise text defines some key terms in the Act.
In the Commission proposal, an AI system is
software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
While the explanatory memorandum describes this definition as “technology-neutral”, it is explicitly framed in technological terms: an AI system must use one of the techniques listed in Annex I, namely machine learning techniques, logic- and knowledge-based techniques, and “Statistical approaches, Bayesian estimation, search and optimization methods.” Various commenters have criticized the third category, as it creates a risk of over-inclusion. After all, one might implement “statistical approaches” through a fancy Excel spreadsheet, which does not match our intuitive image of AI.
Personally, I do not think this is a good argument for narrowing down the definition, as many harms related to automated decision-making are not caused by sophisticated technologies. For example, reporting by Lighthouse Reports has shown that some municipalities in the Netherlands have relied on a prejudiced risk scoring system implemented through “a spreadsheet and programming script that create risk profiles.” Issues like these make me sympathetic to the idea that, if anything, AI is too narrow a category, and we should regulate certain issues regardless of the software used to carry out these tasks.
The parliamentary compromise decides, instead, to refine the definition of AI towards a more technology-neutral formulation. If the Parliament gets its way, an AI system is
a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
Still, I would like to point out that “machine-based system” is an awkward compromise. On the one hand, it placates those who mistakenly believe AI regulation should focus on machine learning. On the other hand, “machine” lends itself to very expansive readings, encompassing not just the kinds of techniques covered by the Commission text but any form of computing (perhaps even analogue computing?) that generates the listed kinds of output with some degree of autonomy. I am not sure the pursuit of neutrality narrows the scope or adds clarity here.
Don’t believe the hype: the problem with “foundation models”
Technology regulation, as it is usually framed, is a highly neophiliac field: it is always proposing responses to the latest technological development. Such an approach is not always wise, as “technology” is not always the most productive target for regulation, and many of the technical artefacts that most affect our lives are very old. Nonetheless, there are strong political and career incentives to discuss whatever is in the media and stick to trendy formulations.
Of course, I am not immune to this phenomenon. And neither is the European legislator, as the Parliament compromise text introduces two definitions that reflect current discourses on AI. Article 3(1d) AI Act now defines general purpose AI system as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. This definition responds to a real issue that has gained salience in 2022 and 2023: the diffusion of models such as GPT-4, which have some use on their own but are meant as building blocks for a broad range of applications.
Systems meant for general purposes cause some conceptual problems for the regulatory framework proposed by the Commission, in which each AI system is associated with a clear application. Last year, Nicolas Petit and I highlighted the policy trade-offs involved in either classifying such systems as high-risk in themselves or limiting responsibility to those who deploy such systems for high-risk tasks. However, the Parliament introduces very few rules for general purpose systems. Instead, it directs regulatory attention towards a closely related category: foundation models.
Article 3(1c) defines a foundation model as a model “that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”. The providers of such models are subject to certain design and transparency requirements, listed in the newly introduced Article 28(b). Such provisions warrant more attention in the future, as they seem to strike a compromise between the two poles that Nicolas and I identified. But, for now, I want to complain a bit about the definition itself.
My main issue with “foundation model” is that the term itself is a marketing ploy. It was introduced by a white paper written by Stanford scholars, which has been critiqued for overstating the capabilities and generality of current models. By enshrining the term into regulation, the Parliament compromise text gives an official veneer to the idea that future developments in AI will necessarily follow from the fine-tuning of large and general models. Since even some of the greatest sources of AI hype seem to believe that model growth is starting to yield diminishing returns, perhaps this commitment to the term could have been avoided, especially since the definition does not add much to the woefully underused construct of general purpose AI. Let’s see how things evolve in the trilogues.
It’s the end, doo doo doo doo
So, I’ll stop my comments here for today. Please feel free to share your thoughts and reactions, and to subscribe if you haven’t done so yet.
See you next time!
The Council should not be confused with the European Council, which is a body largely formed by heads of state or government and defines the political direction and priorities for the EU. Or with the Council of Europe, which is not even part of the EU but a full-fledged (and older) international organization. None of which, of course, feature any Jedi.
In the vast majority of cases, the European Commission is the competent party to submit legislative proposals. Of course, there are various exceptions and possibilities for informal influence, but they exceed the scope of the current discussion.
Negotiations at the trilogues increase the influence of the Commission in legislation, and they are also affected by parties other than the formal legislators, such as other EU institutions and interest groups. Because of that, their opacity is often criticized.
Depending on political divergences, an agreement might never come to pass. See, for example, the case of the ePrivacy Regulation, which has been in limbo for years due to strong disagreement between the Parliament and the Council.