Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! Departing from my usual policy of not commenting on recent events, today I’ll write a bit about the European Parliament’s compromise text for the AI Act. As reported by Luca Bertuzzi, the political agreement achieved last week departs from the Commission text (and from the Council’s general position) in meaningful ways, such as introducing remedies for people affected by AI systems and rules for general-purpose AI. Below, I will share my initial impressions of the proposal, as made public by CONTEXTE. This time, I will skip the usual recommendations section, but rest assured that there is a cute animal waiting for you at the end of this issue.
As others and I have argued elsewhere, the EU approach to AI regulation is heavily constrained by the limits of EU regulatory competences. Within these legal limits, the European Commission came up with a proposal that has a prima facie claim to being useful as a market integration instrument. Such a proposal, however, does not attend to the demands from civil society for more robust protection of fundamental rights and the creation of new remedies for those affected by AI. The Parliament’s compromise text seems to be an honest attempt to bridge this gap. But, in doing so, it introduces some potentially troublesome points.
I don’t plan to cover the entire agreement here. After all, I should be writing my thesis instead of this newsletter, and the AI Act’s text is likely to change quite a bit in the trilogue anyway. But I will spend some time on a few issues: today’s newsletter elaborates on the change in legal basis proposed by the Parliament, and future issues will take up the changes in risk classification, generative AI, and explanation and human oversight. Please feel free to share your thoughts, push back on my impressions, or suggest other topics to discuss.
Legal grounds for the AI Act and the limits to EU action
The first potential issue concerns the legal basis for the AI Act. Both the Commission proposal and the Parliament compromise (as well as the Council general position) invoke two articles of the Treaty on the Functioning of the European Union (TFEU) as sources of regulatory competence: Article 16 TFEU on personal data and Article 114 TFEU, which provides a broad competence for approximating Member State laws in the interest of the establishment and functioning of the internal market. However, the texts make different uses of these legal bases, and these differences shape the overall framing of the proposal.
For the Commission, Article 16 TFEU played a limited role: according to Recital 2 of the AI Act proposal, it was seen as a basis only for the provisions concerning “…‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement”. Everything else was based on Article 114 TFEU, which means that the measures must actively contribute to the elimination of likely obstacles to the internal market.1 Hence the framing of the rules for high-risk AI systems as a spin-off of the New Legislative Framework for product safety law, a framing that has been widely and rightfully criticized.
To avoid these pitfalls, the Parliament compromise text extends the role of Article 16 TFEU. According to the newly-introduced Recital 2(a):
As artificial intelligence often relies on the processing of large volumes of data, and many AI systems and applications on the processing of personal data, it is appropriate to base this Regulation on Article 16 TFEU, which enshrines the right of everyone to the protection of natural persons with regard to the processing of personal data concerning them and provides for the adoption of rules on the protection of individuals with regard to the processing of personal data.
From what I understand of the formulation above, Article 16 TFEU is no longer tethered to a specific provision. Instead, it applies to the entire Act insofar as its provisions concern the processing of personal data. Since many of the current issues with AI-related harms stem from the impact of AI systems on people’s personal lives, Article 16 TFEU might be invoked to ground any measures that do not meet the requirements for market harmonization under Article 114 TFEU. The data protection competence, which played a marginal role for the Commission, is now as ubiquitous as Roy Kent.
Such an approach has a few advantages. First, it provides more robust grounds for governing AI systems that have no connection to the internal market, especially those used in the public sector. As Annex III of the Commission proposal made quite clear, high-risk applications of AI are often connected with the public sector, which is an odd fit for market-based regulation.2 This oddness is somewhat mitigated by the fact that many public-sector AI systems are acquired off-the-shelf or developed after procurement processes, so extending market-based rules to them is lawful as an incidental effect of the overall regulation of markets for AI.3 Yet administrative bodies often develop their systems in-house, especially for applications without a clear commercial equivalent. The use of Article 16 TFEU thus places the regulation of the latter kind of system on solid ground, without requiring contortions to squeeze systems with no link whatsoever to the EU single market into a “market harmonization” frame.
A second advantage of increasing the role of Article 16 TFEU is that it mitigates the Procrustean effect of reducing fundamental rights and other public interests to those of their dimensions that can be described in product safety language.4 While Article 16 TFEU does not define the scope of the right to protection of personal data, EU secondary legislation has tended to read that right quite extensively. For example, the GDPR's stated purpose is to protect the “...fundamental rights and freedoms of natural persons and in particular [my emphasis] their right to the protection of personal data.”5 Data protection therefore has both an intrinsic value and an instrumental value as a mechanism to protect the dimensions of fundamental rights that might be affected by data processing.6 The Parliament's proposed expansion of the duties of deployers of AI systems7 is thus in line with established practices in data protection law, such as the technical and organizational measures required by data protection by design.
But Article 16’s expanded role is not a silver bullet. While the data protection frame allows the AI Act to reach issues beyond the product safety frame, it also exposes the Act to the various critiques directed at data protection law. For example, data protection law has traditionally focused on individual-level harms, and it struggles to capture cumulative harms. As such, the AI Act still lacks the instruments to tackle issues such as the protection of democracy and the rule of law, which the Parliament includes in its revisions to the Act’s scope (Article 1).
In addition, Article 16’s fuzzy deployment in the AI Act creates risks of competence creep. Under the principle of conferral, the EU can only act within the limits of the competences conferred on it in the Treaties. Since EU normative instruments often deal with complex issues, it is not unusual to see EU legislation invoke two legal bases (as the AI Act does) or even more. This multiplicity usually reflects the fact that a single act has multiple components, each grounded on a specific basis.8 But, as we have seen above, this is not the case in the Parliament proposal: both Article 114 TFEU and Article 16 TFEU are invoked to ground everything. Such dual grounding makes it harder to police the limits of EU competences, as regulators can switch from one basis to the other out of convenience, rather than relying on bases that complement or supplement each other without overlap. Of course, judicialization is likely to resolve any issues one way or another, but perhaps it is a good idea to make things clearer in the trilogue.
Finally, the otters
I guess this is it for today. Now I’ll watch the new season of Star Wars: Visions, go back to my thesis, and leave you with fluffy critters. See you next time!
1. See British American Tobacco (C-491/01), ECLI:EU:C:2002:741, paras. 60-61.
2. Especially when one looks at the role that technical standards, established by private or quasi-private actors, play in the AI Act’s governance model.
3. On the acceptability of incidental effects, even when they connect to legal bases other than the one invoked by an EU act, see Danube River (C-36/98, Spain v. Council, ECLI:EU:C:2001:64, para. 74).
4. See my paper with Nicolas Petit, linked above.
5. Article 1(2) GDPR.
6. See, inter alia, Maria Dymitruk and me on data protection’s instrumental value for the broader protection of fundamental rights in the context of judicial AI.
7. The artists formerly known as “users”. This rebranding was a wise decision by the Parliament.
8. See, e.g., Elise Muir, An Introduction to the EU Legal Order (Cambridge University Press, 2023), p. 173.