Hello, dear reader, and welcome to another edition of AI, Law, and Otter Things! Today’s issue is mostly a response to something that annoys me in discussions about the AI Act. Before getting started with that, however, I want to share two quick updates.
First, my friend Francisco de Abreu Duarte and his team just launched The Legal Place, an all-in-one platform for legal training. This platform offers various courses tailored to the needs of legal professionals, including a course of mine on the basics of the AI Act. The platform makes use of AI not as a source of content, but as a tool to deliver content created by human experts. Check it out!
Second, the manuscript of the Research Handbook on Competition and Law (Pier Luigi Parcu, Maria Alessandra Rossi, and Marco Botta, eds.) has just been sent to Edward Elgar Publishing. So, at some point early next year, you should be able to read various interesting contributions on the topic, including a chapter I wrote with my habitual co-authors Juliano Maranhão and Giovanni Sartor on “Competition in and through Artificial Intelligence”.
That said, it is now time to move on to the rant. For the most part, I try not to write things out of anger, or because someone is wrong in the literature. Some of the best academic works I’ve read were produced out of righteous anger,1 but that is not really what makes me tick. Between self-restraint2 and my lack of interest in having the last word,3 I rarely have something interesting to say about things that annoy me. But in this case I think sharing what annoys me might be useful, so here we go.
After that, as usual, I will share a few reading recommendations and a cute otter.
Against “limited-risk AI”
Whenever one talks about the AI Act, it is common to hear references to its potential “Brussels Effect”. The European Commission hopes, and some commentators see good reasons to expect, that the European Union (EU)’s economic might will help make the AI Act’s rules a global reference for AI regulation. Other commentators, including myself and Anca Radu, are not so certain such an effect is likely or desirable. But today I will talk about a different effect that is also visible in debates about the AI Act: the Mandela Effect.
As defined by one of the last good places on the Internet, the Mandela Effect is a form of shared false memory. It takes place when large numbers of people believe they remember something that is not in fact true, like when people swear that “Play it again, Sam” is an actual quote from Casablanca. I am increasingly convinced that this is the main reason why “limited-risk AI systems” are mentioned as one of the risk categories established by the AI Act.
The idea that the AI Act creates a category of limited-risk systems has no basis in the text of the regulation itself. As Aleksandr Tiulkanov helpfully points out, the expression appears nowhere in the Act’s binding provisions. In fact, as Tiulkanov shows, the only mention of “limited risk” that persists in the final text is connected to the derogations present in Article 6(3) AI Act. So, if any system deserves the label of “limited risk”, it is a system that would otherwise be classified as high-risk (and thus remains subject to the registration obligation under Article 49 AI Act).
Even without an explicit textual mention, the emergence of the “limited-risk” category could be understandable if it addressed a clear cognitive need. For example, it would make sense to speak of a separate limited-risk category if it offered a way to make sense of the transparency obligations created in Article 50 AI Act. However, this framing is somewhat misleading for two reasons:
It overlooks that the requirements laid down in Article 50 AI Act also apply to high-risk AI systems. For example, Recital 132 AI Act states that the duty of disclosing AI in human-computer interactions applies “without prejudice to the requirements and obligations for high-risk AI systems”.
It overstates the normative density of the transparency obligations, which do not aim to supply a comprehensive set of rules for a certain class of AI systems. As Recital 137 AI Act puts it, compliance with those provisions is not enough in itself to ensure that the use of the AI system or its output is lawful.
In the case of high-risk AI, that comprehensive framework is laid down in the Act itself, and it applies together with other cross-cutting legal requirements, in particular those created by the GDPR.
For all other systems, the AI Act does not define a specific legal framework, because the EU lawmaker has deemed that existing sector-specific rules (plus the same cross-cutting rules mentioned above) are enough to address AI risks.
In short, the label “limited-risk AI systems” is neither informative nor used in the AI Act. Nonetheless, it keeps popping up everywhere. Even the Commission’s official explainer uses it. This is in no small part due to the fact that the four-tiered view makes for a neat visualization. It allows you to add an extra tier to the pyramid, one that shows the regulation does something about some kinds of opacity, which can be difficult to communicate otherwise.
For this reason alone, I don’t expect the idea of a “limited-risk” category to vanish any time soon, even if it can fuel misunderstandings about what the Act actually requires from systems covered by one or more of the substantive provisions in Article 50 AI Act. So, at the end of the day, I am just a boy, standing in front of a literature, begging it to stop using this term.
Reading recommendations
Bryan Choi, ‘AI Malpractice’ (2024) 73 DePaul Law Review 301. See also Rebecca Crootof’s discussion of this paper at JOTWELL.
Pierre Dewitte, ‘The Many Shades of Impact Assessments: An Analysis of Data Protection by Design in the Case Law of National Supervisory Authorities’ (2024) 2024 Technology and Regulation 209.
Clement Guitton and others, ‘How Distrust is Driving Artificial Intelligence Regulation in the European Union’ (2024) 15 European Journal of Law and Technology. Important engagement with the idea of trustworthiness that is supposedly at the heart of the Act.
Rocco Palumbo and Mohammad Fakhar Manesh, ‘Travelling along the Public Service Co-Production Road: A Bibliometric Analysis and Interpretive Review’ (2023) 25 Public Management Review 1348.
Maria Lucia Passador, ‘AI in the Vault: AI Act’s Impact on Financial Regulation’ [2025] Loyola University of Chicago Law Review forthcoming.
Alexander Peukert, ‘Copyright in the Artificial Intelligence Act – A Primer’ (2024) 73 GRUR International 497.
Finally, the otter

Thanks for reading! Please consider subscribing if you haven’t done so yet:
Don’t hesitate to reply to this email or contact me on social media to keep the conversation going! See you next time.
My go-to example here is Terry Pratchett. Unfortunately, that description of his was made by Neil Gaiman, whom I was disinclined to quote even before recent developments. So, no link to his writing here (without this being in any way a statement about his innocence), but you are a grown-up (I hope!) and you can look for the text if you are so inclined.
I am by no means a calm person, though I’ve fortunately moved away from my hot-headed youth.
As Robert Nozick put it, “At any rate, I believe that there is also a place and function in ongoing intellectual life for a less complete work, containing unfinished presentations, conjectures, open questions and problems, leads, side connections, as well as a main line of argument. There is room for words on subjects other than last words.” As for me, my academic ambition is to be a conversation starter rather than a conversation stopper.