Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! Quite a bit has happened since our last issue. We now have a new substitutive text for Brazil’s AI bill, which will be voted on in the Senate next week. Luxembourg’s state websites were subject to a DDoS attack. Elsewhere in Europe, there were various interesting developments, such as a police raid directed at former EU justice commissioner Reynders and attempts (so far unsuccessful, as they should be) to carve logistic regression methods out of the scope of the AI Act. Not to mention geopolitical developments around the world. I won’t be commenting on those here, though I might write something about the Brazilian AI bill once it passes the Senate.
Today’s newsletter focuses on the idea of regulatory monocultures in the field of AI, which I had mentioned in a previous issue. After that, we have the usual sections: a few reading recommendations, some open calls for papers and job openings, and a cute otter to wrap things up.
Is there a risk of an AI regulatory monoculture?
In a previous issue, I shared some notes about why the AI Act might not create a Brussels Effect as strong as the European Commission might hope. For now, let us entertain the possibility that this prediction is wrong.1 If that is the case, some theoreticians could claim validation for their work,2 EU policymakers would maintain their elevated sense of global relevance in a burning world,3 Ted Cruz would be pissed off,4 and so on. But what would the spread of the AI Act mean for AI regulation around the world, actually?
One scenario that worries me is the risk of a regulatory monoculture. To the best of my knowledge, the term has not yet been defined in the literature, though it has been used occasionally: Google Scholar returns 22 hits for “regulatory monoculture”, none of them in 2024. This use-without-definition is not absurd, both because the term does a good job of conveying a vibe of uniform, boilerplate regulation, and because similar terms, such as “algorithmic monocultures” and “monocultures of the mind” (pdf link, alas), have been in use elsewhere. However, I think something can be gained by refining the concept a bit further.
To make the implicit definition above explicit, I am currently using the term “regulatory monoculture” to refer to a scenario in which regulatory approaches around the world converge towards certain essential features, even if they diverge on details. The idea that such convergence can take place is far from alien to regulation scholars, and it has been explored to some extent in the field of AI. For example, Margot Kaminski offers a fascinating treatment of the “policy baggage” that shapes AI regulation once we decide to frame it as a problem of risk regulation. Her analysis captures an important element of the monoculture, but I suspect that convergence can also take place at a much deeper level. For example, the various iterations of the Brazilian AI bill all adopt a rights-based framework, yet they still cannot help but draw on constructions patterned after the AI Act. My hope is that the monoculture metaphor can help us identify other dimensions of this phenomenon.
As I mentioned in the past, I am a radical instrumentalist when it comes to metaphors. Given that almost anything can be a metaphor for almost anything else, a metaphor will only help us if it does something beyond highlighting a similarity. In this sense, I suspect the monoculture metaphor can be rather productive. It can be useful for communicating the risks of uniformity: just like monocultures are vulnerable to infections,5 a regulatory monoculture might deprive policymakers of potentially useful signals for coping with uncertainties in AI regulation. It might also suggest tools for further study of the phenomenon, for example by proposing measurements and thresholds for what counts as a monoculture. So, ultimately, my interest here is to get a better view of the regulatory landscape for AI.
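To make that last point a bit more concrete, here is a deliberately toy sketch of what such a measurement could look like: treat each jurisdiction’s AI regime as a set of design features and compute the average pairwise overlap between them. Everything here is hypothetical, from the feature lists to the threshold; the point is only to show that the metaphor invites quantification, not to propose an actual metric.

```python
# Toy "monoculture score": mean pairwise Jaccard similarity between
# hypothetical feature sets describing each jurisdiction's AI regime.
# The features and the 0.75 threshold are invented for illustration.
from itertools import combinations

regimes: dict[str, set[str]] = {
    "EU":     {"risk_tiers", "conformity_assessment", "prohibited_practices", "transparency_duties"},
    "Brazil": {"risk_tiers", "rights_based_remedies", "prohibited_practices", "transparency_duties"},
    "Canada": {"risk_tiers", "impact_assessment", "transparency_duties"},
}

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of features two regimes have in common (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

pairs = list(combinations(regimes, 2))
score = sum(jaccard(regimes[x], regimes[y]) for x, y in pairs) / len(pairs)

MONOCULTURE_THRESHOLD = 0.75  # an arbitrary cut-off, purely for the example
print(f"mean pairwise feature overlap: {score:.2f}")
print("monoculture by this threshold?", score >= MONOCULTURE_THRESHOLD)
```

A real measurement would, of course, need a defensible way of extracting features from legal texts and a principled threshold, which is precisely where the conceptual work comes in.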
This is where correspondence comes into play. A correspondence between objects is not sufficient for a fruitful metaphor, but it is certainly necessary. This means that, to get something out of the monoculture metaphor, we need to establish at least two things:
First, it is important to understand where the metaphor distorts things, that is, the aspects of the AI regulatory landscape that cannot be properly described by it.
Second, it is necessary to understand what factors might lead the metaphor to break down altogether. For example, a recent opinion piece by Emmie Hine points out that any EU influence in US state-level regulation is outweighed by Big Tech.
If the monoculture metaphor offers a suitable description of (some of) the AI regulation landscape, one can ask all sorts of follow-up questions. In particular, a question that is making the rounds is whether there is something unique to AI that forces the development of a monoculture. Some authors6 have argued that the paths of technological development in AI play an important role in shaping not just the markets for AI technologies but also the downstream markets in which those technologies are used. But this is not, in itself, enough for the emergence of a regulatory monoculture, as regulators might frame problems differently and use different tools. Conversely, monocultures are unlikely to be an AI-exclusive phenomenon, so any conceptualization of them must be in dialogue with work in other regulatory domains.
My own attempt to conceptualize these things is still at an early stage. I had prepared a first stab at it for PLSC-Europe 2024, but I could not attend, as I was in the middle of moving to Luxembourg at the time. Now that I am situating myself in cyber policy (which is a new field for me) and working on my book proposal, this project remains on the back burner. Still, I thought it might be interesting to use this newsletter to organize my ideas on the subject a bit, and I would love to hear from you if you have thoughts on the topic and/or are working on something related.
Reading recommendations
Marco Almada and Maria Estela Lopes, ‘Participation in Privatised Digital Systems’ (SLSA Blog, 26 November 2024). Make sure to read the rest of the guest series, too!
Lee A Bygrave, ‘The Emergence of EU Cybersecurity Law: A Tale of Lemons, Angst, Turf, Surf and Grey Boxes’ (2025) 56 Computer Law & Security Review 106071.
Benjamin Farrand, Helena Carrapico and Aleksei Turobov, ‘The New Geopolitics of EU Cybersecurity: Security, Economy and Sovereignty’ (2024) 100 International Affairs 2379.
Torbjørg Jevnaker and others, ‘De Facto Rule-Making Below the Level of Implementing Acts: Double-Delegated Rule-Making in European Union Electricity Market Regulation’ [2024] European Journal of Risk Regulation 1.
Andrei Kucharavy and others (eds), Large Language Models in Cybersecurity: Threats, Exposure and Mitigation (Springer 2024).
Sarah Tas, ‘Datafication of the Hotspots in the Blind Spot of Supervisory Authorities’ (2024) 30 European Law Journal 87.
Events and Opportunities
Next Monday (9 December), I will be speaking at the lunchtime roundtable “AI Act and the Ecosystem of Justice”, which we will host at the University of Luxembourg. You should join us, either in person or online!
The new AI Accountability Lab at Trinity College Dublin is recruiting! They are looking for two post-docs, three PhD researchers, and a lab manager, from a variety of backgrounds.
The Fourth Annual Cybersecurity Law and Policy Scholars Conference invites contributions until 15 December 2024. It is a PLSC-style event for works in progress, which will take place in Columbus (OH, USA) on 4 and 5 April 2025.
The Socio-Legal Studies Association (SLSA) Annual Conference 2025 has an open call for papers until 18 December 2024, with a variety of tracks that are likely to interest the readers of this newsletter. The event itself will take place in Liverpool (UK) from 15 to 17 April 2025.
The NOVA School of Law in Lisbon is about to launch its Platform for European Administrative and Regulatory Law (NOVA PEARL). Stay tuned, and follow them on Bluesky in the meantime.
The British and Irish Law, Education and Technology Association (BILETA) will have its 40th annual conference on 2 to 4 April 2025 in London (with an online component). They invite abstracts until 10 January 2025.
My former colleagues at DigiCon are inviting applications to their Digital Constitutionalism Academy. The event will take place on 27 and 28 March 2025, with the theme of “AI Regulation between Innovation, Fundamental Rights, and Digital Sovereignty”. Applications are accepted until 20 January 2025.
The next edition of the European Workshop on Algorithmic Fairness will take place in Eindhoven (Netherlands), from 30 June to 2 July 2025. They will accept paper submissions until 13 March 2025.
Thanks for your attention! Hope you found something interesting above, and please consider subscribing if that is the case:
In any case, here is an outstanding otter to accompany you for the rest of the day. See you next time!

As I pointed out there, the AI Act’s standards can spread even without a stricto sensu Brussels Effect. And I’m not particularly good as a forecaster, alas, otherwise I’d love to monetize those skills.
That’s always the case, of course, and I make no claims to be above gloating either.
Once again, this outcome seems pretty impervious to factual developments.
Which only goes to show that being wrong is not always a bad thing.
See, e.g., the recurrent issues with the chocolate supply chain.
Including myself and Juliano Maranhão, in the Research Handbook on Competition & Technology (Elgar 2025).