Hello, dear readers, and welcome to another issue of AI, Law, and Otter Things! It has been more than a month since my last attempt at newslettering, but I could not find anything interesting to say. Between my post-thesis brain worms and my anxiety about the academic job market, I spent the last few weeks just going through the motions.[1] However, today is World Otter Day, so it seems like a good opportunity to get back in the saddle.

Today’s newsletter will feature the usual mix of cute animals, essays about technology law, reading recommendations, and opportunities. Starting with the cuteness, I think it’s appropriate to say something about the date. Every year, the International Otter Survival Fund organizes various activities on the last Wednesday of May, with the goal of raising awareness about the risks that otters face (12 of the 13 otter species are deemed to be at considerable risk of extinction) and about best practices for the protection of their environment.
Given that I am by no means an otter expert (just an enthusiast), I encourage you to check out IOSF’s webinars on the topic, or simply to look at the lovely pics and videos people are sharing on social media with the hashtag #WorldOtterDay. But now it is time to get back to AI and Law.
The AI Act as an international template?
Last week, I finally attended the Computers, Privacy and Data Protection conference in Brussels. Given that I don’t see myself as a data protection scholar these days,[2] I thought that it wouldn’t really be my jam. Turns out it was a very interesting event after all: I attended some good panels, caught up with people I hadn’t seen in a while, and finally met in person some folks with whom I interact quite often online. All in all, these were fun days in Brussels, which helped me get out of my post-submission funk.
During my stay in Brussels, I had the opportunity to speak at two panels. The first was at a pre-CPDP event organized by the LSTS group at VU Brussel and CTS-FGV/Rio, where I discussed the AI Act’s influence on the Brazilian AI bill. The second was a panel on Friday morning, put together by CDSL, in which I discussed my commentary on Article 25 GDPR (written with Giovanni Sartor and Juliano Maranhão) in light of recent legal and technical developments in AI. Today I will focus on the former topic.[3]
When we discuss the potential influence of the AI Act on foreign legal instruments, we must distinguish between two questions: is the AI Act an influence on other regulatory efforts directed at AI? And to what extent should it be one? In the panel, I argued that the influence does exist to some extent, but it is not inescapable. And this is not necessarily a bad thing (even for the European Union!), as it is far from clear that the AI Act’s model is suited to the needs of other jurisdictions, even when their regulatory aims overlap with the EU’s lofty ambitions in AI regulation.
Discussions about AI regulation around the world often touch upon the question of whether the AI Act will have a Brussels Effect. According to the theory, a Brussels Effect occurs when EU legislation in a particular domain becomes a global standard. That can happen de facto, when businesses find it cheaper to comply with EU standards for their entire output instead of creating separate product lines for Europe. Or it can happen de jure, when other jurisdictions pattern their laws after the EU’s regulatory approach. Arguably, this is what happened with EU data protection law, as countries around the world started to adopt GDPR-like legal instruments.[4]
Some months ago, Anca Radu and I made two arguments about the AI Act’s potential Brussels Effect. First, that the theoretical conditions for a Brussels Effect are likely to obtain only for some kinds of AI systems, mostly those falling under the high-risk classification,[5] as well as for general-purpose AI models with systemic risk. Second, that any Brussels Effect is more likely to spread the form of EU law than to spread the values the Act is meant to protect, such as the protection of fundamental rights, democracy, and the rule of law.[6] As a result, there are no irresistible market forces pushing the AI Act as a de facto global standard. Other forms of AI regulation are possible.
In fact, there are good reasons why legislators in Brazil and other jurisdictions might diverge from the AI Act’s construction of risk regulation. For a non-exhaustive list:
Even if lawmakers are protecting the same values as their EU counterparts, and understand those values in a similar way, local policy priorities might vary.
In the Brazilian context, for example, legal scholars like Bianca Kremer have provided insightful analyses of how myths such as the idea that Brazil is a “racial democracy” create unique concerns about the deployment of algorithms in, inter alia, law enforcement contexts. Yet there are strong political blocs pushing for precisely this type of technology, sold as a solution to urban violence in Brazil.
Both the Brazilian AI bill and the EU AI Act intend to foster the uptake[7] of AI technologies. However, the EU (or at least some of its Member States) has sought to develop “national champions” to compete with US- and China-based providers of general-purpose AI models, whereas Brazil and other jurisdictions can mostly hope to innovate in markets for AI applications.
The EU approach to AI regulation depends on the availability of a huge legal infrastructure (that of product safety law, plus the capabilities the Commission and the Member States are required to develop under the AI Act) and of technical and legal expertise that might not be available to other jurisdictions. Even if those jurisdictions invest in developing such infrastructure and expertise, they might find that their legal and technical experts are poached by higher salaries elsewhere in the world.
Many of the regulatory choices made in the design of the AI Act are highly contingent on the EU’s particular institutional context. For example, the EU lacks a general competence to legislate on fundamental rights, but it has regulated product safety for decades. Other jurisdictions might not face the same constraints, or they might be able to leverage other strengths. For example, Brazil has a culture of judicial protection of collective interests (and the legal bases for doing so) that goes beyond what is currently in place in European jurisdictions.
Even in light of reasons like the ones above, the influence of the AI Act can still be felt in other regulatory instruments. The Council of Europe’s convention on AI, for instance, has incorporated mechanisms for risk management that are patterned after the EU approach, while the Brazilian bill replicates some of the Act’s structural features even as it tries to follow a rights-based approach to regulation. Yet, I suspect that the Brussels Effect is a poor explanation for that influence. Instead, it can be traced to other factors, such as:
Explicit activity by the EU to push its approach in bilateral, plurilateral, and multilateral forums (see, e.g., my article with Anca about the Council of Europe’s AI convention);
Imitation of EU practices as an attempt to legitimize regulatory intervention;[8]
Symbolic use of the trappings of EU law to achieve different regulatory aims (see, e.g., Bueno and Canaan (2024)); or
Delegation of the preparatory work needed to design a regulatory instrument.
Some degree of convergence towards the EU model for AI regulation is surely rational in light of those factors. Yet convergence is by no means inevitable, and jurisdictions have good reasons to diverge from the AI Act’s template. If they do so, the EU might be frustrated in its ambition to drive the global agenda for AI regulation. Even so, it might benefit from coexisting with other regulatory models, learning from experiences in other jurisdictions instead of creating a regulatory monoculture that might be unsuited to future challenges. All we can say for now is that the next few years will be pretty interesting in terms of regulatory design.
Job openings and events
The Globalization and Law network at Maastricht University is hosting the event The Day after Public.Resource.Org v Commission: A New Era for the Openness and Copyright of Standards Referenced in EU Law? on 28 June. If you are interested in standardization and the AI Act (and you should be!), you should not miss this one.
Professor José van Dijck at Utrecht University is hiring a postdoctoral researcher for the project “Governing the Digital Society”. Applications are open until 9 June.
CEPS is hiring a Researcher/Research Fellow in AI Ethics and Policy for its Global Governance, Regulation, Innovation and the Digital Economy (GRID) Unit. Applications are open until 31 May.
KU Leuven is hiring a Research Professor in International Public Law, and one of their areas of interest is “the impact of new technologies on supranational governance”. Applications are open until 3 September.
The AFAR (Algorithmic Fairness for Asylum-Seekers and Refugees) project at the Hertie School (Berlin) is looking for a part-time postdoctoral researcher (32 hours/week). Applications are open until 20 June, with an envisaged starting date of 1 September.
The AI, Media & Democracy Lab at the University of Amsterdam is hiring a postdoctoral researcher. Applications are open until 23 June, with a preferred starting date of 1 January 2025.
Taina Bucher at the University of Oslo is hiring for a Postdoctoral Research Fellowship - Reimagining AI. The deadline is 15 August.
That’s it for today, I think! Thank you for reading, and please consider subscribing if you have not done so already.
And, as always, please feel free to reply to this email with any thoughts, comments, critiques, or suggestions of papers/events/opportunities you’d like me to share. See you an otter day! 🦦
[1] And watching silly TV shows with my wife. We covered the whole run of Lucifer and are now a few seasons into Grey’s Anatomy.
[2] Not that there’s anything wrong with that; some of my best friends are data protection scholars.
[3] In no small part because, as I mentioned some months ago, I think I have already said most of the interesting things I had to say about the topic. There are still some topics that I see as worthy of further inquiry. For example, I think scholars would benefit from paying more attention to the differences between interface design and design practices for infrastructures. Additionally, debates on the topic (and I am no exception) often fail to pay enough attention to the gap between standardization, or even documentation, and actual design practices. I believe I am not the best person to work on those lines of research, and I surely don’t want to become an abyss domain expert. So don’t expect further publications from me on those matters. Instead, check out the work of scholars such as Kostina Prifti.
[4] Though one might question the extent to which data protection laws around the world actually follow the GDPR or merely replicate its form.
[5] Which the Commission estimates to be 5-15% of all AI systems in the EU, but some private studies argue that the number might be much higher.
[6] Even if one takes for granted that the AI Act actually manages to protect such values, which is far from certain. See, inter alia, my working paper with Nicolas Petit, or Margot Kaminski’s critique of risk regulation for AI.
[7] Every time I read “uptake”, I cannot help but replace it with “updog”.
[8] This factor is arguably present within the EU legal order itself: see Papakonstantinou and de Hert (2022) on “GDPR mimesis”.