Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! It has been three weeks since the last newsletter, breaking my streak of writing at least once every week or two. In part, this was because I could not find anything particularly interesting to say, but my schedule over the last few weeks did not help either. So, in lieu of my usual short essays, today I will share some ideas at an even earlier stage of development than usual, in the hope that they might be interesting for some of you.
As per the usual structure, I will first share those brief reflections. After that, I’ll share some reading recommendations and academic opportunities, followed by an otter picture. Hope you find something of interest here!
The AI Act’s Katamari Effect
I am not much of a gamer nowadays. From time to time, I sink an unhealthy amount of time into a game,1 usually one that is a few years old.2 One of the games I only got to know well after its release is Katamari Damacy, a wacky game originally released for the PlayStation 2 in 2004. In it, you control a minuscule character who moves around the game map, trying to get objects to stick to a magical ball called a “katamari”. Hijinks ensue, to the sound of a ridiculously catchy soundtrack. So, if you haven’t played this game yet, it might be a good way to chill. Yet, when I found myself with too much time on my hands during a train trip, I was struck by the idea that the game offers a not-too-shabby metaphor for the AI Act (bear with me for a second!).
There are two senses in which the EU’s approach to regulating AI resembles a katamari. The first concerns its composition. In the game, you can form balls out of everything: anything smaller than the katamari will stick to it, which means that, as you progress, you can assimilate ridiculous objects, such as human beings or even small mountains. Likewise, the AI Act is a hodgepodge of regulatory approaches. As Nicolas Petit and I (among others) have argued, many of the Act’s oddities can be traced to its attempt to weld together the protection of fundamental rights and product safety law. Further research points to the presence of other approaches to regulation: Catalina Goanta has traced the roots of the AI Act’s prohibitions to the Unfair Commercial Practices Directive, and one can also point to the presence of experimental regulation tools3 and supply chain governance elements in the Act.4 So far, my work and that of others has tended to look at how each of these elements interferes with the AI Act’s “soul” in product safety. However, future analysis might be enriched by looking at the increased complexity that emerges from merging not two but a handful of regulatory subsystems into a purportedly coherent legal framework.5
Another relevant aspect is completeness. In Katamari Damacy, each level dares you to create progressively bigger balls of everything.6 The AI Act has a strong pull towards the same tendency, as it relies on a very broad legal basis and AI technologies are everywhere. If data protection law has sometimes been criticized as a wannabe “law of everything”, the AI Act might lead to even greater levels of creeping Europeanization. In short, it can become a legal katamari, cast as the centre of a variety of legal issues tied together only by their connection to the use of an AI system or model.
Of course, the mere fact that one can articulate a metaphor does not make it useful. And I would not dare to claim that the katamari should become a more common figure in regulatory discourse. It does not necessarily cast light on aspects of the AI Act that had not been noticed before, and it does not help explain the law to non-experts. Nonetheless, it amused me for a second, and I share it in case other weirdos might be able to run with it.
What I’ve been up to
Over the last few weeks, I had to prepare for a few presentations. Some of those dealt with research at an advanced stage, so I basically had to write down something resembling the papers I had in mind. Others were still at an earlier stage, which required me to grapple with the ideas before pinning them down in a Word document.7 So, here’s a brief summary of what has been made public so far:
The CPDP panel on “The transformation of EU cybersecurity law”, in which I spoke last month, is now available on YouTube. Also, my panel from last year’s conference (about the GDPR commentary to which I contributed) is available, too.
On 3 and 4 June, I participated in the “Security in the Digital Age” workshop put together by the Erasmus Center of Law and Digitalization. There, I presented my ongoing work with Niovi Vavoula and Giacomo Zampieri on how EU digital law has been transformed by the growing role of two totalizing frames — security and the protection of fundamental rights.
I had the opportunity to present my working paper “Technology neutrality in EU digital regulation” at two different venues. On 5 June, I gave a 90-minute lecture on it at Utrecht University, thanks to a kind invitation by Nikita Divissenko and the DAI platform. On 12 June, I gave a 15-minute presentation at the biennial conference of the ECPR Standing Group on Regulatory Governance. Two fruitful sessions, which helped me develop my discussion of technology neutrality beyond the AI focus I had in my PhD.
Also on 12 June, the NOVA PEARL platform on administrative and regulatory law gave me the opportunity to present my paper with Nicolas Petit on the AI Act. Many thanks to the organizers and participants, in particular the discussants Filipe Brito Bastos and Sofia Pavão.
In the coming weeks, I will join the Second Summer Colloquium of the Jean Monnet Centre of Excellence JUST AI as an online discussant, briefly introduce the training module on AI and data protection I prepared for the Hellenic DPA under the EDPB Support Pool of Experts Programme, present another paper at the workshop “A Binding Law of Data?”, and teach at The Europaeum Summer School: AI and the Digital Future. Don’t hesitate to reach out if you are around for any of those events!
Recommendations
Nathan Davies and Albert Sanchez-Graells, ‘Procurement as Infrastructure’ (Social Science Research Network, 16 June 2025).
Alma Diamond, ‘Law in Society: Defending Hart’ [2025] Res Publica.
Riccardo Fidato and Luigi Lonardo, ‘The Foreign Affairs Aspects of the Artificial Intelligence Policy of the European Union’ (2025) 30 European Foreign Affairs Review.
Merlijn van Hulst and Dvora Yanow, ‘From Policy “Frames” to “Framing”: Theorizing a More Dynamic, Political Approach’ (2016) 46 The American Review of Public Administration 92.
Ian Roberge, Heather McKeen-Edwards and Malcolm Campbell-Verduyn (eds), Ineffective Policies: Causes and Consequences of Bad Policy Choices (Policy Press 2025).
Inga Ulnicane, ‘The Roles of Government in AI Governance and Policymaking: Re-shaping or Re-enforcing Power Asymmetries?’ in S Dan, N Mäntylä and JE Fountain (eds), Artificial Intelligence and Power in Government (Palgrave Macmillan 2025, forthcoming).
Opportunities
This Friday (20 June), the Chair in Cyber Policy at the Faculty of Law, Economics and Finance of the University of Luxembourg will host another edition of our "Phish and Chips" lunchtime talks! To wrap up our lecture series for this academic year, we have the privilege of hosting Renate Verheijen, who will share insights into the role the EU Agency for Cybersecurity (ENISA) plays in the making and implementation of EU cyber law. Register here to join us in person or online.
Early-career scholars in law, sociology, socio-legal studies, anthropology, political science, public administration, and science and technology studies (STS) are invited to submit original paper proposals to participate in the Law and Society Initiative (IDES) Annual Conference in Lausanne, Switzerland. Abstracts are due by 20 June 2025.
There are three open postdoctoral positions in European Law here at the University of Luxembourg. One of them is a 24-month contract at the Department of Law, working closely with Professor Eleftheria Neframi and the doctoral training unit on green and digital transitions. The others are 36-month contracts at the Luxembourg Centre for European Law, dealing with European Constitutional Law writ large.
The University of Bologna’s PhD in Law, Science and Technology is hiring a new cohort of researchers, with up to nine funded positions. Applications are due by 1 July 2025.
The Brussels School of Governance is hiring for a funded PhD on Digital Sovereignty Claims. Applications are due by 20 July 2025.
Masaryk University in Brno (Czechia) will host the CYBERSPACE 2025 conference on 28-29 November 2025. They invite papers until 31 July 2025.
Sarah Tas, Valentina Golunova, and Mariolina Eliantonio are hiring a PhD researcher to work on 'The Shifting Governance and Enforcement of Fundamental Rights under EU Digital Law'. Applications are due by 17 August 2025, and you should definitely entertain the idea of working with them if this area is of interest to you.
The Cambridge Forum on Technology and Global Affairs invites papers on “New Power Brokers in an Age of Technological and Geopolitical Upheaval” until 22 September 2025.
Lund University is hiring people across departments for work on AI and/or sustainability. The positions include, inter alia, one assistant professorship in law and one in the faculty of humanities and theology. The positions all seem to start from early 2026.
Finally, the otters

Thanks for your attention! Hope you found something interesting above, and please consider subscribing if you haven’t done so already:
Do not hesitate to hit “reply” to this email or contact me elsewhere to discuss some topic I raise in the newsletter. Likewise, let me know if there is a job opening, event, or publication that might be of interest to me or to the readers of this newsletter. Hope to see you next time!
Nowadays, Warhammer 40k: Rogue Trader, where I hope to finish a Heretic run before starting the upcoming expansion. Readers of previous issues might recall previous fixations with XCOM: Long War and Sunless Skies, as well as Football Manager.
When I was younger, I could not afford a top-end console or PC, so I mostly played old-school games. Now that I have a bit more spending flexibility, my disposable income mostly goes to helping Britain stay afloat after Brexit.
See, e.g., the role of sandboxes.
See, e.g., the provisions on general-purpose AI models.
If you want to work on this, be my guest.
There is an in-game explanation for that, something about trying to recreate the moon and the stars from available materials. The game’s plot is a zany trip and I cannot hope to do justice to it. Fortunately, doing so is not necessary for my point.
Never LaTeX, a beautiful typesetting language that no human being should have to write directly in 2025. In particular, never ever ever Beamer.