Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! After maintaining a regular publication schedule in April, I ended up skipping three weeks in this newsletter. For the first two of them, I was enjoying part of my annual leave, getting some rest, and painting some plastic miniatures. Hopefully, I will return to a regular schedule soon, but the next weeks will be quite busy, so I won’t be making any promises at this stage.

Speaking of the near future, I’ll have the opportunity to present my work at a few events. This Wednesday, I’ll be at CPDP.ai speaking at a panel on “The transformation of EU cybersecurity law”; if you are willing to put up with me (and the 8:40 am timeslot), you’ll have the opportunity to hear insightful thoughts from Elaine Fahey, Laura Brodahl, Suzanne Nusselder, and the moderator Niovi Vavoula. The day after, I will be at ICON•S Benelux 2025, joining the amazing Maria Magierska, Sarah Tas, Lisette Mustert, and Belle Beems at the panel “Enforcement in the European Digital Policy: Challenges, Opportunities, and Paths Forward”, moderated by Giovanni de Gregorio. If you are in Brussels this week, don’t hesitate to reach out!
Before that, I will share below some thoughts on AI and transparency, looking at the limits of a technology-centric approach to this issue. These notes are part of a paper I was drafting on the subject, which I presented in Dubrovnik early last month but ultimately decided not to move forward with.1 Rather than leaving them to rot on my hard drive, I thought they might be useful to someone (even if as a source of wrong ideas to refute!). After that, the usual content: reading recommendations, academic opportunities, and otters. Hope you enjoy it!
The legal construction of AI opacity
It is a truth universally acknowledged that modern artificial intelligence (AI) technologies are opaque objects. Opacity is often associated with a series of technical properties of AI systems, such as the scale of their operation and the complexity of the mathematical and programming work used in their construction.2 As many real-world incidents—and even more scholarly works—have shown, this opacity can create obstacles to proper oversight and accountability and directly harm certain rights such as the rights to an effective remedy and a fair trial. Accordingly, a substantial body of literature has proposed legal tools that can be used to push for transparency in the development and deployment of AI. Yet, my work on AI transparency has convinced me that, even though opacity is related to technical factors, creating transparency is not merely a matter of having laws that oblige designers to create their systems in the right way.
How does the law contribute to technical opacity? Let me count the ways…
This is because any technical opacity is amplified by law in at least three ways. The first refers to the vagueness of existing legal requirements for transparency. While regulations such as the GDPR and the AI Act feature certain disclosure obligations, these are formulated in open-ended terms meant to be applicable to a variety of technologies. Such an approach ensures regulatory robustness, but it leaves considerable room for manoeuvre to regulated actors, allowing them to minimize any meaningful disclosure of technical information. There is a considerable body of work showing the vulnerability of explainable AI techniques to this kind of manipulation. As I argue elsewhere, however, the same issues emerge with other kinds of technical interventions for the promotion of transparency.
The second source of amplification is a conceptual mismatch between legal and technical ambitions for transparency. In short, my point here is that lawyers and people working on transparency-promoting techniques (such as XAI models or inherently interpretable models) often ask different things from AI systems, and substituting one thing for the other will likely lead to nonsense or, worse, responses that seem to have more scientific grounding than they actually do.
Whereas regulation often associates the transparency of AI technologies with the normative justification of the actions taken with the use of those technologies, technical approaches to transparency tend to be more concerned with a scientistic explanation of the mechanisms involved in producing outputs. As a result, even the adoption of current technical approaches to transparency will not answer the kind of questions one asks about decisions involving AI in a legal context. They will, at most, provide some information about the perceived factual grounding of algorithmic outputs, from which the embedded normative reasoning will still need to be extracted.
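To make that contrast concrete, here is a minimal sketch in Python of what a typical technical “explanation” delivers. Everything in it is an assumption made up for illustration: a hypothetical loan-scoring model, invented data, and invented feature names; no real system or dataset is being described.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [income, age, postcode_risk_score]
X = np.array([
    [55, 40, 0.2],
    [22, 23, 0.8],
    [48, 35, 0.3],
    [19, 21, 0.9],
    [60, 50, 0.1],
    [25, 30, 0.7],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan granted, 0 = refused

model = LogisticRegression().fit(X, y)

applicant = np.array([[30, 25, 0.6]])
proba = model.predict_proba(applicant)[0, 1]

# The "explanation" in the technical sense: each feature's contribution
# to the log-odds (coefficient x feature value). This is factual grounding
# for the output, nothing more.
contributions = model.coef_[0] * applicant[0]
for name, value in zip(["income", "age", "postcode_risk"], contributions):
    print(f"{name:>14}: {value:+.2f}")
print(f"approval probability: {proba:.2f}")

# Note what is absent: nothing here says whether relying on, say, a
# postcode-based risk score is lawful or justified. That normative
# question is the one legal transparency requirements tend to ask.
```

The output tells us which inputs pushed the score up or down. That is useful factual grounding, but it answers none of the questions a court or supervisory authority would ask, such as whether relying on a postcode-based risk score was lawful in the first place.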
Here, one might point out that there are some technical approaches that allow AI systems to engage with normative reasoning,3 which might conceivably be used to address that gap. But even if that remains possible in the abstract, doing so requires the incorporation of techniques that are not in the usual toolkit of transparency. And implementing those techniques will, in turn, require a clearer response to the vagueness issue raised above, so that the problem to be solved can be specified more precisely.
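For a flavour of what such normative engagement could look like, here is a toy sketch (again in Python, with invented rule names and thresholds standing in for real legal provisions) of a knowledge-based decision procedure in which every outcome cites the rule that produced it.

```python
# A toy rule-based decision procedure: each outcome is tied to an explicit
# rule, so the "explanation" is a citation of the norm applied. The rule
# names and thresholds are invented placeholders, not real provisions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    source: str                      # placeholder citation of the governing norm
    applies: Callable[[dict], bool]  # condition on the applicant's facts
    outcome: str

RULES = [
    Rule("Art. 4(2), hypothetical lending code (minimum income)",
         lambda a: a["income"] < 25, "refuse"),
    Rule("Art. 7, hypothetical lending code (default rule)",
         lambda a: True, "grant"),
]

def decide(applicant: dict) -> tuple[str, str]:
    """Return the outcome plus a justification citing the rule applied."""
    for rule in RULES:
        if rule.applies(applicant):
            return rule.outcome, rule.source
    raise ValueError("incomplete rule base")

outcome, justification = decide({"income": 30, "age": 25})
print(f"outcome: {outcome} (per {justification})")
```

Even this toy example shows why the vagueness issue bites: someone must decide which norms enter the rule base and how they are operationalized, and that is precisely what open-ended legal requirements leave unspecified.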
Last but not least, the law can itself be a source of occlusion. At this point, there are various strands of legal scholarship showing how legal provisions can prevent the disclosure of information about AI systems. Among others, Charlotte Tschider has provided a high-level overview of US law; David Hadwick and Shimeng Lan have shown how tax administrations invoke reasons of state secrecy to preclude access to algorithms; and Madalina Busuioc, Deirdre Curtin, and I have mapped various mechanisms in the draft AI Act (most of which survived into its final version) that allow providers of AI technologies to control the flows of information about technical artefacts. No matter how technically legible an AI system might be, it can still be rendered opaque by private law instruments such as contracts and trade secrecy, by public law mechanisms such as those pertaining to state secrecy, or, often, by a combination of both.
Looking more closely at the EU
In the paper I presented in Dubrovnik, I highlighted two EU-specific sources of opacity. The first is the conflation of lawfulness and trustworthiness in the EU approach to AI regulation. As (among others) Charly Derave, Nathan Genicot, and the late Nina Hetmanska show in the context of ETIAS, EU policymakers emphasize the idea of “trustworthiness” as an element of the “European approach to AI”, but treat it as a consequence of lawfulness rather than a requirement for it. That is, a system that meets legal requirements is deemed to warrant trust, and so its transparency is no longer seen as a critical concern. One might argue that, by including various exceptions and carve-outs in the law, the EU lawmaker tries to defuse demands for transparency by imposing trust through law, rather than using transparency as a tool to promote trust.
Additionally, recent CJEU case law is somewhat ambivalent when it comes to tackling sources of AI opacity. Two recent rulings can be said to advance the cause: in Dun & Bradstreet Austria, the Court recognized the existence of a right to an explanation in data protection law, while Malamud (Case C-588/21 P) can contribute to broader transparency regarding the technical standards that will likely underpin software design. By contrast, there are quite a few recent cases in which transparency has played second fiddle to other interests, such as the protection of commercial value in procurement settings or financial privacy. So, in any particular case, opacity might be preserved (or even imposed) in light of some legal values, even if a higher level of transparency would be feasible from a technical perspective.
Finally, the toothpaste
These points do not exhaust the discussion of transparency. They do, however, exhaust what I feel I have to add to the debate, which is—in part—why I am unlikely to finish the article I had in mind originally. Nonetheless, my point here is that all those factors can amplify the opacity that results from the technical properties of AI technologies, and even introduce opacity into relatively transparent technical approaches. Much like one cannot push toothpaste back into the tube, once an AI system is brought into being, its inner workings cannot be made visible simply by adding more components to the system, or even by tweaking it towards reduced complexity.
This is not to say that technical fixes in this domain are useless. I believe that further research on technical interventions can help build tools that serve legal demands, rather than shoehorning those demands into a scientific vision of explanation. Still, such fixes can only produce the expected effects once one has a clear vision of what kind of transparency is needed, and once any legal and contextual drivers of opacity are exposed. Otherwise, technical AI transparency provides at best a partial view of the whole and, at worst, a veneer of legitimacy to practices that should be unacceptable.
Recommendations
Later this month, Edward Elgar Publishing will publish the Research Handbook on Competition and Technology (Pier Luigi Parcu, Maria Alessandra Rossi, and Marco Botta, eds.), which features a chapter of mine (with Juliano Maranhão and Giovanni Sartor) on “Competition in and through artificial intelligence”, as well as contributions by some brilliant people, including readers of this newsletter.
Kevin J. Elliott, Democracy for Busy People (University of Chicago Press 2023).
Bobbie Johnson, ‘North Korea Stole Your Job’ (Wired, 1 May 2025).
Malte Möck and Peter H Feindt, ‘Learning Mode Misfits in Policy Learning: Typology, Case Study and Lessons Learnt’ (2024) 31 Journal of European Public Policy 2050.
Jennifer Orlando-Salling, ‘(De)Coloniality and EU Legal Studies’ (Verfassungsblog, 8 May 2025).
John Pavlus, ‘When ChatGPT Broke an Entire Field: An Oral History’ (Quanta Magazine, 30 April 2025).
B Guy Peters and Maximilian L Nagel, ‘From Benign to Malign: Unintended Consequences and the Growth of Zombie Policies’ [2025] Policy and Society puae039.
Élisabeth Quillatre, ‘The Evolution of French Anti-Terrorism Legislation: Balancing Security, Privacy, and AI Regulation’ (2025) 11 European Data Protection Law Review 45.
And, for something completely different, recently I enjoyed reading “Tomorrow, and Tomorrow, and Tomorrow” by Gabrielle Zevin. A lovely read for book nerds who are also video game nerds.
Opportunities
The Institute for Law & AI (LawAI) will host the Cambridge Forum on Law and AI, a five-day gathering of law students, professionals, and academics eager to explore pressing issues at the intersection of AI, law, and policy in Europe. Applications are due by 31 May, and the event itself will take place from 14 to 18 August 2025.
In Australia, Monash University is hiring professors at various levels, from lecturer to full professor. Their areas of interest include public law, law and tech, and legal philosophy, so the positions might be of interest to some of you. Applications are due by 8 June.
Brazil’s data protection authority is hiring people for temporary contracts of up to 5 years. The openings include management, operational, and supporting roles, and applications are due by 15 June.
Meanwhile, in Florence, the EUI’s School of Transnational Governance has two job openings: one for a Chair in Transnational Law and another for a Chair (or two Assistant Professors) in Digital Governance and Policy Innovation. In both cases, applications are due by 25 June.
And now…the otter
Thanks for your attention! Hope you found something interesting above, and please consider subscribing if you haven’t done so already:
Do not hesitate to hit “reply” to this email or contact me elsewhere to discuss some topic I raise in the newsletter. Likewise, let me know if there is a job opening, event, or publication that might be of interest to me or to the readers of this newsletter. Hope to see you next time!
As discussed in a previous issue.
The classic reference here is Burrell (2016).
I tend to think of those mostly in terms of logic and knowledge-based systems, but I will not exclude ex ante the possibility of other techniques being included here.