Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! As I write this newsletter, Southern Europe is facing a massive heatwave. Between this heat, the conclusion of a pair of “Revise and Resubmit” responses, and my work organizing the Law and Logic and AI and Law summer schools,1 I did not have the time I wanted to put together a monographic issue.
Today’s issue, instead, promotes the academic work I did over the past year. After that, I share some readings (of the academic and non-academic kinds) that might be of interest to readers. Finally, the usual otter pics.
A bit more self-promotion
Self-promotion can be a weird thing. On the one hand, it is always pleasant to speak about what one knows (or believes to know), especially when one is as vain as I am. On the other hand, my work is a frequent subject in this newsletter, and because of that I always worry a bit that I might sound too monothematic if I dedicate a post to presenting my recent work (which, for the most part, has featured in previous issues). But the number of subscribers to AI, Law, and Otter Things has grown considerably over the last few months, so maybe it is a good idea to present an overview of what I have published in more serious venues.
My current research focuses on the regulation of AI technologies, a topic often covered in this newsletter. In my PhD thesis, I investigate the question of technology neutrality in AI regulation: when should regulation deal directly with technical details, and when should these details be left to other actors (such as executive bodies, the courts, or the designers of AI systems)? To answer these questions, I propose a theory of technology-neutral regulation, informed by my other work on various technological facets of AI regulation.
One strand of my work deals with the legal value of explainable AI (XAI). In a recent article published in European Law Open, Madalina Busuioc, Deirdre Curtin, and I argue that XAI techniques are not, in themselves, a solution to the opacity problem surrounding AI systems, as they offer a form of mediated transparency in which the controller of the AI system has ample leeway to control what is visible to the outside world. This is not to say that XAI has no value: indeed, Blazej Kuzniacki, others, and I propose in the World Tax Journal that explanation mechanisms are needed to comply with the constitutional constraints that guide state action in the domain of taxation. But trust in explainable AI models is contingent on robust institutional transparency mechanisms; XAI complements such mechanisms rather than replacing them. Otherwise, the technical promise of explanation remains open to manipulation by software developers and users.
Another set of publications deals with regulation by design, that is, the use of design mechanisms to ensure compliance with legal rules. In my first legal paper, published at ICAIL 2019, I argued that effective human intervention in automated decision-making procedures requires the adoption of technical and organizational measures that support users as they seek to contest automated decisions. My subsequent work on this topic has dealt more generally with the potential and limits of design measures in AI governance. In a forthcoming chapter in a GDPR commentary, I comment (with Juliano Maranhão and Giovanni Sartor) on the provisions on data protection by design and by default (Article 25 GDPR), arguing that compliance requires both technical and organizational measures, addressed not just at the principles listed in Article 5 GDPR but at the broader principles data protection law is meant to uphold. In a related book chapter, Maria Dymitruk and I examine how these design requirements oblige developers of judicial AI systems to adopt measures directed at protecting fair trial rights.
However, some of my work also points out the limits of technical measures as instruments for implementing legal requirements. In a solo paper for the European Journal of Risk Regulation, I posit that reliance on technical measures alone can lead to various forms of entrenchment of current legal measures, as designers’ responses to current legal requirements may end up embedded in technological infrastructures. In a working paper, Nicolas Petit and I argue that reliance on technical measures, as seen in the AI Act’s adoption of a product safety framework, can lead to a reductionist view of issues that cannot be described in technical terms, such as the protection of fundamental rights.
Some of my work deals with the implications of these design issues for the overall design of regulatory frameworks for AI. In a short piece for the CPI TechReg Chronicle, I offer an introduction to regulation by design as a co-regulatory modality, in which legislators provide general guidelines but leave the implementation of regulatory measures to software designers. And Anca Radu and I have a working paper arguing that, in light of the Brussels Effect of EU digital regulation, the AI Act’s shortcomings are likely to create obstacles to the European Commission’s ambition of using the Act as a vehicle to ensure that global AI regulation is driven by “EU values”.
Beyond these design-related topics, I have also written on various other topics in technology law. Some of these are connected to personal data: in that same GDPR commentary, I discuss (once again with Juliano Maranhão and Giovanni Sartor) the definition of pseudonymized data and the legal bases for content personalization based on personal data, and Juliano Maranhão and I have examined in International Data Privacy Law the treatment of voice data as personal data, with an emphasis on medical contexts. Most of my collaborations in fields beyond data and AI have been written in Portuguese so far, but I have a book chapter and a blog post on matters of platform governance. And finally, I have a blog post for RAILS (which I am developing into a full paper as other commitments allow) that discusses the lack of cohesiveness in AI regulation and why that is a good thing.
Over the past few years, I have had the opportunity to engage with a broad range of topics, both as a solo author and (more usually) as a contributor to larger teams. This variety has been very productive for my thesis, allowing me to engage with domain-specific manifestations of overarching issues in technology and AI regulation. And it suits my vulpine approach to knowledge. Of course, some of these contributions might be more pedestrian than others. Nevertheless, I hope to bring something interesting to the table, so please feel free to email me about these topics and other potential overlaps between your interests and mine.
Reading recommendations
Enough about myself: now it’s time to share the writings of others. And, given that it is past 18:00 and the thermometer still reads 37 degrees, my first recommendation concerns air conditioning. European homes seldom have AC, which is often seen as a luxury at best and as a threat to the environment at worst. But Faine Greenwood (more famous for drones and, more recently, Alf) makes a strong case that air conditioning can save lives as temperatures rise throughout the world, and that many of the harms of AC systems can be reduced through new technologies and sound policies.
Guillermo Lazcoz and Paul de Hert revisit Article 22 GDPR on automated decision-making to clarify the idea of a “meaningful” human intervention in the decision-making procedure. Though I am not particularly sanguine about their use of the decision-making loop and do not share their optimism regarding data protection impact assessments, I still think their work is a valuable way to reopen a doctrinal debate that seems to have grown stale in the last few years.
A paper by Hilde Weerts, Raphaële Xenidis, and others, published at FAccT 2023, provides a crash course on EU discrimination law, showing that its open texture does not lend itself to implementation in terms of particular fairness metrics.
At that same conference, Jennifer Cobbe, Michael Veale, and Jatinder Singh broaden discussions of algorithmic accountability by showing how the law needs to take stock of the broader supply chains in which particular AI systems are situated.
On a more fictional note, Anne Currie has just published Invisible Hands, the latest entry in her Panopticon book series. Having previously recommended her books, I seize the opportunity to reinforce that recommendation. The Panopticon series offers a fascinating sci-fi take on technologies such as large-scale prediction and virtual worlds, while attending to climate change and its impacts on society and the world. Much fun ensues (for us, not for the individuals living in that fictional world).
And, within the Warhammer 40k-verse, I have recently read The Infinite and the Divine. It is a Necron-centric story, following the rivalry of two ancient figures—Trazyn the Infinite and Orikan the Diviner—across the ten millennia that precede the current timeline of WH40k. It is not a good entry point into Warhammer, but the story is quite fun if one has already spent a bit of their life (and potentially more money) with Games Workshop.
Finally, the otters
If you enjoyed this newsletter, please subscribe to receive future issues:
1. Where, among other things, I had the pleasure to finally meet some of you in person.