Late registration (AI, Law, and Otter Things #28)
In this issue, I ramble a bit about the aims of AI regulation scholarship, describe a bit of my recently-published work, and recommend some work by other people. As usual, there are otter and dog pictures.
This, I believe, is my first late newsletter. I was supposed to send this issue last Wednesday, but I decided to take a half-week break over Easter. Taking a break allowed me to spend some time with my wife and dog, which are always good things. Taking a break also allowed me to play the Final Fantasy VI pixel remaster [Spoiler-ish, I guess: 1, 2] and continue to watch Star Trek: The Next Generation from the beginning [3]. All in all, it was a much-needed rest period.
Now that I have regained a bit of energy, this newsletter should be a bit more interesting than whatever content I could have prepared on time. Hopefully, I will stick to the regular fortnightly schedule, but in any case, I would rather skip a week than send something just for the sake of completeness.
AI regulation scholarship as plumbing
A recent issue of this newsletter mentioned Mary Midgley's comparison of philosophical work to plumbing. In her 1992 paper "Philosophical Plumbing", she argues that good philosophy is infrastructural work: it maps out the conceptual patterns existing in society and cleans up the confusion that results from conceptual messes. Such a frame does not mean giving up on abstraction or orienting thought towards "applied" topics rather than the big questions, but it does suggest that the reach of conceptual engineering is somewhat limited. Nevertheless, conceptual plumbing is an essential task, if only to prevent intellectual sewage from flowing into our lives.
Of course, I am not qualified to speak of plumbing as a metaphor for the philosophical practice itself. But I am increasingly convinced that AI regulation scholarship could benefit from a plumber-like approach. Authors such as Nicolas Petit and Jerome de Cooman have mapped a considerable variety of approaches to AI regulation, which diverge not only on regulatory goals and instruments but even on the proper object of AI regulation. Likewise, fundamental concepts, such as AI, accountability, and transparency, are used in myriad ways that are not necessarily compatible with one another. The result is that the various participants in the AI regulation debate—even those coming from the same discipline—often speak at cross purposes, sometimes without even noticing that the other parties are using the same term for different purposes rather than merely getting things wrong.
There are at least two ways out of this conceptual mess. The first is providing a "grand theory" of AI regulation that displaces every other contestant and provides a relatively unified field of discourse for debates. However, any such theory would need to take some particularly sharp stands regarding value conflicts in AI. Otherwise, it would either be too vague to provide anything but general advice or introduce contradictions as it attempts to reconcile directly antagonistic views. Therefore, a consistent "grand theory" of AI regulation would require a hedgehog-like approach to the social issues and disputes that AI impacts or otherwise makes salient.
I am more sympathetic to a fox-like approach to this problem. Rather than attempting to provide a unifying account of how AI should be governed, perhaps it would be best to embrace the fragmented nature of the problems that prompt AI regulation. In this case, the role of AI regulation scholarship would be to help the various stakeholders [4] understand what changes with AI technologies and what does not in particular regulatory contexts. I believe such an approach gives a better account of AI systems as general-purpose technologies, which are not as easy to fit into neat regulatory boxes as other forms of technology.
But, at the end of the day, both hedgehogs and foxes have a lot of conceptual clean-up to do in their areas of interest.
Self-promotion
I was involved in two book chapters and a conference paper published recently. The first is a long-running collaboration with Maria Dymitruk on data protection and judicial automation, which is now available as a book chapter in the Research Handbook on EU Data Protection [5]. This chapter examines the structural similarities between the right to a fair trial and the right to data protection in the EU Charter of Fundamental Rights. Drawing on that analysis, we argue that the data protection by design provisions in the GDPR (particularly Article 25) require the adoption of technical and organizational measures that ensure automation systems in judicial contexts do not compromise the right to a fair trial.
The second chapter, co-authored with Caio Cesar Carvalho Lima and Juliano Maranhão, also focuses on data protection by design. As part of the edited volume Legal Innovation [6], we discuss how Brazilian data protection law incorporated a duty of data protection by design, which is largely patterned after Article 25 GDPR and some shared influences—notably the privacy by design movement. Differences in the overall architecture of the legal systems for the protection of personal data mean that solutions developed for Article 25 GDPR might not automatically be compatible with Article 46 LGPD, but EU experiences can nevertheless be useful in understanding and implementing data protection by design.
Finally, the workshop paper [7] moves away from data protection by design to engage with the use of explainable AI in the tax domain. Together with Blazej Kuzniacki and Kamil Tylinski, I argue that protecting taxpayer rights in the face of the increasing automation of tax functions requires some form of explanation of AI systems. The paper examines the constitutional principles governing taxation in modern democracies to show how explanation can promote legal certainty and fair trial rights in an automation context. We further offer an initial overview of existing XAI techniques and how they address certain taxpayer needs.
Promoting otters
A few papers that might interest readers of this newsletter:
Saar Alon-Barkat and Madalina Busuioc, ‘Human-AI Interactions in Public Sector Decision-Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice’ [2022] Journal of Public Administration Research and Theory.
Sebastian Bordt and others, ‘Post-Hoc Explanations Fail to Achieve Their Purpose in Adversarial Contexts’, accepted to the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).
Laurens Naudts, Pierre Dewitte and Jef Ausloos, ‘Meaningful Transparency through Data Rights: A Multidimensional Analysis’ in Eleni Kosta and Ronald Leenes (eds), Research Handbook on EU Data Protection Law (Edward Elgar Publishing 2022).
Mariano-Florentino Cuéllar and Aziz Z Huq, ‘Artificially Intelligent Regulation’ (2022) 151 Daedalus, the Journal of the American Academy of Arts & Sciences 353.
Lilian Edwards, ‘Regulating AI in Europe: Four Problems and Four Solutions’ (Ada Lovelace Institute 2022).
And, as usual, otters:
[tweet https://twitter.com/Lontrinhass/status/1515686668321308679]
See also that Netflix show narrated by Obama, "Our Great National Parks". The episode on Monterey spends quite a bit of time with sea otters.
Notes
[1]: The remaster includes vocals in the Opera House songs, so do yourself a favour and change the game to a language other than English in that part.
[2]: Yes, you can still suplex the train.
[3]: I am now at the very end of Season 6.
[4]: In my particular case, legal scholars and practitioners.
[5]: Edward Elgar 2022, edited by Eleni Kosta and Ronald Leenes.
[6]: Thomson Reuters Brasil, 2022.
[7]: Accepted to EXTRAAMAS 2022 (9–10 May, Auckland & online).