Dear reader,
Welcome to a new issue of AI, Law, and Otter Things! Today, I want to write a few lines on transparency in AI-related contexts, followed by the usual reading suggestions and cute animals. Before that, however, I will practice some disclosure and speak briefly about my decision to move this newsletter to Substack.
My newsletter was initially hosted by Revue, a newsletter service owned by Twitter. Given my heavy use of Twitter and my general lack of plans to monetize this newsletter, that platform seemed more convenient for my writing ambitions than Substack, despite all the cool kids preferring the latter. But, amid the current infrastructure troubles on the birdsite, I began to worry that the newsletter service wouldn’t stay up for long.1 So, I decided to migrate my newsletter—and its backlog—to a service that is more likely to last a while longer.
This newsletter should stay roughly the same in the new venue. I will keep posting rants about law & technology and other interests of mine, as well as recommendations of stuff to read, watch, and listen to. And the usual otter and dog pictures. I am already seeing things that I enjoy here on Substack, such as the footnotes function,2 but any changes in form or editorial line are likely to happen slowly rather than all at once. Otherwise, my Assistant Editor will be unhappy:
My recent journey into AI transparency
Long-time readers of this newsletter might have noticed that quite a few of my recent projects are connected, in one way or another, to the elusive notion of transparency. This is something of an unexpected development, as transparency has never been a central question in my research. Before starting the PhD, my research focused on automated decision-making and the possibilities of contesting it. Nowadays, it focuses on the roles that technical knowledge can play in shaping legal responses to the uncertainties stemming from AI. In both cases, my interest in transparency has been instrumental rather than a main direction of effort.
Nonetheless, my engagement with transparency opened some possibilities for collaboration. I have already mentioned in previous issues my ongoing work on explainable artificial intelligence (XAI) in tax law, in which my collaborators and I attempt to identify legal and technical approaches that help taxpayers understand the use of AI systems in the tax domain. We have already published two contributions with a legal focus,3 and we are currently working on a more technical paper, led by our computer scientist co-authors. By covering both the legal and technical sides of explanation, we intend to show how XAI techniques can contribute to a better understanding of what AI does.
We should be wary, however, of the tendency to think this understanding is enough to render AI systems transparent. There are limits to how much understanding explanation techniques can provide in many relevant contexts, and the opacity of these systems is not produced solely by technical factors. Even a technically simple system such as a decision tree can be rendered inscrutable because of organizational factors or legal barriers such as trade secrets. Explanations, at best, can provide a partial picture of AI systems in their sociotechnical context, and at worst they can enable certain kinds of informational distortion.
In a recently accepted journal article,4 Madalina Busuioc, Deirdre Curtin, and I examine the impact of this substitution of explanation for transparency. While the EU AI Act stops short of selling explanations as the solution to algorithmic opacity,5 it nevertheless treats transparency as a mediated communication of information, an approach that gives providers and users of AI systems various levers to shape what kind of information reaches oversight actors. We argue that such an approach can stifle the accountability that transparency is meant to support, and we suggest instead that AI transparency must rely more extensively on disclosure mechanisms.
This does not mean XAI techniques have no role in AI governance. In fact, suitably designed explanations can be a powerful tool to make sense of technical opacity. But they can only be trusted in the first place if they are deployed in a context with enough transparency safeguards to ensure that the explanations provide a reliable and sufficiently delimited portrait of what is going on. Otherwise, the putative accessibility of misleading explanations may act as little more than a cover-up for practices that escape any meaningful accountability.
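To make this a bit more concrete, here is a toy sketch (in Python, using scikit-learn) of one common kind of machine-generated explanation: global feature importances for a small decision tree. It is purely illustrative, not the method from the article or from any of the work mentioned above.

```python
# Toy illustration of one common form of "explanation": global feature
# importances for a small decision tree. Requires scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Print the five features the tree relied on most.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

An output like this tells you which features the tree leaned on, but nothing about how the training data were collected, whether the explained model matches the deployed one, or how its outputs are used downstream; that gap is precisely why explanations need surrounding transparency safeguards.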
Transparency in AI systems?
My third foray into the debates surrounding AI transparency is a symposium on the topic that I am organizing for DigiCon, together with Francesca Palmiotto Ettore. We have invited a few authors to discuss various aspects of the technical and legal sources of opacity in the use of AI, the potential impact of transparency (or its lack) in those contexts, and possible approaches to AI transparency issues.
So far, we have published three posts. In the first post, Mia Leslie and Tatiana Kazim argue for the compulsory disclosure of executable versions of AI systems, which would allow people to probe the outputs these systems produce under various circumstances. In the second post, Joshua Brand makes a moral case for the use of XAI as a tool to support meaningful human control over AI systems. The latest post, written by Ida Varošanec, examines the uneasy relationship between AI transparency and the legal protection of trade secrets often invoked by providers of AI systems. New posts will be published every Thursday, and the symposium will likely run until the end of January.
I say likely because we have two kinds of contributions to the symposium. Not only did we invite authors to present their views on the debate, but we also opened a call for blog posts to the general public. Until 27 November, we are accepting submissions on case studies of AI transparency, analyses of systemic issues with current and proposed transparency practices and laws, and more conceptually minded proposals (such as questioning the very concept of transparency that underlies our discussions). Please consider sending us your post!
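Returning for a moment to Leslie and Kazim’s proposal: here is a rough sketch of what probing a disclosed, executable model could look like. The interface and the toy model below are my own illustrative assumptions, not taken from their post.

```python
# Hypothetical sketch: probing a disclosed, executable model by varying one
# input while holding the others fixed. The interface is made up for
# illustration; it is not from the post being discussed.
from typing import Any, Callable

def probe(predict: Callable[[dict], Any], base: dict,
          field: str, values: list) -> None:
    """Report how the model's output changes as one input field varies."""
    for value in values:
        variant = {**base, field: value}
        print(f"{field}={value!r} -> {predict(variant)}")

# Stand-in "model": flags low declared incomes for audit.
def toy_model(applicant: dict) -> str:
    return "audit" if applicant["declared_income"] < 20_000 else "no audit"

probe(toy_model,
      {"declared_income": 30_000, "region": "north"},
      "declared_income",
      [10_000, 19_999, 20_000, 40_000])
```

Even this trivial probe surfaces a decision threshold that static documentation might never mention, which is the kind of scrutiny that access to an executable version would enable at scale.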
Caveat emptor
A funny thing that happens when you collaborate with people working on adjacent topics is that you end up forming strong opinions that go beyond your command of the scholarship. So, to be clear: I am not an expert on the technical or institutional aspects of transparency, and I do not plan to become one. Nonetheless, I believe that my perspective on how, specifically, technical and legal factors are intertwined in AI regulation has contributed to these debates.
More importantly, everything I said above reflects my own views on the topic, and none of it is endorsed by my co-authors or by the authors of the posts we publish in the symposium. Unless, of course, they say something to that effect in our joint work or elsewhere. So, I recommend you read these other things, too—even (or perhaps especially) if you disagree with my points above.
Recommendations
Since I spoke of DigiCon above, may I suggest you join our DigiConference next week? On Monday afternoon and Tuesday morning (Italian time), we will host participants at Villa Salviati (and on Zoom) to discuss the various constitutional issues that arise with digital technologies. The conference features keynote talks by Karen Yeung, Hannah Bloch-Wehba, and Florian Grisel. There is still time to register and join us for this event.
For those inclined to join Metaverse events, we also have a Tuesday afternoon virtual session where Nicolas Petit and Jerome De Cooman will present their newest paper on “Asimov by Lawyers”. You will also have the opportunity to look at the gorgeous virtual gallery curated by my friend Yeliz Döker, featuring much of the art we published at DigiCon over the past few months.
As for the usual reading recommendations, I will only suggest a few things in addition to the links presented above.
Laurence Diver, Digisprudence: Code as Law Rebooted (Edinburgh University Press 2021). Available in open access, though the Edinburgh University Press website adds some friction to the process.
Geoff Gordon, Bernhard Rieder and Giovanni Sileno, ‘On Mapping Values in AI Governance’ (2022) 46 Computer Law & Security Review 105712.
Benedict Kingsbury, ‘Infrastructure and InfraReg: On Rousing the International Law “Wizards of Is”’ (2019) 8 Cambridge International Law Journal 171.
Last but not least, some recommendations for entertainment. If you do not actively despise Star Wars, I strongly suggest that you watch Andor: it tells a powerful story of oppression and rebellion, respecting the source material while taking it in the political directions that George Lucas tried but ultimately could not pull off. For a completely different kind of TV show, Reboot is a funny take on 90s nostalgia, full of meta-humour about streaming services.6 Finally, for some gaming, I’ve spent more time than I care to admit playing Long War, a mod for XCOM: Enemy Unknown (which I now realize is a 10-year-old game). Avoid it at all costs, unless you are into that kind of thing and have lots of time to spend. It’s very good.
And, now, the otters
See you next time! And please consider subscribing if you haven’t done so already:
However, it seems that Revue was already at risk even before the current faecal tempest at Twitter.
As an academic and a Terry Pratchett fan, I cannot resist their appeal.
A conference paper on explanation in tax AI and its connection with fundamental rights, and a journal article with a more comprehensive overview of current legal frameworks (or lack thereof) on explanation in tax AI.
The article has not been published yet, but I am happy to privately share the latest version under the terms allowed by the publisher.
Unlike some readings of the purported right to an explanation in the GDPR.
Disclaimer: in this house, we are biased in favour of Rachel Bloom, especially after Crazy Ex-Girlfriend.