Hello, dear reader! After almost two years, today I share with you the fiftieth post on AI, Law, and Otter Things. This milestone makes me happy for quite a few reasons. First of all, because the newsletter has succeeded beyond my original expectations. While I am not exactly making any money out of it, previous issues have allowed me to gain confidence and speed while writing in English.1 Some have also allowed me to test ideas I later developed into actual research. Last, but not least, I am incredibly flattered that almost 200 people cared enough to subscribe to, and sometimes even comment on, these sparse (and only occasionally serious) notes. Thank you all!
On a more personal note, my quasi-periodic writings in this newsletter have scratched an itch that I had mostly ignored since I stopped regular blogging in the early 2010s. Even though I spend way too much time on Twitter,2 it is (or at least was) a place for discovering things and meeting people who share your weird interests, not for developing longer arguments.3 It is nice to feel comfortable enough to jot down some notes about whatever is going on in my mind at the time, and it is even better to see that such thoughts can be of interest to others, especially considering the many cool people I have met or stayed in contact with through this newsletter.
As for today’s issue, I’ll mostly try to organize a few thoughts about how we think about technology within the law. After that, I will share a few highlights from World Otter Day.
Metaphors and analogies in law & tech
Metaphors and analogies are windows into the world. Even a cursory glance at regulatory scholarship shows how they shape discourse about AI. Machine learning is a black box, AI systems are products, code is law, and so on and so forth. Each of these metaphors and analogies suggests that certain features of an object or process are relevant while downplaying other aspects. For example, when we speak of code as law, we emphasize that code constrains human behaviour, disregarding or at least minimizing the various ways in which the normativity of software diverges from legal normativity. The (not always conscious) choice of the metaphors we use to discuss objects and processes consequently offers a framing for the issues regulation is meant to address.
In doing so, metaphors and analogies do not just shape how we understand regulation and its components. They also influence our assessment of the regulatory options at hand. When the AI Act casts AI systems as products, it addresses those AI-related issues that fit easily into a product safety framework while struggling with fuzzier matters such as fundamental rights. These insights are not original, of course, and they are in fact quite basic social science stuff. But it pays to think of metaphors and analogies in terms of what they allow us to see and do, as well as how they constrain our visions and actions.
Consider James Grimmelmann’s recent paper, The Structure and Legal Interpretation of Computer Programs. Drawing from scholarship on programming language theory, Grimmelmann proposes that software code is similar to legal text in some relevant respects. To sustain his argument, he distinguishes three types of meaning that a program might have, showing that any legal meaning of a system’s outputs or functioning is grounded in the system’s technical meaning but not entirely dependent on it, owing to a myriad of social factors. We can, therefore, interpret code in the broader context of legal interpretation and, in doing so, benefit from established techniques for interpreting the law.
By framing software as a form of text, lawyers can avoid excessive deference to technique: computer programs are not a brute feature of the world but something that can be made amenable to the kind of interpretation legal scholarship and practice thrive on. Grimmelmann extends that frame by showing how software-as-text can be read through a legal lens, a practice that is useful for various kinds of legal problems surrounding software. As he suggests, a legal interpretation of software can be useful in domains where the main relevance of software is as a bearer of meaning: for example, when we look at software as a form of expression, or in the case of smart contracts, which have their meaning specified in software.
The analogy between code and legal text suggests, at the same time, that the legal interpretation of software code can be of limited use, or even misleading, in certain circumstances. Legal scholars and practitioners have long been aware of the distinction between “law in the books” and “law in action”, that is, the idea that the behaviour of legal institutions does not always match the expectations we might form from reading legal text. Likewise, Grimmelmann acknowledges from the start that “code” is an abstraction, as the effects of that code will depend on factors such as the hardware on which it is executed, the compiler or interpreter that transforms code instructions into an executable form, or the data that feeds a machine learning system. Furthermore, the same software may play different roles depending on its context: for example, a decision-making system produces different effects if it is used to automatically carry out the decision it calculates than if that decision is just a recommendation for a human.
Abstracting away these socio-technical elements of software is an essential technique for development. Without such abstractions, a developer would need to consider all the complex dynamics between the various elements involved in a computer system just to perform the most basic functions. Instead, they treat those elements as black boxes: a programmer who uses a pre-defined library of AI functions does not need to think about how that library carries out its mathematical operations, just about what kinds of inputs it needs to produce the desired outputs. If lawyers are interested in questions about the meaning of software, they might benefit from these abstractions, as Grimmelmann aptly shows in his paper.
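To make that black-box relationship concrete, here is a minimal Python sketch. I use scikit-learn purely as a stand-in for any pre-defined AI library; the dataset and model choice are my own illustrative assumptions, not anything taken from Grimmelmann’s paper.

```python
# A minimal sketch of abstraction at work: scikit-learn stands in for
# any pre-defined AI library. The programmer specifies inputs and
# desired outputs; the library's mathematics remains a black box.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # inputs and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # solver internals abstracted away
model.fit(X_train, y_train)                # no optimization math required
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

Nothing in this snippet reveals how the underlying optimization works, and that is precisely the point: the library’s internals stay out of view.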
Yet some important legal questions depend precisely on the kinds of factors that a focus on code abstracts away. For example, large-scale data centres and computing clusters have a substantial environmental and social footprint, and interventions directed at code (such as mandating the use of more efficient techniques) will have a limited impact on it. Other issues cannot be diagnosed by looking at the code alone. Algorithmic discrimination, for example, can be shaped not just by the code and how the algorithms are trained but also by the data used to produce inferences about individuals and by how the deploying organizations use those inferences. In such cases, the meaning of software code provides, at best, an incomplete map of the issues at hand.
In addition to these constraints on problem framing, a focus on code also affects the kinds of tools lawyers might borrow from technological disciplines. For example, Grimmelmann’s focus on code as text suggests that lawyers might benefit from engaging with programming language theory rather than AI as they seek to understand the legal implications of code. But if we are looking at issues that cannot be reduced to code, other kinds of technical expertise might be needed, such as those offered by software engineering. We might also come up with other kinds of legal intervention: for example, instead of mandating that legal rules be encoded in software, regulators might establish requirements directed at the use context of a software system or at the organization that deploys it.
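For a sense of what “encoding legal rules in software” can look like, here is a deliberately simplified Python sketch. The rule, the threshold, and the function are hypothetical illustrations loosely inspired by Article 8 GDPR, not a statement of what any regulator actually requires.

```python
# A hypothetical, deliberately simplified encoding of a legal rule:
# an age-of-digital-consent check loosely inspired by Article 8 GDPR.
# The threshold below is an assumption for illustration; member states
# may set different ages, and real compliance depends on context that
# code alone cannot see.
DIGITAL_CONSENT_AGE = 16  # assumed default threshold

def can_consent_alone(age: int) -> bool:
    """Under this toy rule, may a person consent to data processing
    without parental authorization?"""
    return age >= DIGITAL_CONSENT_AGE

print(can_consent_alone(17))  # True
print(can_consent_alone(14))  # False: parental authorization needed
```

The sketch also illustrates the limits discussed above: everything that matters about the rule’s application in context stays outside the code.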
While I picked Grimmelmann’s paper as an example of these trade-offs and limitations, I do not think they are a flaw in his argument. The paper makes a good case for looking at software with legal tools, providing some interesting insights along the way. The shortcomings and trade-offs I mapped above apply to his work, my work, and everyone else’s, as we cannot help but rely on analogies and metaphors to make sense of complex technologies. What I said above should not, therefore, be read as a critique of this specific proposal. Instead, I suggest a pragmatic engagement with metaphors and analogies in regulation. We should think not just about whether we are offering a suitable description of an object or process, but about what kinds of insights and actions are enabled (or curtailed) by our frame and about what we expect to do with the metaphor or analogy. By keeping these questions in mind, and by trying to understand where our models of the world break down, we might arrive at more useful outcomes. Or, at least, know when we are barking up the wrong tree.
Reading recommendations
Niels Åkerstrøm Andersen and Justine Grønbæk Pors, ‘On the History of the Form of Administrative Decisions: How Decisions Begin to Desire Uncertainty’ (2017) 12 Management & Organizational History 119.
Drawing from the study of Danish administrative bodies, the authors identify various moments in the evolution of administrative decision-making. In particular, they point out that modern administrative practices often create uncertainty as a way to carve out new possibilities for action in increasingly complex scenarios. As the article shows, it often pays to historicize seemingly static concepts such as “decision-making”.
Corinna Coupette and others, ‘Law Smells: Defining and Detecting Problematic Patterns in Legal Drafting’ (2023) 31 Artificial Intelligence and Law 335.
In software engineering, “code smells” are patterns in code that, to the experienced eye, signal that a piece of code might become difficult to understand or maintain. The authors draw on this concept to coin the idea of “law smells”, that is, patterns in legal text that can be problematic, and then show how some of the techniques used to identify code smells can be adapted to automate the detection of law smells. Once these smells are detected, legislators may address them or deliberately leave things unchanged. But ex ante diagnosis of such potential issues might save much effort down the line and perhaps even avoid undesirable effects in the application of the law.
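To give a flavour of what automated smell detection can look like, here is a toy Python sketch that flags one very simple “smell”: overly long sentences. This is my own illustration, not the authors’ pipeline, and both the threshold and the sample text are assumptions made up for the example.

```python
import re

# A toy "law smell" detector, not the authors' pipeline: it flags
# overly long sentences, one simple pattern that can make legal text
# hard to read, much as overly long functions are a classic code smell.
MAX_WORDS = 40  # assumed readability threshold, chosen for illustration

def long_sentence_smells(legal_text: str) -> list[str]:
    """Return sentences whose word count exceeds MAX_WORDS."""
    sentences = re.split(r"(?<=[.;])\s+", legal_text)
    return [s for s in sentences if len(s.split()) > MAX_WORDS]

# Made-up sample in the style of regulatory drafting.
sample = (
    "The provider shall ensure that the system is designed and developed "
    "in such a way that its operation is sufficiently transparent to "
    "enable deployers to interpret the output and use it appropriately, "
    "taking into account the generally acknowledged state of the art and "
    "the intended purpose of the system. Records shall be kept."
)
for smelly in long_sentence_smells(sample):
    print("Long sentence:", smelly[:60] + "...")
```

Real law-smell detection is, of course, far more sophisticated than counting words, but the basic move is the same: turn an expert intuition about problematic patterns into something a machine can scan for.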
Jacob Feldgoise and others, ‘Studying Tech Competition through Research Output: Some CSET Best Practices’ (Center for Security and Emerging Technology, 26 April 2023).
An interesting methodological discussion of how to measure research outputs at the national level, especially in the field of AI. The authors offer various key insights and stress the need for transparency, since measurement results can vary considerably depending on their premises. But, first and foremost, one must be aware of what is being measured: as the authors highlight, research output leadership does not necessarily translate into technology or innovation leadership.
OECD, ‘AI Language Models: Technological, Socio-Economic and Policy Considerations’ (Organisation for Economic Co-operation and Development 2023) DSTI/CDEP/AIGO(2022)1/FINAL.
A useful primer for debates on Large Language Models.
Thanks for reading so far! Please consider subscribing to this newsletter and spreading the word:
And now, for some otters:
World Otter Day
Every year, the last Wednesday of May is World Otter Day: a date for raising awareness about these lovely mustelids and the risks they face. There are 13 species of otters around the world, native to the Americas, Africa, Asia, and Europe. Some of them live in the sea, while most are riverine creatures. And all of them are threatened with extinction.
Various factors drive these extinction risks, such as pollution and conflicts with human fishers who fear that otters will steal their fish. But a major driver is the illegal trade in otter pelts and live specimens, amplified in recent years by the animals’ growing popularity. Otters are not suited to life as pets, and many of them die either during capture or because of the conditions they are subjected to in captivity. However adorable they may be, videos of otters in domestic environments and cafés are deeply problematic, so I try to avoid them these days.
Instead, I will leave you with some videos of otters from aquaria, shelters, and other places where they are more suited to live.
Though Grammarly certainly helped with that.
Mastodon wasn’t really my jam due to all the tone policing. Bluesky seems to offer a more interesting approach to federation, but I’m mostly a lurker there.
Twitter threads are rarely as useful as their creators might think, and longer tweets run against the network’s core value proposition.