Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! It’s been a month since my last post, but it was an eventful month. I went back to Brazil for the first time since starting my PhD, for a brief stay featuring a lecture at my alma mater (the University of Campinas) and lots of events with friends and family. After coming back, I addressed the last set of supervisor comments on my thesis, which is now undergoing language correction before final submission. And, now that the thesis is on its way, I’m finally catching up on work that got backlogged because of it, such as updating my AI Act paper with Nicolas Petit. All in all, there are lots of things happening, but nothing yet at a stage I can share in this newsletter.1

Unlike the previous issues, however, today I won’t be sharing cut content from my thesis. There are still many things I want to share with you on that front, and I am happy to say that the previous posts on roads not taken led to fascinating discussions. But today I want to talk about anything other than my thesis, so I’ll take a leaf from JOTWELL’s book and talk a bit about readings I enjoy.
For those of you who don’t know JOTWELL, it is an online journal where legal scholars publish blogposts about scholarship they like. It is a good way to find out what the (mostly US-based) people in your field are thrilled about and what they find interesting about a given book, article, or other substantive piece of writing. I had the ambition of doing something similar in this newsletter, but the effort fell through after a few issues. Now might be a good time to come back to it as a personal exercise.
During this final stretch of my dissertation, I’ve been largely in write-only mode. I’d try to think about things on paper,2 putting together various strands of what I read over the past few years and articulating them in terms of my account of technology-neutral regulation. In doing so, my reading became mostly instrumental: I was only reading what was directly relevant to the task at hand, and reading it with a view to engaging with those sources in my thesis. I believe this approach is unavoidable when one is finishing a text: comprehensiveness is illusory (even if one assumes everything relevant is written in a single language), so at some point one must resort to the old Fremen saying: “Arrakis teaches the attitude of the knife — chopping off what's incomplete and saying: ‘Now, it's complete because it's ended here.’”
Still, working in write-only mode is not really sustainable in the long run, at least for me. Once you acquire a solid grounding in your field (probably halfway through your PhD), you might get away with this kind of incremental addition to your knowledge base, especially if you are working within the kind of well-defined field that brings to mind Kuhn’s characterization of normal science. Yet, as Raul Pacheco-Vega often reminds us, reading is writing: one should read broadly not just to situate one’s work within the literature but also as a way to overcome writer’s block. If you are working in a field with less stable boundaries, reading broadly is also crucial for finding tools and concepts that can ground your work.
For those of us who work on tech law in particular, there is an additional reason for reading broadly: to avoid reinventing the wheel. As Bert-Jaap Koops points out, branches of legal scholarship related to tech are especially prone to this: because we write a lot3 and are always preoccupied with the latest piece of new-fangled tech, we keep rediscovering old insights.4 In fact, one would be hard-pressed to find good ideas in tech law scholarship that were not anticipated, in some fashion, by people writing in the 1970s or thereabouts.5 This is not to say that nothing is new: some of the best contemporary scholarship on tech innovates precisely because it brings to the table perspectives that were marginalized in those earlier debates. We can do a lot better than rehashing old arguments, but to do so we need to know what those arguments are in the first place.
I make no claim to being an expert on the history of technology law,6 even within my narrow field of AI regulation. In fact, I can’t even tell you who the historians of technology law are, except for Gloria Gonzalez Fuster’s long-standing research and collection of sources on the history of data protection. What I want to do, instead, is highlight a few writings that might interest people who are into the kinds of topics I cover in this newsletter.
Arthur C Clarke, ‘Superiority’ (1951) 2(4) The Magazine of Fantasy & Science Fiction 3.
Let’s start with some light reading. In this short sci-fi story, Arthur C Clarke tells of a spacefaring civilization that loses a war because of the technological edge it holds over its enemies. While the losers’ technical expertise was leaps and bounds ahead of their victors’, their attempt to win the war through innovation led them to adopt increasingly experimental and unstable technologies, disrupting the war effort beyond recovery. The story illustrates some topics that play an important role in tech law scholarship (such as the dangers of techno-solutionism) and others that should be more salient than they are (such as the brittleness of technological infrastructures).
Herbert A Simon, ‘Designing Organizations for an Information-Rich World’ in Martin Greenberger (ed), Computers, Communications, and the Public Interest (The Johns Hopkins Press 1971).
Attention is a concept that appears frequently in scholarship about technology. A few years ago, at least, one could hardly read about topics such as platform governance without encountering accounts of how online platforms deploy a panoply of tools to capture our attention. In this paper, Herbert Simon offers an overview of how attention becomes a scarce resource in an information-rich world and how organizations cope with this scarcity. As interesting as Simon’s analysis is, it is also fun to see that many of the critiques often raised against the view of organizations as mechanisms for filtering information are already present there. In particular, one can see a preoccupation with how centralization and filtering can ensure that high-level decision-makers are not forced to engage with the impact of their actions, especially when it comes to vulnerable populations.
Laurence H Tribe, ‘Legal Frameworks for the Assessment and Control of Technology’ (1971) 9 Minerva 243.
This paper can be seen as an ancestor of many debates now happening within technology regulation. Here, Tribe deals with the problem of measuring and addressing the uncertain effects of technology, especially the ripples that follow indirectly from the adoption of a given technology. Today’s literature is much more sophisticated in terms of regulatory interventions and forecasting mechanisms, but two key issues are already present: how can the law steer the path of technical development, if that is at all possible? And what kinds of legal instruments can be used to that effect? Given how often we need to come back to these discussions, I think this work (as well as Tribe’s 1972 paper in the Southern California Law Review and his 1973 book on the topic) might warrant a closer look by legal scholars, even those not dealing directly with the history of tech regulation.
Lisanne Bainbridge, ‘Ironies of Automation’ (1983) 19 Automatica 775.
My first English-language paper about the law dealt with automated decision-making, a topic that was all the rage in 2019. Everybody was talking about Article 22 GDPR: praising it, critiquing it, probing its limits. We could all have carried out sharper inquiries if we had paid more attention to earlier studies of what happens when humans find themselves interacting with or overseeing automated systems. Of course, some of us did cite those papers (and Bainbridge’s appeared quite often), but much of the optimism about human intervention that prevailed back then (and, to a lesser extent, persists even now, as seen in the AI Act) could have been tempered by engaging with the all-too-human issues people were already aware of 40 years ago.
Donald MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (The MIT Press 1990).
Back in the golden age of Twitter, Maaike Verbruggen, an expert on emerging technologies in the military field, curated threads with all sorts of amazing scholarly resources. One of her strongest recommendations was this tome by Donald MacKenzie, which looks at the evolution of the US nuclear missile programmes and the role that “accuracy” plays in them. The book tells a story that is interesting on its own merits, but people interested in law and tech have an additional reason to read it: many of the debates we have about technology today appeared earlier in military contexts. For AI regulation folks, the book discusses the origins of the “black box” metaphor bandied around in transparency discourse, how “accuracy” is neither a neutral target nor an inevitable consequence of technical development, the role of contingency and path dependence in the choice between technical alternatives, and the limits of methods for anticipating technological development, among many other topics. So this is another good place for those wanting to reap some of the incremental improvements made on the wheel instead of (or before) reinventing it.
Hope you get something interesting out of these recommendations, and please let me know about the old-school papers, books, and such you like to read, especially those underrepresented in traditional narratives about law and tech. Before leaving, please consider subscribing to this newsletter if you haven’t done so already:
See you next issue, folks!
For the Warhammer nerds out there, another big development is that I found an Assault on Black Reach box I had bought a lifetime ago and hadn’t bothered to paint. So, now I’m playing Orks, and I’ve had unexpected fun with kitbashes. Pictures are forthcoming as soon as I finish painting 70+ Waaagh! fellas, which may take a while.
Actually, a blank Word file. You don’t want to read my handwriting, and one of my favourite things about not being a computer scientist anymore is not having to use LaTeX.
I plead guilty, Your Honour. But I was young and I needed the money. Still need it, tbh.
Of course, the same is true of computer science itself. Just see the running gag of how Jürgen Schmidhuber claims to have invented every single major breakthrough in machine learning back in the 1990s or so.
Once again, I am not immune to that, to the extent that I have any good ideas at all.
Fun fact: I was actually a History major before switching to Computer Science. My change was motivated by a variety of reasons, not least financial ones and a lack of love for archival work, but my time at the Instituto de Filosofia e Ciências Humanas taught me some important things about handling sources and thinking about methodology. (It was also my first small step towards becoming a decent human being, but that’s another story that definitely doesn’t end there.)