Hello, dear reader, and welcome to an otter issue of this newsletter!1 I’m writing this newsletter on a day off from work, so my approach is a bit different from usual. Instead of focusing on something from my research, today’s issue covers a personal interest of mine: science fiction. If you’re here for the AI stuff, the discussion might interest you anyway, but if that’s not the case, please feel free to skip ahead to the usual reading recommendations (or the cute animal pics at the end).
The Butlerian Jihad as a regulatory model?
Sci-fi is not a new topic for this newsletter. In the past, I’ve written about sci-fi and regulatory imaginaries and recommended some books, TV shows, and games. But, to a large extent, my discussion of sci-fi has focused on Frank Herbert’s Dune cycle, about which I have even written a draft paper that, alas, is now in limbo. If I ever need to trace back my intellectual influences, I will probably spend a lot of time writing about how my reading of Dune at an early age pushed me towards academia. Today, however, my focus is a bit narrower: I want to come back to a particular event in that fictional universe.
Dune differs from many other sci-fi universes in its lack of advanced digital technologies. It features no hyperintelligent AIs or robots, and computers don’t even appear as navigation tools. Yet, I have argued in a previous issue that Dune can nonetheless be useful for discussing AI regulation if we focus on the event that led to the prohibition of AI: the Butlerian Jihad.
This Jihad, which happened thousands of years before the first book in the Dune series,2 involves two separate moments: the initial rejection of AI technologies and their subsequent ostracizing. In the first moment, humans rejected intelligent technologies and destroyed all existing instances of such technologies.3 Unlike the Age of Simplification in A Canticle for Leibowitz, this was not a wholesale rejection of technology: humans kept flying machines and even more sci-fi-ish technologies such as spaceships, energy shields, and laser weapons. Instead, it was a narrower rejection of the replacement of human cognition by the "god of machine-logic", which led to a strong cultural taboo against anything resembling automation.
Such a rejection would not be out of place in many of the calls for AI regulation. It can be straightforwardly connected to proposals aimed at the bugbear of AGI. However, it is no less compatible with more realistic and moderate proposals that some domains, such as criminal trials, should remain outside the reach of present-day AI. And it is in that sense that the Butlerian Jihad made its way into regulatory discourse: as an example of a limiting moment, which might be seen as necessary, as an overreaction, or as a scenario to avoid.
In all these readings, the Butlerian Jihad supplies an illustration, fictional though it may be, that the path of technological development isn’t linear. Such an example is particularly valuable because real-world examples of restraint in technological development, or of the abandonment of established technologies, are relatively uncommon (though, as Matthijs Maas shows, far from nonexistent). So, the reference to this particular piece of science fiction can play a useful rhetorical role in marshalling efforts to support proposals for AI bans—or to dismiss them.
References to the Butlerian Jihad might also be useful to illustrate failure modes of such bans. For example, political actors in the Dune universe often subverted the spirit of the ban for reasons of realpolitik,4 but societal controls over these actors were weakened by the feudal structure that developed once humanity became far more reliant on human labour for its maintenance. This suggests that any proposal to ban certain technologies would do well to consider potential second-order effects.
The second moment of the Butlerian Jihad—the taboo—might also be of interest to those thinking about technological development. As Frank Herbert understood well, the destruction of existing AI technology would mean little if the reinvention of these technologies were an unavoidable fact of life. But the avoidance of reinvention did not come through law; it depended instead on an ingrained taboo against automation that persisted long after its original cause had vanished. People interested in the legal construction of technology might invoke this situation as an example of how the use of power and the establishment of norms can shape technological development, while those interested in long-term governance might use it as an example of the need to ensure that policies endure. A reading of Dune, therefore, can illustrate the interplay between social structures and technological evolution.
One should not, of course, read too much into references to the Butlerian Jihad. Our universe differs from Frank Herbert’s imagination in various important ways, such as the absence of sandworms and of the spice that allows human society to replace machines with superhuman thinkers. Still, the Butlerian Jihad provides another example of how sci-fi authors have faced questions that overlap with current challenges. These examples do not provide definitive answers to the challenges within the fictional universe, let alone to those facing us in the real world. But they might turn out to be useful as illustrations of how society can respond to new challenges and as a source of insights that we might tap into. With caution, of course.
Events and stuff
Next week, I’ll be in Brussels for the first time, presenting a working paper I’m developing with Anca Radu. The paper’s titled “The Brussels Side-effect: how the AI Act can reduce the global reach of EU AI regulation”, and it follows up on my working paper with Nicolas Petit by examining how the AI Act’s regulatory constraints may lead to the exportation of regulatory standards that are at odds with the so-called “European Approach to AI”. We give particular attention to the interplay between the AI Act and the proposed Council of Europe convention on AI. If you are in Brussels (not sure if there’s an online option), you can register here, and don’t hesitate to get in touch if you’re interested in the manuscript.
On a less serious note, I plan to get back to tabletop RPGs, so I’d appreciate any tips about interesting systems (especially those that can be played with two players or a player and a GM). Also, please let me know if you’re playing online, as that’s an option I’d be interested in.
Reading recommendations
Since I’m on a day off, today’s recommendations will not be as discursive as usual.
Sci-fi recommendations
B Baade, “The Law of Frank Herbert’s Dune: Legal Culture between Cynicism, Earnestness and Futility” (2022) Law & Literature (advance online publication).
J de Cooman and N Petit, “Asimov for Lawmakers” (2022) 18(1) Journal of Business & Technology Law 1.
K Kennedy, “The Softer Side of Dune: The Impact of the Social Sciences on World-Building” in Exploring Imaginary Worlds (Routledge 2020).
A Martine, A Desolation Called Peace (Tor Books 2021).
M Travis, “Making Space: Law and Science Fiction” (2011) 23(2) Law and Literature 241.
M Travis and K Tranter, “Interrogating Absence: The Lawyer in Science Fiction” (2014) 21(1) International Journal of the Legal Profession 23.
General recommendations
F Beigang, “Reconciling Algorithmic Fairness Criteria” (2023) Philosophy & Public Affairs (early access).
A Burke, “Occluded Algorithms” (2019) 6 Big Data & Society (article 2053951719858743).
JE Cohen, “Affording Fundamental Rights: A Provocation Inspired by Mireille Hildebrandt” (2017) 4(1) Critical Analysis of Law 78.
T Straube, “The Black Box and Its Dis/Contents: Complications in Algorithmic Devices Research” in M de Goede and others (eds), Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork (Routledge 2019).
N Varsava, “The Gravitational Force of Future Decisions” in T Endicott et al (eds), Philosophical Foundations of Precedent (Philosophical Foundations of Law, Oxford University Press 2023).
M Yurrita et al, “Disentangling Fairness Perceptions in Algorithmic Decision-Making: The Effects of Explanations, Human Oversight, and Contestability” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Association for Computing Machinery 2023).
No otters today, sorry!
Instead, I will leave you with Tomé (our fluffy cat, who’s keeping my father-in-law company in Brazil) and Winnie.
Hope you enjoyed this issue (or at least the pet pics), and see you around! Please consider subscribing in order to receive future issues:
1. Sorry!
2. It is covered in some of the prequels, but I’m not considering them here.
3. The functions usually ascribed to such automation tools in sci-fi (and in modern life) are instead performed by superhuman individuals: ships travel across the galaxy under the guidance of Guild Navigators imbued with pre-cognitive powers, while administrative tasks in governments are performed by Mentats trained to perform computational and information storage tasks with their brains.
4. See, e.g., the use of Ixian machines by various political actors in the Duneverse.