Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! Today’s newsletter features a sneak peek into my PhD dissertation, in which I talk a little bit about why and how I engage with technology-neutral regulation. Then, I will share a few recommendations. And, as usual, I will wrap up with a lovely otter.
Before that, some personal updates. My PhD defence will take place in a bit more than two weeks. If you happen to be in Florence (or want to join via Zoom), you can register to spend two hours listening to me talk about technology-neutral regulation and AI. Right before that, I hope to meet some of you at the ICON-S conference in Madrid! But, for now, let’s get back to technology neutrality.
Technology-neutral regulation as delegation
My PhD dissertation is titled “Delegating the Law of Artificial Intelligence: A Procedural Account of Technology-Neutral Regulation”. To put it shortly,1 I argue that we should understand technology-neutral regulation as a form of delegation of powers. When policymakers avoid making decisions about the technological contents of the law, they shift the power to make those decisions to other actors (whether public or private). Once we view technology neutrality in that way, we can better understand the consequences of regulatory design choices, and I use that framework to identify potential effectiveness and legitimacy gaps in the EU framework for AI law.
The novelty of this argument does not come from acknowledging the link between technology-neutral regulation and delegation. Scholars often acknowledge this link when writing about technology neutrality (see, e.g., this article by Crootof and Ard). What is new about my thesis is that I see delegation not as a side effect of technology-neutral regulation, but as its central feature.2 Viewing technology-neutral regulation in that way has at least two methodological advantages for technology law scholarship.
First, it allows us to make sense of the conceptual mess that surrounds “technology-neutral regulation” and related terms. The term is used everywhere, often as something inherently desirable, yet its meaning is taken for granted. Once we move past the intuition that technology-neutral regulation is somehow neutral about technology, we are left with a big question: what do we mean by neutrality in the first place?3 And different branches of the law give different answers to this question.
If you ask somebody working on economic regulation, they will likely frame technology neutrality as a non-discrimination principle: technology cannot be used as a factor to distinguish between market actors. In EU data protection law, by contrast, neutrality is seen as a non-avoidance mechanism: the law should remain applicable regardless of the technical means through which data is processed. These approaches can lead to very different outcomes, which can be especially problematic in cross-domain issues (such as AI regulation), where regulators from different backgrounds might interpret the same policy in different ways.
Various scholars have argued that policy reasons should lead us to prefer one formulation over the others.4 I propose, instead, that the common denominator between these different accounts is that they all require some form of delegation of powers. Every form of technology-neutral regulation means that the policymaker turns technology into somebody else’s problem, even if the reasons for doing so might vary. Viewing technology-neutral regulation as delegation thus allows us to give a coherent meaning to the term, one that encompasses most (if not all) of its current uses. Such a view does not dissolve the disputes outlined above, but it allows us to understand that they are disputes about policy aims (and thus about power) rather than conceptual arguments.
The second advantage I’ll discuss here is that viewing technology-neutral regulation as delegation suggests different ways to study regulation. As I have discussed before in this newsletter, the way we frame an issue highlights some of its aspects while downplaying others. Many traditional accounts of technology-neutral regulation view it as a property of the legal text: that is, a legal provision is technology-neutral if it does not refer to specific technologies. On this view, the question of whether regulation is technology-neutral must be evaluated through the interpretation of specific legal provisions.5
Putting delegation at the centre of technology-neutral regulation suggests that we should look at regulatory instruments in different ways. Instead of focusing on whether or not specific provisions mention a technology, we can look at how the pursuit of technology neutrality leads policymakers to shift the power to determine the technical contents of the law. Out of the box, this suggests a few questions: who gets to exercise that power? For which purposes? Under which conditions? All of these have little to do with whether the legal text can be associated with a particular technological artefact or method.
More importantly, it also suggests that technology law scholars can benefit from the vast body of scholarship on delegation developed by administrative lawyers. That scholarship can be used to refine those questions, identify methods to address them, and potentially design regulatory arrangements that avoid shortcomings such as the lack of democratic legitimacy associated with technical standardization, or the difficulties software developers might face when complying with broad regulation-by-design requirements.
With my thesis, I hope to show that looking at technology-neutral regulation in this way is both possible and desirable. I do so by articulating the implications of this procedural account, which views technology-neutral regulation as delegation, and applying it to the analysis of three issues: the production of technical norms in the AI Act, regulation by design, and the specification of technical transparency norms by courts and regulators. Hopefully, the brief outline above will entice you to join my defence, and I look forward to hearing your thoughts on these matters.
Reading recommendations
Enough about myself, now it’s time to highlight interesting works by other people.
Kars Alfrink and others, ‘Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI’ (2024) 10 She Ji: The Journal of Design, Economics, and Innovation 53.
Catherine Brinkley, ‘Hardin’s Imagined Tragedy is Pig Shit: A Call for Planning to Recenter the Commons’ (2020) 19 Planning Theory 127.
Paul Cairney and others, Making Policy in a Complex World (Cambridge University Press 2019).
Anastasia Ershova and others, ‘Constraining the European Commission to Please the Public: Responsiveness through Delegation Choices’ [2023] Journal of European Public Policy early access.
Nari Johnson and others, ‘The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment’ in FAccT ’24 (ACM 2024) 337.
Meg Leta Jones and Amanda Levendowski (eds), Feminist Cyberlaw (University of California Press 2024).
Barbara A Mellers and others, ‘Human and Algorithmic Predictions in Geopolitical Forecasting: Quantifying Uncertainty in Hard-to-Quantify Domains’ [2023] Perspectives on Psychological Science 17456916231185339.
Przemysław Pałka and Bartosz Brożek, ‘How Not to Get Bored, or Some Thoughts on the Methodology of Law and Technology’ in Bartosz Brożek and others (eds), Research Handbook on Law and Technology (Edward Elgar Publishing 2023).
Dorian Taylor, ‘Agile as Trauma’ (31 May 2022).
Francesca Trevisan and others, ‘Deconstructing Controversies to Design a Trustworthy AI Future’ (2024) 26 Ethics and Information Technology 35.
Last but not least, look at this otter
Thank you for reading this issue! Please consider subscribing if you haven’t done so already, and don’t hesitate to hit “Reply” to this email (or contact me in person or on social media) if you’d like to keep the conversation going:
See you around!
I would be tempted to use “Taking delegation seriously” as a title somewhere in my dissertation, but I can’t stand that particular legal meme (or Dworkin, for that matter).
One may also problematize the notions of “technology” and “regulation”.
See, e.g., Maxwell and Bourreau.
And it can be a tricky assessment: as Birnhack showed more than a decade ago, the law can embed certain assumptions about technology even if no specific technologies are mentioned. So, technology neutrality necessarily becomes a matter of degree rather than something that can be achieved in absolute terms. I agree with this point, but propose that neutrality must be evaluated in a different way.