What do we talk about when talking about AI? (AI, Law, and Otter Things #26)
Welcome to another issue of "AI, Law, and Otter Things"! Today's newsletter is somewhat brief, containing mostly a rant about the definition of AI, followed by the highlights from The Digital Constitutionalist, and the usual otters.
Artificial intelligence seems to pop up everywhere nowadays. Lawmakers use it as a frame for regulation; washing machines claim to use it; even educated fleas talk about it. But, while it might be easy to find examples of discourse on AI or artificial intelligence systems, the term "artificial intelligence" itself is more slippery. As a canonical textbook puts it, researchers converge around four approaches to defining "artificial intelligence":
AI as the attempt to reproduce human cognition;
AI as the attempt to automate rational cognition;
AI as the attempt to reproduce human behaviour; or
AI as the attempt to automate rational behaviour.
These approaches result in different standards for evaluating a supposedly intelligent system. They also provide different roadmaps for the development of artificial intelligence. As a result, there are feuds between researchers from different fields—especially when one approach becomes prominent in terms of results and/or funding. Yet, most researchers will likely agree that a deep learning system is "artificial intelligence" in a sense that makes no sense if applied to an Excel spreadsheet, even if the latter is used for decision-making.
Accordingly, recent legislative approaches define "artificial intelligence system" rather than AI itself. This approach is followed, for example, by the OECD, the European Union, and the Brazilian proposal for a Legal Framework for AI. As it seems much easier to point out whether a system is an AI system than to define the general content of "artificial intelligence", a system-centric approach would improve legal certainty and draw upon the broad consensus among experts.
I am very sympathetic to a system-based approach to AI, but one should not think it will solve all demarcation problems in AI regulation. Even the cleanest, most precise definitions will still have some edge cases, and deciding whether any given definition applies to a specific context might turn out to be hard in practice. At a more general level, any legal definition of AI involves a series of trade-offs. Some definitions might be over-inclusive and label as AI systems that one might not want to classify as such, like the Excel spreadsheet mentioned above. Other definitions might be under-inclusive: for example, a regulation of automated decision-making excludes recommender systems and other forms of decision-aiding systems. Understanding how these trade-offs are made in practice will allow us to better understand the potential impact of misclassification.
In light of these issues, some authors have proposed that we avoid speaking of artificial intelligence altogether. As an alternative, discussions on AI-related themes should always mention the human (and corporate) actors involved in a given application context, as well as the specific technologies being used. This approach allows us to engage with the specific technologies, making it easier to spot things such as spurious claims (e.g. the use of AI to revive phrenology) and downright discriminatory applications. The result would be more precise debates, which avoid the mystic aura associated with words such as "artificial intelligence" and "algorithms", often presented as the end of the debate rather than as objects subject to scrutiny.
Nevertheless, I am not totally on board with ditching "artificial intelligence", at least in the short run. For better or worse, "artificial intelligence" and its related terms, such as (argh) "algorithm", provide a unifying term to aggregate various debates that have something in common: the concern with social change enabled by digital technologies. Suppose we align our response solely to technical definitions. In that case, we might fail to consider the very real worries about discrimination, loss of dignity, erosion of the rule of law, and other concerns raised in the AI regulation literature. And, even if these concerns are the fruit of broader social trends and not of the inherent properties of AI technologies, they still warrant some response. As history clearly shows, oppression does not become suddenly lighter just because it relies on lo-fi, artisanal surveillance and mechanisms of domination.
Why does the definition of AI matter, then? We should be open to the possibility that, in some cases, the answer is "it does not". There are situations in which issues described in terms of AI turn out to have no AI involved at all. Even when AI is involved, the legally-relevant effects might be produced by human labour, opaque corporate or governmental structures, or even by issues related to digital processing with no AI involved. But even in those cases, AI technologies might be relevant in determining how the specific harms that generate social concern come to pass. So perhaps a definition of "artificial intelligence" is less useful as a tool for defining the scope of regulation and more useful as a tool for understanding how to design effective regulation for a given issue.
An SNL sketch on AI
I am not really the kind of person who shares Saturday Night Live—or, to be honest, much of American comedy—but this time I must make an exception:
New posts at Digi-Con
After a quite intensive first month, The Digital Constitutionalist continues its activities. A few highlights since the last issue of this newsletter:
I wrote a post recommending science fiction short stories;
Francisco de Abreu Duarte published his post on the EU regulation of content moderation and a poem on AI and war;
Cecil Abungu wrote on the governance of automated decision-making in developing countries;
Rachel Griffin presented an argument for moving platform governance beyond a focus on individual rights.
We have an open call for blog posts on the role of technologies in the Russian war of aggression against Ukraine, and some posts on this matter are already in our editing queue. In parallel with this call, we continue to welcome blog posts on legal matters, reaction texts in response to pieces we have already published, as well as science-fiction essays, original short stories, poems, and other artwork. Come publish with us!
And an otter thing...
UK readers might be interested in supporting the UK Wild Otter Trust: