Promises and expectations (AI, Law, and Otter Things #33)
Today's newsletter has some news about my dog, discussions and reading recommendations about AI regulation, and some sci-fi comments. As well as the usual otters, of course.
Hello, dear reader! You are probably here to see something on AI and law—maybe on science fiction, too—but I will once again begin with some dog-related comments.
Winnie has been living with us for about three and a half months now. She is incredibly sweet, though a bit too protective of our home, and has learned everything we have tried to teach her so far. However, Winnie was very scared of the outside world. This fear is entirely understandable: she lived in a secluded dog shelter before moving to a city that received more than 5 million tourists last year (and used to receive about three times as many before the pandemic). The sheer number of people moving around made her quake with fear. After the first few days, we gave up on taking her for walks.
All of this started to change in the last few weeks. Whereas she used to shrink back whenever we opened the door, Winnie began to show real curiosity about the outside world. So we spent more time with her in the hallway until she felt comfortable leaving our apartment building. She is still very scared whenever a bicycle appears, and slightly less so of cars and trucks, but she now seems to actively enjoy playing outside.
Why am I speaking of Winnie? First of all, because I am proud of her and looking for any excuse to share photos of her here. But this story also made me think of a paper I presented back in June. In that paper, I argue that legal analyses should engage with technical arguments to assess whether an AI system delivers on its promises. Just like we expected that Winnie would be interested in taking long walks, whoever deploys an AI system usually has high expectations: gains in efficiency, increased agility, reduced costs, and so on.
But those promises do not always hold, for a variety of reasons:
Contextual constraints, such as a country dog having to deal with large crowds, or an AI system operating outside the conditions it was designed for;
Impossible expectations: as much as I would like her to, Winnie has steadfastly refused to write this newsletter in my place. Likewise, an AI system is unlikely to parse context-dependent forms of humour, and it certainly cannot create a perfect compiler;
Poor execution: I have to admit that I am sometimes lazy in training her. This does not help with my goal of taking her out, but it is certainly less harmful than biases in AI training or faults in AI code.
This taxonomy is by no means complete, of course. Fortunately, a new paper published at FAccT '22 proposes just that: a comprehensive map of how AI systems may fail to deliver the functionality they promise. By engaging with these technical and non-technical modes of failure, we can respond to the actual risks and challenges AI poses instead of merely taking the promises of technology users and developers at face value.
As Lee Vinsel has argued, accepting the claims of technological actors rather than engaging with them is a dangerous posture. It can lead us to unwarranted optimism about what technology can deliver and make us overlook various kinds of harm. But even a critical posture is dangerous if it takes technological promises at face value, as it may point us towards addressing the wrong kinds of risk and making compromises that should not be made.
For example, the European Commission seems to expect EU citizens to accept mass surveillance of online communications in pursuit of the laudable goal of combating child sexual abuse. But academics, civil society organizations, politicians, and technical experts have pointed out several technical and political problems with the technologies that would, in practice, be needed to comply with requirements regulators paint as technology-neutral. The impact assessment report accompanying the proposed regulation does not address these concerns; instead, it handwaves the issue without offering much ground for its minority belief in the effectiveness of filtering technologies. The actual technical trade-offs in the solutions the law proposes are therefore relevant for understanding the proper balance between the various principles at stake.
In short: do not leave technical promises unchecked. Unless you want to produce another Network Dilemma, and nobody needs that.
The final frontier
Now, let us move on to a different kind of promise. The last few weeks were quite enjoyable for my nerdy side. By now, you have probably seen the awesome photos produced by NASA's new telescope, but they are always worth seeing again. As striking as these photos are, they are merely a hint of what a space-based telescope can deliver. Though I am by no means as up to date on astronomical research as teenage me was, I surely look forward to seeing what scientists from around the world will manage to produce from such a powerful data source.
To continue in space—but now in the realm of fiction—I have to say that Star Trek: Strange New Worlds delivered on its promise of being a spiritual successor to the Original Series. While it retains the short seasons typical of modern shows rather than going with a 26-episode monster, it manages a story that is much more light-hearted and episodic than recent Star Trek shows. The result is a solid run of episodes, probably the best first season Trek has ever had. Some episodes, including the series opener and the finale, are instant classics, and even the less thrilling ones are still enjoyable. I therefore reinforce my recommendation for this show.
Finally, I want to speak of a case in which my expectations were happily defied. Sandman was a very important cultural reference for me as I grew up, thanks to its cultural and historical richness and its gripping narrative. Since then, however, I have not enjoyed anything else by Neil Gaiman. Even Good Omens felt a bit uneven to me, though the more Pratchett-esque parts caught my attention. Because of that, I was afraid my appreciation for Sandman would take a hit if I read it again at the ripe old age of 31. Fortunately, that was not the case: it not only holds up quite well as a work of art, but it has also aged well in terms of social mores and humanistic values. Totally worth revisiting, or even reading for the first time if you are not much of a comics person (as I am not).
Reading recommendations
Recently, most of my reading has been work-oriented. One (kind of) exception is Anne Currie’s Panopticon series of sci-fi books. The series’s first book, Utopia Five, has it all: virtual worlds, climate change (and its consequences for computing), reflections on social orderings and politics, privacy, and futuristic-yet-not-magical artificial intelligence. It will likely appeal to science fiction readers.
As for academic stuff, there are a few texts that might interest you:
Andrew Bell and others, ‘It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy’, 2022 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2022).
AI literature tends to support one of two views on transparency: either there is an accuracy-explainability trade-off, or there is no loss in performance from adopting interpretable models. Based on two experiments, the authors argue things are more complicated—and that interpretable models are not necessarily more explainable than black boxes.
Upol Ehsan and others, ‘The Algorithmic Imprint’, 2022 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2022).
Drawing from an infrastructural approach to algorithms, the article introduces the notion of the “algorithmic imprint” to capture how wrongful decisions continue to produce harm after a system is deactivated. To show the usefulness of the concept, the article studies the A-levels algorithmic grading scandal, interviewing teachers and students in Bangladesh to show how the effects of algorithmic practices spread across time and space.
Päivi Leino, ‘The Institutional Politics of Objective Choice: Competence as a Framework for Argumentation’ in Sacha Garben and Inge Govaere (eds), The Division of Competences between the EU and the Member States: Reflections on the Past, the Present and the Future (Hart Publishing 2017).
The choice of the legal basis for an EU act is often presented as an objective constraint on legislation, and the CJEU case law certainly refers to it in such terms. The article shows, however, that the choice of legal basis is in fact a deeply political procedure in which both the limits of legislation and the appropriate procedure are negotiated. Litigation, far from being an antidote to this politicization of the choice of legal basis, is part of the political dynamics.
Kira JM Matus and Michael Veale, ‘Certification Systems for Machine Learning: Lessons from Sustainability’ (2021) 16 Regulation & Governance 177.
Certification—and the companion practices of standardization and labelling—often appears in AI regulation. The authors argue that sustainability certification, which focuses on processes, is a better analogue for what AI regulation needs than network standards, which rely on technical network effects to ensure compliance.
Niels ten Oever and Stefania Milan, ‘The Making of International Communication Standards: Towards a Theory of Power in Standardization’ (2022) 1 Journal of Standardisation.
Current theories of standardization overlook the role of power. The authors draw from a tripartite account of power—capital, politics, and ideology—to explain the dynamics between standardization actors.
Now, some otters
The very title of this newsletter carries a promise about its content. I have covered law, AI, and various other things. So there is nothing more appropriate than concluding this issue with the usual load of otters.