Reading tips (AI, Law, and Otter Things #32)
In this issue, I suggest a few texts that might interest the reader, based on what I've been reading lately. And the usual dog and otter pictures, of course.
Dear reader, I've decided to go back to the roots of this newsletter. Unlike the past few editions, I actually had some ideas for things I wanted to write about. But developing the topics I want to explore would take more time than I have right now, so I'd rather do something else.
Instead, I will present some reading suggestions, accompanied by a short pitch for each one. I hope you find something interesting here!
Marija Bartl and Jessica C Lawrence (eds), The Politics of European Legal Research (Edward Elgar 2022).
A new, interesting volume that discusses the various approaches used to study the law in/of Europe, organizing them around four political axes: the politics of questions, the politics of answers, the politics of audiences, and the politics of the concept of law itself. The chapters on legal formalism and comparative administrative law are particularly interesting.
Filipe Brito Bastos, ‘Doctrinal Methodology in EU Administrative Law: Confronting the “Touch of Stateness”’ (2021) 22 German Law Journal 593.
In this paper, the author argues that EU administrative law is often read from categories and doctrinal arguments borrowed from national administrative law. Such readings, however, might mislead analysis by ignoring the distinctive institutional elements of the EU. To avoid this, scholarship needs to cast issues in EU constitutional terms and establish a dialogue with other fields of EU law.
Charly Derave, Nathan Genicot and Nina Hetmanska, ‘The Risks of Trustworthy Artificial Intelligence: The Case of the European Travel Information and Authorisation System’ (2022) European Journal of Risk Regulation.
Establishing “trustworthiness” is one of the key goals of the EU AI strategy, as seen in the AI Act. However, the authors argue that the European standard for trustworthy AI still leaves room for various harms to individuals and society. To make this point, the authors examine the legal framework surrounding the risk prediction algorithm in the European Travel Information and Authorisation System (ETIAS). The literature points out various harms that might arise from the algorithm, such as discrimination, further compounded by the opacity surrounding it. Despite these risks, the ETIAS algorithm is not seen as being at odds with “trustworthy AI”; indeed, the authors argue, it is part and parcel of the EU concept of trustworthiness.
Danilo Doneda and Rafael AF Zanatta, ‘Personality Rights in Brazilian Data Protection Law: A Historical Perspective’ in Marion Albers and Ingo Wolfgang Sarlet (eds), Personality and Data Protection Rights on the Internet: Brazilian and German Approaches (Springer International Publishing 2022).
This article, written by two prestigious Brazilian data protection scholars, examines how the evolution of data protection law in Brazil has been influenced by the local treatment of personality rights, particularly via consumer law. This historical perspective helps explain some elements of Brazilian law. In particular, the LGPD cannot be described as merely a “Brazilian GDPR”: the influence of personality rights and consumer protection has given it distinctive features, especially regarding the protection of collective rights.
Ryan Calo, ‘The Scale and the Reactor’ (2022).
To be honest, I am not a huge fan of this article as it stands right now. While I love the pun in the title, the author’s engagement with STS feels quite reductionist, as it paints the critical approaches to the field with too broad a brush, while failing to engage altogether with STS traditions from the Global South. Despite this significant issue, the author nevertheless highlights two important points. The first point is that law & tech, given its normative and pragmatic vocation, cannot be reduced to Latour-style descriptions of technological practice. Second, a nuanced framing of technologies is not always conducive to better legal decision-making. These points are very important and frequently overlooked in critiques of law & tech, and so I recommend this paper despite (and not because of) its treatment of the STS literature.
Mark L Flear, ‘Regulating New Technologies: EU Internal Market Law, Risk, and Socio-Technical Order’ in Marise Cremona (ed), New Technologies and EU Law (Oxford University Press 2017).
This chapter maps how the EU has approached the regulation of new technologies, with particular attention to the food sector. Addressing the risks stemming from new technologies requires a technical understanding of what is happening. Furthermore, the competences available to the EU push regulators towards covering these products through a market harmonization framework (as is currently the case with the AI Act). The author argues that the EU's adoption of market harmonization legislation as its primary regulatory instrument has negative components (disapplying national law) and positive ones (active regulation and incentive measures), and that this framing tends to reduce tech risk regulation to product safety debates, thus depoliticizing them.
Shafi Goldwasser and others, ‘Planting Undetectable Backdoors in Machine Learning Models’ (2022).
Much of my recent work deals with explanations of AI systems and their usefulness and limits as tools for the law. In this paper, the authors deal a strong blow to claims that explanation can help us trust AI systems: under standard cryptographic assumptions, it is computationally infeasible to detect whether a provider has inserted a backdoor into an ML model. To make things worse, the authors further argue that mitigation strategies for backdoors have limited capabilities to address the problem. So, there is no guarantee that your explanation does not describe a system tampered with through a backdoor, and the backdoor itself cannot be found through inspection. This article thus provides a stark reminder that technical transparency, even when one has the resources to direct towards analysis, might not save us.
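To make the idea of a backdoor concrete, here is a deliberately naive sketch. This is not the paper's construction (there, the trigger is cryptographically hidden and provably undetectable even with full access to the model); it only illustrates the general shape of the problem: a tampered model that agrees with the honest one on almost every input, except on inputs carrying a secret, provider-chosen trigger. All names and the keyed-hash trigger below are my own illustration, not anything from the paper.

```python
# Toy illustration of a backdoored model. NOT the paper's construction:
# the real result hides the trigger cryptographically, so that no
# efficient inspection of the model can reveal it.

import hashlib

def honest_model(x: str) -> str:
    """Stand-in for a legitimately trained classifier."""
    return "approve" if len(x) % 2 == 0 else "deny"

# Hypothetical secret held only by the (malicious) provider.
SECRET_KEY = b"provider-only-secret"

def has_trigger(x: str) -> bool:
    """The backdoor fires when a keyed hash of the input ends in '00'.

    Without SECRET_KEY, there is no obvious pattern to which inputs
    fire; each input triggers with probability roughly 1/256."""
    digest = hashlib.sha256(SECRET_KEY + x.encode()).hexdigest()
    return digest.endswith("00")

def backdoored_model(x: str) -> str:
    """Agrees with the honest model everywhere except on trigger inputs,
    where it always returns the attacker's chosen output."""
    if has_trigger(x):
        return "approve"
    return honest_model(x)
```

On the overwhelming majority of inputs the two models agree, so black-box testing, and any explanation computed from the model's ordinary behaviour, reveals nothing unusual; the paper's stronger claim is that this can be made to hold even against an auditor inspecting the model's internals.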
Michał Krajewski and Mariolina Eliantonio, ‘Is the EU Courts’ Toolbox to Tackle Scientific Uncertainty Sufficient?’ (REALaw, 22 April 2022).
A brief and interesting blog post on how EU courts have been approaching matters of scientific uncertainty. Judicial review in the CJEU is not limited to assessing the application of EU law but also looks at the factual bases of legal acts. In science-heavy cases, it does so by checking whether the procedures followed were suitably precautionary and made adequate use of the available information. However, current approaches have limited engagement with the actual scientific debates and are thus constrained in their ability to reach the substantive elements of cases. This argument is likely to be very relevant for the AI Act, especially regarding large machine learning systems.
Daniel E Walters, ‘Taking Democracy Seriously in the Administrative State’ (LPE Project, 16 May 2022).
In a long-ish blog essay, the author proposes that democratizing the administrative state cannot be done within a consensus-based approach. Instead, we need to find means to incorporate agonistic debate into administration, seeing the existence of irreducible conflict not as a failure of procedure but as a fact of democratic life.
And now, some meta otters: