Errata (AI, Law, and Otter Things #32bis)
Corrections to the previous newsletter.
Apparently, two errors escaped my attention when I sent the latest issue of this newsletter.
The item ‘Shafi Goldwasser and others, “Is the EU Courts’ Toolbox to Tackle Scientific Uncertainty Sufficient?” (REALaw, 22 April 2022)’ is actually a conflation of two items: a blog post by Michał Krajewski and Mariolina Eliantonio (on the EU courts) and a preprint by Shafi Goldwasser and others (on technical aspects of AI).
When adding the hyperlinks to the papers, I ended up conflating the two descriptions. So, here are the two recommendations I meant to make:
Shafi Goldwasser and others, ‘Planting Undetectable Backdoors in Machine Learning Models’ [2022]
Much of my recent work deals with explanations of AI systems and their usefulness and limits as tools for the law. In this paper, the authors deal a strong blow to claims that explanation can help us trust AI systems: under standard cryptographic assumptions, it is impossible to detect whether a provider has inserted a backdoor into an ML model. To make things worse, the authors further argue that existing mitigation strategies have only a limited ability to address the problem. So, there is no guarantee that your explanation does not refer to a system tampered with through a backdoor, and yet such backdoors cannot be detected through inspection either. This article is thus a stark reminder that technical transparency, even when one has the resources to direct towards analysis, might not save us.
Michał Krajewski and Mariolina Eliantonio, ‘Is the EU Courts’ Toolbox to Tackle Scientific Uncertainty Sufficient?’ (REALaw, 22 April 2022)
A brief and interesting blog post on how EU courts have been approaching matters of scientific uncertainty. Judicial review in the CJEU is not limited to assessing the application of EU law but also looks at the factual bases of legal acts. In science-heavy cases, it does so by scrutinising the procedures followed, with respect to their precaution and use of information. However, current approaches engage only to a limited extent with the actual scientific debates and are thus constrained in their ability to reach the substantive elements of cases. This argument is likely to be very relevant for the AI Act, especially regarding large machine learning systems.