Dear reader,
It seems we’ve finally arrived at the fortieth issue of this newsletter. It took me a bit longer than initially planned, as I eventually dropped any semblance of a regular publication schedule. Sometimes life got in the way, and sometimes I felt I had nothing interesting to say (or at least not enough time to write something interesting). Even so, it is flattering to see that people actually take the time to read some of my rants and recommendations, which has led to some exciting conversations over the past year and a half(-ish). So, thank you, and I hope you stick around for future issues.
Today’s issue is a bit more monographic than usual. Below, I share some initial thoughts on the Brazilian and EU proposals for AI regulation. Afterwards, I recommend some stuff that readers might find interesting, and then I show the usual otters. I hope you enjoy the issue!
Some news on AI legislation
I usually try to avoid commenting on the news here. In part, this follows from my irregular publication schedule. There’s also a bit of professional pride involved: I am not a particularly quick thinker, so I try to avoid putting myself in a position where I might say something wrong (or, worse, uninteresting) for clicks. But today, I feel inclined to write about legislative developments in Brazil and the EU.
Earlier this year, the Brazilian Chamber of Deputies (the lower legislative house) passed a bill on “foundations, principles and guidelines for the use of AI in Brazil”. In my professional opinion, the bill provides a great example of how not to regulate AI. Not only does it ignore the last few years of critique of principle-driven approaches to AI regulation, but it also provides a particularly sloppy set of principles. Once the bill was forwarded to the Senate, the upper house did the sensible thing and created an expert commission to support the drafting of an alternative bill.
After a few months of work, this commission finally published a 912-page-long report, including a preliminary draft of the alternative bill. Unlike the Chamber’s bill, the new preliminary draft follows a risk-based approach patterned after the EU AI Act, in which the intended application of an AI system determines the applicable legal regime. But, unlike the AI Act, the Brazilian bill establishes rights for people affected by AI systems and includes more extensive (and public) risk assessment requirements.
This is not to say, however, that the Brazilian bill is without flaws. I hope to spend more time with it shortly, but for now, I would like to highlight two structural issues that the bill shares with the AI Act. The first concerns the risk framing common to both bills. Brazilian and EU legislators take for granted that the risks associated with AI systems can be specified in terms of the application for which a system is intended. Furthermore, they adopt an actuarial model of risk: risk arises from discrete events, and the likelihood and severity of the adverse consequences of those events can be measured ex ante. As I argue with Nicolas Petit in a paper on the AI Act, both of these assumptions are ill-suited to describe the kind of harm to fundamental rights that AI might introduce. And, as Margot Kaminski shows, the particular choices EU (and now Brazilian) legislators made about AI risks overlook some potentially useful tools for risk management, such as mechanisms for increased stakeholder involvement or design mandates. If legislators do not address these shortcomings of risk-based approaches, we might end up with a regulatory model that fails to cover critical dimensions of the fundamental rights AI regulation is meant to protect.
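To make the actuarial framing a bit more concrete, here is a minimal sketch of the model of risk both bills seem to presuppose; the notation is mine and appears in neither text:

$$R(s) = \sum_{e \in E(s)} p(e) \cdot h(e)$$

where $E(s)$ is a set of discrete adverse events attributable to system $s$, $p(e)$ is the ex ante likelihood of event $e$, and $h(e)$ is the severity of its consequences. The difficulty sketched above is that, for diffuse or cumulative harms to fundamental rights, neither the event set nor the likelihood and severity of each event can be pinned down in advance with any confidence.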
These issues with the risk framing are compounded by the adoption of a narrow definition of AI. Article 4, I, of the Brazilian preliminary bill is patterned after the AI Act definition. However, it introduces a new element: an AI system uses “approaches based on machine learning and/or logic and knowledge representation” to generate predictions, recommendations, or decisions. On the one hand, this definition creates a risk of under-inclusion, as the providers and users of AI systems may adopt various strategies to game the definition and argue that they do not use any such techniques to produce outputs with real-world significance. On the other hand, it might also create a problem of over-inclusion, as a broad interpretation of “approaches based on…logic” would include…well, basically any computer system. As I mentioned in a previous issue, I am somewhat sceptical of the value of narrowly defined AI laws,1 but this “involuntary” extension of the bill’s scope would also render it applicable to contexts its governance mechanisms are ill-equipped to handle.
It would be naïve, however, to deny that there are strong economic and regulatory pressures towards a narrow definition of AI as a regulatory object. In fact, the Council of the EU’s general approach to the AI Act also narrows down the definition of an AI system. The Council’s definition of AI system2 drops the list of technologies in Annex I of the Commission proposal, but only by embedding specific technologies into the definition itself. And those technologies are the same ones mentioned in the Brazilian bill: machine learning and logic and knowledge representation. This means that the statistical approaches initially covered in the Commission proposal are no longer considered AI systems, even though they play a substantial role in many systems used to generate the decisions, recommendations, and other outcomes that the AI Act is meant to address.3
In addition to the narrower definition of an AI system, the Council’s general position also narrows down the definition of high-risk systems. The list of high-risk applications in Annex III AI Act remains, and it even gains some valuable additions, such as the inclusion of digital infrastructure among the protected critical infrastructures. But the newly added Article 6(3) AI Act specifies that a system is not considered high-risk if it is “purely accessory in respect to the relevant action or decision to be taken”. As with the technology-specific definition mentioned above, the notion of a “purely accessory” application lends itself to gaming attempts. Case law might eventually give this expression a precise meaning, and administrative guidelines might help before that. But, given the controversies surrounding the definition of a “decision based solely on automated processing” in the GDPR,4 clarity might take a long time to appear.
Things you might be interested in
I started my legal education at a relatively late age (about the age at which some of my European colleagues started their PhDs), and with some specific interests in mind.5 Because of that, and because of the challenges involved in studying while working full-time (and/or finishing my master’s), there was a strong component of min-maxing involved. These priorities gave me a strong grounding to start my graduate studies in law, but I often notice gaps in my knowledge as I engage with topics outside my comfort zone in public law. So, I’ve been trying to revisit some topics in private law to supplement my education. One book I have particularly enjoyed in this respect is Jan Smits’s comparative introduction to contract law.
Within my own area of specialization, I would like to point you towards a few recent and not-so-recent resources:
Giulia Gentile and Orla Lynskey, ‘Deficient by Design? The Transnational Enforcement of the GDPR’ (2022) 71 International & Comparative Law Quarterly 799.
European Commission, Legal Service, 70 Years of EU Law. A Union for Its Citizens (Publications Office of the European Union 2022).
Bert-Jaap Koops, ‘Should ICT Regulation Be Technology-Neutral?’ in Bert-Jaap Koops and others (eds), Starting Points for ICT Regulation. Deconstructing Prevalent Policy One-Liners. (TMC Asser Press 2006).
Albert Sanchez-Graells, ‘Governing the Assessment and Taking of Risks in Digital Procurement Governance’ <https://papers.ssrn.com/abstract=4282882> accessed 8 December 2022.
Meg Leta Jones, ‘Does Technology Drive Law? The Dilemma of Technological Exceptionalism in Cyberlaw’ (2018) 2018 Journal of Law, Technology & Policy 101.
Gregory Porumbescu, Albert Meijer and Stephan Grimmelikhuijsen, Government Transparency: State of the Art and New Perspectives (1st edn, Cambridge University Press 2022).
Pierre Schlag and Amy J Griffin, How to Do Things with Legal Doctrine (The University of Chicago Press 2020).
Finally, there are two pieces that I have not read yet but that jumped to the top of my queue. Bruno De Witte just published an article in the European Constitutional Law Review, in which he offers a conceptualization and defence of doctrinal scholarship. As I argued in a previous issue,6 trendy legal scholarship is often too quick to discard the value of la doctrine. This is an understandable reaction to how self-absorbed traditional legal research can be. Still, it risks going too far in the other direction and missing what is distinctive about legal scholarship as a discipline. So, I am curious to see how this paper helps us avoid hermeticism on the one hand and methodological reductionism on the other.7
The other one is a new book by Roman Frigg on the philosophy of models and theories in science. As some of you might know, my research as a computer scientist focused on the computational simulation of social phenomena. Back then, Frigg’s work with Julian Reiss on the philosophical issues surrounding simulation (or the lack thereof) was one of my inspirations for engaging with the philosophical foundations of modelling practices. So, I am very curious to see this book-length treatment of the state of the art of modelling in science, especially as I grow increasingly convinced that discussions on the framing of technology law issues could gain from engaging with their underlying models. But that is a topic for a future issue rather than something the book itself addresses; the book might nevertheless be an interesting read even if you are not a philosopher of science.
Finally, the otters
More specifically, an otter species that lives in the Brazilian region I’m originally from:
Please feel free to contact me with your thoughts on these topics. And, if you are not subscribed yet, click on this button to receive future issues by email:
See you next time!
Especially as some important AI-related concerns are, in fact, concerns with commercial software in general.
Article 3(1) AI Act (Council general position).
But at least this definition drops the pretence of being a “technology-neutral” definition of AI, a laughable claim that was present in the Commission proposal and has been echoed elsewhere. Acknowledging dependence on specific technological models is better than producing technology-specific regulations and then claiming technological neutrality.
I got into law to study legal philosophy, and my original plans were to make a living as a tax lawyer. But then I started working with tech law and, well, look at where that has led me.
But I am too lazy to provide a link right now.
My opinions on this subject can be a bit intense, but I will not put them in writing for now.