Hello, dear reader! As I write these lines, the European Parliament is holding a press conference on its just-approved position for negotiation in the AI Act trilogues. Meanwhile, in Chicago, the traditional FAccT conference showcases scholarship on AI fairness, accountability and (to a lesser extent) transparency. And in Brazil, a group of influential senators just proposed a constitutional amendment to establish fundamental rights to mental integrity and algorithmic transparency. Between that and other issues, you and I have a lot to read and keep track of already.
In addition, my newsletter planning got sidetracked by a few developments. Because I am about to enter the final year of my PhD programme, I need to think about my future employment prospects. So, I decided to refresh my personal website’s contents and migrate it to a platform that demands less maintenance effort. And, in my spare time, I started playing Warhammer 40,000 again, which means I’m spending more time than I expected learning how to paint miniatures and such.
In light of what I said above, this issue will be quite short. Its first part briefly discusses the legislative developments in the EU and Brazil, in the former case, by highlighting some interesting materials to read. Then, I share some open calls for papers, academic jobs, and forthcoming events. And finally, I share some otter pictures to live up to this newsletter’s promise.
Legislative developments
Brazil and the EU are at different moments in their legislative approaches to AI. In Europe, today’s vote is not enough to turn the AI Act into law,1 but it consolidates important elements of the Act’s text. There seems to be widespread agreement on the general lines of the risk-based approach proposed by the European Commission, which is now modified to include specific provisions on generative AI, a broader and (IMHO misguidedly) technology-neutral definition of AI, and the idea that providers can show that their AI system is not a high-risk system even if it meets the criteria laid down in Article 6 of the Act. These elements of the European regulatory framework, for better or worse, are likely to be present in the final text of the Act if and when it becomes law in the EU.
I have discussed several elements of the proposal extensively. Based on the initial proposal by the European Commission, Nicolas Petit and I have pointed out the difficulties of fitting general-purpose AI systems into the product-based framework, the problems that stem from trying to protect fundamental rights through technical standards, and the capability and legitimacy gaps that are likely to affect the Act’s enforcement mechanisms. In this newsletter, I have updated these discussions in light of the Parliament’s political agreement, examining some problems with its legal basis, its definitions of key concepts, and its framing of risk. And, finally, Anca Radu and I argue in a working paper that the AI Act’s expected Brussels Effect is at odds with the EU’s ambitions to shape the values of global AI governance.2 Because of that, any contributions I could make here and now would not be particularly novel.
Of course, I am far from the only person working on these issues. Meeri Haataja and Joanna J Bryson have published a fair assessment of the Parliament’s proposal, highlighting the risks of defeasible risk classifications and the overreach of the product safety framework. Luca Bertuzzi has a piece summarizing the Parliament’s innovations and the next steps of the legislative procedure. And Access Now identified important fundamental-rights protections that must be added during the trilogue. So, I guess we have a wealth of analysis regarding the AI Act (and please let me know of any pieces covering overlooked points!).
By contrast, there is surprisingly little about the Brazilian AI bill. This is partly because other topics (such as platform regulation) dominate legislative and scholarly agendas. There is also the fact that many people writing about AI regulation in Brazil have been involved either in drafting the substitutive bill or in the public consultation that preceded it. Still, I expect to see more informed analysis in the future.
Sober analysis of what is going on in Brazil is needed as a counterpoint to nonsense such as the proposed constitutional amendment. As currently proposed, it adds a new item to the extensive list of fundamental rights protected by Article 5 of the Brazilian constitution:
scientific and technological development will ensure mental integrity and algorithmic transparency, in the terms of the law
I must admit that I am unfamiliar with the extensive literature on neurorights. But, at the very least, this formulation has the problem of mixing two distinct concerns. As the justification for the draft amendment puts it, the right to mental integrity protects one’s private life and agency against exploitation and surveillance. But algorithmic transparency does not tackle the same problem as mental integrity: instead, it is meant to be a safeguard that enables accountability for those who control algorithms. Furthermore, the “algorithmic” modifier to transparency is often a smokescreen, as it allows the controllers of algorithmic systems to pitch technical solutions such as explainable algorithms as an alternative to actual disclosure. At best, the inclusion of algorithmic transparency does not contribute to the protection of mental integrity; at worst, the constitutionalization of this concept may legitimize inadequate transparency practices that do not contribute to accountability.
How do we get a legislative proposal like this? It is a textbook instance of the Politician’s Syllogism:
P1. We must do something.
P2. This is something.
C. Hence, we must do this.
The justification for the constitutional amendment is marked by some technology hype, as it rushes to speculate about the development of neural technologies with little grounding in their actual capabilities. It then justifies transparency by the existence of algorithmic biases (citing that good documentary Coded Bias),3 but does not bother to expand on the connection between these two elements, using them instead as motivation to say that the legal order needs to be changed. Add some perfunctory references to Luhmann and Kant, and voilà: a thinly grounded provision that tackles an ill-defined problem.
Making things worse, this proposed amendment is not sponsored just by the usual suspects who have pushed terrible AI bills before. It has been sponsored by a broad coalition, which includes Bolsonaro’s former vice president (Hamilton Mourão), senators from Lula’s Workers’ Party, and established political figures from other parties. I can only hope that scholars and activists manage to do their part and prevent this proposition from being entrenched in the constitution, where it might do more harm than it prevents.
Opportunities
As the reader Lauro Locks brought to my attention: on 20 June 2023, the WTO TBT Committee is organizing two thematic sessions on regulatory cooperation, on (1) “intangible digital products” (including AI) and (2) “cybersecurity”. The links for each session point to its dedicated page, where it will also be streamed via WTO’s YouTube channel. The websites already contain some information, including the names of the moderators and some of the speakers, and the sessions’ programmes will be finalized soon.
The journal Common Market Law Review announced the 2023 edition of its Prize for Young Academics. They welcome works on any subject in the area of EU law, of up to 10,000 words, written by academics under 30 or who defended their PhD no more than three years ago. Joint-authored submissions are welcome if both authors meet the requirements, and papers will be accepted during October 2023.
Télécom Paris is looking for an Assistant/Associate Professor in law and regulation of digital platforms, AI and data. Applications are accepted until 15 August 2023.
Tomorrow is the last day for submitting your abstract to The Digital Constitutionalist’s planned symposium on the right to Good Administration in the Age of AI. Accepted authors will need to send a full post (1,500-2,000 words) by 15 September 2023.
The AI + Society Initiative at the University of Ottawa announced a call for submissions for the 2023 Global AI + Regulation Emerging Scholars Workshop and the Scotiabank Global AI + Regulation Emerging Scholar Award. An abstract of up to 2,500 characters should be sent by 1 July 2023, and a full paper of up to 7,000 words is expected by 15 September 2023 in preparation for the workshop on 18 October 2023.
And now, the otters
If you have enjoyed this issue, please subscribe to my newsletter in order to receive future updates!
And feel free to reply to it with any thoughts, comments, or suggestions.
1. The EU legislative procedure can be a bit arcane, especially once one takes into account the informal practices, but I have provided a quick explanation in a previous issue if you are unfamiliar with it.
2. This paper is not available to the public, but feel free to drop me a line if interested.
3. It could be worse: at least they did not cite the awful The Social Dilemma.