Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! It has been a long time since the last one, for a few reasons. Between some delayed projects and the effort of moving to a new country, I wasn’t really in a condition to share interesting content. Now, thanks to my loved ones, the kindness of strangers, and modern medicine, I am finally getting settled into this new country and my new position as a postdoctoral researcher in Cyber Policy.1 So, it is time to get back to newslettering.
While I was away from this newsletter, quite a few things happened within its scope:
The EU has released its first draft of the proposed Code of Practice for providers of general-purpose AI models. As I have joined the group working on this document, I will not be going into it for reasons of confidentiality, but the final version should be published before May, to meet a deadline set in the AI Act.
There has been a political agreement regarding the next EU College of Commissioners, which is at best a mixed bag. I would not be surprised to see further erosion of the rights of third-country nationals and of environmental protections.
The debate about whether or not LLMs have reached a plateau in performance has been rekindled. It must be a slow news week.
And, of course, the orange elephant in the room on the other side of the Atlantic raises lots of questions, inter alia about the future of US tech policy and how the EU’s digital sovereignty ambitions will play out in this scenario.
On an unrelated note, as a Brazilian, it brings me no joy2 to tell you that our former president, Bolsonaro, has finally been indicted for his part in a very sloppy putsch. It seems that the only thing that saved Brazilian democracy was the sheer incompetence of those trying to overthrow it.
Today I will not be going deeply into these issues, especially as I do not feel I have much of interest to say about them right now (except on the first, which I cannot discuss). Instead, I will share some hasty notes on the relationship between AI and EU data protection law. After that, as usual, I will plug some of my work and share some reading recommendations. Finally, there is an otter at the end of the newsletter, to maintain our running theme.
Some thoughts on AI and data protection
A substantial part of my work over the past few years has been dedicated to this particular legal intersection. Back when I started law school after my stint as a data scientist in industry, I just wanted to find something that would allow me to nurture my interests in abstractions and legal philosophy. I thought of pursuing tax law, as some legal philosophers in my city were working on that branch of the law, but then a huge opportunity arose thanks to the then-recent proposal for a Brazilian data protection law. One thing led to another, and now I am in Luxembourg, working more on cyber regulation and EU law than on data itself. Still, some of my best friends and interlocutors remain in data law, and so I end up working on it from time to time.
As part of a recent project, I had to reflect a bit on the big picture of what AI changes in the EU approach to data protection. Frequent readers of this newsletter might have noticed I am not in the “AI disrupts everything” camp, not least because of my antipathy for the use of “AI” without further qualification. However, writing down some introductory materials on AI regulation at a much more practical level than I usually do gave me a useful opportunity to organize some of my thoughts.
Ultimately, I decided to cut the following passage from the text it was meant for. Even so, it might be of interest to some readers. Don’t hesitate to reach out if you would like to share your thoughts about it, or to suggest how to make this brief outline more accessible to people who have not spent much time with the AI Act’s legislative procedure.
From its outset, the GDPR has been designed as a form of AI regulation. The provisions on automated decision-making in Article 22 GDPR are not specific to AI, but they cover one of the most salient applications of AI technologies: the generation of decisions about natural persons without human involvement. Even more importantly, the GDPR is a technology-neutral instrument (Recital 15 GDPR), which means that all its requirements and prohibitions remain applicable when personal data is processed by or through an AI system. For example, one of the key factors in the Italian data protection authority’s complaint against ChatGPT was the lack of a legal basis for the collection of the personal data used to train the AI model. It is not for nothing that some commentators describe the GDPR as the first AI regulation.
Despite the applicability of the GDPR to AI technologies, the last few years have seen considerable pressure towards AI-specific laws. The biggest example is the AI Act, which became law in 2024 after a legislative procedure that received an unprecedented level of attention from the media and the public. The AI Act establishes rules that apply to nearly all AI systems developed or used in the EU, but it is not the only EU legal instrument directed at AI technologies. Some pieces of EU law include sector-specific rules for some AI applications, such as the rules on recommender systems present in Article 27 of the Digital Services Act. Others are directed at AI more generally, such as the AI Liability Directive proposed in 2022. All those instruments, however, mention that data protection law remains applicable. As such, data protection law remains a cornerstone in the EU approach to AI regulation.
What changes in data protection with those new AI laws?
This is not to say that the legal framework for data protection remains unchanged. In some cases, those new regulations support the operation of existing provisions in data protection law. For example, Article 26(9) AI Act stipulates that the deployers of some AI systems are obliged to use information they receive under the Act to carry out the data protection impact assessment (DPIA) required under Article 35 GDPR. In this case, the AI-specific provision does not interfere with the regulatory regime established by data protection law. It merely creates an additional obligation to ensure that the DPIA works as intended by data protection law.
Some AI-specific provisions of EU law, instead, alter the legal framework created by data protection law. For example, Article 10(5) AI Act creates a new legal basis for the processing of special categories of personal data. As we discuss [elsewhere in the source material], this new legal basis is justified on the grounds that using data from those categories might be necessary to avoid biases in AI models that carry out high-risk tasks. Here, the AI Act provision is not merely reinforcing the GDPR regime, but crafting a narrow exception to it, accompanied by various safeguards.
It would not be feasible to examine all the changes that EU laws on AI have introduced to data protection law. This training module presents a general overview of the interface between data protection and AI, but many of those changes are specific to particular applications of AI or to industry sectors. Furthermore, some of them have not yet been adopted into law, so any exhaustive treatment would soon become outdated. Instead, this training module focuses on the interaction between the GDPR and the AI Act, which, as a horizontal instrument for AI regulation, applies to most uses of AI in which personal data is processed.
The AI Act as a general AI regulation
By the end of the GDPR’s legislative procedure, there was already widespread concern that the new data protection laws might be insufficient to deal with new AI technologies. That concern was amplified by developments such as the SyRI case in the Netherlands, in which the courts ruled that a risk scoring algorithm proposed by the government did not respect the right to a private life. It was also boosted when new technologies such as ChatGPT called public attention to the variety of AI applications that are now part and parcel of day-to-day life in Europe and beyond. The AI Act, initially proposed in April 2021, offers a response to those concerns.
Before and during the AI Act’s legislative procedure, there were considerable divergences about what, if anything, needed to change. Some argued that AI created risks that were not properly covered by data protection alone, such as the potential negative impacts of false information generated by so-called “hallucinations”. Others, such as professors Sartor and Lagioia, suggested that some provisions of data protection law, while still applicable, needed further clarification. The AI Act addresses the demands of both camps. It provides more detailed obligations on some matters that were already covered by the GDPR, while creating rules on issues that fall outside the purview of data protection.
The AI Act and the GDPR are both designed to ensure that fundamental rights are protected in a digital society. Yet, they follow different approaches to do so. EU data protection law follows a rights-based approach, in which the processing of personal data must respect the fundamental rights and freedoms of data subjects (Recital 2 GDPR). Under the principle of proportionality, the impact of personal data processing on those rights must be evaluated contextually (Recital 4 GDPR), not just at the moment an AI system is originally designed (Article 25 GDPR). Data protection law thus requires a constant evaluation of the risks created by processing.
In the AI Act, fundamental rights and other public values (such as democracy and the rule of law) are protected through a product safety approach. This approach lays down conditions for those who develop, commercialize, and deploy AI-based products in the EU. Any such product can only enter the EU single market if it is in conformity with the applicable requirements (see, e.g., Art. 8 AI Act). The idea, here, is that a product in conformity with those requirements will not create an undue risk to fundamental rights. But, given that some risks might not be detected or addressed ex ante, the AI Act also features a market surveillance mechanism to deal with situations in which an AI-based product that is already on the market or in use interferes with a fundamental right.
What I’ve been up to
My personal website is now up-to-date with my new position and recent publications. I must say I am pretty happy with that profile picture, something that doesn’t happen often.
I shared a new pre-print on SSRN. It is called Two Dogmas of Technology-neutral Regulation, and in it I review the literature on technology neutrality to question two common assumptions: that neutrality is conceptually simple and that it is a more effective regulatory strategy than technology-specific regulation. The manuscript draws from a chapter of my thesis that I do not plan to include in a future book on the topic, but it might nonetheless be interesting.
I am wrapping up a few big projects, including the final, updated version of that AI Act article with Nicolas Petit and some materials on AI and data protection that might be available to the public soon.
Things you might want to read
Sarah Backman, ‘Normal Cyber Accidents’ (2023) 8 Journal of Cyber Policy 114.
Filipe Brito Bastos, Judging Composite Decision-Making: The Transformation of European Administrative Law (Hart 2024).
Mindy Nunez Duffourc, Sara Gerke and Konrad Kollnig, ‘Privacy of Personal Data in the Generative AI Data Lifecycle’ (2024) 13 NYU Journal of Intellectual Property & Entertainment Law 219.
David Edgerton, ‘Tilting at Paper Tigers’ (1993) 26 The British Journal for the History of Science 67. A thought-provoking review of the excellent book Inventing Accuracy, which I have recommended before in this newsletter.
Samuele Fratini and others, ‘Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models’ (2024) 3 Digital Society 59.
Andres Guadamuz, ‘The EU’s Artificial Intelligence Act and Copyright’ [2024] The Journal of World Intellectual Property.
Luca Nannini, ‘Habemus a Right to an Explanation: So What? – A Framework on Transparency-Explainability Functionality and Tensions in the EU AI Act’ (2024) 7 Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 1023.
Paul Ohm, ‘Focusing on Fine-Tuning: Understanding the Four Pathways for Shaping Generative AI’ (SSRN, 21 June 2024). Insightful on fine-tuning as a site for regulation.
Francesca Palmiotto, ‘Procedural Fairness in Automated Asylum Procedures: Fundamental Rights for Fundamental Challenges’ (2024) 55 Computer Law & Security Review 106065.
Andrzej Porębski, ‘Institutional Black Boxes Pose an Even Greater Risk than Algorithmic Ones in a Legal Context’ (SSRN, 19 April 2024).
Nathan Schneider, ‘Innovation Amnesia: Technology as a Substitute for Politics’ [2024] First Monday.
Alessio Tartaro, ‘Value-Laden Challenges for Technical Standards Supporting Regulation in the Field of AI’ (2024) 26 Ethics and Information Technology 72.
And now, the otter
Thank you for reading! If you want to read more about AI, law, regulation, and otter topics, please consider subscribing if you haven’t done so already.
See you next time!
1. As before, this newsletter remains strictly personal, and any thoughts shared here are not vetted or endorsed by my employer or by my boss, Professor Niovi Vavoula.
2. Actually, it brings me a lot of joy, I’m afraid to say.