Hello, dear reader, and welcome to another issue of AI, Law, and Otter Things! I hope this email finds you well. In recent newsletters, I’ve been alternating between “meta” essays focused on life as a researcher and “substantive” essays dealing with ongoing research. This approach is convenient for me, as it makes it easier to have something to share when my projects are still in their earlier stages. Given that I am currently taking some time to reorganize my research agenda and contain the proliferation of interesting tangents I have notes on, I might not stick rigorously to this schedule. Still, I hope to keep a relatively stable flow of newsletters in the near future.
Over the last week, quite a bit has happened within our scope. To focus on the European Commission alone, the last few days have seen an AI Continent Action Plan that aims, among other things, to “simplify” the AI Act (whatever that might mean), develop AI capabilities (much needed), and build the technological and data infrastructures needed to support those technologies. At the same time, the Commission has opened a public consultation geared towards revamping the Cybersecurity Act, and will soon announce new dates for the entry into operation of ETIAS and EES. Not to mention all the chaos with US tariffs and the EU reaction. I cannot hope to cover these topics right now, let alone developments elsewhere in the world, but some of them will likely come back in future issues.
In the meantime, today’s issue will focus on transparency and the CJEU’s ruling in Dun & Bradstreet Austria. After that, the usual recommendations, opportunities, and otters. But, before that, Winnie wants to say “hi”:
Tangential remarks on Case C-203/22
In late February 2025, the European Court of Justice ruled on Case C-203/22, CK v Dun & Bradstreet Austria GmbH and Magistrat der Stadt Wien. Among people following data protection news, this case is largely associated with an issue that has led to the death of many trees: whether EU data protection law features a right to an explanation of automated decision-making concerning a data subject. For a long time, matters of transparency have been central to my research agenda, even if I am now winding down my participation in those scholarly debates. Nonetheless, my colleagues at the Data Protection Scholars Network invited me to kickstart a discussion on this case last month, so I would like to share some thoughts on it with you.
Since my presentation, I have been made aware of some very handy commentaries on the case. Stefano Rossetti walks us through the contents of the ruling, praising the Court’s teleological presentation and attention to what “meaningful information” conveys in the various official languages of the EU.1 Ljubiša Metikoš points out that the ruling’s abstract framing of the information that needs to be disclosed reduces the possibilities available for contesting automated decisions. Given that those two blog posts, among others, offer an accessible presentation of the case, I will allow myself to focus on three points that are a bit less salient.
The first one concerns the value of regulatory guidelines. Both the opinion of AG Richard de la Tour and the final ruling give considerable argumentative weight to the content of the guidelines on automated decision-making adopted by the Article 29 Working Party. The same can be seen in the SCHUFA case. In this particular case, the guidelines are used as an authoritative rather than legally binding source of guidance, and one that is of high quality to boot. Even so, it is not without reason that the legal consequences of formally non-binding instruments such as those guidelines have attracted growing attention from legal scholars, as they can lead to situations in which there is little oversight or legitimacy regarding the production of instruments that effectively determine the contents of the law. This issue becomes even worse when the guidelines are not particularly good, as I argue is the case with the Commission’s recent guidelines on the definition of AI system under the AI Act.
Another aspect of the ruling that I’d like to highlight is its treatment of national legislation. One of the issues brought before the court in this case is that Paragraph 4(6) of the Austrian law on data protection precludes, as a rule, data subjects from gaining access to their personal data if that access would compromise business or trade secrets of the controller or a third party. In para. 75 of the ruling, the ECJ found that such a provision is incompatible with EU law, on the grounds that it short-circuits a balancing of factors that Article 15 GDPR requires to be made on a case-by-case basis. This finding is not particularly surprising, as it is in line not only with the earlier findings in SCHUFA but also with a long line of rulings, dating at least from the early 1970s, that blocks Member States from overriding provisions of EU law. However, I wonder how this will play out with the national “implementations” of the AI Act, as national efforts to fill in interpretive gaps in the provisions on high-risk AI systems might be deemed to impose a particular balancing of values that the Act requires to be undertaken on a contextual basis.
Last but not least, one interesting aspect of the referring court’s position that was not ultimately dismissed by the ECJ was its attention to the temporal aspect of models. As described in para. 25 of the ECJ ruling, the expert appointed by the Austrian court maintained that the disclosure of information should include scores not just from the decision itself but also from other decisions taking place both before and after the one concerning the data subject CK. This is necessary because decision-making models are not static objects: they can change over time, whether through self-learning or through a variety of other technical mechanisms (such as patching detected bugs). Accordingly, it might be useful to think of AI-related transparency not just as an event in time but as an ongoing process. Doing so takes us beyond the realm of Dun & Bradstreet Austria, but this ruling provides us with a first step in that direction. Or, at least, it does not entirely foreclose the way.
Recommendations
Two new outputs of mine are now available to the public. The first one, already mentioned above, is a short commentary on the European Commission’s guidelines on the definition of AI system under Article 3(1) AI Act. The other one, written with Niovi Vavoula and Giacomo Zampieri, is a very brief response to the Commission’s consultation on an implementing regulation on the Cyber Resilience Act.
Also, here are some interesting things by other people:
Deirdre Ahern, ‘The New Anticipatory Governance Culture for Innovation: Regulatory Foresight, Regulatory Experimentation and Regulatory Learning’ [2025] European Business Organization Law Review early access.
Guido Bellenghi and Ellen Vos, ‘Rethinking the Constitutional Architecture of EU Executive Rulemaking: Treaty Change and Enhanced Democracy’ (2024) 15 European Journal of Risk Regulation 793.
Thomas Claburn, ‘AI Code Suggestions Sabotage Software Supply Chain’ The Register (12 April 2025).
David Eaves and Beatriz Vasconcellos, ‘Digital Public Infrastructure Is the New Global Tech Bet—But Everyone’s Betting on Something Different’ (Tech Policy Press, 1 April 2025).
Daniel Little, ‘Sources of Technology Failure’ (Understanding Society, 12 May 2023).
Julia Pohle, ‘The European Strive for Digital Sovereignty: Have We Lost Our Belief in the Global Promises of the “Free and Open Internet”?’ (2023) 3 Weizenbaum Journal of the Digital Society.
Rebeca Remeseiro Reguero, ‘La propuesta de Regulación de la Inteligencia Artificial en Chile y el Reglamento Europeo de Inteligencia Artificial, ¿un caso de efecto Bruselas?’ [2025] IDP. Revista de Internet, Derecho y Política 1.
Yeling Tan and others, ‘Driven to Self-Reliance: Technological Interdependence and the Chinese Innovation Ecosystem’ (2025) 69 International Studies Quarterly sqaf017.
Opportunities
The 2025 Digital Law Research Colloquium (DLRC), organized by the School of Law of the University of Geneva and various other partners, will take place on 18 June 2025. Applications are due by 18 April (this Friday!).
The 2nd Workshop on LSAI - Law, Society, and AI at HHAI 2025 has extended its deadline for submissions until 18 April. The workshop itself will take place on 10 June 2025 in Pisa.
The call for the European Law Unbound inaugural conference has been extended until 25 April.
Our next Phish and Chips lunchtime lecture at the University of Luxembourg will take place on 30 April, when we will host Andrea Raab (ICRC) for a lecture on privileges and immunities of international organisations in the digital era. Participation is free, with mandatory registration. Join us either in person or online!
UNESCO and LG AI Research have an open Call for Best Practices for their planned Global MOOC on Ethics of AI. Submissions are due by 2 May.
The Chair of Law and AI at the University of Tübingen is organizing a Writing Workshop on the European AI Act, which will take place on 11 July 2025. Abstracts and short bios are due by 16 May.
Timo Seidl at TU München is hiring a doctoral researcher (100%) in the field of Political Economy of Technological Change. Applications are due by 18 May.
The IE Law School in Madrid is hosting the 2025 edition of its Lawtomation Days conference, with the overarching theme “Technology and (Dis)Trust: AI between confidence and controversy”. Submissions are due by 15 June, with the event taking place on 2 and 3 October. Having attended the 2023 edition, I can tell this is a very interesting event for people working on law and technology.
Finally, the otters
Thanks for your attention! Hope you found something interesting above, and please consider subscribing if you haven’t done so already:
Do not hesitate to hit “reply” to this email or contact me elsewhere to discuss some topic I raise in the newsletter. Likewise, let me know if there is a job opening, event, or publication that might be of interest to me or to the readers of this newsletter. Hope to see you next time!
That passage of the ruling is crucial, but it does remind me of a scene from Elite Squad, a modern classic of Brazilian cinema.