Hello, dear readers! Today’s issue departs a bit from the “ranting about the thesis” mode that guided my previous newsletters. Instead, I want to write about the Spanish proposals for the AI Act’s trilogues, as reported by EURACTIV yesterday. After that, I will share the usual content: reading recommendations, calls for papers, job opportunities, and, of course, the otter.
More specifically, I want to discuss the AI Act’s approach to foundation models. For the most part, I agree with Joanna Bryson and Meeri Haataja that the requirements proposed by the Parliament are sensible,[1] though one might want to look more closely at their potential impact on open-source developments.[2] However, as I mentioned in a previous issue, the term “foundation model” is itself problematic. Regulating a model adds a new layer of complexity to the AI Act, which is mainly geared towards governing a different technical object: the AI system. The term is also associated with a very specific vision of what AI systems can do and what they might be able to do in the future. Using it in regulation thus gives legal gravitas to a particular framing of AI accompanied by a lot of hype. And, as far as sources of ideas for technology regulation go, techno-hype is only slightly better than Ashton Kutcher.
With those objections in mind, I am very curious to see how the topic is further developed in the trilogues. Unfortunately, I could not find the proposed compromise text for those passages, but EURACTIV’s coverage suggests a few significant developments.
In the reported proposal, the definition of “foundation model” is refined to an “AI model that is capable to competently perform a wide range of distinctive tasks”, with capability benchmarks to be defined subsequently through implementing acts. In the absence of a definition of “model”, my first objection persists, but the new wording drops many of the redundant moving parts of the Parliament proposal. The use of flexible criteria for assessing the breadth of use is also welcome.
However, this definition seems better suited to the concept of “general-purpose AI system”, also introduced in the Parliament text. If one wants to give substantive content to the concept of a foundation model, perhaps it would make sense to focus on what makes those models a foundation: not their standalone use, but the possibility of deploying them as components of a large variety of AI systems. Otherwise, sticking with AI systems as the regulatory target would make more sense.
Additionally, the compromise would introduce a new category of “very capable foundation models” subject to additional requirements. Those requirements would be justified because such models display “capabilities [that] go beyond the current state-of-the-art and may not yet be fully understood”, and the criteria for identifying them would, once again, be left to implementing acts. At first glance, this seems a welcome recognition of the limits of ex ante foresight in mapping the risks of AI. Still, a lot hinges on the character of the additional obligations that would follow from this classification.
I am concerned, however, about the kinds of metrics that have been proposed as indicators of very capable models. According to EURACTIV, the capability criteria would not be determined on the basis of our lack of knowledge about a model, nor by extrapolation from the capabilities it has already displayed. Instead, they would rely on size indicators, such as compute, the amount of data consumed for training and, less clearly specified in the journalistic text, the scale of impact on consumers.
These are not necessarily good proxies for the breadth or depth of AI capabilities, or for our lack of understanding of them. Furthermore, relying on them raises two different kinds of capture risk. First, providers of large-scale models may push for these metrics as a way to consolidate their market positions. Second, reliance on these proxies signals a shift in the AI Act’s regulatory perspective towards “existential risk” accounts of AI safety, which often rely on the idea that growth along such metrics can produce qualitatively different types of AI capabilities. Fortunately, neither form of capture is unavoidable, but either (or both) would be an unwelcome distraction from the various kinds of risk the AI Act can actually tackle.
The EURACTIV news item mentions a few additional proposals on foundation models. Those seem prima facie more reasonable, so I prefer not to comment on them without access to the actual text of the proposal. But the two discussed above seem very weird even in the most abstract terms, and I am not sure that any sensible construction along these lines can salvage them. What do you think?
Recommendations
After all this talk of models, I am legally obliged to share a link to that Pirates of Penzance song.
Because I’ve already written too much above, my scholarly recommendations come without comments this time:
Luca Belli and Walter B Gaspar (eds), The Quest for AI Sovereignty, Transparency and Accountability. Official Outcome of the UN IGF Data and Artificial Intelligence Governance Coalition (Internet Governance Forum & FGV Direito Rio 2023).
Panagiotis Delimatsis, ‘Transnational Economic Activism and Private Regulatory Power’ (2023) 26 Journal of International Economic Law 559.
Eldar Haber, ‘The Law of the Trojan Horse’ [2024] UCD L Rev forthcoming.
Nadezhda Purtova and Ronald Leenes, ‘Code as Personal Data: Implications for Data Protection Law and Regulation of Algorithms’ [2023] International Data Privacy Law ipad019.
Catharina Ziebritzki, ‘A Hidden Success: Why the EU General Court’s Frontex Judgment is Better Than it Seems’ (Verfassungsblog, 13 October 2023).
As for audiovisual recommendations, my wife is watching The Americans for the first time, and I am re-watching it with her. It is a big soap opera filled with spy tropes, but in a good way. If you are into this kind of thing and somehow haven’t seen the show yet, check it out.
Opportunities
Open calls for papers
Sciences Po is hosting a conference on Legal Technologies and the Bodies. Abstracts are accepted until 15 November 2023, and the event itself will take place on 7 and 8 March 2024.
The British and Irish Law, Education and Technology Association (BILETA) annual conference will take place in Dublin from 17 to 19 April 2024. The overarching theme for next year is “Digital and Green: Twin Transitions?”, but submissions on a broad range of tech law topics are also welcome. The call for papers is open until 8 January 2024.
TILTing Perspectives 2024, with the theme “‘Looking back, moving forward’: Re-assessing technology regulation in digitalized worlds”, will take place in Tilburg from 8 to 10 July 2024. Submissions for their six tracks and a deep-dive panel are welcome until 15 January 2024.
Job opportunities
Utrecht University has a PhD position in Philosophy and Ethics of Techno-Science, with a deadline of 6 November 2023.
The Chair of Artificial Intelligence and Democracy at the EUI’s School of Transnational Governance is looking for a part-time (50%) research fellow. Fellows should have a PhD in Social Sciences, Philosophy, Computer Science, or a related field, and they will conduct research and support the training and outreach activities of the Chair. Applications are open until 31 October 2023, with a starting date of 16 January 2024.
The Law and Tax department at HEC Paris has an open call for an open-rank professorship. Applications in all legal disciplines are welcome until 12 December 2023, with a starting date in September 2024. Candidates should have a PhD (dual education and/or a habilitation are seen as advantageous), and more senior candidates are expected to have a strong research record; teaching in French is not required at any level.
In the US, Northeastern University has an open-rank search for a Professor of Technology and Social Power (deadline 15 November 2023) and two open-rank positions for a Professor of Ethics and Values in Design (deadline 1 December 2023).
The Information Society Project at Yale invites applications for resident fellows for the 2024–2025 academic year (deadline 15 December 2023). Fellowships may focus on any of the program’s various areas of interest, and they can be renewed for a second year.
The Center for Democracy and Technology, based in Washington, D.C., is seeking a research fellow for a two-year position in “a new CDT project that examines how content moderation systems, including the application of artificial intelligence tools, operate in non-English contexts, particularly in ‘low resource’ and indigenous languages of the Majority World.”
Finally, the otter
[This issue’s otter picture]
I hope you enjoyed this issue. If you haven’t done so yet, please subscribe to receive future updates in your email inbox. And feel free to hit “Reply” and contact me with any comments, complaints, or suggestions.
[1] For more on this, see the updated version of my paper on the AI Act with Nicolas Petit, which should be available soon.
[2] This is a topic I plan to engage with more deeply once I am done with the first draft of my thesis. For now, I can only say that I refuse to use the term “open-source” for measures such as making APIs publicly available.