Hello, dear readers! Now that the summer is getting closer and I am done with my doctoral deadlines for the near future, I hope to write a bit more often here. Today’s newsletter finishes (for now) my initial impressions on what is new in the Parliament’s position on the AI Act. After that, I will share the usual reading recommendations and a few calls for papers that might interest y’all.
These usual sections of my newsletter will be followed by a brief personal note (It's my party, and I'll cry if I want to) on mental health, football, and randomness. If those things are not your cup of tea, there is no harm in stopping your reading before the “TMI Corner” and (hopefully!) coming back for the next issue.
But, before we move on to the main text, here is an otter produced by Stable Diffusion:
Wildlife photographers have no need to worry about their jobs, but I must admit that I was really lazy with the prompt. Nonetheless, the image seemed to have the right vibe for today’s discussion.
Risky business
The AI Act’s approach to risk is, perhaps, one of its better-known aspects. As formulated by the Commission, the Act segments all AI into three regulatory models. The first model, following a precautionary logic, prohibits the use of any AI system for certain purposes. The second model, which is the subject of most discussions about the AI Act, considers that some applications pose a high risk that can be eliminated, or at least mitigated, through a strict set of technical measures and post-market surveillance mechanisms. The third model, which the Commission expects to cover the vast majority of AI systems deployed in the EU, does not establish any additional rules beyond what is already present in sector-specific regulation. Every AI system falls under one of these three labels, and that label determines the rules applicable to it.1
As a result, the AI Act is a risk regulation instrument, but a peculiar one. Giovanni de Gregorio and Pietro Dunn describe this approach as a top-down framework, in which risk assessments fall to the legislator, as opposed to the bottom-up framework present in the GDPR, in which regulated actors must assess what risks are present in each particular situation and what kind of measures are needed to address them. By proposing a top-down framework, the Commission proposal forgoes some of the expert knowledge held by regulated actors, whose role in risk assessment is constrained.2 But, in doing so, it seeks to provide more legal certainty and to avoid regulatory gaming by reducing the effort needed to understand which measures are applicable to a particular application.
The Commission’s proposal of a risk-based approach has been subject to various critiques. Margot Kaminski has a very interesting preprint on the issues and limits of previous risk regulation experiences and on how those limits play out in the context of AI. On a narrower topic, various people (including Nicolas Petit and me and, from a different direction, Philipp Hacker et al.) have pointed out that the idea that each AI system can be mapped to a particular application is strained by generative AI systems. An exhaustive list of critiques of the risk framing would exceed the scope of this issue, but I would like to point out two others: the discussion of which categories should be flagged as high-risk and the question of which measures, if any, should apply to non-high-risk systems.
In response to some of these critiques, the Parliament compromise text adds a few twists to the Commission’s envisaged framework. It creates some new rules for foundation models, which are narrower than those applicable to high-risk systems but suffer from issues such as the demarcation problem I discussed in the last issue. The compromise text also changes the ex ante definition of the high-risk label. Under the new formulation of Article 6(2) AI Act, a system is high-risk if it, cumulatively, falls under one of the areas listed in Annex III AI Act and poses a significant risk of harm to the health, safety, or fundamental rights of individuals (or, in one specific case, to the environment). That is, the presumption of high risk associated with the applications in Annex III is now defeasible.
To reduce the risk of providers trying to dodge the AI Act by claiming their system poses no significant risk, the Parliament text introduces a few controls. Under Article 6(2a), providers who want to avoid the high-risk label must submit a reasoned notification, which is evaluated by the National Supervisory Authority (or the EU-level AI Board if the system is intended for use in more than one Member State). That authority then has three months to object to the notification. If it does not, the system can be placed on the market under the baseline rules for non-high-risk AI.
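If it helps to see the procedural skeleton laid bare, here is a deliberately crude sketch of my own (nothing like this appears in the Act’s text, and it is emphatically not legal advice): the high-risk presumption for an Annex III system only falls away if a reasoned notification was filed, the authority has not objected, and the three-month window has lapsed.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Toy sketch of the defeasible high-risk presumption described above. Purely
# illustrative: classification under the AI Act is a legal assessment, not a
# function call, and "three months" is approximated here as 90 days.

OBJECTION_WINDOW = timedelta(days=90)

@dataclass
class AnnexIIISystem:
    notified_on: Optional[date] = None   # reasoned notification under Art. 6(2a)
    objection_on: Optional[date] = None  # objection by the competent authority

def treated_as_high_risk(system: AnnexIIISystem, today: date) -> bool:
    """True if the high-risk label still applies; False if the system can be
    placed on the market under the baseline rules for non-high-risk AI."""
    if system.notified_on is None:
        return True   # presumption of high risk stands (Art. 6(2))
    if system.objection_on is not None:
        return True   # the authority objected to the provider's claim
    if today - system.notified_on >= OBJECTION_WINDOW:
        return False  # three months of silence: baseline rules apply
    return True       # still within the objection window
```

The “non-high-risk by default on silence” in the third branch is precisely what worries me below: if authorities cannot keep up with the volume of notifications, that branch fires far more often than intended.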
I am not a huge fan of this arrangement. From a procedural standpoint, it creates the risk of a denial of service attack. If the authorities are swamped with notifications, they might fail to respond to anything (or, at least, anything beyond the most obvious cases) within the 3-month deadline. Unless authorities are provided with sufficient resources and staff, the evaluation procedure might become nothing more than a formality, especially if regulated actors rely on adversarial compliance, for example, by creating very complex reports that will demand considerable time and effort to analyse.
The use of a defeasible presumption of high risk also favours large economic actors, as they have the resources to prepare such notifications and, potentially, to make them so detailed that analysing them takes a long time. If such notifications are not made transparent, they might also create obstacles for small providers, who might be stuck with a high-risk label even if their systems are functionally equivalent to those of larger competitors that managed to overturn the initial classification. But any effort to make notifications transparent is likely to stumble into roadblocks related to confidentiality, as protected under Article 70 AI Act.
Within the AI Act’s overall approach to risks, a defeasible risk classification seems to combine the worst of the bottom-up and top-down approaches. It retains the top-down problem of constraining regulated actors to a narrow risk frame that focuses on only a few sources of harm, while adding the compliance costs and uncertainty associated with bottom-up risk assessments. If that is the direction European policymakers want to go, perhaps it would be better to ditch the three-layered risk approach altogether.
Recommendations
Reading
Corinna Coupette and others, ‘Law Smells’ (2023) 31 Artificial Intelligence and Law 335.
An interesting proposal that draws from software engineering methods to identify patterns in legal text that might make it less comprehensible and maintainable.
El-Mahdi El-Mhamdi and others, ‘On the Impossible Safety of Large AI Models’ (arXiv, 9 May 2023)
This preprint has various interesting contributions. First, it proposes “large AI model” as a terminological alternative to problematic terms such as “foundation model” or “general-purpose AI”. Second, it maps various safety challenges created or amplified by using such models in multiple contexts. Third, it provides a demonstration of the formal impossibility of solving these problems.
A note of caution for non-mathematical/CS readers: do not read too much into proofs of impossibility. They are contingent on the mathematical formulation of the elements used in the proof, and so a seemingly impossible result can sometimes be achieved if things are framed otherwise.3 Additionally, they tend to be general results, so what is impossible in the general case might still be achievable in specific, narrower contexts. Nonetheless, an impossibility result is still a warning of problems that must be addressed somehow.
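To make the first point concrete with an illustration of my own (not one taken from the preprint): the well-known impossibility results in algorithmic fairness rest on a simple identity. For a binary classifier, writing π for the prevalence P(Y = 1), PPV for the positive predictive value, FPR for the false positive rate, and FNR for the false negative rate, basic algebra gives

FPR = (π / (1 − π)) · ((1 − PPV) / PPV) · (1 − FNR)

so, when two groups have different prevalences, no imperfect classifier can equalise PPV, FPR, and FNR across both groups at once. Change how fairness is formalised, however, and this particular impossibility dissolves, which is exactly why such proofs are contingent on the chosen framing.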
Christina Michelakaki and Sebastião Barros Vale, ‘Unlocking Data Protection By Design & By Default: Lessons from the Enforcement of Article 25 GDPR’ (2023)
A very thorough overview of how Article 25 GDPR is applied in practice by courts and data protection authorities in Europe. Some of them are more prone to enforce these provisions as a preventive mechanism: that is, finding that a data controller’s existing practices breach the duties even in the absence of concrete harm. This is not the rule, though, and even the authorities that do wield data protection by design and by default often shy away from specifying which measures are adequate in practice. But the practice shows that, far from being an empty norm, Article 25 GDPR has considerable potential when it comes to protecting data subjects.4
Calls for papers
The First Workshop on ML, Law and Society will take place in Turin on 22 September 2023, as part of ECML PKDD 2023. The workshop has two tracks (Non-Functional Tradeoffs, Law and Society; Knowledge Discovery and Process Mining for Law (KDPM4LAW)), each accepting regular papers, short papers, and extended abstracts of already-published material. Submissions are open until 12 June.
The Lawtomation Days 2023 conference, with the theme The shifting legal landscape of automated decision-making and artificial intelligence, will take place in Madrid on 28 and 29 September 2023. They are accepting abstracts until 15 June.
The OpenForum Academy Symposium will take place on 28 November 2023 in Berlin. They accept contributions on various themes connected to the social, political, and economic impacts of open-source software and hardware. Abstracts should be sent by 12 June 2023, with accepted authors submitting full papers by 16 October.
Thanks for your attention! Please feel free to subscribe and receive future updates if you haven’t done so yet:
Or continue for a brief personal rant on football, the role of randomness, and carrying on. And, as always, don’t hesitate to send me an email or contact me elsewhere if you want to reply to something I said here.
Depending on the kindness of strangers
Ten years ago today, Arjen Robben saved my life.
Back then, I was in a very bad place.5 My personal life was in a downward spiral, with a very messy breakup happening during a period of severe personal isolation, as well as the deaths of a few relatives and colleagues. And my academic performance was suffering accordingly: I was quite a good Computer Science student, occasionally even sharp, but my grades were impacted to the extent that my prospects for an academic career seemed to be gone. I will spare you the details—all these things that seem relevant when you are 22 and turn out not to be in hindsight—but something had broken in my mind by May 2013.
At that stage, one of my few connections with reality was football. Due to some vagaries of life, I had become a Bayern Munich supporter a few years before. So, even though I was vanishing from the world and developing all sorts of unhealthy coping mechanisms, I still found the time to follow Bayern in the Bundesliga, DFB-Pokal and the UEFA Champions League. And my interest (read: escapism) was rewarded by a Champions League final between Bayern and Borussia Dortmund, the same team that had consistently kicked Bayern’s ass over the preceding two years.
To make a long story short, I was at the lowest point in my life when the Dutch magician scored the match-winning goal:
This wasn’t Robben’s most impressive goal in a Bayern shirt. Yet, it won the game and the title. And, incidentally, it gave me a moment of joy when that was sorely missing from my life.
Of course, football did not solve my problems. It took me a long time to get my mind somewhat in order. And, in that process, I relied an awful lot on friends and family (old and new), got lots of things wrong, and ended up in places very different—and way more interesting—than what I had in mind in my earlier twenties. And, with all this help and effort, I still owe much to chance encounters and sheer randomness.
But now, at long last, I feel free of the shadow of my meltdown. So, thanks for everything, Arjen.
(And, if you made it this far, you deserve a picture of Winnie. See below.)
The transparency requirements in Article 52 AI Act are often described (including by the Commission itself) as creating a fourth tier of “low-risk” AI between the “high-risk” systems and the “minimum-risk” ones. This view, however, is misleading, as the rules of Article 52 AI Act apply to any system that meets their conditions, regardless of its specific risk classification.
But not eliminated, as the providers of high-risk AI systems are still responsible for choosing the technical measures that meet the legal requirements, as well as for operating the risk assessment and post-market monitoring systems.
See, e.g., how Fabian Beigang proposes a way to address the various proofs that algorithmic fairness is mathematically impossible.
On a personal note, I feel very relieved that the overall conclusions of practice seem to vindicate some of the points I make with Giovanni Sartor and Juliano Maranhão in our forthcoming commentary on Article 25.
Mentally speaking. However, nobody can accuse me of being an enthusiast of São Paulo state (except for its capital), where I was living back then.