On definitions (AI, Law, and Otter Things #19)
Hello, dear reader! In this issue, I quibble about a few recent developments in EU technology law: a draft European Parliament report, the definition of "critical infrastructure" in the AI Act and the idea of "technological neutrality". These are half-formed thoughts that would benefit from reader input, but I hope they turn out to be somewhat interesting.
It will not be exactly a surprise to most of my readers that the European Union has produced some sophisticated pieces of technology regulation. While one might question the overall goals or the specific approaches adopted by proposals such as the Digital Services Act or the AI Act, these instruments are usually the product of robust legislative procedures that reflect defensible understandings of technology. In fact, this is one of the reasons — but not the only one, of course — why EU legislation influences global patterns of tech regulation.
So it was pretty amusing to read the draft report authored by the Special Committee on Artificial Intelligence in a Digital Age. In paragraph seven, this draft "[...] notes that AI is the control centre of the new data layer that surrounds us and which can be thought of as the fifth element after air, earth, water and fire [...]", a formulation that is quite... creative, to say the least. But, while this excerpt does not bode well for the contents of the report itself, I have to admit it raises some crucial questions:
Is Luc Besson an MEP now?
Can an Avatar bend data? "Databender" is much cooler as a job title than "Data scientist", especially once we consider that there is about as much science in data science as in law.
If data is an element equal to the four classical elements, it stands to reason that data should be relevant for astrology. Which signs are governed by data? I am guessing Virgo, and probably not Aquarius.
The latter point, in turn, reminds me of a question that comes to my mind once in a while: how should data protection law cover astrological data? From my (admittedly limited) knowledge of these things, horoscopes are reliant on various forms of data about natural persons that would be covered by the definition of personal data adopted by laws such as the GDPR or the LGPD. In addition, the main value people seem to derive from astrological practices is knowledge (usually about oneself) obtained from inferences based on this data. At the very least, this seems to me to be a form of profiling, and some forms of inference might even be covered by the protections afforded to sensitive data (e.g. under Article 9 GDPR).
Of course, these questions are mostly the product of idleness: claims of "astrology-based discrimination" are often bogus — and, even if real, there are not enough people investigating more prevalent forms of discrimination. But this seems the kind of thing one might want to construct an exam question around, and I would definitely read a blog post about this. So long as it is written by someone who believes in astrology or, at the very least, is knowledgeable about the subject matter.
Technical neutrality: what is it good for?
Since I am not that person, I will now direct my attention towards another subject that appears in EU legislative proposals: the idea of technology-neutral regulation. This idea is much more serious than claims about data being the fifth element, which is fortunate, because it is also much more common. The ideal of technology-neutral regulation is mentioned in the context of telecommunications regulation, in the GDPR (see Recital 15), and in the AI Act (see page 12 of the Commission proposal), among other appearances. Consequently, a comprehensive view of ICT regulation in the EU (and abroad) requires pinning down what this neutrality means.
Speaking in abstract terms, a technology-neutral regulation is one that promotes its regulatory outcomes regardless of the specific technologies used in its application context. For example, the GDPR applies irrespective of whether data is processed on small-scale computers, on corporate mainframes of the kind used by banks, or on the cloud computing platforms that have gained market share over the last decade. In contrast, the cybersecurity laws South Korea adopted in 1999 were not technology-neutral: they mandated the use of ActiveX even after it had ceased to be a safe technological standard.
The promise of technology-neutral regulation is that, by abstracting away from technical details, one could respond to technological change without needing to go through the entire legislative procedure. In a context of fast ICT innovation, in which technical knowledge might be outdated even before it can be distilled into a form accessible to legislators, this promise responds to fears that technology might disrupt the law or render it irrelevant to current challenges. Yet, this malleability often comes at the price of things dear to our legal systems, such as legal certainty about the applicable technical standards. Because of this, scholars such as Bert-Jan Koops have pointed out that technology-neutral regulation in the real world should be a dynamic mix between abstract rules of a general character and more concrete rules that target specific forms of technology.
Nevertheless, there is a huge challenge in identifying which rules should be concrete (i.e., technology-specific) and which ones can be abstract. In part, this problem stems from a lack of clarity in terminology. Consider, for example, the definition of an artificial intelligence system in the AI Act as "software that is developed with one or more of the techniques and approaches listed in Annex I". As these techniques — machine learning, knowledge representation, and statistical approaches — provide broad coverage of what we mean by AI, this definition is largely future-proof, in the sense that it will not require much legislative change in the future. But it is hardly technology-neutral, since the definition explicitly refers to specific technical approaches, even if broadly construed.
Further complicating things, regulations can be technology-specific even if one does not explicitly refer to existing technologies. As an example, consider the Brazilian Fake News bill (PL 2630/2020). Even though it seeks to address fake news in social media in general, many of its reporting and control measures are based on WhatsApp's communication architecture, such as its message forwarding and user group structures. In doing so, the bill provides a strong nudge for future WhatsApp killers to adopt a similar architecture lest they fail to discharge their duties.
(Similar warnings have been raised against pieces of the EU's Digital Single Market, but I lack the expertise — and, since this is my newsletter, the time — to provide an in-depth examination of such claims.)
Here we have the opposite of what we saw in the AI Act. Rules like the one described above are technology-neutral, given that they do not establish or require any specific technical standards or platforms. But they are not future-proof, as they rely on conceptual frameworks that might be ill-suited for dealing with new technologies or unexpected uses of old technologies. In this case, I believe regulators are better served by relaxing their ideal of technological neutrality and making explicit their reliance on specific technological concepts. Otherwise, we might end up stuck with whatever the regulatory equivalent of technical debt is.
What is critical infrastructure, anyway?
As suggested by the profusion of regulations in the tech sector, digital matters are one of the policy priorities in the European Union. For example, a recent communication by the European Commission highlighted "[r]esilient, secure and trustworthy infrastructures and technologies" as being an integral part of its vision for the future of the EU. It is reasonable to conclude, then, that current EU regulatory efforts are not just directed at consumer-facing systems, and indeed Article 2(2) of the DMA includes in its definition of "Core platform service" various elements that have an infrastructural character.
Curiously, the AI Act does not seem to share this emphasis on digital infrastructures. OK, it is true that Annex III of the regulation mentions that systems used for the "[m]anagement and operation of critical infrastructure" are considered to be high-risk systems. But the Annex immediately restricts this definition to systems "intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity". This seems a somewhat narrow conception of which infrastructures play a critical role in society.
One might argue that many of the critical forms of digital infrastructure are covered by other items of the same Annex. Indeed, Annex III does not restrict itself to consumer- or citizen-facing applications, thus covering technologies that support many social practices, such as certain forms of judicial and government decision-making. In doing so, it covers much of what one might term "infrastructural". Yet, it downplays (or fails to acknowledge) the risks that may result from using AI to manage the technological infrastructure itself. For example, both AWS and Google Cloud offer machine learning solutions for autoscaling, that is, for adjusting the consumption of cloud resources to computing demand. To the extent that such tools support critical services, their use shapes the digital environment in ways comparable to the risks that place other systems in the AI Act's high-risk category.
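To make the autoscaling example concrete, here is a deliberately simplified sketch of the kind of decision loop involved: forecast demand from recent load, then size the machine pool to match. The class name, parameters, and moving-average "forecast" are all invented for illustration; real predictive autoscalers (such as those offered by AWS and Google Cloud) rely on trained forecasting models and proprietary APIs, not this toy logic.

```python
import math
from collections import deque

class ToyPredictiveAutoscaler:
    """Illustrative (not production) autoscaler: forecasts demand as a
    moving average of recent load and sizes the instance pool to cover
    that forecast plus a safety margin."""

    def __init__(self, capacity_per_instance: float, window: int = 5,
                 min_instances: int = 1, headroom: float = 1.2):
        self.capacity = capacity_per_instance  # requests/s one instance can serve
        self.history = deque(maxlen=window)    # recent load observations
        self.min_instances = min_instances     # never scale below this floor
        self.headroom = headroom               # safety margin over the forecast

    def observe(self, load: float) -> None:
        """Record one load measurement (e.g. requests per second)."""
        self.history.append(load)

    def target_instances(self) -> int:
        """Return how many instances the current forecast calls for."""
        if not self.history:
            return self.min_instances
        forecast = sum(self.history) / len(self.history)
        needed = math.ceil(forecast * self.headroom / self.capacity)
        return max(needed, self.min_instances)
```

The regulatory point does not depend on the details: even this crude loop, if wired into the provisioning of a critical service, is an automated system whose mistakes (under-provisioning a hospital's records system, say) propagate to everything running on top of it.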
Does this mean we need to change Annex III of the AI Act to deal with that? Maybe not. After all, these adjustments might be better made in other regulatory instruments — for example, in the sectoral norms that govern applications in critical sectors. The DSA, on the other hand, seems the wrong place for this kind of rule, as infrastructure management is only indirectly connected to the content issues at its core. What I would really like to see is a systemic treatment of digital infrastructures and of what, if anything, distinguishes them from traditional infrastructure. But, once again, I am not the person to conduct this study.
To conclude, an otter
Thank you for taking the time to read this issue!