Hello, dear readers! It’s been a while since my last issue, as I had to catch up with a few deadlines. Sending all these abstracts seemed a good idea some time ago, and indeed, what I prepared for conferences helped me stop procrastinating on a chapter in my thesis.1 But the best and worst part of a successful anti-procrastination strategy is that you actually have to get some work done, so I didn’t have as much time as I’d like to write a decent newsletter. This is why I’m only getting back to you after two weeks or so.
What I want to talk about today is a topic that has followed me for much of my time as a budding legal scholar: regulation by design. First, I will discuss my relationship with this topic and how it has evolved recently. Then, I will quibble a bit about how people conflate “data protection by design” and “privacy by design”, something that, in my view, is a disservice to both concepts. If neither of these is your cup of tea, jump ahead for some reading recommendations and, after that, open calls for papers and job applications. Or the adorable otter at the end of this issue.
A shift in direction
My first—and so far, only—well-cited paper was on that topic, proposing that technical interventions can and should be used to support the right to contest automated decisions laid down in Article 22 GDPR.2 After that, I commented extensively on the data protection by design provisions in Brazilian and EU law and provided a more theoretical examination of the long-run effects of design provisions in technology law. I have also written on design interventions in various domains, such as judicial automation, tax law, and healthcare. So, I’ve spent a good part of my time in the last six years or so thinking about how legal requirements can be translated into technical specifications.
At first, my interest in this topic seemed a natural way to tap into my previous education and work experience in computing. When I started writing that first paper, I worked full-time as a data scientist, but it has been five years since I last earned a paycheck for my programming and data analytics skills. Technology has evolved quite a bit since then, to say the least. Though I keep myself posted on the latest technological developments, I can no longer draw on the distinct perspective that working in data science offered me. As a consequence, I find myself having fewer and fewer interesting things to say about design.
My research agenda, to the extent such a thing exists, has evolved accordingly. I still do much translational work between tech and law, but that work is no longer the same. Whereas I used to propose conceptual tools that lawyers could use to engage with the technologies and processes in software development,3 I now look at these artefacts and processes from a more abstract point of view. By doing so, my goal is to show how the issues surrounding the technical implementation of legal requirements can be similar to traditional regulatory challenges, such as those surrounding the delegation of powers to private actors (usually, but not always, software developers). These parallels, in turn, should allow for a more fruitful dialogue between lawyers and computer scientists on those matters. They do not, however, engage with the technical intricacies I used to deal with.
There is much to be said about regulation by design from a legal perspective. Regulators (and regulation scholars) need, for example, to get a better grip on the limits of what can be achieved through technical interventions in design. While regulations such as the AI Act and the DMA emphasize technical interventions, some regulatory aims are only achievable by design interventions not on a computer system, but on the organizational context in which that system is used. Additionally, the proliferation of by-design requirements in the EU and beyond suggests the need to look at the conflicts that may emerge between demands that go in different directions. Last but not least, regulated actors need effective guidance on making sense of legal requirements and transforming them into implementable measures, whenever possible. All these directions can enrich our understanding of design as a mode of legal regulation.
However, there is little I can add to these discussions. The more technical lines of research on regulation by design require closer interaction with how systems are designed in practice, and my comparative advantage there is diluted as time goes by. On the other hand, there are still legal questions about design to which I can contribute, but I am increasingly drawn to their more general aspects rather than to their design-specific details. As a result, there is very little I can say about regulation by design that feels both (a) interesting and (b) novel.4
Surprisingly, this is not a crisis about my PhD thesis. Its subject matter—the notion of “technology-neutral regulation”—is broader than regulation by design. I believe there is still much I can say about it without becoming Pierre Menard. And I am definitely grateful for all the doors my work on regulation by design has opened to me and all the collaborators I found through it. Nonetheless, I think design will likely be a smaller part of my academic plans for the next few years.
A pet peeve re: privacy by design
Before that, however, I want to ramble about something that annoys me a bit in regulation by design discourse. Data protection by design (DPbD) is often seen as a spin-off, or even a synonym, of privacy by design (PbD). In my view, this conflation is not warranted by the history of DPbD and risks weakening the protections this approach to regulation is meant to offer. I have given up on proselytizing about the difference in my formal writing, but this is my party newsletter and I can cry if I want to. So, here we go.
There are two reasons why DPbD and PbD might be equated: one of them is more straightforward, and the other, more convoluted. The simple reason is that privacy and data protection are often seen as different instantiations of the same ideas, which are distinguished by the legal contexts they are situated in rather than by their essence. Roughly speaking, “data protection” would be a fancy, European way to talk about informational privacy concerns. To some extent, this is true: European data protection law and US privacy law share common origins, and indeed it is common to see the term “data privacy” being used as a cross-cultural compromise.5 Yet, one should not lose sight of the fact that case law in the EU and its member states has long distinguished between privacy (protected as the right to a private life under Article 8 ECHR) and the right to the protection of personal data (incorporated into the EU treaties and the CFR as an autonomous right). These rights are interconnected but ultimately distinct; hence, their implementation by design might, in some cases, pull in different directions.
The more sophisticated version of the confusion between both by-design approaches acknowledges the differences between the rights to data protection and privacy. Instead, it relies on the received understanding of the history of by-design regulation in the EU. Privacy-enhancing technologies (PETs) became an established technical term in the 1980s, some time before the 1995 Data Protection Directive included a provision that is seen as the ancestor of DPbD in Article 25 GDPR. Also in the 1990s, privacy and information regulators worldwide began to champion a set of design practices labelled “privacy by design”, drawing from PETs and other best practices to limit the processing of data about individuals. The success of these approaches prompted legislators to add a “privacy by design” provision to the draft GDPR, which eventually became its Article 25. Yet, there are two reasons why one should not be too quick to claim that DPbD is a direct offspring of PbD.
First, because elements of DPbD predate the PbD boom in the 1990s. If one goes back to the 1970 Hessisches Datenschutzgesetz, which is often mentioned as a landmark in European data protection, there is already a provision targeted at what we would now call computer systems. Its § 5 establishes certain conditions that databases and information systems must meet regarding the data used for their creation. It does not specify whether said conditions must be met by technical or organizational measures, but then again, neither does the GDPR, which admits both kinds of interventions in data processing.6 So, the idea that computer systems can be suitable targets for data protection law predates PbD by at least two decades.
Second, the legislative history of the GDPR suggests a deliberate effort to distinguish between DPbD and PbD. The legislator decided to rename the article that eventually became Article 25 GDPR from “privacy by design and by default” to “data protection by design and by default”. This terminological change in itself tells us little about the concepts, as it can be explained away as a result of the decision to regulate privacy as part of a separate ePrivacy Regulation.7 What separates DPbD from PbD, instead, is the legislator’s decision to demand that the former address “the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing” rather than limiting it to the data minimization aims that drive PbD. Because DPbD pursues a broader set of aims, the mere adoption of best practices for PbD is not enough to fulfill its requirements and might even lead to an inadequate response to some kinds of risks. Therefore, distinguishing between DPbD and PbD is a terminological quibble, but one that can affect how we frame the risks stemming from the processing of personal data.
Reading recommendations
Ian Bache and others, ‘Multi-Level Governance’ in Christopher Ansell and Jacob Torfing (eds), Handbook on Theories of Governance (Edward Elgar Publishing 2022).
Jenny Bulstrode, ‘Black Metallurgists and the Making of the Industrial Revolution’ (2023) 39 History and Technology 1.
Paola Cantarelli and others, ‘Information Use in Public Administration and Policy Decision-Making: A Research Synthesis’ (2023) 83 Public Administration Review 1667.
Benjamin Hale and Kenneth Shockley, ‘Risk Mismanagement: The Illusion of Control in Indeterminate Systems’ in Adriana Placani and Broadhead (eds), Risk and Responsibility in Context (Routledge 2023).
Lin Kyi and others, ‘Investigating Deceptive Design in GDPR’s Legitimate Interest’, Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems 1.
Merel Noorman and Tsjalling Swierstra, ‘Democratizing AI from a Sociotechnical Perspective’ [2023] Minds & Machines.
Lucy Suchman, ‘The Uncontroversial “Thingness” of AI’ (2023) 10 Big Data & Society 20539517231206794.
Opportunities
ALLAI is recruiting a Junior Associate in Responsible AI. They accept applications until this Friday, 24 November 2023.
CDT Europe is looking for an Advocacy & Communications Officer for their Brussels Office. Applications are open until 6 December 2023.
Microsoft Research, New York City (MSR NYC) is looking for multiple research interns to collaborate with members of the Fairness, Accountability, Transparency, and Ethics in AI (FATE) research group. This is a 12-week, hybrid work internship for people currently enrolled in PhD or JD programs.
CPDP2024 has an open call for papers on AI governance, under the overarching theme ‘To govern or to be governed, that is the question’. Submissions are accepted until 26 January 2024, and the conference will take place in Brussels from 22 to 24 May 2024.
The Venice Privacy Symposium has an open call for papers until 10 January 2024, accepting academic and industry papers ‘that present original research addressing challenges related to data protection compliance with innovative technologies. In particular, we welcome multidisciplinary work that combines legal, technical, and societal expertise.’ Accepted papers will be presented at the conference, which will take place from 10 to 14 June 2024.
Finally, the otters
I hope you enjoyed this issue! Please feel free to hit “reply” to this message (or email me if you read this on the web) in case you want to discuss any of the topics above. And consider subscribing to this newsletter—for free!—if you haven’t done so already:
1. Also, I must admit I spent more time than I should have following the OpenAI soap opera these days.
2. And, as Juliano Maranhão and I argue elsewhere, also present in Brazilian data protection law, though to a less substantial extent.
3. For example, my LLM thesis proposed that lawyers could use software life cycle models as tools for identifying regulatory possibilities and risks in AI systems, in a way broadening the pioneering work by Lehr and Ohm.
4. This is not to say that I cannot make my points better. There is always the possibility of refining my arguments and analyses, in the hope that the tenth restatement of these points will finally persuade people of the One True View about regulation by design. The reader might suspect, correctly in this case, that I am not particularly inclined to this approach to scholarship. It is a valid—and potentially fruitful—way to do research, but it is too hedgehog-like for me.
5. I am not a big fan of that one, either, as it feels like one started to talk about data protection but remembered mid-sentence that they were talking to an American audience. But many people who are better informed than I am stick to it, so I will not charge against this particular windmill right now.
6. Targeting the systems used for processing is not an unavoidable characteristic of data protection law: the original version of the Loi n° 78-17 du 6 janvier 1978 relative à l'informatique, aux fichiers et aux libertés focuses instead on data subject rights and conditions for the behaviour of what we would now call data controllers.
7. On the ePrivacy Regulation’s legislative procedure and its current status, see this informative video.