Welcome to another issue of AI, Law, and Otter Things! Today’s issue is somewhat monographical, as I write down some thoughts on regulation by design in AI law. As such, I will skip the usual recommendations section (though I try to include various links throughout the text). But rest assured, there will be cute animals at the end.
Broadly speaking, “regulation by design” refers to regulatory strategies that use the law to specify technical requirements that must be observed in a computer system. Data protection by design,1 for example, requires that data controllers design and use their systems in ways that are compatible with data protection principles, such as data minimization2 or the possibility of contesting automated decisions.3 In doing so, they effectively embed legal norms—or, at least, a particular interpretation of legal text—into the software they govern.
Regulation by design provides a bridge between law and technology. More than that: by hardcoding norms into software code, it promises to eliminate, or at least reduce, the room for non-compliance. This promise can be quite tempting; in fact, my very first piece of legal scholarship offers a proposal to embed the contestation of automated decisions into the design of decision systems. And, in some cases, that promise can indeed be delivered on. But one should not expect too much from design as a modality of regulation, given its considerable shortcomings and side effects.
We might distinguish, here, between circumstantial and essential limits to by-design approaches. I do not intend to exhaust either category, but a few examples should suffice to illustrate them. When it comes to circumstantial limits, one might think of who gets to design. Some public entities (especially in rich countries) and large corporations might have the resources and know-how to exercise control over the design of the systems they use. By contrast, smaller actors often rely on software-as-a-service solutions, using technical products developed by other providers. Many users of technologies regulated by design may, in practice, have little ability to implement whatever the law requires. Their compliance with the law therefore becomes contingent on the behaviour of upstream providers.4
Even those actors who exercise control over design can face challenges in implementing the relevant techno-legal requirements. For example, some provisions that mandate design requirements ask designers to address various principles at once. These principles are not incompatible in the abstract, but they can create friction in particular cases. It has been argued, for instance, that restrictions on the use of sensitive personal data can be an obstacle to measures aimed at detecting and addressing discriminatory algorithms. If a system is particularly prone to discriminatory outcomes, or if it operates in a high-stakes context, designers might be inclined to make more use of sensitive data to compensate for this risk. Any design decision will, in practice, be a decision about how to balance these legal interests in the context of a particular system. And a wrong choice can have serious consequences, not just in financial terms but also in the time needed to undo hardcoded decisions later found to be unlawful or otherwise harmful.
In principle, the abovementioned issues can be solved with enough resources and expertise. But other issues are harder to dispel, if they can be eliminated at all. Some time ago, Koops and Leenes (as they often do) anticipated this issue by pointing out that very few legal provisions are formulated as rules that can be directly transformed into code. In most cases, the implementation of law through code will be a translation of open-ended legal text into software instructions, which neither require nor allow for interpretation. And, in this translation process, one might be unable to capture important features of the law.
One such feature is adaptability. By definition, regulation by design seeks to fix the effects of certain rules in place: once a requirement is implemented in code, the resulting system will produce the same outputs under the same circumstances.5 The law, by contrast, is much more malleable: its content is identified through interpretation and can therefore change over time even if the text remains the same. This means that regulation by design ossifies rules and might, over time, lead to a divergence between software and the law, even if the two were initially aligned. Any approach to regulation by design must thus be aware of what can be expressed as code and how that expression differs from the direct application of the law. Otherwise, we might end up with rules that produce effects in the world but fail to achieve the results they were meant to.
Regulation by design, therefore, seems ideally suited for situations in which there are legal rules that can be easily cast in an “if–then” format: “if the user has acquired the rights to this content item, then they can access it”, “if the system lacks a legal basis for this data point, then it cannot process it”, and so on. But design measures are expected to do much more than that, especially within European Union law. They have been described as key to compliance with the GDPR, and instruments such as the Digital Markets Act and the infamous ChatControl proposal include provisions that mandate specific design requirements for ICT systems. In fact, EU legislators and administrators have shown a preference for hardcoding measures whenever possible.6 In the AI Act, this posture is reflected in the design measures stipulated for high-risk AI systems under Articles 10–15, which mandate the hardcoding of several requirements on matters such as logging, data quality, and human oversight.7
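To make that “if–then” intuition concrete, here is a minimal sketch of what the first example above looks like once hardcoded. Everything here (the `User` and `ContentItem` classes, the `can_access` function, the license identifiers) is invented for illustration, not drawn from any real system; the point is only that the rule, once written this way, admits no interpretation at runtime.

```python
# Hypothetical sketch: a legal rule hardcoded as an "if-then" access check.
# All names (User, ContentItem, can_access) are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class User:
    # Content IDs the user has acquired rights to.
    licenses: set[str] = field(default_factory=set)


@dataclass
class ContentItem:
    content_id: str


def can_access(user: User, item: ContentItem) -> bool:
    # "If the user has acquired the rights to this content item,
    # then they can access it" - and, implicitly, otherwise not.
    return item.content_id in user.licenses


alice = User(licenses={"track-42"})
can_access(alice, ContentItem("track-42"))  # True
can_access(alice, ContentItem("track-99"))  # False: no room for a judgment call
```

Note how the hard cases disappear from view: fair-use-style exceptions, expired-but-grandfathered rights, or a later judicial reinterpretation of what “acquired the rights” means would all require rewriting the code, which is precisely the ossification problem discussed above.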
In the opposite direction, Brazil makes limited use of by-design provisions in AI regulation. Unlike the local data protection law, which broadly follows the GDPR regarding design requirements, the Brazilian AI bill only requires technical measures for a narrow problem: the adoption of explainability measures for high-risk AI. This requirement goes beyond the transparency requirement of Article 13 AI Act, which only requires that the outputs of a high-risk AI system be intelligible to its users. Sidestepping, for now, the question of the value of these XAI techniques, it seems unlikely that no other AI-related issues can benefit from technical solutions.
I am forced to wrap up this discussion with two old clichés. First, I believe I am not being radical if I suggest that maybe the best approach is neither over-dependence on design nor throwing the baby out with the bathwater. I will not—at least not here—propose an alternative position. Instead, I will leave you with the second cliché: we need to look more closely at the political and legal impacts of embedding particular interpretations of the law into software that might remain operational for years or even decades. So, I will return to this topic in my scholarly work and in future issues of this newsletter. Stay tuned, feel free to share your thoughts, and don’t forget to try the veal.
And now, the otters:
See, e.g., Article 25 GDPR or Article 46 of the Brazilian LGPD.
See, e.g., Article 5(1)(c) GDPR.
See, e.g., the measures required by Article 22(3) GDPR.
Users of software-as-a-service retain, of course, the possibility of not using any given service. But exercising this option may be difficult in practice, especially in situations in which a few providers control the relevant market.
Barring issues such as bugs and changes in the operating environment.
Or EDPB Recommendations 2/2020, which considers that only technical measures can provide enough safety for (some) cross-border data transfers in the absence of an adequacy decision.
Unlike the GDPR, which speaks of “technical and organizational measures”, provisions such as those of Article 14 AI Act are restricted to technical measures, and references to organizational measures were contested throughout the legislative procedure.
Hi Marco, you raise some interesting points. I think one thing to consider is that design can encompass more than merely what is implemented in code; it may also include things like policy. This could go some way towards solving the issue of technical systems becoming out of step with (interpreted) law, if we ensure (by design) that code gets regularly reviewed for such things.
I agree with you on this, Kars. Organizational policies can be very useful, both as an alternative way of delegating desirable outcomes (you don't hardcode something in code, but you mandate the creation of procedures surrounding the use of the system) and for setting meta-policies such as review. So the reduction of design to technical measures, which I take as a starting point for the discussion, feels like something of a missed opportunity for regulation.