An eventful Monday (AI, Law, and Otter Things #15)
In this issue, I ramble about processors, the AI Act, and the Green Pass. After my rants, the post finishes with an invitation for a cool event on digital constitutionalism.
Monday was quite an eventful day in my personal bubble, with three events that warrant some mention in this newsletter. In reverse chronological order, the first thing I would like to highlight is Apple's recent announcement of the new M1 Pro and M1 Max processors, which claim to deliver impressive processing power while consuming less energy than comparable Intel products. This is a game-changer, especially for those of us who currently rely on discrete graphics cards, as the M1 Max's integrated GPU is claimed to achieve performance similar to discrete NVIDIA GPUs while consuming up to 70% less energy.
Readers who know me well might find it strange that I am commenting on an Apple event. In fact, I have always avoided the Mac and iOS ecosystems, and Apple's stance in the client-side scanning debate ran completely counter to the commitment to privacy that I used to admire in the company. But I admit I am positively surprised by what Apple has achieved in the microprocessor department, especially if the company's promises regarding carbon neutrality hold up in the long run. Even more so if/when Apple starts offering processors for servers: given how much energy data centres currently demand, the potential reduction in energy consumption could provide a significant boost for sustainable computing.
With any luck, decent competition will appear shortly, and I won't be forced to acquire an Apple computer to reap the benefits of this breakthrough. But, at the moment, Apple seems to be quite ahead of the competition when it comes to personal-use microprocessors.
Technical issues and the AI Act
This Monday also saw the final session of the seminar on AI and Practice, which I organised with my colleagues Francesca Palmiotto, Natalia Menendez, and Sarah Tas. The seminar was a three-session effort to analyse what is unique about AI from a regulatory perspective: what kinds of legal issues stem from AI systems that are ill-covered by existing law? To what extent are the technical properties of AI systems relevant for good regulation? We discussed these and other questions in the context of the recent European Commission proposal for an Artificial Intelligence regulation, and the debates have been quite fruitful.
We are working on a collective blog post that presents the key findings of our discussion, and I will share the link here once it is published. I should also write a bit about the AI Act itself in future newsletters. For now, I will restrict myself to two impressions that came up during the preparation and the debates. The first one is that the proposal is a lot "closer to the metal" than the GDPR or the Digital {Services, Markets} Act: it explicitly refers to concepts such as accuracy and software lifecycles, and it establishes many design obligations: see, for example, the transparency requirements in Article 13 or the need to design mechanisms for human oversight under Article 14. But, while I am generally sympathetic to regulation by design, we must be careful not to expect too much from it: not only because avoiding the reproduction of systemic injustice through design requires careful work, but also because of the difficulties, discussed previously in this newsletter, of forecasting technological risks.
A second, more fundamental point that struck me is that describing the AI Act as "a European approach to Artificial Intelligence" implies an overarching vision that does not really come through in the proposal's current form. For all the detailed discussion of requirements that apply to high-risk systems and the Byzantine governance models proposed, the current proposal does not offer clear criteria for understanding why specific applications are prohibited or deemed to pose a high risk (or not). This is troublesome not only in light of the debates around specific classifications — such as the relative absence of attention to the rule of law or the current permissiveness towards biometric surveillance and emotion recognition — but also because there are few mechanisms for updating the lists of high-risk systems. I do not mean, of course, that legislation on AI should wait for a broad consensus on the risks that might come from intelligent systems. But the combination of the absence of clear criteria for risk and the relatively rigid procedures for updating the existing lists of prohibited and high-risk systems itself brings a risk: that we end up with piecemeal legislation that cannot respond in time to the appearance of new, pressing social challenges from the use of AI technologies.
At last, the Green Pass
Finally — or in the first place, if we stick to chronological order —, Monday was also the day on which I finally got my Green Pass. It took me only 134 days since my second vaccination and a lot of begging in bureaucratic Italian, but over the last two weeks or so I finally got some helpful responses from the local health system and the Agenzia delle Entrate.
From a technical perspective, it appears that my problem came from an unnecessary validation that somebody inserted into the system. Whenever a vaccine operator in Tuscany needs to enter somebody's vaccination data, they must supply sufficient identifying information. But somebody thought it was a good idea to check whether the Fiscal Code entered is associated with a Tuscan address. As I had obtained my Fiscal Code back in Brazil, it was not linked to a Tuscan address, so the system registered my vaccination data correctly but left the Fiscal Code field empty. And this became a problem, given that the Fiscal Code is the only key that foreigners without a registration in the local health care system can use to access their vaccine data and generate their Green Pass.
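To make the failure mode concrete, here is a minimal sketch of the kind of validation described above. All names, data, and the check itself are hypothetical illustrations of the behaviour I experienced, not the regional system's actual code: a record is saved, but the Fiscal Code is silently dropped when it fails an extra regional-address check, which later breaks the only lookup key available to unregistered people.

```python
# Hypothetical reconstruction of the overly strict validation: the record is
# stored either way, but the Fiscal Code is silently discarded unless it is
# linked to a Tuscan address.

# Illustrative stand-in for the regional address registry (not real data).
TUSCAN_REGISTRY = {"RSSMRA85M01D612X"}

def register_vaccination(record: dict, fiscal_code: str) -> dict:
    """Store a vaccination record; keep the Fiscal Code only if the
    unnecessary regional check passes. No error is raised otherwise."""
    if fiscal_code in TUSCAN_REGISTRY:
        record["fiscal_code"] = fiscal_code
    else:
        record["fiscal_code"] = None  # silently dropped, nobody is warned
    return record

def lookup_records(records: list, fiscal_code: str) -> list:
    """The Fiscal Code is the only key available to people who are not
    registered in the regional health system."""
    return [r for r in records if r.get("fiscal_code") == fiscal_code]
```

A code issued abroad fails the check, so the later lookup by that same code finds nothing, even though the vaccination itself was recorded correctly: the bug only surfaces far downstream, which is part of what made it so hard to diagnose.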
Until then, I had had no problems with the famed Italian bureaucracy, and once some helpful soul finally suggested that this might be the problem, it was actually solved pretty quickly. But, considering my work on software and the law, I cannot help but think about this issue in terms of how apparently small programming decisions can have a practical impact on our lives and be incredibly difficult to diagnose and address. And I am well aware that I am very privileged in this respect — legally registered in the country, working for an international organisation... —, so I can only imagine how things must be for those who are ill-served, ignored or downright targeted by current technical infrastructures. Still, since this is my personal newsletter, I will allow myself to complain a bit about my personal situation for the time being.
And now, for something completely different...
...I am delighted to invite you to participate in the forthcoming event What Governance for Online Platforms? Towards Digital Constitutionalism. This event is organised by Giovanni de Gregorio and Francisco de Abreu Duarte, and it provides a great space for discussing the promises and challenges of constitutionalism in a digital age.
The event will be held in hybrid form on 26 November, hosted at the EUI. You may join us for the sessions, and you are also welcome to contribute actively by emailing the organisers with your CV and a proposed blog post that deals with some aspect of the question What is digital constitutionalism? Blog submissions are accepted until 1 November, and the accepted posts will be published in the forthcoming blog, The Digital Constitutionalist.
Well, this is it for today! Please feel free to reply to this post with any criticism, thoughts or feedback, and thank you for your time.