As the EU turns to the regulation of artificial intelligence through transparency, a critical perspective is needed more than ever, Ida Koivisto argues.
The sudden rise of algorithmic transparency
In the autumn of 2018, a colleague and friend, Riikka Koulu (associate professor of legal and social implications of AI at the University of Helsinki), suggested we join forces on a research project. I had been delving critically into the ideal of transparency for quite some time beforehand, whereas Riikka had been doing research on law and digitalization. Quite naturally, our topic became algorithmic transparency and its promise and perils. Our application was successful, and the project was generously funded by the Academy of Finland (The Promise and Boundaries of Algorithmic Transparency; AlgoT).
I must confess that, before this, I had not thought much about what ‘algorithmic transparency’ could possibly mean, or whether it was important enough to merit an entire research project. To me, transparency was a concept of law and democratic theory, one that promises immediate visibility for the external monitoring of power. During the last couple of decades, however, it had become a nebulous governance catchword whose connection to technology was far from obvious (a topic explored most thoroughly in my recent book “The Transparency Paradox – Questioning an Ideal”). Discussing the connection between transparency and digitalization with Riikka made it clear to us both that connecting critical transparency studies to the debates on algorithmic governance was necessary.
Little did we know how in vogue we were. During these first three years of the project, academic interest in digitalization and transparency has grown significantly. Since the project’s inception, we have gone from a shortage of legal and policy material and academic literature to support our arguments to an overabundance of it (see here).
Much of the literature focuses on AI ethics and on outlining the core principles of the ever-digitalizing society. Automated decision-making has so far been the most important context for the requirement of algorithmic transparency. Transparency is called for because it is assumed to solve the so-called ‘black box problem’ (uncertainty about how inputs translate into outputs in algorithmic systems) and, by so doing, to legitimize automated decision-making and other uses of automated tools (see chapter 4 of this book). That said, transparency seems to be becoming an even more fundamental principle of the digitalizing society, as I will shortly argue.
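The ‘black box problem’ mentioned above can be made concrete with a small sketch. The following Python snippet is purely illustrative: the data are synthetic, the ‘claim decision’ framing is hypothetical, and a random forest stands in for whatever model a real system might use. It shows how a system can be fully inspectable at the level of code yet offer no human-readable rationale for any individual decision.

```python
# A minimal, purely illustrative sketch of the 'black box problem'.
# Synthetic data; the 'claim decision' framing is hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 1,000 synthetic 'applicants', each described by 20 opaque features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# An ensemble of 100 decision trees: every line of code is open to
# inspection, yet any single decision is an aggregate of many branching
# paths that no single readable rule summarizes.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The input-to-output mapping is fully reproducible and observable...
print(model.predict(X[:1]))  # e.g. [0], read here as 'claim rejected'

# ...but the model offers no account of *why*, which is precisely the
# gap that calls for algorithmic transparency are meant to close.
```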
The EU’s digital strategy – in transparency we trust
The literature, of course, mirrors current legal developments. Recently, the emphasis of academic debate has shifted from AI ethics and self-regulation towards hard law, as algorithmic transparency gains legal significance. The EU’s General Data Protection Regulation (GDPR) has been a trendsetter in this respect: in it, transparency is called for as a protective measure in automated decision-making, as a characteristic of appropriate language, in information-furnishing obligations, and as an overarching principle.
However, data protection has proved an important but insufficient way of regulating digitalization, so-called surveillance capitalism, and the use of AI. The EU is in the process of preparing a whole array of legislation to bring about a functioning digital and data-driven market. Transparency seems to be a normatively attractive concept in this process. For example, the proposed Artificial Intelligence Act and the new Digital Services Act both rely heavily on (algorithmic) transparency as a legitimating strategy. Following the entry into force of the latter, the Commission is even setting up a new European Centre for Algorithmic Transparency (ECAT).
It is important to notice, nonetheless, that the proliferation of transparency norms covertly reflects the very problem it attempts to solve. If transparency is to help us understand how power operates, the increasing call for transparency signals decreasing understanding of the operations of the digital society. Indeed, as technology becomes more sophisticated and machine learning algorithms are increasingly adopted, the average individual understands less and less of what is going on in the digital chambers of power, be they public or private. In the wake of this, opacity and secrecy, and increasingly inexplicability (the difficulty of deciphering the operations of an algorithm), have gained a bad name.
The core of legally defined transparency has traditionally been the right to access documents. This meaning is present in digital transparency as well: although documents as paper objects no longer hold privileged importance in the digital society, questions of access linger. Can we access the inner workings of black box algorithms? Can we access our own personal data and its uses? How can we assess the trustworthiness of AI systems? Accessibility and proactive information-furnishing obligations are central aspects of transparency in the proposed legislation, too.
However, power circulates differently in society due to the digitalization of our information and its processing. As a result, access is not enough. Transparency, understood as immediate access, cannot promise truth as it once did. In fact, a new understanding of transparency is emerging in the digital society: transparency is increasingly turning into understandability. In a way, it is easy to see why: even if information is available, it is often highly specialized, and most of us lack the expertise to assess its validity and reliability. Therefore, we need intermediaries to explain what the specialized technological information means.
Paradoxes of (digital) transparency
It is important to stress that although the EU has adopted transparency as one of the leading principles of its Digital Strategy, it is part of a bigger phenomenon.
In my previously mentioned book, “The Transparency Paradox”, I argue that transparency’s growth in popularity was made possible by two interconnected trends: the idea that transparency is inherently good, and the fact that the actual meaning of the term has become harder and harder to pin down. The book provides an account of the hidden logic of the ideal of transparency and its legal manifestations, showing how transparency is a covertly conflicted ideal. Its main argument is that, counter to popular understanding, truth and legitimacy cannot but form a problematic trade-off in transparency practices.
This conflicted nature of transparency has not hindered its popularity. In fact, in my book I argue that transparency is a paradoxical concept for several reasons. One of the paradoxes, which very much touches the topic of algorithmic transparency, concerns its promise of immediacy and understanding. Transparency specifically privileges immediate seeing over intermediaries and explanations, which are inevitably more mediated. However, if transparency is interpreted as understandability, problems may arise. For example, the academic debate on whether the GDPR grants a right to explanation reflects this form of transparency. If transparency becomes a synonym for explanation, it inevitably loses some of its legitimating power. The core promise of transparency, “do not believe what I say, see for yourself”, would thus be turned upside down into: “do not believe what you see, let me explain instead”.
This interpretation makes transparency a justification of the power exercised rather than an avenue for democratic monitoring. It justifies the functioning, rather than the existence, of the power in question. In other words, it becomes macro-legitimation through micro-critique. At the same time, even if transparency builds on the idea of immediate observability and firsthand knowledge, it is, paradoxically, necessarily a matter of mediation and technology, indeed of mediating technology. Therefore, I am not surprised that demands for explainable AI (XAI) have entered the discourse.
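To give a concrete sense of what such mediated ‘explanation’ looks like in practice, here is a minimal, self-contained sketch. It reuses the synthetic setup from the earlier snippet; feature importances are just one common post-hoc XAI-style technique, chosen for illustration and not as a claim about any particular regulated system. Note what it delivers: not the decision process itself, but a derived summary produced about the model after the fact.

```python
# A minimal sketch of post-hoc 'explanation' as mediation. Synthetic data;
# feature importances are one common XAI-style summary, used here purely
# for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The 'explanation' is a ranking computed *about* the model after the
# fact: the observer is not shown the decision, but told what mattered.
top = np.argsort(model.feature_importances_)[::-1][:5]
for i in top:
    print(f"feature {i}: importance {model.feature_importances_[i]:.3f}")
```

The output is an intermediary’s account, one step removed from the thing observed: exactly the shift from “see for yourself” to “let me explain instead” described above.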
A critical approach is necessary
As the EU’s Digital Strategy proceeds through the EU legislative machinery, it is more urgent than ever to ponder the soundness of its core principles, their meaning, and perhaps their unintended consequences. Even though transparency has gained an almost mythical status in modern society, its inherent tensions have not disappeared but have taken on different manifestations. This cannot but have profound consequences for both transparency theory and, more importantly, the way in which we are governed as denizens of the digitalized society.
In our current hyper-mediated condition, immediacy is harder and harder to attain. This is what the growing number of transparency regulations, paradoxically, suggests. Therefore, it is necessary to look at transparency as a wider socio-cultural condition, in which transparency is considered an unalloyed good. Although we worry about transparency in analog and digital contexts in similar ways, the latter poses technological, regulatory, and ethical challenges that may not always find a perfect analogy in the analog environment. For this reason, transparency needs to be approached critically as a principle of the digital society.
Although the AlgoT project is coming to an end soon, the transparency-law-digitalization triangle keeps producing important research topics. It seems that we will stay busy in the Legal Tech Lab throughout what the Commission calls the Digital Decade.
Ida Koivisto is an associate professor of public law at the University of Helsinki, where she is a member of the Erik Castrén Institute and the Legal Tech Lab. Her book The Transparency Paradox was published by Oxford University Press in 2022.
This article previously appeared on The Digital Constitutionalist as part of a symposium on AI transparency.