Introduction to AI Accountability & Transparency Series
Join us for conversations on AI accountability and transparency (By James Siderius, Sarah H. Cen, Cosimo L. Fabrizio, Aleksander Madry, and Martha Minow)
Every technology brings unforeseen, sometimes harmful, consequences. AI is no different. It’s a transformative technology that has already shown it can both help and hurt us. When AI causes harm, we may want to ask who bears responsibility, how we can right the wrong, and what measures can prevent similar harms in the future. Motivated by this observation, this series explores topics in AI accountability and transparency, combining the perspectives of computer scientists and legal scholars.
In the last few decades, we have seen groundbreaking developments in artificial intelligence (AI). In the late 2000s, we saw the deployment of online streaming services with AI-driven recommender systems that leveraged personal data to provide tailored experiences to users. In the mid-2010s, we saw the rise of social media platforms like Facebook, Twitter, and Instagram, where AI was used to selectively filter news and entertainment based on fine-grained user data. And most recently, we have seen the rapid adoption of generative AI through chatbots (like ChatGPT) that produce remarkably cogent text and engage in human-like conversations. As AI plays an increasingly integral role in our day-to-day lives, how might we think about its impact on society? What are the unintended side effects of AI, and when should the government step in to regulate AI?
In this sequence of blog posts, we provide the combined perspectives of computer scientists and lawyers with the aim of understanding how AI could be regulated. We focus on AI accountability and transparency. Accountability is a notion of responsibility: to reap the rewards as well as accept the repercussions that follow from one’s actions. Transparency is the act of being open, often via clear and honest communication. We study accountability and transparency because these two concepts go hand in hand: in order to hold AI developers accountable, we must have some understanding of how their AI systems work. Transparency can also prevent over-regulation—it allows the government to monitor AI systems, intervening only when it becomes prudent or necessary.
We begin our exploration of AI accountability and transparency with four chapters, described below.
Chapter 1: Making sense of AI outputs and the importance of specifications
AI is increasingly used to make critical decisions in businesses, courtrooms, and even our daily lives. Much of modern AI is highly complex: many people know how to apply these tools but have only an imperfect understanding of how they work. As such, users often treat AI systems as black boxes, interpreting their outputs (e.g., cancer risk scores) in ways that are inconsistent with how the outputs are actually generated. Even computer scientists often disagree on what an AI output conveys. In our first post, we explore the implications of this ambiguity, finding that it can lead to the (unintentional) misuse of AI predictions in consequential decision-making contexts, such as clinical diagnosis, lending, and university admissions. This observation naturally leads to questions of liability. That is, if an AI-assisted decision is “wrong” or “biased,” who bears responsibility: the developer or the user? Although this question is highly nuanced, we discuss how requiring AI specifications (i.e., the minimal description of an AI system needed to keep the end user informed) can help delineate the responsibilities of the AI developer and the AI user.
Chapter 2: What level of access is needed to audit AI?
Even if we succeed in disambiguating AI outputs (as discussed in our first post), AI developers and users will inevitably make mistakes. Auditing, the systematic review of an AI system, is a way of checking whether that system complies with the law or follows industry standards (in some sense, akin to a car inspection for AI systems). The problem is that AI developers rarely grant auditors unfettered access to their technology, and requiring full transparency would undermine the intellectual property of companies actively innovating in AI. So, what level of access is needed to hold AI developers and users accountable? Or even to learn from undesirable outcomes when they occur? Some have proposed that companies should open-source their untrained models (as Twitter did in 2023), while others argue that AI developers should disclose their training data. Still others seek disclosure of choices made during design and development (e.g., a company’s internal auditing processes). There are, of course, pros and cons to each approach. In this post, we review the benefits and drawbacks of auditing AI at four different levels of access.
Chapter 3: “Res ipsa” and the measurable impact of AI
Another approach to improving the oversight of AI systems is to apply the principle of res ipsa loquitur (“the thing speaks for itself”) to businesses (e.g., online platforms) that use AI in ways that might cause societal harm. In the case of social media, for instance, the algorithmic amplification of toxic content that causes real-world harm might be grounds for a charge of negligence on the basis that widespread harm occurred; that is, “the harm speaks for itself” (res ipsa). The release of internal documents revealing that certain content on Facebook and Instagram has a measurably negative impact on the mental health of teens is one example of such evidence. In our third post, we discuss the applicability of res ipsa in AI contexts.
(Coming soon)
Chapter 4: Public health standards as a regulatory model
Finally, we turn to public health regulation and draw parallels to how we might regulate AI technologies. Though far from perfect, public health regulation helps to mitigate harms that can arise in healthcare. For instance, before a drug can enter clinical trials, there is a self-auditing process (the Investigational New Drug, or IND, application) that requires a thorough self-study of the potential harms that may arise during the trial. Using public health as a case study, we argue that government regulation is similarly needed to mitigate the potential for AI to introduce harms or exacerbate existing ones. We then comment on the importance of trust in AI from a public health standpoint and discuss the value of self-auditing processes in promoting public trust. To conclude, we outline the key differences between innovation in healthcare and innovation in AI and explain why public health regulation may not map perfectly onto the regulation of AI.
(Coming soon)
These four posts are intended as a jumping-off point for further conversations on AI accountability and transparency. We hope they will spark curiosity and dialogue about the impact of AI as well as when regulation is (and is not) needed.