We need to talk about AI deployment
The first post in our series On AI Deployment (By Sarah H. Cen, Aspen Hopkins, Andrew Ilyas, Aleksander Madry, Isabella Struckman, and Luis Videgaray)
A few weeks ago, Aleksander testified before a House subcommittee about Advances in AI. His testimony summarized much of our thinking about AI deployment and, in particular, why the current trajectory of this deployment will likely render some existing AI policy efforts ineffective. We believe that properly understanding these issues should be a priority for decision makers in both government and the private sector. To that end, we are writing a series of blog posts, beginning with this one, that discuss the issues in AI deployment and unpack some of the key emerging risks and opportunities.
AI is a hot topic these days, with everyone from major publications to primetime news to late-night comedy talking about it. In 2023, AI has become mainstream—it’s no longer a technology reserved for technical experts and sci-fi enthusiasts.
There's a good reason that AI is receiving so much attention right now. We're in the midst of a pivotal moment, marked by the advent of generative AI systems like ChatGPT (and now GPT-4), Bard, DALL·E 2, and Midjourney. These tools are directed using simple natural language prompts, making it easy for anyone to use AI, not just engineers or researchers. Indeed, you can communicate with them much like you would with another person, asking questions like “How does Game of Thrones end?” or assigning tasks like “Design a birthday card for my friend who likes cats.” Everyone can now witness firsthand the significant progress that has been made in AI over the past decade.
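To make this concrete, here is a minimal sketch of what "directing a tool with a natural language prompt" looks like programmatically. It uses OpenAI's Python client as one example (assuming its 2023-era ChatCompletion interface); the model name and prompt are purely illustrative.

```python
# A minimal sketch of prompting an LLM programmatically. Assumes the
# `openai` Python package (2023-era interface) and a valid API key;
# the model name and prompt below are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

# The "prompt" is just natural language, phrased as you would to a person.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Design a birthday card message for my friend who likes cats.",
    }],
)

# The model's reply comes back as plain text.
print(response.choices[0].message.content)
```

The same natural-language interface that makes these tools accessible to everyone is also what makes them so easy to embed in other products, a point that will matter when we turn to deployment.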
Still, for all the intense discussion of the capabilities, advantages, and dangers of AI systems, there is much less focus on how these systems are put into action and by whom. That is, the issue of AI deployment is mostly absent from the conversation. This is likely to be a critical (and dangerous) omission.
Join us as we discuss AI deployment and its importance in this blog series.
The forking paths of AI policy
The furor about AI has not been confined to the media or water cooler conversations. It has also captured the imagination of business leaders and policy makers. ChatGPT was among the most discussed topics in the recent edition of the World Economic Forum and, on the same day a few weeks ago, there were hearings in both the House and the Senate on the subject of AI. These developments reflect a growing public policy interest in AI—an interest that, admittedly, took time to build up. Although people have taken notice of the advances in AI since the early 2010s, it wasn’t until around 2017 that policy makers started to realize that AI is a disruptive technology with significant societal implications. Since then, over 50 countries have released AI national strategies, while dozens of documents on “AI principles” have been published by academia, NGOs, and multilateral organizations. There have also been several AI-related legislative developments around the world, including in Europe, the US, and China.
This burgeoning activity in AI policy reflects a general concern that AI impacts society and that this impact must be modulated. At the same time, however, there is disagreement about what risks AI poses and what policy responses are appropriate. We highlight two axes of disagreement: (i) disagreement over AI’s capabilities and (ii) disagreement over AI’s potential impact.
AI’s capabilities
The first axis of disagreement concerns AI’s capabilities. For example, consider generative “large language models” (LLMs) such as ChatGPT. These models are very impressive, and yet there is significant disagreement as to whether they have the capability to truly reason, either now or in the future.
In fact, there is no real scientific consensus on what it means for AI to “reason” about the world, how we would know when AI is able to reason, or whether the current paradigm of AI techniques will ever achieve it. Even beyond these somewhat philosophical questions, there are diverging views on more practical matters: when can we expect self-driving cars to be fully autonomous on public roads? Will AI eventually replace radiologists, and if so, when? Can AI foster financial inclusion in developing nations? Understanding AI’s capabilities is important because those capabilities determine the degree to which AI will intervene in society and, as a consequence, inform how we should prepare ourselves.
AI’s societal impact
The second axis of disagreement centers on AI’s potential societal impact. There is plenty of debate, for example, over whether AI will empower workers by improving their productivity, or whether it will instead drive them into permanent unemployment. Some believe AI has great potential to deliver essential services to disadvantaged communities (and countries), while others focus on AI’s potential role in deepening inequities, discrimination, and social injustice. Many believe that the potential for nefarious uses of AI (including authoritarian surveillance, addictive social media, the spread of misinformation, and the deployment of lethal autonomous weapons) outweighs its benefits to society. Who controls AI systems, and whether we are on a path toward increased concentration of power or toward the democratization of AI, is also a topic of controversy.
Takeaways
Where one’s opinions lie along these two axes of AI’s potential tends to inform one’s stance on how rapid, aggressive, and far-reaching AI policy and regulation should be (we will explore some of these issues in our upcoming blog posts).
As important as these two axes are, however, the unique technical and economic characteristics of AI—characteristics that we will discuss in upcoming posts—suggest that AI’s impact on society will largely be shaped by the specifics of its deployment. And, as we will see in this series, trends in current AI deployment practices do not bode well for the future. The ultimate success of AI policy will depend crucially on changing—or, at least, mounting a proper policy response to—these trends.
As we explore these topics together, we do so with the explicit goal of combining technical and policy perspectives. While there is ample consensus that AI policy should emerge from diverse and interdisciplinary perspectives, in practice the computer science and policy arenas remain quite isolated from each other. This needs to change. We need more policy infused with science and more science infused with the reality of policy. That is why we (a group of computer scientists and an economist with a career in public policy) are embarking on this journey together. We hope you join the conversation.
In our next post, we discuss a newly emerging issue in this context: complexity in AI supply chains. We will explain why it matters and why business leaders and policy makers should be paying attention.
A big thank you to David Goldston for his invaluable feedback on this piece.