What’s this blog about?

This blog explores the implications of AI and technology for society and policymaking. We unpack topics including social media, AI supply chains, AI accountability, and more. Our goal is to provide insights that meaningfully contribute to ongoing conversations on AI and to reach any interested reader, regardless of background. Subscribe to stay up to date!

What have we posted?

So far, we have four series, described below.

Series #1: Social Media

Social media is one of the primary sources of information for people worldwide. In this series, we pose and answer lingering questions about social media, including:

  1. Should social media be regulated? (Read here)

  2. What are the current approaches and roadblocks to regulating social media? (Read here)

  3. Why can’t social media be regulated like radio or TV? (Read here)

  4. What can we learn from how the U.S. regulated newspapers, financial brokers, tobacco companies, and public utilities? (Read here)

Read about the series

Series #2: On AI Deployment

As AI moves from computer science labs into the real world, what challenges will we face? In this series, we address various topics and concerns, such as:

  1. Recent advances in AI have garnered a lot of attention, and we should carefully consider how AI is deployed into society. (Read our first post.)

  2. Due to the advent of large generative AI models, such as GPT-4, the AI systems we interact with are built from many AI components glued together, creating AI supply chains. (Read our second post.)

  3. A few companies (such as OpenAI and Google) dominate the upstream AI market. Whether or not there is competition or concentration depends on several important factors. (Read our third post.)

  4. AI supply chains already exist. We mapped out several of these complex, real-world supply chains. (Read about our dataset.)

Read about the series

Series #3: Public Testimonies

One of our authors, Prof. Aleksander Madry, has testified before both the U.S. House and Senate. We’ve posted (lightly edited versions of) his public testimonies as well as follow-up Q&A. Read more here.

Read the testimonies

Series #4: AI Accountability & Transparency

AI is a two-sided coin: while it creates significant value, it can also cause significant harm. Accountability is a necessary ingredient for building trust between AI and the society it affects, and it is only possible with some degree of transparency. That is, to hold AI developers accountable, we need some insight into how their AI systems work. In this series, computer scientists and lawyers come together to discuss topics in AI Accountability & Transparency:

  1. Introduction to AI Accountability & Transparency Series (read here)

  2. Making Sense of AI Risk Scores (read here)

  3. Hold AI Designers Accountable: How Much Access Is Needed to Audit an AI System? (coming soon)

  4. Ensuring an Adequate Mechanism of Relief for AI-Induced Harms (coming soon)

  5. A Public Health Approach to AI Regulations: Lessons from the Pharmaceutical Industry (coming soon)

Read the series

Who are we?

We’re a group of students and professors from MIT and Harvard.

Subscribe to Thoughts on AI Policy

A newsletter from MIT & Harvard researchers on regulating AI technologies.

People

MIT faculty, working on making machine learning better understood and more reliable.
PhD student at MIT studying machine learning robustness.
CS PhD student at MIT CSAIL, advised by Aleksander Madry, working on ML, HCI, and policy.
J.D. candidate at Harvard Law School.
Ph.D. student in Computer Science at MIT, working at the intersection of machine learning theory and AI ethics.
Senior Lecturer at MIT Sloan; Director of the MIT AI Policy for the World Project and co-lead of the MIT AI Policy Forum.
MEng student at MIT CSAIL in the Mądry Lab.
Assistant Professor at the Tuck School of Business at Dartmouth College.