What’s this blog about?
This blog explores the implications of AI and technology for society and policymaking. We unpack a range of topics, including social media, AI supply chains, AI accountability, and more. Our goal is to provide insights that meaningfully contribute to ongoing conversations about AI and to reach any interested reader, regardless of background. Subscribe to stay up to date!
What have we posted?
So far, we have four separate series, described below.
Series #1: Social Media
Social media is one of the primary sources of information for people worldwide. In this series, we pose and answer lingering questions about social media, including:
Should social media be regulated? (Read here)
What are the current approaches and roadblocks to regulating social media? (Read here)
Why can’t social media be regulated like radio or TV? (Read here)
What can we learn from how the U.S. regulated newspapers, financial brokers, tobacco companies, and public utilities? (Read here)
Series #2: On AI Deployment
As AI moves from computer science labs into the real world, what challenges will we face? In this series, we address various topics and concerns, such as:
Recent advances in AI have garnered a lot of attention, and we should carefully consider how AI is deployed into society. (Read our first post.)
Due to the advent of large generative AI models, such as GPT-4, the AI systems we interact with are built from many AI components glued together, creating AI supply chains. (Read our second post.)
A few companies (such as OpenAI and Google) dominate the upstream AI market. Whether this market tends toward competition or concentration depends on several important factors. (Read our third post.)
AI supply chains already exist. We mapped out several of the complex supply chains operating today. (Read about our dataset.)
Series #3: Public Testimonies
One of our authors, Prof. Aleksander Madry, has testified before both the U.S. House and Senate. We’ve posted (lightly edited versions of) his public testimonies as well as follow-up Q&A. Read more here.
Series #4: AI Accountability & Transparency
AI is a coin with two sides: while it creates significant value, it can also cause significant harm. Accountability is a necessary ingredient for building trust between AI and the society it affects, and it is only possible with some degree of transparency. That is, to hold AI developers accountable, we need some insight into how their AI systems work. In this series, computer scientists and lawyers come together to discuss topics in AI Accountability & Transparency:
Introduction to AI Accountability & Transparency Series (read here)
Making Sense of AI Risk Scores (read here)
Hold AI Designers Accountable: How Much Access is Needed to Audit an AI System? (coming soon)
Ensuring an Adequate Mechanism of Relief for AI-Induced Harms (coming soon)
A Public Health Approach to AI Regulations: Lessons from the Pharmaceutical Industry (coming soon)
Who are we?
We’re a group of students and professors from MIT and Harvard.