Why Can't We Regulate Social Media Like Previous Media?
In the third post of our blog series on social media, we ask: What makes social media such a unique regulatory challenge?
Like newspapers, radio stations, and TV programs, social media platforms curate and deliver information. So why can't we simply regulate social media the same way we regulate other forms of media? In this third post of our blog series, we'll unpack four characteristics that, together, make social media uniquely challenging to regulate. These characteristics suggest that while a few principles from previous media regulations do transfer to social media, we may need to look to other types of regulation to fill in the gaps (see our next post).
Social media platforms are not the only, nor the first, organizations to curate information. Traditional media—including print, radio, and TV—also do so, raising the question: What differentiates social media from other forms of media? And what stops us from regulating it in the same way we've regulated print, radio, and TV?
In many ways, social media faces the same age-old media problems (like misinformation), and social media regulations must work around the same, long-standing legal issues (like free speech protections) as any other media regulations.
But despite the similarities between social media and the forms of media that came before it, there are a few key differences that matter when it comes to regulation. The courts, for one, have recognized some of them. In Reno v. ACLU, for example, the U.S. Supreme Court ruled that policymakers would need a stronger justification to restrict content on the internet than on radio and TV, in part because the internet is "the most participatory form of mass speech yet developed," and internet restrictions analogous to those on radio and TV would place an "unacceptably heavy burden" on free speech. Other cases, like FCC v. Pacifica (which held broadcast to a lower standard of First Amendment protection) and the 2014 Google Spain ruling on the Right to be Forgotten, likewise reflect the principle that each medium may call for its own form of regulation.
This trend suggests that the courts won’t necessarily let policymakers take previous media regulations and apply them directly to social media. In this post, we'll unpack four key differences between social media and the forms of media that came before it:
Social media’s wide and deep reach into users’ lives;
Its ability to (algorithmically) personalize content to each user;
The replaceability of individual content creators on social media; and
The participatory nature of social media.
Ultimately, these differences can help lawmakers identify what they can borrow from existing media regulations as well as where they must innovate.
Ubiquity: Social Media’s Pervasive Reach
The first key feature of social media is its pervasive reach into users’ lives. For one, social media has an exceptionally wide reach. Because the content on social media is crowdsourced from users and content creators, we can find almost anything on social media—from how our friends are doing and what politicians said to who won last night’s basketball game and what skin care products are trending. Indeed, while the content on TV is more limited in scope because it’s determined by higher-ups (like TV executives), the content on social media can touch on countless facets of our lives. By delivering information on such a wide range of topics, social media can influence what we wear, what we eat, who we vote for, and even where we choose to live.
The reach of social media is not only wide, but also particularly deep. Platforms track actions that we typically don't show to others, such as what we search, read, and watch in the privacy of our own homes. They even gather data on how we scroll down our feeds and when we fast-forward a video. Platforms don't just know a lot about us—they can also show us content that's more deeply personal than the average news article. Your feed may tell you, for example, who got into the college you were gunning for, or show you a video on an obscure topic that means a lot to you.
Because social media touches on so many facets of our lives, social media regulations will need to capture a much broader range of issues (from misinformation to user privacy) than previous media regulations. But other forms of media (like books) can also have wide reach and touch on personal issues, so what else sets social media apart?
Personalization: A Different (Algorithmically-Tailored) Reality for Each User
There’s another component of social media that, in combination with its pervasive reach, makes its content particularly powerful, and that’s personalization. That is, the content a user sees is tailored to that specific user’s habits and tastes. Indeed, while a radio station might change its music based on location, or a news channel might adjust its message based on its target audience, neither can tailor its content to each individual listener or reader, let alone do it in real time.
Although personalization can improve the user experience, it shatters any notion of a shared reality between users. Since each user sees only their feed and not that of anyone else, users experience “realities” that are not only different from one another (which one might argue is already the case for viewers of Fox News vs. CNN), but also unknowable to one another. (Imagine a world where both you and your neighbor watch CNN, but you see different news, presented from different perspectives, with no way of knowing what the other person sees.) This fragmentation weakens accountability. Because the public can’t view what platforms recommend to each user, it is difficult for regulators to monitor platforms and detect any issues as they arise.
The fact that social media shows users different "versions of reality" also limits each user's ability to adjust for institutional bias. The media can never be fully objective—in fact, a diversity of perspectives is often a positive indicator of freedom of press and expression. Still, people can usually adjust to different media if they know how each deviates from the norm. For example, most people generally know that what's shown on Fox News is more right-leaning than what's shown by The New Yorker, and media-literate readers can account for these slants. Personalization, however, robs users of the ability to discern how the information they see differs from the norm.
Information Dispersion: Creators are Replaceable, Platforms are Not
Social media has made being a content creator easier than ever before and given microphones to communities that are underrepresented in traditional media. However, by lowering the barrier to entry, platforms have also diluted the importance of each individual creator. This enables content mediators, like platforms, to pull from any one of many creators delivering the same message.
This replaceability of content creators has several consequences. For one, content creators tend to focus on their own survival—that is, on being boosted by the platform in order to maintain and grow their following. So, instead of working on establishing credibility to build a support base, creators gather followers by making themselves irresistible to the platforms’ algorithms.
The replaceability of creators also makes it difficult to police social media under regulatory frameworks that go after "publishers" but not "mediators." For example, Section 230 of the U.S. Communications Decency Act (see our second post for details) says that a social media platform cannot be held responsible for the content published on its platform, drawing a clear line between publishers (i.e., content creators) and mediators (i.e., platforms). But because content creators have become effectively replaceable, removing or banning one source of harmful content does little more than open a vacuum for other content sources to fill. Indeed, just as we can't ensure food safety by litigating every farmer one by one, we can't expect to solve social media's problems by targeting each individual content creator. Rooting out "bad" publishers while leaving platforms untouched is unlikely to have the same level of impact as it has on print, radio, or TV.
Although Section 230 legally protects content mediators, the replaceability of each individual content creator has arguably changed the role of platforms. In some ways, platforms have become modern-day “publishers.” The argument goes: because platforms pick from such an enormous body of content, they can construct meaningful “sentences” using each piece of content as “words.” If platforms are deemed modern-day publishers, it may be possible to hold them responsible for the content they distribute despite laws like Section 230. There’s support for this line of reasoning, but it’s unclear whether this argument will catch on.
Power of The Masses: Social Media is Participatory
By handing the job of content generation to users and creators, platforms give up a bit of top-down control over what appears on social media. Although they have full control over the content recommendation algorithms, platforms don’t have direct control over the content itself. For example, unlike newspaper editors, platforms don’t determine what tomorrow's posts will cover.
Because social media is so participatory, issues of free speech and censorship become particularly delicate. Social media has provided an important platform for free expression (more so than print, radio, or TV because nearly everyone can publish their thoughts on social media), and the state is therefore reluctant to intervene. As a consequence, lawmakers must be careful not to design laws that unintentionally infringe on free speech (more on this in our previous post).
The wants and whims of users not only influence the production of content, but also its evolution. The virality of a post, for example, is largely driven by users. Moreover, users can “like” posts, comment on them, reshare with changes, and so on. These interactions then become part of the content. (For example, a post’s top comments and number of likes usually appear directly below it, becoming part of the post itself.) Although viewers and readers also interact with traditional media, these interactions have limited impact on the content. A newspaper’s readers, for instance, don’t drive virality in the same way users do on social media, and discussions about a movie don’t appear next to it on TV.
The fact that users drive virality and “annotate” content as it evolves further complicates social media regulation in two ways. First, it becomes difficult to pin down precisely who is responsible for each piece of content and its popularity—after all, many users might have a hand in how it evolves. Second, it makes it hard to know whether new regulations—such as required changes to a platform’s algorithm or monetization policies—would have the intended downstream effects because users may respond to the changes in unexpected ways.
In recent years, growing awareness of the power that platforms like Facebook, Twitter, and TikTok possess has led to rising calls to regulate social media. Yet despite a shared desire to regulate, it’s unclear what the best approach would be.
In some sense, a natural approach would be to regulate social media akin to the way print media, radio, and TV are regulated. But as discussed in this post, there are a few key differences between social media and the forms of media that came before it.
These differences suggest that we may need to look beyond media regulations for inspiration. We unpack this perspective in the next post, and discuss both (a) what we can draw from previous media regulation; and (b) a few compelling (though imperfect) connections between the regulation of social media and the regulation of public utilities, financial brokers, and addictive substances.
In many ways, social media is just another form of media that faces age-old problems (like misinformation). But despite the similarities between social media and the forms of media that came before it, there are a few key differences that matter when it comes to regulation.
To understand when and why policymakers may need new strategies for social media, we highlight four key characteristics of social media that, together, make it uniquely challenging to regulate.
These four characteristics of social media are: (i) its wide and deep reach into our lives; (ii) that social media can personalize content to each viewer; (iii) the replaceability of content creators; and (iv) the participatory nature of social media.
On their own, none of these four characteristics is unique to social media. In combination, however, they make it difficult to regulate one aspect of social media without damaging another, to pinpoint responsibility for problems that arise, to hold platforms accountable, and more.
Continue reading our blog series for more!