What’s There to Know About Regulating Social Media in the US?
In the second post of our blog series on social media, we run through the main legal roadblocks and the leading proposals for regulating social media.
The rising influence of social media has led many to demand change. And yet, regulators disagree on how to regulate—and sometimes even whether to regulate—social media. In this second post of our blog series, we’ll discuss why designing social media regulations is difficult and describe the legal boundaries that lawmakers must work within. We’ll then survey some of the regulatory proposals on the table, which we divide into four categories: regulating the content, regulating the algorithm, regulating the platform, and regulating access.
Social media platforms wield an enormous amount of power. By controlling the flow of information in an ever-growing sphere of users, content creators, and advertisers, they’ve become a decisive influence in our online ecosystem.
How social media platforms operate has gone largely unchecked, but there have been rising calls to regulate them. Ever since we learned that Facebook allowed third parties to access personal data without our consent, lawmakers have been pushing to increase privacy protections. Studies showing that social media hurts the mental and physical health of its users—especially young ones—have parents demanding action. And the rise of misinformation and hate speech on social media has many worried that, without regulation, the future of democracy is bleak.
At the same time, there’s been pushback against regulation. Some are concerned that regulation will stifle innovation and make social media, well, worse. Others believe that there’s no way to curb misinformation, calls to violence, and the like without restricting free speech (a right we hold dear). Even among those who agree that regulation is needed, there’s no real consensus on what to do, how aggressive to be, or even what the exact problems are.
The ongoing debate on whether and how to regulate social media is complex, but we’ll try to make sense of it in this post. To keep our discussion focused, we’ll mostly stick to talking about the US.
We’ll start by describing some of the legal roadblocks to crafting social media regulation: Section 230, the First Amendment, and trade secret protections. We’ll then run through existing regulatory proposals, which we categorize into four prevailing approaches based on whether they target (i) content, (ii) algorithms, (iii) platforms, or (iv) access.
Regulators Aren’t All-Powerful
So, why can’t we just restrict hate speech or remove misinformation from Twitter? How come a parent can’t sue TikTok if their child dies after trying a TikTok challenge? What makes it so difficult for you to learn what Facebook knows about you and why it showed you that ad?
Answering these questions isn’t straightforward. One of the main reasons is simply that there are limits to what lawmakers can do. Efforts to hold platforms accountable must respect existing laws and protections. Plus, because the social media ecosystem is crowded with stakeholders, it’s difficult to design a regulation that helps one stakeholder without unintentionally harming another. Below, we’ll discuss a few (though not all) of the legal roadblocks that anyone wishing to regulate social media in the US will have to face and how they impact different stakeholders.
I. Communications Decency Act
Section 230(c)(1) of the 1996 US Communications Decency Act protects social media platforms (and, more broadly, providers of “interactive computer services”) from being treated as the “publishers” or “speakers” of any content that they distribute, as long as they do not create the content. Not only that, but under Section 230(c)(2), platforms are also allowed to remove anything they wish, as long as they do so in “good faith,” even if the removed content is protected by the First Amendment.
This law was a big win for the early internet community because it prevented sites like NYTimes.com or Reddit.com from being sued for what their readers posted in their online forums. But Section 230 was designed for a version of the internet that looks vastly different from the internet we use today, and it’s recently become a get-out-of-jail-free card for social media platforms.
For instance, when misinformation spread on Facebook and Twitter during the 2016 US election, many wondered why social media platforms emerged unscathed. It turns out that Section 230 protected them from liability for any misinformation that users created and shared. Indeed, as long as platforms do not create the content, they are not responsible for libel, slander, deepfakes, or any other objectionable content that appears on their sites. Plus, platforms can remove content as they wish, meaning that when they remove legal content, regulators can’t go after them for it.
In recent years, however, policymakers have become increasingly frustrated with the long leash that social media giants enjoy under Section 230. There are rising calls to repeal or revise it, as well as some high-profile (and ongoing) Supreme Court cases centered on it. Still, Section 230 has been instrumental in building the US’s thriving internet ecosystem and in protecting internet freedoms more broadly. So, while there’s a chance it’ll be repealed, there’s also a good chance that Section 230 or something similar will remain intact, meaning that any US regulation passed in the near future will likely need to co-exist with it.
II. Free Speech Protections
In the US, free speech is a fundamental right protected under the Constitution’s First Amendment. It allows citizens to express themselves without interference from the government, except in special cases (for instance, when speech constitutes defamation, obscenity, or blackmail).
Free speech protections—while critically important—make regulating social media difficult. Because free speech rights are taken very seriously in the US, attempts by the government to remove or restrict content are usually thwarted by First Amendment protections.
For example, let’s say that policymakers pass a law requiring platforms to remove any misinformation on their sites. Unless there’s a universally agreed-upon definition of misinformation (which there is not) and a way of identifying misinformation with 100 percent accuracy (which there is not), attempts to remove misinformation may unintentionally take down innocent posts because platforms would rather be overcautious than be sued for distributing illegal content. Laws that have this effect—that lead to the removal of legal speech (even “accidentally”)—are unlikely to hold up in court (see Smith v. California).
These strict standards don’t leave much wiggle room for regulation. There are, however, exceptions to the rule, and some laws have survived even though they do limit speech. For example, in the late 1970s, the US Supreme Court ruled in FCC v. Pacifica that lawmakers can limit harmful content on radio broadcasts because radio was “uniquely pervasive” and could intrude on “the privacy of the home.” Another exception is defamation law. Although penalizing defamation restricts speech, these laws are seen as essential to protecting people’s right to reputation and are therefore allowed. (As a testament to the strength of free speech protections, however, publishing a false claim is not necessarily illegal—when it comes to public figures, it only counts as defamation if the claim was made with “actual malice,” that is, with knowledge of its falsity or reckless disregard for the truth, a notoriously difficult standard to prove.)
III. Trade Secret Law
One thing that gives social media platforms an edge over their competitors is their algorithms, which determine, for example, how they match content or ads to users. In the US (and often elsewhere), these algorithms are legally protected as trade secrets, which means that platforms aren’t required to give anyone access to them. So, although people may speculate about whether Twitter’s algorithm amplifies misinformation or whether Instagram tries to infer your race, trade secret laws make it hard to verify these claims. In rare cases outside of social media, companies have been legally compelled to give third parties (e.g., auditors) access to trade secrets, but social media giants will likely fight tooth and nail against any mandates of this sort.
Trade secrets are so strongly protected largely because lawmakers hesitate to do anything that might curb competition and innovation, especially in the US. They prefer to step in only when a company’s actions are shown to hurt consumers. As such, if there’s enough evidence of harm to users, lawmakers might require social media platforms to give auditors and researchers (limited) access to their algorithms. But there’s a chicken-and-egg problem here: lack of access makes it difficult to prove harm; and without proving harm, it’s hard to justify access.
To make matters even more interesting, the definition of a “trade secret” is fairly broad under US law. Roughly, anything counts as a trade secret as long as (i) the company makes reasonable efforts to keep it confidential and (ii) its secrecy gives the company economic value. This definition has led some to argue that even the data that platforms collect might be protected under trade secret law.
Four Approaches to Regulation: Their Upsides and Downsides
Laws like those described above affect what regulators can and cannot do. Working within these constraints is no easy task, so what have lawmakers come up with so far?
The space of proposed social media regulation is vast (and frankly, overwhelming). So rather than enumerate everything that’s on the table, we’ll first divide the proposals into four categories, then discuss their strengths and weaknesses. Specifically, we separate the proposed regulations based on what they regulate: (A) the content that appears on social media, (B) the algorithms that drive social media, (C) the platform as an entity, and (D) the ways that users, like minors, access the platform.
A. Regulating the Content
Approaches that regulate the content focus on the information users see (as opposed to, for instance, the algorithms generating their newsfeeds). The reasoning behind content-based regulation goes: if content is what influences users, and we know that certain types of content are bad, why not directly limit them?
Identifying and curbing harmful content (such as hate speech and misinformation)—by denying Section 230 immunity to platforms that host such content—falls under this category of regulation. This category also encompasses, for example, proposals to create a social media version of the Fairness Doctrine. (The Fairness Doctrine was an FCC policy originally conceived for radio and broadcast TV; it was enforced until the late 80s, when the rise of cable TV helped bring about its repeal. It required that topics of public importance, like elections, be presented to audiences in a fair and balanced way.)
The main problem with this approach is that, as we’ve already hinted at before, identifying harmful content is error-prone and sometimes impossible because the label “harmful” is inherently subjective. In many cases, holding platforms liable for harmful content would cause them to be overcautious and to remove lawful content, which—as we’ve already discussed—may constitute a First Amendment violation. (It’s worth noting, however, that social media platforms already detect and remove certain types of harmful content, like child pornography and calls to violence. Because these actions are taken by private companies rather than the state, they don’t run into constitutional issues. Plus, platforms are protected by Section 230.)
Another (less direct) proposal for regulating content seeks to authenticate users. The motivation here is to remove bots, detect (malicious) foreign interference, and trace misinformation to its source. Authentication alone, however, is unlikely to fully eliminate harmful content, as it doesn’t stop authenticated users from posting and sharing it. Additionally, because authenticating a user typically requires access to the user’s personal information, this approach needs to be implemented carefully to mitigate possible violations of user privacy.
B. Regulating the Algorithm
Rather than directly regulating the content, a second approach to regulation homes in on how content is curated: namely, on the algorithms that match content to users. After all, these algorithms determine which posts appear on users’ feeds, what advertisements users see, which posts go viral, and more. By regulating algorithms, lawmakers can tackle content distribution (and, consequently, creation) trends, rather than trying to regulate each individual piece of content one by one.
There are many possible ways to oversee algorithms. Regulators could monitor the algorithmic inputs (e.g., whether the algorithm that determines my advertisements is allowed to use my race as an input), the algorithm’s objective function (e.g., how my feed-generating algorithm balances my well-being against the platform’s profits), metrics that the algorithm uses (e.g., whether clicking on or angry-reacting to a post indicates that I’d like to see similar posts in the future), and more.
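To make these levers concrete, here is a minimal, purely illustrative Python sketch of a toy feed-ranking score. The feature names, weights, and “well-being” penalty below are our own assumptions, not a description of any real platform’s system; the point is only to show where the levers above could attach: the inputs the model is allowed to see, the engagement metrics it counts as positive signal, and how its objective trades engagement against user well-being.

```python
# A toy feed-ranking score, purely illustrative. Feature names, weights, and
# the "well-being" penalty are hypothetical; real platform systems are far
# more complex and are not publicly documented.
from dataclasses import dataclass


@dataclass
class Post:
    predicted_clicks: float        # engagement metric: expected click-through
    predicted_angry_reacts: float  # engagement metric: expected angry reactions
    predicted_time_spent: float    # engagement metric: expected seconds of attention
    estimated_harm: float          # crude proxy for the post's cost to user well-being


def rank_score(post: Post, profit_weight: float = 1.0, wellbeing_weight: float = 0.2) -> float:
    """Score a candidate post for one user's feed.

    Each term is a potential regulatory lever:
      - which inputs the model may use (e.g., excluding protected attributes
        such as race from the feature set entirely);
      - which engagement metrics count as positive signal (clicks, angry
        reactions, time spent);
      - how the objective weighs engagement (a proxy for profit) against
        user well-being.
    """
    engagement = (
        post.predicted_clicks
        + 0.5 * post.predicted_angry_reacts   # should outrage count as interest?
        + 0.01 * post.predicted_time_spent
    )
    return profit_weight * engagement - wellbeing_weight * post.estimated_harm


# Example: a regulator could mandate a minimum wellbeing_weight, forbid certain
# inputs, or require that angry reactions not be treated as positive engagement.
candidate = Post(predicted_clicks=0.12, predicted_angry_reacts=0.30,
                 predicted_time_spent=45.0, estimated_harm=0.4)
print(rank_score(candidate))
```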
The problem is that algorithms are hidden behind trade secret laws. And even if trade secret laws didn’t exist, social media algorithms have become so complex that it’s unclear what regulations are actually feasible, whether a regulation would have the intended effect, and how one can design regulations that don’t become obsolete as soon as platforms move on to new algorithms.
C. Regulating the Platform
Zooming out even further, lawmakers could regulate the platforms themselves. This so-called entity-based approach is more of a catch-all and is attractive because, even though content and algorithms change, platforms are likely here to stay. However, properly applying this approach requires care. In particular, focusing so much on how we think platforms should behave might yield regulations that have unintended side effects.
For instance, many have suggested classifying social media platforms as common carriers (like phone companies) or public utilities (like power companies). Doing so would allow the government to place requirements on how platforms “carry” information (e.g., in a neutral way) or distribute the information “utility” (e.g., in a fair way). Others have cautioned against this approach, arguing that it will have negative downstream effects, such as entrenching existing platforms and stifling competition.
Other platform-level regulations focus on antitrust and company ownership laws. Here, the goal is to prevent any one platform or individual from having undue influence over information. There are, however, some challenges to this approach. For one, breaking platforms up (e.g., as part of an antitrust proceeding) wouldn’t necessarily have the intended effect. Even if multiple platforms exist, users are often reluctant to leave their current platform because they would have to rebuild their social network on a new one. For another, putting restrictions on media ownership may go against the current trend toward loosening these laws.
There are many other platform-level regulations that center on privacy and transparency, among other issues. These include laws that would govern how platforms track and store data, what platforms must publicly disclose, how platforms monetize information, and more. All of these run into challenges and tradeoffs of their own—but we’ll come back to them in a later post.
D. Regulating Access
Lastly, some have suggested regulating access to social media. In particular, spurred by recent studies showing that social media can hurt the mental health of teens, lawmakers have considered imposing age restrictions on social media. In a similar vein, there have been proposals—motivated by research on social media addiction—to let users set time limits on their own social media use, for example, by giving a user access to only the platform’s basic functionalities after they exceed their daily time limit.
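As a rough illustration of the time-limit idea, here is a hypothetical sketch of a user-set daily limit that drops the account into a reduced, “basic functionality” mode once the limit is exceeded. The mode names, thresholds, and the choice of which features count as “basic” are our own assumptions, not any platform’s actual design.

```python
# Hypothetical sketch of a user-set daily time limit. The feature sets and
# thresholds are illustrative assumptions, not any platform's actual design.
from dataclasses import dataclass

FULL_FEATURES = {"feed", "recommendations", "autoplay", "messaging", "posting"}
BASIC_FEATURES = {"messaging", "posting"}  # what counts as "basic" is an assumption


@dataclass
class UsageTracker:
    daily_limit_minutes: int      # chosen by the user, not imposed by the platform
    minutes_used_today: float = 0.0

    def record(self, minutes: float) -> None:
        """Add a session's time to today's running total."""
        self.minutes_used_today += minutes

    def available_features(self) -> set:
        """Full experience while under the limit; only basic features once it's exceeded."""
        if self.minutes_used_today <= self.daily_limit_minutes:
            return FULL_FEATURES
        return BASIC_FEATURES


# Example: after 75 minutes against a 60-minute limit, algorithmic
# recommendations and autoplay are switched off, but direct use remains.
tracker = UsageTracker(daily_limit_minutes=60)
tracker.record(75)
print(tracker.available_features())  # basic features only
```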
Although there is precedent for restricting the internet access of children, regulating social media access for adults will probably be met with raised eyebrows. Let’s face it: adults don’t like to be told what to do, and the state doesn’t like to mess with user agency. (Notable exceptions are issues of public health, such as tobacco use. We’ll come back to this point in a later post.)
Conclusions
Regulating social media isn’t easy. Many roadblocks—such as Section 230, free speech protections, and trade secret laws—stand in lawmakers’ way. And even proposals that manage to navigate these roadblocks are often held up by other issues, such as the lack of algorithmic transparency, the complexity of the underlying systems, or the sheer difficulty of keeping up with how quickly social media is evolving.
So, what should lawmakers do? In our next post, we’ll present a framework for thinking about social media that might help to identify the gaps in social media regulation and, in turn, motivate where and how the state should (and should not) step in.
Takeaways
In recent years, there have been rising calls to regulate social media due to its role in issues of misinformation, privacy, mental health, hate speech, and more. However, there has also been pushback against regulations due to concerns that they will stifle innovation, restrict free speech, or hurt social media users further.
Several laws and legally protected rights make regulating social media in the US difficult, including Section 230 of the Communications Decency Act, the First Amendment right to free speech, and trade secret laws.
There have been plenty of proposals to regulate social media. We can sort each of these proposals into one of four categories: regulating the content, regulating the algorithm, regulating the platform, and regulating access.
All four approaches have benefits and shortcomings, and in the end, regulations will likely need to draw from all of them. In the next few posts, we’ll explore which regulatory approaches may and may not prove useful.
Continue reading our blog series for more!