Regulating Social Media: Lessons from Existing Regulation
In the fourth post of our blog series on social media, we look beyond media regulations to ask: What can we learn from how we've regulated other industries?
Regulating social media is no easy task, but it’s not the first time that lawmakers have had to design laws for a new, booming industry. In fact, many of the characteristics that make social media so powerful and difficult to regulate have arisen (and been dealt with) in other contexts too. In this fourth post of our blog series, we’ll look at four industries—each of which resembles social media in some way—and how they are regulated. We’ll distill the key principles behind regulations in these industries and unpack what (if anything) we can adapt for social media.
As we discussed in the first post of our series, social media platforms have an immense impact on their users and, in turn, society at large. Although much of this impact has been positive, there have been rising calls to regulate social media’s negative effects, as we saw in our second post. But designing regulations is no easy task.
Luckily, most regulations don’t need to be (and, in fact, rarely should be) designed from scratch. Instead, policymakers usually begin by figuring out whether the current issue of interest can be regulated like something we’ve seen before. The question then becomes: In what ways can previous regulations help policymakers design regulations for social media?
A natural place to start—especially for a problem as daunting as social media regulation—is to look at how lawmakers regulate other industries. In each case, we’d like to understand:
What policies did lawmakers enact and what was their justification?
What from these policies can we (and can we not) adapt to social media?
Below, we focus on four case studies. Specifically, we look at how policymakers in the US have regulated (i) newspapers, (ii) financial brokers, (iii) tobacco companies, and (iv) power providers. Although none of the four resemble social media platforms perfectly, we find that each case offers a different, potentially useful perspective on how we might regulate platforms.
Can We Regulate Platforms Like … Newspapers?
A compelling way to view social media—one that’s heavily suggested by the name itself—is as a continuation of traditional media (that is, print, radio, and TV). In our last post, we found that there are a few key differences between traditional media and social media that prevent us from lifting regulations wholesale from traditional media. That said, we can still glean some important lessons from the way traditional media is regulated.
So, how do we regulate traditional media? In the US, we’re actually quite reluctant to, in part because of how much we value First Amendment rights—notably, freedom of expression and freedom of press (which we discussed a little bit in our second post). In most cases, the courts only accept regulations that restrict these rights if the state establishes: (i) that there is a compelling public interest and (ii) that no less restrictive alternative could accomplish the same goal. (Importantly, one of these alternatives is to not regulate at all and leave things up to the free market.)
Because of this reluctance to interfere with media, the way the state regulates print, radio, and TV varies based on need. Let’s look at print media. For the most part, lawmakers tend to leave print media alone—why? Because they’ve found that free market forces are usually strong enough to correct for public interest issues like harmful content and misinformation. The Supreme Court, for example, has argued that a reader is unlikely to unintentionally consume content that is harmful to them. Unlike with radio (where a child may accidentally hear inappropriate content that is playing in the background), someone reading a newspaper must first choose to read an article (see FCC v. Pacifica). This, among other factors, means that, for the most part, the least restrictive regulation for print media is no regulation at all.
But there are situations where the state steps in. Free market forces don’t always prevent a newspaper from publishing things that are false, and when false information hurts someone’s reputation, it becomes a problem. So, even though the state is hesitant to interfere with print media, they draw the line at defamation—it is illegal to make false statements that injure someone’s reputation (although the requirements for establishing that a statement is defamatory vary by state).
When it comes to the media, we see this pattern (of no regulation unless there is a compelling state interest and, even then, only if it limits First Amendment rights less than any alternative) arise again and again. It takes different forms for print, radio, and TV, but the pattern holds true.
So, what’s the takeaway? Even if it’s easy to show a compelling state interest for regulating social media (e.g., misinformation), it’s much harder to design a regulation that doesn’t limit (or only minimally limits) free speech. Especially when there are non-regulatory forces (like competition or public outcry), the courts are less likely to uphold media regulations.
Summary:
Like traditional media, social media platforms also distribute information, suggesting that we can look at the regulation of print, radio, and TV for inspiration.
When it comes to media regulations, lawmakers are limited by the strength of First Amendment—particularly, free speech—protections in the US. In general, existing media laws (i) are justified by a compelling state interest and (ii) only stand if no less restrictive alternative could accomplish the same goal. Social media regulations will likely need to meet the same standards, and establishing (ii) is often the most difficult step.
Can We Regulate Platforms Like … Financial Brokers?
As we discussed in our last post, there are some aspects of social media that have no analog in traditional media. So, while existing media laws provide some ideas for how to regulate social media, a few issues are likely to fall through the cracks. To this end, let’s look at whether the way we regulate financial brokers can help us design laws for social media platforms (we described several parallels between platforms and brokers in our first post).
As discussed in that post, the connection between social media platforms and brokers is most obvious when thinking about how platforms control information. In traditional media, information usually flows in one direction: from content creators (e.g., journalists) to the audience (e.g., readers). In social media, on the other hand, platforms control a highly active exchange of information. In one direction, platforms route content (e.g., posts and videos) from creators and advertisers to users. In the other, platforms route user data (e.g., click statistics and search trends) from users to advertisers, data collection companies, and more.
The way that social media platforms oversee (and profit from) this exchange of information resembles how brokers mediate (and again, profit from) financial transactions between buyers and sellers. On one side, users “pay” for content with their time, attention, and data. On the other, advertisers and others pay for access to users and their data. Although the connection between brokers and platforms isn’t perfect, can the way we regulate financial brokers give us some insight into how to regulate platforms?
We can begin to answer this question by looking at how the Securities and Exchange Commission (SEC) regulates brokers. As an example, the SEC’s “best interest” rule (Regulation Best Interest) imposes four requirements on broker-dealers:
That the customer is informed of the costs and limitations of the services provided;
That the broker exercises reasonable care to understand the risks of the service they provide and puts the needs of the customer ahead of their own;
That the broker identifies and discloses all conflicts of interest; and
That the broker creates, maintains, and enforces written policies and procedures to ensure compliance.
These principles are built on the idea that brokers must behave responsibly when handling a person’s financial assets. If we think that platforms should also behave responsibly towards users, looking at SEC rules might be a good starting point. Lawmakers could, for instance, require that a platform (1) allow its stakeholders (namely, users) to make informed decisions about the content they’re shown, (2) exercise care to act in users’ best interest, (3) declare conflicts of interest, and (4) publish its editorial and algorithmic policies.
Of course, this analogy isn’t perfect. Some might argue that it makes sense to require brokers to act in an investor’s best interest, but a user’s “best interest” in social media is less well defined. Others might say that while it makes sense to regulate the handling of a person’s financial assets (because these assets are considered a person’s property), regulating the handling of a person’s information (whether given or received) is less compelling.
Summary:
Social media platforms oversee and profit from the exchange of content and data between users, content creators, and advertisers in a way that resembles how brokers mediate (and profit from) financial transactions.
Broker regulations require that brokers keep investors informed, disclose conflicts of interest, exercise care to understand the risks they may impose on investors, and more. Many of these requirements may also make sense when regulating and auditing social media platforms.
Can We Regulate Platforms Like … Tobacco Companies?
Both of the perspectives above miss one crucial aspect of social media: its addictiveness. Recent studies have confirmed this pull on users, attributing 31 percent of social media use to self-control issues. To some, this addictiveness, combined with the negative effects of social media (particularly, on teenagers), makes social media a public health issue.
In the US, the hallmarks of public health regulation are (a) access restrictions, and (b) more stringent speech regulations. One of the most well-known examples is the regulation of tobacco:
Access restrictions are the most direct form of regulation when it comes to public health. For example, one must be over the age of 21 to buy cigarettes, and retailers must obtain a license to sell them.
When addiction and harm are involved, courts are generally more willing to allow restrictions on commercial speech. For example, in 1970, the US passed a law barring cigarette companies from advertising on television and radio. (That said, this trend may be reversing: courts have recently ruled in favor of commercial speech.)
What would such regulation look like for social media? As we discussed in our second post, there are lots of ways to restrict access. One option would be age gating social media, much as tobacco can’t be sold to those under 21. (It’s not clear, though, whether age gating will work. Even so, age-based measures are gaining popularity and are being adopted in various places, like the UK.) Another option would be to compel platforms to let users control their usage, e.g., by setting time limits.
As for restrictions on commercial speech, deeming social media a public health issue could justify regulations on what content platforms distribute. In particular, platforms could be required to limit content that poses a public health risk. The problem that most (if not all) of these proposals face is that, unlike cigarette ads—which are pretty clearly harmful to public health—it’s not clear exactly what qualifies as “a public health risk” on social media.
Indeed, an important difference between social media and cigarettes is that we simply don’t have enough information on the nature of the harms of social media. We don’t know just how addictive social media is or what the root causes of its harms are. Tobacco regulations in the US came about after research conclusively tied cigarettes to negative health outcomes. The ways in which social media poses a public health risk are still an active area of research.
Summary:
If something is considered a public health issue, both politicians and courts are willing to adopt more aggressive regulations. A public health crisis can lead to restrictions on access (e.g., to cigarettes) or on commercial speech (e.g., the ability to advertise).
Should certain aspects of social media be deemed a public health risk, lawmakers may be justified in restricting access (e.g., instituting age gates) or speech (e.g., limiting the amplification of content that can lead to eating disorders).
Can We Regulate Platforms Like … Power Providers?
The final perspective we’ll explore is one that frequently arises—namely, that social media platforms are public utilities. We discussed this perspective a bit in an earlier post, but we’ll take a slightly deeper dive here.
Public utilities are generally defined by two characteristics: first, that they provide an essential service in the public interest and, second, that they are natural monopolies (i.e., they are in a sector where monopolies are hard to avoid). Think, for example, of a power company. Providing power—like electricity—to homes is an essential service in the public interest. In addition, because the infrastructure required to deliver power is costly, starting a new power company when one already exists isn’t financially appealing.
Public utilities accept heavy government regulation in return for special status and protection (in fact, they’re sometimes referred to as “legal monopolies”). For example, power companies can receive special rights from the government (e.g., the ability to invoke eminent domain) in exchange for meeting social good obligations that improve the consumer experience (e.g., keeping prices low and ensuring broad access to the service).
The argument for treating social media platforms as public utilities goes: platforms have become one of our primary sources of news and thus provide an essential information service. But due to various factors (like the fact that building a social network on a new platform is time-consuming), users can’t switch between platforms easily, so there’s little incentive for new platforms to compete with existing ones. In other words, they are a natural monopoly. Under this argument, social media platforms satisfy both properties of public utilities.
Thinking of platforms as public utilities would let the government increase oversight over their behaviors. As a public utility, platforms like Twitter may be required to uphold the social good by, for example, limiting how much promoted content is allowed or providing balanced political perspectives (in the spirit of the Fairness Doctrine). Such regulations aren’t always profitable for the platform. Reducing the amount of promoted content, for instance, would hurt Twitter’s profits. But, in return for being regulated, Twitter would receive benefits like financial subsidies or regulatory protections.
Whether social media companies should be regulated as public utilities, however, is still hotly debated. Many question whether platforms provide essential services. Others argue that social media platforms aren’t “natural monopolies”—that social media is just in its early days. If so, then calling a platform a “natural monopoly” and protecting its status as a public utility might unintentionally reduce competition and, as a result, innovation.
Summary:
Even though social media platforms are not currently classified as public utilities, there are arguments that they qualify and should be regulated accordingly. Doing so would give the government more oversight over how platforms behave.
Critics say that viewing social media platforms in this way will stifle competition and innovation.
Conclusions
Designing social media regulations is challenging. In general, lawmakers don’t regulate private companies unless they absolutely have to, so what justifies regulating platforms? And, if regulation is justified, how should we go about it?
To answer these questions, we looked at existing regulations. We focused on four case studies: (i) newspapers, (ii) financial brokers, (iii) tobacco companies, and (iv) power providers. Each of these four perspectives brings us one step closer to understanding why the US government regulates some things but leaves others alone.
Just as importantly, existing regulations can provide a roadmap for new ones. Instead of designing regulations from scratch, we can build on what we already have, discarding what doesn’t fit and devising new regulations when there are gaps. In our next post (coming soon!), we’ll apply these principles and begin outlining possible paths forward.
Takeaways
Designing social media regulations is challenging for reasons we covered in the second and third posts of this series. So, instead of designing regulations from scratch, lawmakers often turn to existing regulations for inspiration. In this post, we look at how lawmakers regulate different industries and discuss whether and how such regulations can be adapted for social media.
Specifically, we explore how the US regulates: (i) newspapers and other media, (ii) financial brokers, (iii) public health risks, and (iv) public utilities like electricity.
In each case, we find that there are connections to social media. We identify a few principles that lawmakers may be able to use when designing social media regulations as well as key differences that prevent us from blindly applying existing laws to social media.
Read our blog series for the full picture!