When we talk about Section 230 in a vacuum, separate from antitrust policy or public utility policy, we miss how it works, or rather how it fails, and, more importantly, how to coherently set up a new, comprehensive regulatory regime that supports a thriving informational sphere. Section 230 was born alongside consolidation, and instead of just talking about full or partial repeal, we should be talking about a new vision for communications infrastructure in America.

By Zephyr Teachout1

 

It was probably about five years ago that I realized I had been totally wrong on Section 230, with a depth of wrongness that can make one more humble not just about the particular policy area, but about all policy areas. I had adopted the view that Section 230 was fundamentally necessary for the flowering of decentralized, creative, online life, and that without it there would still be an internet, but a highly controlled one, without joy or quirks, organized from the top down. I advocated for it and campaigned on it.

There are two primary reasons for my wrongness about 230. First, I came to the law through copyright, and while I am by no means a free culture warrior, I have real and ongoing concerns about the monopolistic lock-up of culture, and the control that flows from it, in our current intellectual property regimes. As a result, many of my late-aughts and early-2010s objections to touching 230 came not from a deep belief in the importance of platform immunity (I have had antipathy to Google’s gross power for a long time), but from a singular discomfort with the reach of copyright laws and their timespans, a set of laws that works with patent and trademark to close down creative remixing. Through the lens of draconian copyright enforcement, Section 230 seemed to be a blessing, or at least necessary. I welcomed it as an indirect relaxing of the most stringent and hardest-to-justify aspects of copyright law.

But, as I said, I was wrong. First, the evidence from Canada and elsewhere (where there is no 230 but, remarkably, there is still an internet, and generativity) gives the lie to the oft-used argument that repeal of Section 230 would destroy creativity and connection. Second, Section 230 has done nothing to stop the top-down lock-up of the internet, as a handful of companies now play an interlocking feudal role, deciding what content and what kinds of applications are preferred, and preferring content that serves their own interests.

I share my mea culpa because it is not unusual: Section 230 is routinely described in ways that are not wrong, but not right either, and is routinely used, in offense or defense, as standing for something else.

This “arguing about something else” feature of Section 230 persists. An enormous amount of the debate about it is not about 230 at all, or touches it only orthogonally. Sometimes it is in active bad faith. Sometimes 230 questions get carelessly mashed together with a different (and interesting!) possibility: the possibility that we decide, legislatively, to extend the logic of Marsh v. Alabama to certain forms of social media and search architecture.

Sometimes the confusion is not careless but purposeful, as when Congressman Jim Jordan uses 230 to derail serious antimonopoly efforts. In the House Antitrust Subcommittee last year, for instance, he would regularly show up to try to pull the lawmakers doing their job off topic, insisting that 230 should be the real focus. Jordan and others have pushed the fanciful idea that repeal would allow people to sue for being kicked off a platform. While of course an amendment to any bill could include anything, the implicatures in his arguments are just wrong, and often too confusingly wrong to disentangle.

A more generous way to understand some of the ongoing confusion of tongues is that Section 230 does not exist in a vacuum, and when we talk about it in a vacuum, separate from antitrust policy or public utility policy, we miss how it works, or rather how it fails, and, more importantly, how to coherently set up a new, comprehensive regulatory regime with room for tort law.

Section 230 isn’t just tort law or copyright law; it coexists with an architecture of power, with a vision of the appropriate business models for social media and search, and with antitrust law. It is impossible to talk seriously about reform without laying out either a dystopian general vision of what we are trying to avoid or, preferably, a vision of what we are trying to achieve.

So let’s look at 230 in this contextual way, starting with the historical context of its birth. The 1996 Telecommunications Act not only enshrined Section 230, but also enabled consolidation in telecoms. So when Section 230 created broad immunity for “interactive computer services” for illegal behavior on their platforms, it did so in the context of a pro-consolidation policy, a policy that supported both immunity and monopolization. This consolidation in telecom was part of a broader pro-monopoly vision that was ascendant in the Clinton years. Years after Reagan’s presidency, Bork’s antitrust paradox found its greatest champions in Clinton’s team, as Barry Lynn details in Liberty from All Masters (Macmillan, 2020).

The short-term result of the 1996 law was a burst of innovation and investment, a profusion of experimentation, because companies simply didn’t have to wonder whether they were making money off of enabling illegal deals or violating copyright. Effective immunity from antitrust threat, along with the Microsoft case limiting Gates’ ability to control and effective immunity from tort, provided a major spur for cash to flow into serious and exciting experimentation, as well as into half-baked ideas.

It was a time of an almost embarrassing level of naivete about power and business models. The presumption was that the information flow was wide open, that communications networks would compete and proliferate, not consolidate and calcify. It was an article of faith that, even without a regulatory authority telling it otherwise, “the internet” would discover a monetization method that wasn’t just ads, more ads, and more targeted ads. As Larry Page and Sergey Brin presciently argued in 1998, Google could not do its core function of search well, with integrity, if the business model was advertising. When advertising drives the architecture, as they argued, there are inherent conflicts between the advertiser and the user, and those conflicts will be resolved in favor of the advertiser. Search would serve sellers, not people, and search, and later social media, would put their energy into capturing and controlling users.

But, much like the “we will find a new for-profit business model for local news” fantasy that occupied the aughts, the non-advertising business model never emerged. The sustained fantasy of an alternative that would be discovered by the market simply allowed for the suspension of regulatory intervention until the ad model had not only taken hold, but conquered the most basic functions of the internet. These quasi-religious postulates about power and business models infected regulators, who saw no role for themselves where they believed the market would fix things.

The FTC and DOJ were, for instance, on cruise control in the Borkian worldview. They simply ignored monopolistic threats, totally failing to see the collective power of accumulating companies and the power of growing chokepoints. Google in particular engaged in an all-out buying spree, rolling up a huge array of digital advertising companies, along with companies that were soon folded into its digital advertising umbrella. It bought more than a company a week in 2011, and spent nearly $25 billion on 145 companies, some small but some enormous, like Maps, YouTube, Android, and DoubleClick, which allowed Google to control the entire advertising infrastructure. The combination of advertising data and user data created a juggernaut that crowded out potential competitors.

Facebook came to acquisitions later, but pursued a similar strategy, this one more focused on spying on potential competitors and then choosing either to crush them or to acquire them. The FTC sat by, as it had during the Google spree, while Facebook bought potential competitors Instagram and WhatsApp, along with Onavo, a service that allowed it to watch how people used its competitors’ apps.

In short, while the regulators allowed them to gobble up power, Section 230 gave Google and Facebook tort immunity. The double immunities fed on each other, allowing the companies to compete less on quality while engaging in more experimentation on the human brain, on algorithmic decision-making, and on systems for extracting ad dollars. The combination allowed them to take over the news industry, steal ad dollars from journalists, and avoid the liabilities that journalists face.

Now, over a quarter century after 1996, the three different forces that merged in the late 1990s have created monsters. Tort immunity, antitrust weakness, and a dangerous business model have converged to create some of the most dangerous anti-democratic institutions, right at the heart of our democracy and communications networks. Today, Google and Facebook have billions of users locked in across multiple properties, share data sets across those properties, and, per the Page/Brin warning, prioritize advertisers’ needs and their own needs over the needs of consumers, which is to say that removing illegal or false content that makes money is not a high priority.

When it comes to the content on their sites, they lack both of the traditional mechanisms used to ensure that content is safe: competition and tort law. They operate outside of competition, atop a market rather than within it, and they operate outside of law, because of Section 230. And, to add insult to injury, the business model gives them an incentive to push whatever content is the most addictive and viral, an incentive that works at direct cross purposes to both tort and anti-monopoly law.

What does this mean for today? For one, it means that when we talk about reforming Section 230 without also talking about reforming the pro-merger policies of the last 30 years, actively breaking up the leviathans, and directly considering whether we need to regulate the business model, we are looking at reforms that address only some of the problems that the Reagan and Clinton regimes put in motion. It’s as if, after a car crash, we are thinking about repairing the frame but leaving two blown-out tires.

So yes, we should scrap the 230 framework. It seems fairly straightforward that Facebook and Google should be at least as responsible for the content that they carry as news organizations are, so long as they serve their current function. In other words, so long as these companies are in the business of curation, which they clearly are. Their business model is curation. Any company that makes money off of advertising, and chooses to prioritize advertising either through promotion (YouTube, Facebook) or in search results (Google Search), should not also be allowed to say that it bears no liability for illegal content that it is aware of, or reckless about.

That doesn’t mean that no immunity should exist for any business, just that we shouldn’t use 230 as the starting place for reform. If nothing else changes, and no new antitrust laws are put in place, it would be better to be clear that tort liability attaches to mega-platforms that are aware of, or reckless about, illegal content on their sites. The question is how.

We can’t go back to Kansas or to 1996, so let’s start with what we want. There are four different visions of search and social media that could be expressed in a new framework for where immunity should lie, plus a fifth, my own preference, that combines them. All of them involve full or partial repeal of 230 and different visions of what our communications infrastructure should look like.

Option 1 is that big social media platforms should be treated like mega publishers. This is the most obvious option, but the most fraught. In the publisher model, Facebook and Google are playing the curatorial role that publishers play, and should be liable just as Fox, the New York Times, or The Enquirer are liable. It is especially galling for publishers that these tech giants escape liability, because the giants not only control the flow of information, and therefore can extract value from news producers, but also directly compete with publishers for advertising dollars. It is only fair, in this vision, that Google and Facebook should be subject to the rules that govern other curators.

While it might be better than nothing, this model, and the vision of platforms it embodies, is deeply disturbing, because it doesn’t address the consolidation of power. It would elevate the radical notion that democracy can exist with a handful of uber-editors (Google, Facebook, a few others), who have responsibilities not to share illegal content but are otherwise free to make massive, systemic choices about the shape of our thoughts and ideologies, about what can be promoted and what cannot. To accept them as our Editors-in-Chief would be to enshrine a terrifying private, centralized model of information flow.

Option 2 is that immunity should track traditional concepts in tort, like ability to pay, best cost-avoider for harms, and justice. Immunity under this vision could either grant judges a great amount of discretion or legislatively track some of the elements of tort law: large providers would be liable, while small ones with no meaningful capacity to protect against illegal commenters would not. Companies that make money off of selling ads by promoting and sorting user content are best positioned to protect against illegal content. They have the cash, of course, but they also have the technological wherewithal. They are currently making fistfuls of cash from anonymous lies. And as it stands, no one can be sued: the anonymous Facebook poster has vanished, and the platform can shrug.

The difference between the tort vision and the publisher vision is that the former would simply remove 230, whereas the latter would remove 230 only for big platforms that engage in curation. In this sense, the tort vision is more democratically coherent; the reason for immunity is that there are certain small entities that lack the capacity to stop illegal activity and still thrive, and the big platforms don’t fit that description. However, this vision, without more, still allows for gross concentrations of informational and gatekeeping power.

Option 3 is the “break ‘em up” vision: radical decentralization. This could happen with or without the removal of immunity. In this vision, we need to follow the long American tradition of breaking up and decentralizing communications systems, and so Facebook and Google should be broken into hundreds of interoperable parts. Perhaps a tiny category of immune companies might still exist, but only those that play truly no curatorial role, and certainly no company that makes money off selling ads; the fact of radical decentralization would justify that lingering immunity. The challenge with this model is that there are genuine, positive network effects that come from some degree of centralization in search and social.

Option 4, the public utility option, would involve classifying some (or all, but more likely some) of the large public platforms as public utilities, in exchange for an entire set of responsibilities and immunities that flow from their centralized public roles. Immunity would exist for those properties in search and social, but it would come as part of a bargain of publicness: these companies would be split off from all other properties, the business model of targeted advertising would be banned, and basic non-discrimination principles of equal access would apply to these publicly designated companies.

Option 5 is my own preference, and it involves all of the above, along with banning the behavioral advertising business model altogether. My view is that we should be repealing all of Clinton’s 1996 framework, not just 230. Clinton installed the double-immunity vision, one that is both excessively fearful of the capacity of the internet to exist in a world in which laws exist (it can!) and excessively optimistic about power. We should replace this with a vision in which there are multiple search engines and multiple social media sites, which are interoperable with each other and which do not use targeted advertising. Social and search are both essential public infrastructure, and targeted advertising undermines and even destroys their public value. Social and search should be subject to basic open-access and nondiscrimination principles. And only those social and search companies that do no curation should be able to claim immunities. Social and search, as essential public infrastructure, should not be allowed to own properties that seek access to their platforms, because that creates a conflict of interest.

As you can see, Section 230 “repeal” is part of this vision, but it plays a supporting role, not the major role. Taking on concentration and the business model are the leading elements. That’s because much of the harmful content that goes viral online isn’t illegal, and many of the pathologies of power aren’t fixed by removing immunity.

In other words, maybe when we talk about Section 230, we should accept that, for some of the harms, we really need to be talking about breakups and banning targeted ads, not just about removing immunity.

These are just thumbnails. Instead of talking about how to fix a badly thought-out law, we should be laying out a vision of what we want search and social to look like, as a matter of business model, decentralization, liability, and relationship to journalism. There is no way around it: we are not just solving problems, we are building a vision of social and search as public infrastructure.

 


1 Zephyr Teachout is a law professor at Fordham Law School and a former Democratic gubernatorial and congressional candidate in New York.