Dear Readers,

The question of how to manage online content scarcely leaves the headlines. Key concerns include content moderation, algorithmic transparency, and the potential for online platforms to be used to abuse or victimize individuals or groups, whether through hate speech, personal harassment, or other forms of problematic speech.

Recent controversies have included the alleged use of social media platforms to spread disinformation surrounding elections, the question of whether certain political figures should be excluded from key platforms, and the underlying concern that particular online platforms (and individuals) may simply hold too much power over public discourse.

Perhaps inevitably, these questions have led to calls for regulation. But this opens a Pandora’s box. Which entities should be regulated? What type of content should fall within the scope of such regulation? What criteria should be used to determine what content is acceptable and what is not? And who should make such determinations? Should the industry self-regulate, or is greater government oversight required? And how is such regulation to be squared with the fundamental value of free speech (as understood under the First Amendment to the U.S. Constitution, Article 10 of the European Convention on Human Rights, and countless human rights instruments and constitutions worldwide)?

The timely articles in this Chronicle address these and other issues in light of the latest developments the world over. 

Natascha Just assesses the relevant provisions of the forthcoming European Digital Services Act (“DSA”). As the article notes, the DSA is one of many pieces of legislation that seek to negotiate a workable social contract, taking into account the interests of various participants with respect to social roles and acceptable online behavior. In particular, the article assesses how the DSA addresses questions of platforms’ responsibility for content moderation by creating an asymmetric system of due diligence obligations.

In turn, Terry Flew highlights a home truth about any attempt to regulate online content: the distinctive platform business models of the key players pose significant challenges for regulators. The paper argues that, to keep pace, regulators need to “think like Google”: they must adopt holistic strategies that apply across industry silos and different regulatory responsibilities.

Scott Babwah Brennen & Matt Perault turn to a more specific concern: the banning of former U.S. President Donald Trump from several online platforms. The paper argues that we still lack sufficient empirical analysis of the positive and negative consequences these bans have had on public discourse and extremism. The article makes an important contribution by setting out 13 metrics that analysts and researchers could use to evaluate the impact of these (and other) bans of public figures from social media platforms.

Imanol Ramírez turns to the difficulty of reaching global consensus on how online content should be moderated. The result has been regulatory fragmentation, which has increased the cost of operating in global digital markets by raising barriers to entry and expansion. Nonetheless, policymakers and researchers could turn these divergences to their advantage: the rich dataset that fragmentation produces can be used to question and test the effectiveness of different approaches through empirical methods. This would allow policymakers to better understand how the different regimes shape the conduct of intermediaries, and to make policy decisions accordingly.

Gregory Day explores the social costs of market concentration in digital platforms, specifically as they relate to mental health and perceptions of beauty. Image- and video-sharing markets are particularly concentrated, with only three major players accounting for the majority of usage worldwide. This fascinating piece outlines a problem well known to aesthetics scholars but largely overlooked in legal scholarship: the effect of tech programs on perceptions of beauty, and the attendant dangers. It then discusses the growing demand for regulation of certain types of apps, platforms, and tech companies, presenting ways in which the law could ameliorate some of the alleged harms.

Finally, Marta Cantero Gamito notes that any policy choices made regarding the regulation of online content will impact freedom of expression, as the new rules promote a sort of “standardization” of content moderation procedures. Compliance with such regulations might be ensured by adopting recognized European and international standards. That said, a “one-size-fits-all” approach risks compromising constitutional pluralism and resulting in preventive censorship across platforms. As the article warns, critical political decisions should not be lost in seemingly technical discussions.

In sum, this set of articles provides a fascinating snapshot of how the content regulation debate is currently evolving. As many of the authors note, it is early days yet, and this is a debate that will doubtless rage on for years to come. As the first generation of content regulation makes itself felt, platforms, users, and public figures alike will have much to say as to how such regulation applies in practice.

As always, many thanks to our great panel of authors.

Sincerely,

CPI Team
