Dear Readers,

The question of how to manage online content scarcely leaves the headlines. Key concerns include content moderation, algorithmic transparency, and the potential for online platforms to be used to abuse or victimize individuals or groups, whether through hate speech, personal harassment, or other forms of problematic speech.

Recent controversies have included the alleged use of social media platforms to spread election disinformation, debates over whether certain political figures should be excluded from major platforms, and the underlying question of whether particular online platforms (and individuals) simply hold too much power over public discourse.

Perhaps inevitably, these questions have led to calls for regulation. But this opens a Pandora’s box. Which entities should be regulated? What types of content should fall within the scope of such regulation? What criteria should determine which content is acceptable and which is not? And who should make those determinations? Should the industry self-regulate, or is greater government oversight required? And how is such regulation to be squared with the fundamental value of free speech (as understood under the First Amendment to the U.S. Constitution, Article 10 of the European Convention on Human Rights, and countless other human rights instruments and constitutions worldwide)?

The timely articles in this Chronicle address these and related questions.

...