The newly introduced Digital Services Act (“DSA”) sets as its ambition ensuring a “safe, predictable and trusted online environment” by targeting the spread of illegal content, on the one hand, and the spread of harmful content, like disinformation, on the other. It imposes particular due diligence obligations on very large online platforms, like Facebook and Twitter, to achieve this end. But the vagueness of the provisions, the deference afforded to these platforms, and the disjointed approach to harmful content like disinformation specifically may hamper the DSA’s ability to fulfil its promise. This article sets out the key provisions of the heightened due diligence framework, the underlying compromises made during the negotiations, and the lingering challenges that lie ahead, particularly with a new leader – and self-proclaimed “free speech absolutist” – at the helm of Twitter.
By Katie Pentney[1]
I. INTRODUCTION
The long-awaited Digital Services Act (“DSA”) was finally signed into law by the European Union on October 19, 2022, after lengthy drafting and hard-fought negotiation processes.[2] The flagship Regulation harmonises existing rules applicable to internet intermediaries and imposes new transparency and accountability requirements on online platforms, as well as heightened due diligence obligations on so-called “very large online platforms” (“VLOPs”) like Facebook, Google (YouTube) and Twitter.[3] The stated objective of the DSA is to ensure a “safe, predictable and trusted online environment.”