Social media companies have slashed hundreds of content moderation jobs. The firings, part of the ongoing wave of tech layoffs, have stoked fears among industry workers and online safety advocates that major platforms are less capable of curbing abuse than they were just months ago, potentially putting users' personal information and even their safety at greater risk.
Tech companies have announced more than 101,000 job cuts this year alone, on top of the nearly 160,000 positions lost over the course of 2022, according to tracker Layoffs.fyi. Among the job functions affected by those reductions are "trust and safety" teams, the units within major platform operators, and the contracting firms they hire, that enforce content policies and counter conduct such as hate speech, misinformation, and other abuses.
Earlier this month, Alphabet reportedly cut at least a third of the workforce of Jigsaw, a Google unit that builds content moderation tools and describes itself as tracking "threats to open societies," such as civilian surveillance. Meta's main subcontractor for content moderation in Africa said in January that it was cutting 200 employees as it shifted away from content review services. In November, Twitter's mass layoffs affected many staffers charged with curbing prohibited content such as hate speech and targeted harassment, and the company disbanded its Trust and Safety Council the following month.
Postings on Indeed with "trust and safety" in their titles were down 70% last month from January 2022 across employers in all sectors, the job board told NBC News. Within the tech sector, ZipRecruiter said postings on its platform related to "people safety," excluding cybersecurity roles, fell by roughly half between October and January.