Facebook will pay $52M compensation to content moderators who developed PTSD from seeing 'rape, murder and suicide'
The social media giant has finally agreed to pay compensation to employees who were forced to watch horrific images as part of their low-paid jobs.
Current and former employees claim they developed post-traumatic stress disorder and related conditions from viewing graphic material such as rape, murder and suicide.
The preliminary settlement was filed in San Mateo Superior Court in California, technology news site The Verge reported.
Selena Scola sued Facebook two years ago and alleged that she developed PTSD after regularly viewing images of rape, murder and suicide, the news outlet reported.
She said she developed symptoms after nine months on the job at the Mark Zuckerberg-led company, according to The Sun.
Former employees in at least four states say Facebook did not provide a safe workspace, according to the lawsuit obtained by The Verge.
The outlet found that moderators were making as little as $A44,000 per year.
Facebook has agreed to make changes to its content moderation tools as part of the settlement.
The tools will include muting audio by default and changing videos to black and white, according to The Verge.
The changes will reach all moderators by next year.
“We are grateful to the people who do this important work to make Facebook a safe environment for everyone,” Facebook said in a statement.
Meanwhile, Facebook unveiled an initiative on Tuesday to take on “hateful memes” by using artificial intelligence, backed by crowdsourcing, to identify maliciously motivated posts.
The leading social network said it had already created a database of 10,000 memes - images often blended with text to deliver a specific message - as part of a ramped-up effort against hate speech.
Facebook said it was releasing the database to researchers as part of a “hateful memes challenge” to develop improved algorithms to detect hate-driven visual messages, with a prize pool of $A155,000.
“These efforts will spur the broader AI research community to test new methods, compare their work, and benchmark their results in order to accelerate work on detecting multimodal hate speech,” Facebook said in a blog post.
Facebook’s effort comes as it leans more heavily on AI to filter out objectionable content during the coronavirus pandemic that has sidelined most of its human moderators.
In its quarterly transparency report, Facebook said it removed some 9.6 million posts for violating “hate speech” policies in the first three months of this year, including 4.7 million pieces of content “connected to organised hate”.

Facebook said its AI has become better at filtering as the social network relies more on machines as a result of the lockdowns.