Content moderation at Facebook and Google is a very, very dark place

Moderators for Facebook and Google are required to accept exposure to graphic and violent material for up to five hours per day. In 'The Terror Queue', The Verge reports on the PTSD, anxiety and depression that follow.
Content moderators remove violent and extreme material from the web. Image: Gerd Altmann from Pixabay

Content moderation – the dark side

We all want online content to be moderated. It is essential to protect the vulnerable and impressionable. The internet can be a force for good or can be a very sinister place.

Footage of extreme violence and pornographic material are highly disturbing and carry an inherent risk to the mental health of viewers.

Firms such as Google and Facebook contract out much of their moderation to third parties, including Accenture, which runs a moderation site in Austin, Texas.

The front line

Moderators are the police of the internet and are assigned to queues of content. These queues include child exploitation, hate and harassment, and adult pornography.

Casey Newton reports for The Verge on how moderators in the US are recruited for their language skills.

Newton interviews 'Peter', who is from the Middle East and speaks seven languages. He works in the Violent Extremism queue and was recruited to 'accurately identify hate speech and terrorist propaganda and remove it from YouTube'.

Staff care

In March 2018, YouTube said it would limit moderator viewing time to four hours per day. CEO Susan Wojcicki said that reducing viewing hours and introducing wellness benefits would support frontline staff.

However, if yesterday’s report is anything to go by, this is not what is actually happening.

Peter, speaking to The Verge, says he has struggled with his health and temper. He is losing his hair.

He says, ‘Every day you watch someone beheading someone, or someone shooting his girlfriend. After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?’

How can this be the best way to remove violent content from the internet? In this age of algorithms and AI developing at light speed, surely there is a better way?

Content counselling

Accenture faces allegations of pressurising counsellors to disclose confidential discussions with employees. Accenture says it has ‘confirmed that these allegations are without merit‘, but it seems a bit strange that at least one therapist has resigned.

Henry Soto brought a lawsuit against Microsoft in 2017, as reported by The New Yorker. Soto worked as a moderator on Microsoft's online safety team and developed PTSD; the tipping point was a video of a girl being sexually abused and murdered.

The lawsuit was based on the absence of adequate counselling and of effective systems to mitigate psychological harm.

Microsoft said that it ‘takes seriously its responsibility to remove and report imagery of child sexual exploitation and abuse being shared on its services, as well as the health and resiliency of the employees who do this important work.’

Seeing is believing. I couldn’t find the outcome of the lawsuit, so my guess would be a confidential settlement.

Moderators must watch up to five hours of graphic footage per day. Image: engin akyurt from Pixabay

A better way

I don’t know the right answer, but I think drawing attention to the dark side of content moderation is important.

Facebook’s Community Standards say that they ‘remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety.’ YouTube, too, has recently revised its policies on video game violence. But who bears the cost?

I wanted to find out whether AI is powerful enough to bear the brunt of moderating the worst of content; unfortunately, the answer seems to be ‘not yet’.

Ofcom commissioned Cambridge Consultants to investigate the possibilities earlier this year. Their report concludes that fully automated content moderation isn’t yet at the level where it can take over the job, but that ‘AI-based content moderation systems can reduce the need for human moderation and reduce the impact on them of viewing harmful content‘.
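To make that idea concrete, here is a minimal, purely illustrative sketch of what such a hybrid pipeline might look like: an automated classifier scores each item, confidently bad or clearly safe content never reaches a person, and only the uncertain cases land in a human review queue. The classifier, thresholds and queue names below are my own assumptions for illustration, not anything taken from the Ofcom report or from the platforms' actual systems.

```python
# Hypothetical sketch of AI-assisted triage: the model, thresholds and
# queue names are illustrative assumptions, not a real platform's system.
from dataclasses import dataclass


@dataclass
class ContentItem:
    item_id: str
    text: str  # a real system would also handle images, video and audio


def violation_score(item: ContentItem) -> float:
    """Stand-in for a trained classifier returning P(policy violation)."""
    flagged_terms = {"beheading", "shooting", "terrorist"}
    hits = sum(term in item.text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)  # toy scoring, purely for the sketch


def triage(item: ContentItem,
           remove_above: float = 0.95,
           clear_below: float = 0.05) -> str:
    """Route content so humans only review the genuinely uncertain cases."""
    score = violation_score(item)
    if score >= remove_above:
        return "auto_remove"        # confident violation: no human exposure
    if score <= clear_below:
        return "auto_approve"       # confident safe content
    return "human_review_queue"     # uncertain: escalate to a trained moderator


if __name__ == "__main__":
    print(triage(ContentItem("1", "Cat video compilation")))   # auto_approve
    print(triage(ContentItem("2", "Footage of a beheading")))  # human_review_queue
```

Even a rough gate like this reduces the volume of traumatic material a person has to watch, which is exactly the benefit the report describes.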

Platforms need to prioritise R&D to revolutionise their content moderation systems. In the meantime, staff need strict limits on their exposure. At the very least, the job must include extensive counselling support, and watching the worst of the world every day should carry a generous salary that reflects the harmful nature of the work.

To all the content moderators out there: know that we are thankful. Protecting the young, the vulnerable and the impressionable is a role we cannot place a value on.

What do you think? Is there a better way? How can platforms moderate their content with less impact on their frontline staff?
