Social Media Content Moderator Sues TikTok for PTSD

A social media content moderator is suing TikTok, a popular video app, for psychological trauma that developed from 12-hour shifts moderating endless graphic videos.

Candie Frazier works for Telus International, a Canadian contracting firm that provides moderation services for social media apps like TikTok. Frazier filed a complaint with the U.S. District Court for the Central District of California in December alleging that TikTok and its parent company ByteDance do not provide adequate support for the mental wellbeing of their contracted moderators, whose job it is to remove violent, graphic, and otherwise inappropriate content from the platform.

TikTok’s popularity skyrocketed during the pandemic lockdowns, particularly among Millennials and Generation Z. As of September 2021, TikTok was seeing 1 billion users every month.

In her complaint, Frazier explains that moderators must watch “three to ten videos at a time,” with only 25 seconds to review each one. The complaint stated that the videos contained violent material such as “animal cruelty, torture, suicides, child abuse, murder, beheadings and other graphic content.”

As a result, Frazier developed symptoms of PTSD, including anxiety, depression, insomnia, and “excruciating nightmares.” The complaint stated: “She often lies awake at night trying to fall asleep, replaying videos that she has seen in her head. She has severe and debilitating panic attacks.”

According to Frazier, moderators are allowed only one 15-minute break in the first four hours of the work day, with an additional 15-minute break every two hours thereafter. Frazier further claims that ByteDance “severely punishes” any additional time taken away from video moderation, despite the emotional toll the work takes on moderators throughout the day.

Hilary McQuaide, a TikTok spokeswoman, told The Verge:

Our safety team works with third-party firms on the critical work of helping to protect the TikTok platform and community, and we continue to expand on a range of wellness services so that moderators feel supported mentally and emotionally.

James Vincent, “TikTok sued by former content moderator for allegedly failing to protect her mental health” at The Verge

The lawsuit calls for TikTok to provide more frequent breaks, as well as more visual and audio tools (such as blur and mute options) so moderators can shield themselves from the full impact of what they are seeing.

Psychological trauma isn’t new to content moderation

TikTok is not unique among social media platforms in facing these problems. Moderators for Facebook, Google, and YouTube have reported similar experiences. In 2020, content moderators were awarded $52 million in a settlement against Facebook for psychological trauma.

The Verge’s Casey Newton has been collecting the stories of social media content moderators for the past several years, sharing their experiences and drawing attention to the dark side of social media operations. In a chilling article, Newton reported that Facebook moderators face a fifty-fifty chance of developing mental health problems as a result of their work.

In a 2019 exposé entitled “The Trauma Floor,” Newton documented the panic attacks, anxiety, and depression these workers experience. The lack of support and empathy from leadership has created a toxic work environment, with many employees turning to dark humor, alcohol, marijuana, and even sex during work hours to cope with the violence, abuse, and hatred they review day after day.

According to former moderators, Google, YouTube, and Facebook did not disclose during the application and training process how much disturbing content the job would regularly involve.

“You always see death, every day,” a former content moderator for Facebook told Newton in a short YouTube documentary (see below). “You see pain and suffering. And it only makes you angry because they aren’t doing anything. The stuff that is deleted ends up back there anyway.”

Is AI the solution?

If the human psyche is too fragile to handle the amount of graphic content posted on the internet every day, what is the solution? A Wild West-style internet in which violent and graphic photos and videos circulate freely? Or could artificial intelligence effectively replace these workers?

Social media apps have begun using more artificial intelligence algorithms to automatically remove inappropriate content without human supervision. The technology is imperfect, however, so humans must continue to do the work wherever the AI fails.
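To make that division of labor concrete, here is a minimal sketch, not TikTok’s or Facebook’s actual pipeline, of how a platform might route content between an automated classifier and human reviewers: the model acts on its own only when it is very confident, and borderline cases go into a human review queue. The `score_video` function, labels, and threshold values below are hypothetical stand-ins.

```python
# Minimal sketch of a hybrid AI + human-review moderation flow.
# The classifier, labels, and thresholds are hypothetical; real
# platforms use far more elaborate models and policies.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when the model is very confident
HUMAN_REVIEW_THRESHOLD = 0.50  # above this but below auto-remove goes to a person


@dataclass
class ModerationDecision:
    action: str    # "remove", "human_review", or "allow"
    score: float   # model's confidence that the content violates policy


def score_video(video_bytes: bytes) -> float:
    """Placeholder for a trained classifier; returns P(content violates policy)."""
    raise NotImplementedError("plug in a real model here")


def moderate(video_bytes: bytes) -> ModerationDecision:
    score = score_video(video_bytes)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)        # AI acts on its own
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", score)  # queued for a human moderator
    return ModerationDecision("allow", score)             # low risk, left up
```

The point of the sketch is the hand-off: the model handles only the clearest cases automatically, while ambiguous content, which is exactly where context matters most, still lands in a human queue.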

Facebook’s use of AI to moderate its platforms has been put to the test before, with critics pointing out that artificial intelligence lacks the human ability to judge the context of much online communication. Especially on issues like misinformation, bullying, and harassment, it can be almost impossible for a computer to know what it is looking at.

Facebook’s Chris Palow, a software engineer on the company’s Interaction Integrity team, agreed that AI has its limitations but told reporters that the technology could still play a role in removing unwanted content. “The system is about bringing together AI and human reviewers to make fewer mistakes,” said Palow. “The AI will never be perfect.”

When asked what percentage of posts the company’s machine learning systems misclassify, Palow didn’t give a straight answer, noting instead that Facebook only allows automated systems to operate without human oversight when they are as accurate as human reviewers. “The bar for automated action is very high,” he said. Nevertheless, Facebook is steadily adding more AI to the moderation mix.

James Vincent, “Facebook is now using AI to sort content for quicker moderation” at The Verge

In the meantime, the internet remains as free of graphic material as it can be in an imperfect world because of the work of humans, not machines. “But the danger to human life is real,” writes Newton, “and it will not go away.”

Continue reading:

AI is not ready to moderate content! With human moderators quarantined by COVID-19, some platforms are turning to AI to keep the bad stuff off social media. Large social media companies have long wanted to replace human content moderators with AI. The COVID-19 quarantines have only intensified that discussion. (Brendan Dixon)

Facebook moderators are not who we think they are. In some cases, companies offer horrific working conditions because they believe that AI will soon take over the job. And if that doesn’t happen, and maybe it can’t, what is the backup plan? Complain?

Yes, there are ghosts in the machine. And one of them is you. You power the AI as you prove your humanity in the CAPTCHA challenges flooding the web. AI systems are not an alien brain developing in our midst. (Brendan Dixon)
