Technology

Mistreated moderators and the pervasive violence of the internet

Editor

Recently, The Verge published a look inside one of Facebook’s deals with a content moderation contractor. Facebook hires these moderators to screen posts that users have reported for violating its community standards; moderators review each reported post and decide whether to delete it or leave it up. Author Casey Newton was able to convince some former Facebook moderators, who are generally prohibited from discussing their work by NDAs, to tell him about their experiences. Their stories are deeply upsetting: the moderators are routinely forced to witness extreme violence, constantly monitored and held to incredibly high standards for speed and accuracy. Accuracy is determined by how often a moderator’s decisions agree with those of slightly more senior moderators, who are given a random sample of the moderator’s processed posts and asked to make their own judgments. At Cognizant, for example, moderators must be “accurate” at least 95 percent of the time.

Within the Cognizant work site Newton examines, some moderators have responded to constant exposure to the worst of Facebook by buying into the conspiracy theories they screen. One person genuinely believes the Earth is flat, another has become convinced that 9/11 was not a legitimate terrorist attack and a third denies that the Holocaust took place.
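The accuracy metric Newton describes is essentially an agreement rate over an audited sample. The following is a minimal sketch of that idea, with hypothetical function and variable names; it is an illustration, not Facebook’s or Cognizant’s actual tooling:

```python
import random

def audit_accuracy(moderator_decisions, senior_decision, sample_size=50):
    """Estimate a moderator's accuracy as the fraction of a random
    sample of their processed posts on which a senior reviewer,
    judging independently, reaches the same decision."""
    post_ids = list(moderator_decisions)
    sample = random.sample(post_ids, k=min(sample_size, len(post_ids)))
    agreed = sum(1 for pid in sample
                 if moderator_decisions[pid] == senior_decision(pid))
    return agreed / len(sample)
```

Under Cognizant’s bar as reported, a moderator would need this agreement rate to stay at or above 0.95.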

Reading Newton’s piece was odd to me because it was eerily similar to the experiences of censors in China, which I am currently researching for a literature review. China holds all website owners liable for the content on their sites, so the vast majority of censorship is actually performed by employees of social media companies. Content moderators at Beyondsoft, a Chinese tech-services company contracted by social media platforms, and at Cognizant, an American company contracted by Facebook, are required to lock their phones in small lockers while at work and to moderate content on computers with limited capabilities. Both companies ask workers to screen a dauntingly high number of posts per day, although Beyondsoft’s targets are higher (exact numbers are difficult to compare because Facebook posts may be longer than the ones Beyondsoft screens).

There are, however, some interesting differences between Facebook moderators’ work and that of Chinese social media censors. Although both companies run training programs, Beyondsoft’s must first teach employees the very information they will be censoring. Many employees learn about the 1989 Tiananmen Square demonstrations for the first time during Beyondsoft’s training. Chinese censors are required to have in-depth, detailed knowledge of the most controversial episodes in the history of the Communist Party of China (CCP), and they are expected to use that knowledge to censor social media in order to protect the CCP.

Yet that cognitive dissonance might be less overwhelming than the trauma Facebook’s moderators experience. Newton reports that many of his sources found their work depressing, anxiety-inducing and horrifying. It is apparently not uncommon for employees to use alcohol, marijuana or other drugs to get through a day of screening posts. Dark humor, including jokes about self-harm, is common at Cognizant.

Last year, for my PWR 1 class, I wrote a paper on white supremacy on 4chan. A surprising number of mass shootings are committed by individuals, usually young cisgender white men, who have spent a great deal of time on websites like The Daily Stormer or 4chan’s /pol/ board (a word of warning: both of those sites contain graphic and disturbing content, and I would not recommend visiting them). Dylann Roof credited online white supremacy with inspiring his actions. Perhaps foolishly, I attempted to gain insight into why white supremacy appeals to people and even convinces some of them to commit terrible crimes, by reading and analyzing content from 4chan’s /pol/ board.

Because this project was of my own design, and I was able to choose when and how to read the messages I collected, my experience was likely far less severe than that of Cognizant employees. It was still frightening and deeply unpleasant. I learned a new vocabulary of hate, an entirely new language of slurs and insults designed to reinforce bigotry. I learned that white supremacists are at once creative in their expressions of hatred and utterly unoriginal in the content of their ideas.

I did, to some extent, accomplish my goal. I learned that these communities seem to offer users a sense of power, uniqueness and support, as long as the user is male and white. They offer a prepackaged sense of purpose (to protect the white race) and identity (a member and protector of the white race). But I also found myself constantly sad, anxious and frustrated; finishing the paper offered an enormous sense of relief and alleviated most of my malaise, but I can’t quite leave it behind.

I haven’t visited 4chan or any of the other sites I researched in nearly a year. Nevertheless, I cannot forget that every one of the posts I read was written by a human being, who can vote and buy a gun. I am living in the same America I was before this project, but now I am playing a never-ending guessing game. I look around and try to find the /pol/ users, The Daily Stormer viewers, the people who spew hateful things online and then go to the grocery store as if nothing’s wrong. I can’t find them, but now I know they must be somewhere, and I can’t quite stop looking.

I don’t know how to fix our internet problem. Bigotry and violence have permeated every platform, from 4chan to Facebook, and asking people to monitor this deluge of posts means subjecting human beings to nonstop hate. It’s clear that Facebook should be paying people more, pressuring them less and providing better mental health services. But that doesn’t really fix the problem. The source of moderators’ trauma will not change, no matter how well Facebook treats them.

At the risk of sounding un-American, I might suggest learning from China. Many Chinese social media platforms preemptively prevent people from posting content that contains certain words; others automatically delete posts containing those words. Facebook and other companies could simply ban obviously offensive terms (thanks to my excursion into 4chan, I have a long list of terms that no one except white supremacists uses). Freedom of speech is important, but the constitutional guarantee binds only the government: the government cannot censor private citizens’ speech, while private companies can moderate their own platforms. Private companies are under no obligation to provide a platform for bigotry.
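A preemptive keyword block of the kind described above can be sketched in a few lines. The term list and function name here are hypothetical placeholders, and real deployments must also contend with deliberate misspellings and coded language:

```python
import re

# Hypothetical placeholder blocklist -- a real deployment would use a
# curated list of slurs and supremacist jargon.
BANNED_TERMS = {"badword1", "badword2"}

def allow_post(text: str) -> bool:
    """Reject a post before publication if any word in it,
    lowercased and stripped of punctuation, is a banned term."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return not any(word in BANNED_TERMS for word in words)
```

In the preemptive model, a post failing this check is never published; in the deletion model, the same check runs after posting and triggers removal.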

Ultimately, though, the internet seems to be an expression of society — particularly, the parts of society that people don’t like to bring up face-to-face. If social media platforms want to prevent the worst parts of society from running rampant on their sites, they must either employ moderators, and subject those moderators to traumatizing posts, or somehow eradicate bigotry and violence in society as a whole. Looking at that choice, it’s not hard to see why Facebook chose the moderators.


Technology

Putin wants his own private internet

New Russian laws could soon isolate the country’s internet, known as the Runet, from the rest of the world as the Kremlin seeks to tighten its grip on the information that flows in and out of the country.

A new bill, backed by President Vladimir Putin and Moscow lawmakers, is currently being pushed through parliament; it would create a single command post from which local authorities could manage, and even halt, information flowing across the internet in Russia.

The country’s so-called “Sovereign Internet” bill is being portrayed by Putin as a defensive response to the Trump administration’s new cyber strategy, which would allow the US to launch offensive measures against Russia and other nation-states known for committing nefarious activities online.

Andrei Soldatov, author of “The Red Web: The Kremlin’s Wars on the Internet”, told Bloomberg that he thinks the law isn’t aimed at foreign threats but at quelling civil unrest, saying:

“This law isn’t about foreign threats, or banning Facebook and Google, which Russia can already do legally. It’s about being able to cut off certain types of traffic in certain areas during times of civil unrest.” 

Sovereign internet

The law, currently in draft form, is actually a mixture of several bills, some of which have been in development for years. It was co-authored by KGB veteran Andrei Lugovoi, who is wanted in the UK for the murder of a renegade agent.

According to Putin, the ultimate goal is to ensure that the Runet continues to function in the event that the US tries to block Russia from accessing the rest of the internet.

If the bill does pass, the country would install special boxes with tracking software at the thousands of exchange points that link it to the rest of the web. These units would feed data into a central nerve center from which regulators could analyze web traffic and reroute or block anything they do not deem appropriate for the Russian populace.

Russian censorship has grown stronger in recent years, and if Putin has his way, the country’s internet will soon resemble China’s, where access to the outside web is blocked by the Great Firewall.


Technology

The Internet Has Become A ‘Completely Out-Of-Control Monster,’ Warns Successor Of Man Who Created It

Nearly 30 years ago, a group of scientists at a Swiss physics institution came up with a novel idea for sharing data and work among themselves across the globe. The groundbreaking concept was the brainchild of Sir Tim Berners-Lee, whose vision for a “decentralized information management system” eventually gave birth to the world wide web.

Fast forward three decades, and the internet has invaded all corners of the globe and governs all areas of life. It has become a power without equal. Some have suggested it has become a Frankenstein’s monster that needs to be reined in, and fast.

One such person is Francois Fluckiger, the man who would become Sir Tim’s successor at CERN, the European Organization for Nuclear Research.

The Daily Mail reports that Fluckiger believes fake news, privacy threats, and online bullying have all conspired to turn the internet into a “completely out-of-control monster.”

When Sir Tim left CERN’s web team in 1994, Fluckiger picked up the reins. He has since retired, and although he has hailed the web as one of the three major inventions of the 20th century, he believes it has morphed into something almost unrecognizable from its early days.

“One has to ask oneself if we did not, in the end, create a completely out-of-control monster,” said Fluckiger.


Technology

Bali’s silent day: No flights, internet on New Year

Bali’s airport will close for 24 hours, the internet will be turned off and streets emptied as the island in Indonesia observes its New Year with an annual day of silence.

‘Nyepi’ begins at 6 AM on Thursday, clearing beaches and all public spaces of people except for special patrols to ensure silence is observed. For the second year, phone companies will turn off the mobile internet on the island, home to more than four million people.

Balinese will stay indoors, covering windows and keeping lights off for the day of reflection.

“A day of silence to mark Saka (Balinese calendar) New Year for us is an opportunity to restart life with a pure heart,” said Wayan Gota, a hotel manager in Kuta, one of the island’s tourist hotspots.

