
Better than nothing: a look at content moderation in 2020



“I don’t think it’s right for a private company to censor politicians or the news in a democracy.” – Mark Zuckerberg, October 17, 2019

“Facebook removes Trump’s post on Covid-19, citing misinformation rules” – The Wall Street Journal, October 6, 2020

For more than a decade, the biggest social media companies’ attitude toward policing disinformation on their platforms was best summed up by Mark Zuckerberg’s oft-repeated warning: “I firmly believe that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced mounting backlash for their role in spreading conspiracy theories and lies, the companies remained reluctant to take action against it.

Then came 2020.

Under pressure from politicians, activists, and the media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted – from labeling false claims by prominent accounts to trying to thwart viral spread by deleting posts from the President of the United States. It’s hard to say how successful these changes have been, or even how to define success. But the fact that they took these steps at all marks a radical change.

“I think we’ll look back at 2020 as the year when they finally accepted some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate of the Berkman Klein Center for Internet and Society at Harvard. “They could have gone further, they could do a lot more, but we should be happy that they are at least in the game now.”

Social media has never been a complete free-for-all; the platforms have long policed the illegal and the obscene. What emerged this year was a new willingness to take action against certain types of content simply because it is false – expanding the categories of prohibited material and more aggressively enforcing policies already on the books. The immediate cause was the coronavirus pandemic, which superimposed an information crisis on a public health emergency. Social media executives quickly saw the potential for their platforms to be used as vectors for lies about the coronavirus that, if believed, could prove fatal. They vowed early on both to try to keep dangerously false claims off their platforms and to point users toward accurate information.

It’s doubtful these companies foresaw how politicized the pandemic would become, or that Donald Trump would emerge as the chief purveyor of dangerous nonsense – forcing a showdown between the letter of their policies and their reluctance to enforce the rules against powerful public officials. By August, even Facebook would have the temerity to take down a Trump post in which the president suggested that children were “virtually immune” to the coronavirus.

“Saying things that are false was the line they wouldn’t cross before,” said Douek. “Before, they said falsity alone is not enough. That changed over the course of the pandemic, and we started to see them become more willing to take things down just because they were wrong.”

Nowhere did public health and politics interact more combustibly than in the debate over voting by mail, which emerged as a safer alternative to in-person polling places – and was immediately demonized by Trump as a Democratic ploy to steal the election. The platforms, perhaps eager to erase the bad taste of 2016, tried to get ahead of the onslaught of mail-vote propaganda. It was postal voting that led Twitter to break the seal by applying a fact-check label to a Trump tweet in May that made false claims about California’s vote-by-mail process.

This trend reached its apotheosis as the November election loomed and Trump broadcast his intention to challenge the validity of any result that went against him. In response, Facebook and Twitter announced elaborate plans to counter that push, including adding disclaimers to premature claims of victory and specifying which credible organizations they would rely on to validate election results. (YouTube, notably, did far less to prepare.) Other measures included restricting political ad purchases on Facebook, increasing the use of human moderation, inserting reliable information into user feeds, and manually intervening to block the spread of potentially misleading viral disinformation. As the New York Times writer Kevin Roose observed, these steps “involved slowing down, shutting off or otherwise hampering core elements of their products – in effect, defending democracy by making their apps worse.”
