Still, Stalinsky said he sided with "dismantle it" when it comes to violent extremism. His perspective was forged by years of monitoring the activity of Islamic terrorists. ISIS, in particular, became notorious for its strategic use of social media in the 2010s. "It might sound crazy, but without Twitter Daesh would not have been ISIS," he said. "They used it so effectively for recruiting, to spread their ideology, to grow." After the 2014 murder of American journalist James Foley, Stalinsky said, Twitter took the issue seriously and largely purged ISIS from its platform. Banned from the major social networks, the group migrated to Telegram and other chat apps.
The platforms were criticized for years for taking a more indulgent approach to white nationalism than to Islamic extremism. With right-wing domestic terrorists now using social media for recruiting, however, the last-minute measures announced last week are likely too late to have an impact on the violence surrounding the inauguration. Recruitment, after all, has been going on for years. YouTube has been shown to make it easier for communities to form around radical right-wing views; Facebook's recommendation algorithms have notoriously steered people toward more extreme groups. It is also difficult to compare the rioters on Capitol Hill directly to Daesh. They are an ad hoc alliance with a specific and immediate goal, keeping Trump in power, rather than an ideological organization with fixed long-term ambitions. While some appear to belong to organized militias and white supremacist groups, many tributaries feed the Stop the Steal river, including adherents of QAnon, who are not inherently organized around violence, and people who simply believe Trump's claims that the country is being stolen from them and feel moved to act.
Indeed, providing a forum for election lies is probably the most important way the social media platforms have contributed to the current atmosphere of political violence, and it is also the one for which a quick fix is most clearly too late. Facebook and YouTube are shutting down accounts that repeat lies about a stolen election, but at this point tens of millions of Americans already believe the false claims. For the companies to make a difference here, they would have had to start much earlier.
To be fair, in some ways they did start earlier. (YouTube much less so; it tends to be less aggressive when it comes to disinformation.) In the months leading up to and following the election, the companies made unprecedented efforts to point users to accurate information and to apply fact-checking labels to allegations of electoral fraud. These measures do not appear to have been effective, but it is understandable why the companies were reluctant to start removing every post challenging the election results. It is untenable for a platform of any size to police all bogus content, especially when it comes to politics, which is largely an exercise in convincing voters to accept a certain version of reality. In an age of intense polarization, it is not always clear which lies will trigger violence until it happens.
It is a mistake, however, to analyze the culpability of social media purely in terms of the binary decision to remove or leave something up. The effect of these companies on speech is much more deeply rooted in their core design, which puts engagement above all else. To understand one way this plays out, I highly recommend a recent New York Times piece by Stuart A. Thompson and Charlie Warzel. They analyzed the public Facebook posts of three far-right users, including one who was in the crowd outside the Capitol on January 6. All three, the authors found, started out posting ordinary content, to limited reaction. Once they moved on to extreme messages, whether encouraging "Stop the Steal" protests, denying Covid, or spreading false claims about rigged ballot counts, their engagement soared: more likes, more comments, more shares. More attention.