Donald Trump’s accounts were banned on Twitter, Facebook, and a host of other platforms. Every last one of @realdonaldtrump’s 47,000 tweets vanished from the site in an instant, from birther lies and electoral conspiracy theories to the 2016 taco bowl tweet. In an explanatory blog post, the company cited the attack on the Capitol and “the risk of further incitement to violence” that allowing more Trump tweets could pose. The multiplatform removal drew cheers from many, along with anger from more than a few Trump supporters. The bans have also raised concerns that companies have gone too far in exercising their power to shape what users see.
In fact, Trump is far from the only one to have his content removed by a tech company. With surprising regularity, online platforms flag or remove user content they deem objectionable. Twitter’s recent ban of 70,000 accounts associated with QAnon follows other initiatives the company has taken to tackle extremist groups. It has banned over a million accounts associated with terrorist groups, including a large number tied to the Islamic State. In the first half of 2020 alone, Twitter suspended around 925,000 accounts for violating its rules.
While the removal of some content might be seen as a matter of national security, the practice also occurs in much more mundane situations. Yelp (where I have consulted in the past), for example, has gathered hundreds of millions of reviews of local businesses, and research has shown that these reviews have a real impact on business outcomes. Its popularity has created new challenges, including fake reviews submitted by businesses posing as customers to bolster their own reputation online (or to tear down their competitors). To fight review fraud, Yelp and other platforms flag reviews they deem spam or otherwise objectionable and remove them from the main listings. Yelp puts them in a section called “Not Currently Recommended,” where they don’t count toward the ratings you see on a business’s page. The goal of approaches like this is to make sure people can trust the content they see.
In a 2016 article published in Management Science, my colleague Giorgos Zervas and I found that about 20% of Boston restaurant reviews were filtered off of Yelp’s main results pages. Platform-wide estimates suggest even higher removal rates, with around 25-30% of all reviews not appearing on main business pages. Yelp is of course not alone in this practice. Tripadvisor and other review platforms similarly invest in removing reviews that seem likely to be fraudulent.
Online marketplaces also have a habit of removing users from the platform for bad behavior. In one series of articles, my coauthors and I found widespread evidence of racial discrimination on Airbnb. In response to our research and proposals, together with pressure from users and policymakers, the platform embarked on a wide range of changes aimed at reducing discrimination. One of those steps (which we proposed in our research) was to create new terms of service requiring users to agree not to discriminate on the basis of race in their acceptance decisions. The new terms had real bite: Airbnb ended up removing over a million users who refused to accept them. Uber also has a history of removing users, from drivers who don’t maintain a high enough rating to the 1,250 riders who were banned from the platform for refusing to wear a mask during the pandemic.
All of this shows the power of platforms to shape the content we see, and an often overlooked way in which they wield that power. Ultimately, removing content can be valuable to users. People need to feel safe in order to participate in markets. And it can be hard to trust review websites riddled with fake reviews, rental housing websites teeming with racial discrimination, or social media platforms that serve as megaphones for disinformation. Removing bad content can create healthier platforms in the long run. There is a moral argument for banning the president. There is also a business case.