At the end of last year, there were few better symbols of bad-faith politics than Section 230 of the Communications Decency Act, the law that grants online platforms legal immunity for content generated by users. After a rather sleepy existence since its adoption in 1996, Section 230 had become an unlikely rallying cry for a subset of Republican politicians who brazenly blamed it for letting social media platforms discriminate against conservatives. (In fact, the law has nothing to do with partisan balance, and if anything it allows platforms to keep up more right-wing content than they otherwise would.) In the home stretch of his re-election campaign, Donald Trump started dropping Section 230 references into his stump speeches. It all culminated in a pair of depressing Senate hearings that, while nominally about Section 230, were little more than PR stunts designed for Ted Cruz to get clips of himself berating Twitter CEO Jack Dorsey. Senate Democrats weren’t covering themselves in glory, either.
So it’s a bit surprising to see a legislative proposal on Section 230 that thoughtfully, albeit imperfectly, tackles some of the law’s most glaring issues. The SAFE TECH Act, a bill introduced Friday morning by Democratic Senators Mark Warner, Mazie Hirono and Amy Klobuchar, is an encouraging sign that members of Congress are paying attention to the smartest criticisms of Section 230 and trying to find appropriate solutions.
First, a brief refresher is in order. Section 230 was passed in 1996 to encourage the nascent interactive internet platforms – message boards, at the time – to self-moderate. The first part of the law states that “interactive computer services” are not legally responsible for user-generated content. The second part says that they are free to moderate that content without incurring liability for doing so. This solved the dilemma of a business putting itself at greater legal risk by being more proactive about policing harmful content.
In recent years, the law has become the subject of intense debate. Defenders of Section 230 credit it with enabling the modern internet to flourish. They argue that interactive websites would be unimaginable without it, crushed under the threat of legal action from anyone offended by a comment, post, or customer review. Critics counter that Section 230 allows companies like Facebook and YouTube, along with shadier operators further down the food chain, to profit from hosting harmful content without having to bear the costs of cleaning it up.
Some of the questions raised in this debate are difficult to answer. But some are quite easy. That’s because judges have interpreted Section 230 immunity so broadly that it has produced legal outcomes that seem patently perverse. Today, Section 230 protects gossip sites that actively solicit nasty rumors from users, and even revenge porn, essentially legalizing a harassment-based business model. Until Congress intervened recently, it protected sites like Backpage, which were built to facilitate prostitution. It lets companies off the hook even when they have been told their services are being used to harm people. In a now-notorious case, a man’s ex-boyfriend impersonated him on Grindr, the popular gay dating app, sending a flood of men to his home and workplace in search of sex. Grindr ignored the victim’s pleas to do something. When the victim sued, a federal judge ruled that Section 230 shielded Grindr from liability.
The law even applies to commercial transactions whose consequences are felt in the physical world. In 2012, a Wisconsin man murdered his wife and two of her colleagues with a gun he had purchased through Armslist, an online gun marketplace. Because he was subject to a restraining order, he was legally prohibited from owning a firearm. Armslist allowed him to circumvent that prohibition. The victim’s daughter sued, and the Wisconsin Supreme Court ultimately ruled that Section 230 made Armslist immune, because the advertisement for the weapon had been posted by a user.