If you want to learn why and how Section 230 came to be (spoiler alert: porn websites are partly to blame), check out the video I produced for Retro Report, in partnership with our friends at Vox.

Section 230 Was Created to Shield Sites Like Pornhub. It Might Be Killed for the Same Reason.

Democrats and Republicans agree that the law doesn’t serve Americans but disagree on why. Would a repeal help either side’s cause?

Joseph Hogan
Published in Retro Report
4 min read · Dec 29, 2020

Somewhere between the early days of Usenet and The Well in the 80s and the rise of Facebook and Twitter in the 2000s, the Internet became our public square. At some point we just accepted that much of our talking, connecting and arguing would happen online. But we failed to remember that the platforms on which we do all that talking and connecting and arguing aren’t actually public; they’re run by corporations, which set the rules for what can appear on their platforms.

Bill Gates to NBC’s Tom Brokaw: “Well, it’s very hip to be on the Internet right now.”

Now, with disinformation flooding the web and Big Tech trying (or not trying) to stop it, Democrats and Republicans want a greater say in those rules.

President-elect Joe Biden says that platforms like Facebook should do more policing of disinformation. But GOP lawmakers like Josh Hawley and Tom Cotton claim that those same platforms censor conservative views and should do less policing.

Those aims run against each other. But both sides are calling for revisions to, or an outright repeal of, the same little 26-word section of the law that they take as the heart of the issue. If you follow President Trump on Twitter, you’ve seen him slam the caps lock to renounce it: Section 230 of the Communications Decency Act.

President Trump has repeatedly called for the repeal of Section 230. Without it, platforms like Twitter would be liable for his posts.

What is Section 230?

Section 230 shields internet companies from legal liability for content posted by users.

Let’s say a user posts something defamatory in the comment section of a website. The website operator is free to take down that post or leave it up. Either way, thanks to Section 230, the target of the defamatory comment can’t sue the operator.

Like so much about the internet, we simply accept this state of affairs as a given. But it wasn’t always so. At Retro Report, in collaboration with Open Sourced by Vox, we made a video about the origins of Section 230. It features a crusading anti-porn senator from Nebraska and Bill Gates explaining how “hip” the internet is (in other words, peak-90s content).

But what you need to know right now is this: In the mid-90s, courts held that internet companies that moderated user content — to keep their sites family-friendly, say — were legally liable for anything their users posted. Conversely, if websites didn’t moderate their content — if they allowed their users to say whatever they wanted, however defamatory — the websites wouldn’t be liable, because they’d never established editorial control over the content.

Congress passed Section 230 to let internet companies off the legal hook so they’d be free to moderate. An entire industry — Facebook, Twitter, YouTube, even Google — grew on that legal foundation.

So what’s the problem?

Well, for one, Section 230 provides legal immunity without requiring any particular kind of content moderation. So, internet companies can kick back, not bother with moderation, and enjoy not being sued.

Consider Pornhub. After Nick Kristof of The New York Times reported that the platform included videos of rape and sexual assault of minors, and Mastercard and Visa stopped allowing their cards to be used on the site, Pornhub changed its policies and removed the vast majority of its content. But it’s not legally liable for those videos: the victims can’t successfully sue the platform. (One exception would be if the platform facilitated sex trafficking, a carveout from Section 230 immunity created by a 2018 law known as SESTA-FOSTA.)


That’s why some critics think Section 230 should be revised so that internet companies, if they want immunity, would be required to make good-faith efforts to moderate content responsibly.

But would repealing the law help Democrats, who are worried about disinformation, or Republicans, who want platforms to be politically neutral in their content moderation decisions?

1) Well, it might not help Democrats, actually.

Disinformation and hate speech — kooky conspiracy theories, racist content — are what some legal experts call “awful but lawful.” Saying, for instance, that the Covid vaccine has a government microchip in it might be foolish, but it isn’t illegal. So even if internet companies were liable for such user content, they still couldn’t be sued for much of it.

2) And it would probably help Republicans even less.

Section 230 doesn’t require, as some Republicans claim, that internet companies be politically neutral, and repealing it wouldn’t necessarily make them neutral either. It could actually lead platforms to deactivate accounts that might be a liability.

But getting too wonky about Section 230 risks missing the bigger point about why Democrats and Republicans have both set their sights on it. They seem to be acknowledging, and fighting over, the point with which we started: the internet, our public square, isn’t public. It’s dominated by a few large companies whose decisions, about content moderation and a lot else, have significant consequences for our politics and our future.

So whatever comes of the debate over Section 230, it’s not going to be the end of the debate over Big Tech.

This article has been updated to correct an error. Senator James Exon represented Nebraska, not Oklahoma.

Joseph Hogan is a fact-checker at Retro Report and produced the video “Trump and Biden Both Want to Repeal Section 230. Would That Wreck the Internet?”
