In today’s Finshots, we explain why everyone is talking about an obscure US law known as Section 230
In 1995, Prodigy, an internet company of the pre-Google and Facebook era, was sued for $200 million. The complaint by Stratton Oakmont, an investment firm, was simple — Prodigy accepted user comments. It even moderated them and decided which comments would be published and which would be discarded. So, if Prodigy chose to publish a defamatory, scandalous comment by a user that wasn’t backed by facts, then that was Prodigy’s fault.
And a court in New York agreed. They said, “Look, if you’re moderating content from your users, you’re an editor and a publisher. Just like a newspaper. And if a newspaper can be sued, so can you. If you don’t moderate anything, then that’s fine. You won’t be sued.”
So the folks in power in the US decided that this wasn’t great for the future of the internet.
Why’s that, you ask?
Well, the ruling meant that online platforms would simply choose not to moderate content anymore. And the internet could become a cesspool of hate and toxicity. The only way the internet could work in the future was if platforms could exercise their discretion without worrying about being sued.
So they decided to tweak the law and introduce Section 230. With just 26 words in 1996, the internet changed forever.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Simply put, if you post something nasty about your local politician, internet platforms like Facebook or Twitter can’t be held responsible for it. Only you can be sued because you wrote the content. Platforms were off the hook. They could go about their business as usual and not worry about being sued for people’s comments on their websites. All they have to say is, “Remember Section 230!”
The internet platforms of today — Twitter, LinkedIn, heck, even a restaurant review website — exist solely because of this law. Everyone’s benefited from Section 230. You, me, the platforms.
But remember, this law was enacted over 25 years ago. Back then, only 40 million people used the internet across the world. The internet and technology have changed tremendously since. Over 3.5 billion people use social media platforms today. And platforms don’t just moderate content anymore. They play another role — they amplify certain content with their algorithms. All to keep users engaged. And that’s the bone of contention.
See, in 2015, terrorists struck Paris. They blew up bombs, fired guns, and 130 innocent people died. And the parents of one of the victims decided to sue Google.
Wait…what connection does Google have with the terrorists?
Well, the argument is that YouTube’s algorithms are the problem. That these algorithms recommend harmful content. They push radical videos to certain users. And that such content, peddled by terrorist organizations, helped them recruit and radicalize the individuals who carry out attacks.
That means when YouTube actively ‘recommends’, it isn’t simply moderating. It’s not just playing the role of an editor. The algorithms are created by YouTube’s staff to push certain content ahead of the rest. So the responsibility for the content shifts to YouTube. And Section 230 shouldn’t apply anymore.
Simply put — Recommendations of content aren’t the same as display of content.
On the face of it, the argument makes sense and is quite appealing, right?
But it’s quite a thorny issue. Remember, it’s the very same algorithms that actually keep the platform relevant for you and me. If we’re constantly watching Bollywood videos, we don’t want Chhota Bheem to pop up in between, right?
And it’s not just YouTube. Twitter’s ‘For You’ section is based on algorithms too. It sees what kind of posts you engage with. And creates a special feed that it thinks you’ll like. Every social media company has algorithms that push content to keep things relevant. But the content isn’t created by the platform itself.
So, the question is — should these algorithms be protected under Section 230?
And it’s not just algorithms we have to worry about these days. We have AI-powered search engines too — Google’s Bard and Microsoft’s new Bing.
See, regular search engines simply list links to content from other sources. But ‘conversational’ search engines are a different breed. They take information from across the web. And they then summarize it in a nicely palatable way. In the search engine’s own words.
Much like how at Finshots we do our research using secondary sources and write out a story for you. But we typically give you the links to these sources. We point you to the place where we got the information from.
Now, an AI search engine may not do that. It might simply write out its own summary for your query, in its own words. And that summary may include false information. It could even contain defamatory language and content.
So should they be protected under Section 230?
Anyway, we don’t have the answers to any of this yet. We’ll have to wait until June to see what the US Supreme Court thinks. But what we can say is that the 26 words that created the internet of today won’t be enough for the internet of the future.
PS: Sure, the decision in the US could set a precedent for how other countries regulate platforms too. But other regions have already moved past what the US is doing, to some extent. Take the EU, India, and South Africa, for instance, which already say that only ‘passive’ platforms are safe from liability. And that any online website that ‘organizes’ content in any way could be held accountable.