On January 8th, Twitter permanently suspended President Donald Trump’s account after a mob of protesters ransacked legislators’ offices in the Capitol. Facebook, Snapchat and Instagram also booted the president off their apps, and many other companies followed suit soon after. And while this move was largely unprecedented, it wasn’t exactly unexpected. So in today's Finshots, we look at how the reluctant gatekeepers became the active moderators of 2021.


Business

The Story

Social media apps have refused to actively moderate content for many years now. But that equation has fundamentally changed in the past year. As one article in The Atlantic notes —

At the end of May, Twitter for the first time labeled a tweet from the president of the United States as potentially misleading. After Donald Trump falsely insisted that mail-in voting would rig the November election, the platform added a message telling users to “get the facts.” Within a day, Mark Zuckerberg, Facebook’s founder and CEO, had appeared on Fox News to reassure viewers that Facebook had “a different policy” and believed strongly that tech companies shouldn’t be arbiters of truth of what people say online. But come November… much of Trump’s Facebook page, as well as more than a third of Trump’s Twitter feed, was plastered with warning labels and fact-checks, a striking visual manifestation of the way that 2020 has transformed the internet. Seven months ago, that first label on a Trump tweet was a watershed event. Now it’s entirely unremarkable.

That being said, these platforms are still grappling with some fundamental questions about moderation and it’s all coming to a head with the recent suspension of President Trump’s account.

The Kierkegaardian conundrum

Søren Kierkegaard was a Danish philosopher, most famous for expounding the difficulties of making choices. After receiving his doctorate in philosophy and breaking off his engagement to Regine Olsen, he wrote in his most popular work — “Marry, and you will regret it; don’t marry, you will also regret it; marry or don’t marry, you will regret it either way… This, gentlemen, is the essence of all philosophy.”

It might not seem profound at first sight, and even juvenile if you aren’t intimately familiar with his body of work. But Kierkegaard was perhaps the first Western philosopher to properly inquire into the implications of our choices. “Is there no right choice after all?”

“Or is the right choice the one that carries the least amount of regret?"

Social media companies have nothing to gain from active moderation. It’s a needless cost head, takes enormous effort and invokes the wrath of those who are censored. They could simply let people post what they want and mint money off it. But this is easier said than done.

Facebook and Twitter don’t just provide a platform for people to express themselves, they amplify their voices. And with that comes an added set of responsibilities. When Trump supporters stormed the Capitol a few days back, many people pinned the blame on Twitter for providing the president with a megaphone to call on this frenzied mob. So the cost of non-intervention could, in some cases, involve active social harm. What do you do then?

Well, you ban the perpetrators!

But that only triggered another wave of discussion. Have the reluctant gatekeepers now become arbiters of free speech? Do these institutions wield excessive power? Granted, they are private organizations and they can choose to boot people off for policy violations. But how exactly do they enforce their own policies? Are they consistent in their application? Or do they lack meaningful oversight?

More often than not, there is very little communication from these companies on how they enforce their policies. On other occasions, they offer a vague explanation citing a policy violation without describing the actual reasons in full detail. And while it’s no easy task to moderate millions of accounts, the collective consensus is that uniform enforcement is still a distant dream.

But let’s suppose platforms do, in fact, start consistently moderating content. Is it an effective strategy to curb hate and misinformation? Well, we don’t know yet. But what we do know is that on some occasions, the very act of de-platforming can have the opposite of the intended effect. When Twitter blocked a New York Post story about President-Elect Joe Biden’s son, Hunter Biden, the suppression only drew more attention to the article. It was almost as if censoring the information aided its discovery, a phenomenon popularly known as the Streisand effect. So one could argue that de-platforming Trump only furthers his cause. Maybe a new echo chamber will emerge somewhere else. Maybe Trump and his followers will create a new platform.

Maybe…

But this also raises another pressing consideration. What happens when lawmakers intervene? Legislators in the US and in India are already talking about the possibility of policing the likes of Twitter and Facebook. If that happens… when that happens, it won’t end well for these social media giants. So right now they are in a precarious spot. They’ve crossed the Lakshmana Rekha and there’s no going back. The only problem, however: the road ahead is uncertain, and the likes of Twitter and Facebook have the unenviable task of finding the middle ground.

“Damned if you do, damned if you don’t”

In the meantime, let us know what you think of the matter. Should Twitter and Facebook be allowed to act as de facto gatekeepers, or do you think they're playing God here? Let us know your thoughts here and we will retweet the best comments.

Until then...

Share this Finshots on WhatsApp, Twitter, or LinkedIn.