Following reports of genocide in Myanmar, Facebook banned the country’s top general and other military leaders who were using the platform to foment hate. The company also bans Hezbollah from its platform because of its status as a US-designated foreign terror organization, despite the fact that the party holds seats in Lebanon’s parliament. And it bans leaders in countries under US sanctions.
At the same time, both Facebook and Twitter have stuck to the tenet that content posted by elected officials deserves more protection than material from ordinary individuals, thus giving politicians’ speech more power than that of the people. This position is at odds with plenty of evidence that hateful speech from public figures has a greater impact than similar speech from ordinary users.
Clearly, though, these policies aren’t applied evenly around the world. After all, Trump is far from the only world leader using these platforms to foment unrest. One need only look to the BJP, the party of India’s Prime Minister Narendra Modi, for more examples.
Though there are certainly short-term benefits—and plenty of satisfaction—to be had from banning Trump, the decision (and those that came before it) raises more foundational questions about speech. Who should have the right to decide what we can and can’t say? What does it mean when a corporation can censor a government official?
Facebook’s policy staff, and Mark Zuckerberg in particular, have for years shown themselves to be poor judges of what is or isn’t appropriate expression. From the platform’s ban on breasts to its tendency to suspend users for speaking back against hate speech to its total failure to remove calls for violence in Myanmar, India, and elsewhere, there’s simply no reason to trust Zuckerberg and other tech leaders to get these big decisions right.
To remedy these concerns, some are calling for more regulation. In recent months, demands have abounded from both sides of the aisle to repeal or amend Section 230—the law that protects companies from liability for the decisions they make about the content they host—despite serious misrepresentations of how the law actually works from politicians who should know better.
The thing is, repealing Section 230 would probably not have forced Facebook or Twitter to remove Trump’s tweets, nor would it prevent companies from removing content they find disagreeable, whether that content is pornography or the unhinged rantings of Trump. It is companies’ First Amendment rights that enable them to curate their platforms as they see fit.
Instead, repealing Section 230 would hinder competitors to Facebook and the other tech giants, and place a greater risk of liability on platforms for what they choose to host. For instance, without Section 230, Facebook’s lawyers could decide that hosting anti-fascist content is too risky in light of the Trump administration’s attacks on antifa.
This is not a far-fetched scenario: Platforms already restrict most content that could be even loosely connected to foreign terrorist organizations, for fear that material-support statutes could make them liable. Evidence of war crimes in Syria and vital counter-speech against terrorist organizations abroad have been removed as a result. Similarly, platforms have come under fire for blocking any content seemingly connected to countries under US sanctions. In one particularly absurd example, Etsy banned a handmade doll, made in America, because the listing contained the word “Persian.”
It’s not difficult to see how ratcheting up platform liability could cause even more vital speech to be removed by corporations whose sole interest is not in “connecting the world” but in profiting from it.
Despite what Senator Ted Cruz keeps repeating, there is nothing requiring these platforms to be neutral, nor should there be. If Facebook wants to boot Trump—or photos of breastfeeding mothers—that’s the company’s prerogative. The problem is not that Facebook has the right to do so, but that—owing to its acquisitions and unhindered growth—its users have virtually nowhere else to go and are stuck dealing with increasingly problematic rules and automated content moderation.
The answer is not to repeal Section 230 (which, again, would hinder competition) but to create the conditions for more competition. This is where the Biden administration should focus its attention in the coming months. And those efforts must include reaching out to content moderation experts from advocacy and academia to understand the range of problems faced by users worldwide, rather than simply focusing on the debate inside the US.