Twitter and Instagram just removed antisemitic posts from Kanye West and temporarily banned him from their platforms.
It’s the latest example of … what? How good these tech companies are at content moderation? Or how irresponsible they are for “muzzling” controversial views from the extreme right? (Defenders of West, such as Indiana Attorney General Todd Rokita, are incensed that he’s been banned.) Or how arbitrary these giant megaphones are in making these decisions? (What would Elon Musk do about Kanye West?)
Call it the Kanye West paradox: Do the social media giants have a duty to take down noxious content or to post it? And who decides?
Facebook (and its Instagram), Google’s YouTube, Twitter, and TikTok are the largest megaphones in world history. They’re contributing to the rise of neofascism in America and around the world, inspiring mentally disturbed young men to shoot up public schools, and spreading dangerous conspiracy theories that are dividing people into warring camps.
They’re also among the richest and most powerful corporations in the world — headed by billionaires like Mark Zuckerberg, and soon, very likely, Musk (who has promised to allow Trump back on Twitter).
And they’re accountable to no one other than their CEOs (and, theoretically, investors).
It’s this combination — huge size, extraordinary power over what’s communicated, and utter lack of accountability — that’s become unsustainable.
So what’s going to happen?
Last week, the Supreme Court agreed to hear cases involving Section 230 of the Communications Decency Act of 1996, which gives social media platforms protection from liability for what’s posted on them. Plaintiffs in these cases claim that content carried by the companies (YouTube in one case, Twitter in the other) led to the deaths of family members at the hands of terrorists.
Even if the Supreme Court decides Section 230 doesn’t protect the companies — thereby pushing them to be more vigilant in moderating their content — Texas, the defendant in another upcoming case (NetChoice v. Paxton), argues that the First Amendment bars these companies from being more vigilant.
That case hinges on a Texas law that allows Texans and the state’s attorney general to sue the social media giants for unfairly banning or censoring them, based on political ideology. Texas argues that the First Amendment rights of its residents require this.
So, do the social media giants have a duty to take down controversial content or to post it? And who decides?
It’s an almost impossible quandary, until you realize that these questions arise because of the huge political and social power of these companies, and their lack of accountability. (The Court seems to recognize this. When the justices decided in May to temporarily stop the Texas law from taking effect while legal battles continued, Samuel Alito, in dissent, noted that the Texas law “addresses the power of dominant social media corporations to shape public discussion of the important issues of the day” [my emphasis].)
In reality, Facebook (and its Instagram), Google (and its YouTube), Twitter, and TikTok aren’t just for-profit companies. My bet is that the Supreme Court will treat them as common carriers, like railroads or telephone lines. Common carriers can’t engage in unreasonable discrimination in who uses them, must charge just and reasonable prices, and must provide reasonable care to the public (transit providers are expected to keep bus and train passengers safe, for example).
In a Supreme Court decision last year, plaintiffs claimed that the @realdonaldtrump Twitter account was a public forum run by the president of the United States, and Trump’s blocking of users stifled free speech. The Court dismissed the case as moot, since Trump is no longer president. But in a 12-page concurring opinion, Clarence Thomas gave a hint of what’s to come. He argued that Twitter’s ban of Trump showed that the real power lay with the large social media platforms themselves, not the government officials on them, and that “the concentrated control of so much speech in the hands of a few private parties” was unprecedented.
Thomas noted that Section 230 gives digital platforms some legal protection related to the content they distribute, but Congress “has not imposed corresponding responsibilities.” He then cited a 1914 Supreme Court ruling that making a private company a common carrier may be justified when “a business, by circumstances and its nature…rise[s] from private to be of public concern,” leading Thomas to argue that “some digital platforms are sufficiently akin to common carriers … to be regulated in this manner.” He concluded that “[w]e will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”
Other justices have made similar remarks. If the Court decides the social media giants are “common carriers,” then responsibility for content moderation would shift from these companies to a government entity like the Federal Communications Commission, which would regulate them similarly to how the Obama-era FCC sought to regulate internet service providers.
But is there any reason to trust the government to do a better job of content moderation than the giants do on their own? (I hate to imagine what would happen under a Republican FCC.)
So are we locked into the Kanye West paradox — or is there an alternative to the bleak choice between leaving it up to the giant unaccountable firms or to a polarized government to decide?
Yes. It’s to address the fundamental problem directly — the monopoly power possessed by the social media companies. The way to do this is to apply the antitrust laws, and break them up.
My guess is that this is where we’ll end up, eventually. There’s no other reasonable choice. As Winston Churchill is reputed to have said, “Americans can always be trusted to do the right thing, once all other possibilities have been exhausted.”
What do you think?