Answering the title question: yes, I think so. I’ve argued for a while that, insofar as possible, sites like Twitter and Facebook should hew as closely to the First Amendment as any government organization does, including public schools. This should also hold for private schools, at least as a principle, since if one accepts the many arguments for free speech, there is no relevant difference between public and private venues in their need for it—and that includes social media.
Remember, American courts have carved out well-known exceptions to the First Amendment—exceptions like no personal harassment or atmosphere of harassment in the workplace, no false advertising, no child pornography, no speech intended to incite imminent violence, no defamation, no threats, and so on. That pretty much covers all the relevant issues, except for “hate speech,” which is now the main reason people want to cut back on First Amendment rights. (I disagree, of course, as the definitions of hate speech usually boil down to “speech that I don’t like,” and at any rate the truly invidious forms are already covered by the court-mandated exceptions to First Amendment rights.)
In this article from The Dispatch, editor David French (also a contributing writer for The Atlantic) makes the argument I reprised above: stop kvetching about what kind of speech we should have on Twitter; just let ’er rip according to how the courts have construed the First Amendment. That is, I think, what Elon Musk wanted, but the entire Internet now seems bogged down in endless discussions of how to “moderate” Twitter, combined with nonstop demonization of Musk (I have no dog in the pro- or anti-Musk arguments).
This article isn’t too long, but it makes a persuasive case for taking moderation of social media (and private universities) to First Amendment levels. Click to read (you may have to start a free account using your email):
I’ll give some excerpts. First, French draws parallels between the arguments about Twitter and the failed college “speech codes”:
A few years ago I was invited to an off-the-record meeting with senior executives at a major social media company. The topic was free speech. I’d just written a piece for the New York Times called “A better way to ban Alex Jones.” My position was simple: If social media companies want to create a true marketplace of ideas, they should look to the First Amendment to guide their policies.
This position wasn’t adopted on a whim, but because I’d spent decades watching powerful private institutions struggle—and fail—to create free speech regulations that purported to permit open debate at the same time that they suppressed allegedly hateful or harmful speech. As I told the tech executives, “You’re trying to recreate the college speech code, except online, and it’s not going to work.”
And I like French’s potted history of college speech codes:
At the risk of oversimplifying history, here’s the short story of modern university censorship. As American universities grew more diverse, a consensus emerged in universities both public and private that schools should strive to create a “welcoming” environment for students and faculty, with particular attention paid to protecting students from discrimination on the basis of protected categories such as race, sex, sexual orientation, and gender identity.
Federal and state laws required colleges and universities to protect students from harassment on the basis of protected characteristics. But schools wanted to go further. They wanted to make sure that students and faculty were protected from psychological discomfort. The speech code was born.
At the same time, however, schools were still eager to proclaim their support for academic freedom and free speech. So the message to the campus community boiled down to something like this—all speech is free except for hate speech. But what was hate speech? The definitions were broad and malleable.
Temple University, for example, banned “generalized sexist remarks.” Penn State University declared that “acts of intolerance will not be tolerated,” and defined harassment as “unwelcome banter, teasing, or jokes that are derogatory, or depict members of a protected class in a stereotypical and demeaning manner.”
One of the worst speech codes I ever read was enacted at Shippensburg University, a public university in Pennsylvania. The policy was remarkably broad: “Shippensburg University’s commitment to racial tolerance, cultural diversity and social justice will require every member of this community to ensure that the principles of these ideals be mirrored in their attitudes and behaviors.”
It doesn’t take a legal genius to realize that these speech rules were so broad that they granted administrators extraordinary power over free speech. Combine that power with the ideological blinders that are inherent to any political monoculture, and you have a recipe for staggering double standards in censoring political and religious speech. I could fill an entire newsletter with stories of such abuses.
Again, private universities are at liberty to restrict speech any way they want, but once they proclaim a principle of free speech, they must stick by it and can, in fact, be sued for abrogating it. But I see no reason why any private school should restrict free speech.
They can, of course, restrict it in a way that doesn’t disrupt the mission of the university, as by banning loud demonstrations that would deplatform a speaker. French discusses other permissible restrictions.
And so it should be, says French, with social media:
The same message should apply to social media. As a private company, you can choose to become, say, a “progressive social media platform” or a “website for Christian connection and expression” and govern yourself accordingly. But if you hold yourself out as a place that welcomes all Americans, then you’re courting disaster if you depart from the lessons learned from constitutional law.
And that’s Twitter. And I agree.
French also favors “viewpoint neutrality,” which simply recasts free speech to add that no speech should be banned merely because of “the underlying viewpoint of the speaker.” This rule, along with adherence to the First Amendment, has of course been adopted by the University of Chicago in its Principles of Free Expression and its Kalven Principles.
After emphasizing that all speech rules should be as clear as possible, because vague or “overbroad” rules can also chill speech, French recommends three principles for how social media companies can implement First Amendment guidelines:
How does all this apply to Twitter, Facebook and every other large social media platform on the planet? First, it means giving up the quest for a free speech utopia and embracing viewpoint neutrality. There is no way to create any meaningful free speech environment that allows for actual debate while protecting participants from hurtful ideas or painful speech. Executives at Twitter or Meta are no better than college administrators at crafting the perfect speech code. The brightest minds have already made that effort, and even the brightest minds have failed.
Second, it means moderating on the basis of traditional speech limits. Even institutions that embrace viewpoint neutrality will place limits on speech. They’ll have to. If there is one thing we know from decades of experience with the internet, it is that completely unmoderated spaces can and do become open sewers that are often unsafe for children and deeply unpleasant for adults. Unmoderated spaces can become so grotesque that they’re simply not commercially viable.
“Viewpoint neutral” is thus not a synonym for “unmoderated.” Consistent with viewpoint neutrality, a platform can impose restrictions that echo offline speech limitations. Defamation isn’t protected speech. Neither is obscenity. Harassment is unlawful. Invasions of privacy (doxxing, for example) should face sanctions. Threats and incitement violate criminal law. A platform can say, “Children are present. No nudity.”
It is easy to imagine different rules that make it easier to talk about issues and harder to target individuals. Examples of viewpoint-neutral time, place, and manner regulations that could prevent, for example, some of the worst conduct on Twitter could include limiting or eradicating the quote-tweet function, limiting the visibility of replies to other users’ tweets, or limiting the ability of users to reply or interact with tweets of people they don’t follow.
Third, it means embracing clarity and transparency. Make rules clear. Create an appeals process when users are penalized. No human institution is ever going to apply its rules perfectly, and accountability is necessary. Secrecy in decision-making can impair trust every bit as thoroughly as flaws in the substance of the decisions made.
In the end, French argues—and again I agree—that Musk’s takeover of Twitter should prompt us to rethink the notion of free speech on social media. As French says in his last sentence,
New platforms can benefit from old principles, and when it comes to managing a marketplace of ideas, centuries of First Amendment jurisprudence can help light the way.
I’m sure some readers will disagree, arguing about Russian bots, hate speech, or other issues. Feel free to weigh in below.