Diana MacPherson called my attention to this new post by Twitter on conduct that they’re going to block. And they’re starting with religion. Click on the screenshot to read:
Here’s what Twitter says:
We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* [JAC: they give two studies in the article’s footnotes] shows that dehumanizing language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion.
Starting today, we will require Tweets like these to be removed from Twitter when they’re reported to us:
Twitter notes that if you've already put one of these up, the tweet will be removed but your account won't be penalized. But for posts made after the rule took effect (July 9, 2019), accounts may be deleted if they post stuff like the above. (But how would you know? Who reads Twitter-policy updates? Shouldn't you at least get a warning?)
But note that they’re starting not with ethnicity, race, or other common subjects said to attract “hate speech.” They’re starting with religion. Why? Here’s what they say:
Why start with religious groups?
Last year, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.
Some of the most consistent feedback we received included:
Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.
Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”
Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.
But this doesn’t at all explain why they started with religion. The next bit is said to help explain “why religion first?”, but it doesn’t seem to, either:
Through this feedback, and our discussions with outside experts, we also confirmed that there are additional factors we need to better understand and be able to address before we expand this rule to address language directed at other protected groups, including:
How do we protect conversations people have within marginalized groups, including those using reclaimed terminology?
How do we ensure that our range of enforcement actions take context fully into account, reflect the severity of violations, and are necessary and proportionate?
How can – or should – we factor in considerations as to whether a given protected group has been historically marginalized and/or is currently being targeted into our evaluation of severity of harm?
Well, you could say that delineating "hate tweets" and enforcing rules consistently is easier with religion than with, say, gender or race, but I don't think so. In both cases you have to separate hatred of people from dislike of policy (e.g. "Deport all Muslims" vs. "Islamic doctrine is often oppressive"; or "Send blacks back to Africa" vs. "Affirmative action is wrong"). Note that both examples, which involve religion and race, show the potential blurring of lines, for sentiments against affirmative action or against Islamic doctrine can be and have been deemed "hate speech".
This blurring is why I object to Twitter doing this kind of policing, as drawing lines will be arbitrary. But if they feel they have to draw lines, then the tweets above, which are bigoted against people, are clearly reprehensible. And since Twitter is a private company, they can do what they like. But I want them to hew to the First Amendment as closely as possible, and the tweets above don’t violate that.
Diana felt more strongly than I, and told me this (quoted with permission):
It sounds like a bad idea all around to me. How many times have religious groups had atheists banned from social media just for being atheists? So now if someone criticizes a religion, is that going to be counted as violating their rules? And why is it religious groups that get special protection? Twitter calls them marginalized: really? Christians are marginalized? It just seems like really faulty thinking all around.
I've seen reasonable speech characterized as hate speech too often to immediately get on board with Twitter's rules. Yes, the examples above are beyond the pale, if you must police speech on a social-media platform. But there will be many other cases where criticism of religion might be either chilled or censored. Too, completely innocent pictures in my tweets, like animal pictures that come from my websites, are labeled by Twitter as "sensitive material" that you have to click to see. I think that's because I tweet Jesus and Mo cartoons, which got me censored in this way.
Although Twitter still allows us to post Jesus and Mo strips, it also acts as an informant when somebody objects to "sensitive" material, as happened when Maajid Nawaz tweeted Jesus and Mo as well:
Twitter’s formally informed me Pakistani authorities notified them that the above violates Pakistan’s blasphemy law. Punishment for this in Pakistan is death. I’m Pakistani origin & visit family there. Twitter has a moral duty to tell me who precisely is trying to have me killed pic.twitter.com/OiyZh2hQy4
— Maajid أبو عمّار (@MaajidNawaz) May 12, 2019
Does Twitter really need to inform Nawaz that his content violates Pakistani law? Shouldn't Twitter just tell Pakistan to "bugger off"?
Well, at least Twitter doesn't ban the cartoons outright, as WordPress does to help out the Pakistani government when it accuses me of "Jesus and Mo"-related blasphemy.
The more I ponder this, the more I come around to Diana's point of view: so long as a post doesn't violate First Amendment principles of free speech as interpreted by the courts, social media should allow it to stand.
Do the rules above seem reasonable, or do you, like me, see a slippery slope?