Seeing my “sensitive content” on Twitter

October 17, 2021 • 2:15 pm

All the posts I make here go automatically to Twitter. Since those include the Jesus and Mo posts, which regularly get reported as “offensive” (they’re also removed in Pakistan thanks to the cooperation of WordPress), many of my completely apolitical and innocuous posts get a “trigger warning,” like this one from yesterday. Below is how I see it:

But some people, like reader Cate, see it (and many of my other tweets) as below: having “potentially sensitive content.” You have to click “View” to see the links and other things (in this case, a picture of a bird).

Cate went through the “change settings” procedure, which I replicated; this enables you to always see the full tweet with “sensitive content” without clicking “View”. I’ve added screenshots; her sequence of steps, which is easy, is indented.

OK, here’s the maze they put you through:
On the left-hand side menu of Twitter, choose “More.”


From that menu, choose “Settings & Privacy”:


Then “Privacy & Safety”:


When you click the “Privacy and safety” arrow to the right, this is what you see:


Now click the arrow to the right of “Content you can see” and check the box next to “Display media that may contain sensitive content.”
Once it’s checked, as below, you’re (supposedly) home free:

So if you’re annoyed by these “sensitive content warnings,” do what’s above. There’s little to lose unless some content really disturbs you. Like pictures of birds.
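
If you’d rather not click through the menus by hand, the same toggle could in principle be flipped with a short browser-automation script. Below is a minimal sketch using Selenium; the settings URL and the checkbox locator are assumptions about Twitter’s web interface (which changes often), so treat it as an illustration of the click path rather than a supported method.

```python
# Illustrative sketch only -- not an official Twitter feature.
# Assumptions: chromedriver is installed, the browser session can be logged
# in to Twitter, and the URL/locator below match the current page markup.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://twitter.com/settings/content_you_see")  # assumed path to "Content you see"

# Hypothetical locator; inspect the page to find the real checkbox element.
checkbox = driver.find_element(By.CSS_SELECTOR, "input[type='checkbox']")
if not checkbox.is_selected():
    checkbox.click()  # checks "Display media that may contain sensitive content"

driver.quit()
```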


14 thoughts on “Seeing my “sensitive content” on Twitter”

  1. Perhaps the content filters get triggered because “birdshot” can mean a type of shotgun ammunition.

  2. Twitter once applied a trigger warning to my entire account. It was flagged as displaying graphic/salacious content. I had to have someone famous and influential with a blue check write them to get it removed.

    My account happens to be as benign as bird pics. But it used to have pics of museum paintings and statues. Somebody, it seems, went to the effort of reporting the statues as offensive. But instead of Twitter saying anything to me, the weird label got slapped on my account, which meant that employers and colleagues who came by saw the message implying I’m inappropriate before anything else. It asked them if they wanted to see my tweets given the supposedly sensitive content. Can you imagine?

    Most pornstars aren’t graced with such a restriction.

    But also, at the time this all occurred, I had written essays on various woke topics. I’m highly suspicious that those essays are what led to the censure. Jordan B. Peterson had tweeted an essay of mine, which led Twitter to lock me out of my account for “suspicious activity”.

    My account has also been shadowbanned more than once. At present, it isn’t. And there is no warning. But I haven’t written essays in a few years.

    Anyway, thanks for the post on how to get your tweets seen when the issue is with personal settings. It is an ordeal!!

    (The issue with my account could not be fixed with personal settings.)

  3. I don’t see the problem. The posts may be innocuous and apolitical, but they are side by side with political content, some of which might be regarded as controversial by some. So I don’t see the problem with a social media company having a policy in place to point this out.

    They are a commercial company with a wide audience, and they don’t individually adjudicate each and every individual case. These policies are going to be inappropriate in some cases, but you have to remember that not all blogs are as well intentioned as this one is.

    1. You don’t see the problem? They are saying that tweets may contain sensitive content when they don’t. And if what I write about regularly is enough to get anything from my site flagged and blocked until you click a box, then anybody who writes anything that EVERYBODY doesn’t agree with runs the same risk.

      1. How does an algorithm decide if a link in a tweet is to potentially offensive content or not?

        An algorithm expanding a URL into a snapshot of the content can’t make that sort of decision. It can’t parse text and decide whether it is offensive or not (AI hasn’t reliably progressed that far). It can’t analyse pictures and reliably decide whether they are potentially offensive or not. So for URLs that are flagged as having generated a lot of complaints in the past, it can either expand the URL anyway or leave it unexpanded and put up a warning instead.

        Twitter have obviously decided that their algorithm should handle URLs in the second way.

        I am not saying it is ideal, but I would be interested to hear how that algorithm could be made to work better for a site such as Twitter.
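
        To make that concrete, here is a rough sketch (my own illustration in Python, not Twitter’s actual code) of the blunt heuristic I mean: don’t analyse the linked page at all, just check whether the link’s domain has drawn complaints before and, if so, hide the preview behind a warning. The domains and the complaint counts are made up.

        ```python
        # Rough illustration of the described heuristic -- not Twitter's real algorithm.
        from urllib.parse import urlparse

        # Hypothetical complaint tallies per domain (made-up numbers).
        COMPLAINTS = {"cartoon-site.example": 4200, "bird-blog.example": 0}
        THRESHOLD = 100  # arbitrary cutoff for this sketch

        def preview_action(url: str) -> str:
            """Return 'expand' to show a normal link preview, or 'warn' to hide it
            behind a 'potentially sensitive content' notice."""
            domain = urlparse(url).netloc.lower()
            return "warn" if COMPLAINTS.get(domain, 0) >= THRESHOLD else "expand"

        print(preview_action("https://cartoon-site.example/latest"))     # warn
        print(preview_action("https://bird-blog.example/duck-photos"))   # expand
        ```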

  4. I got sent to Twitter jail once for making a ‘just wait till they find out they’ve been drinking dihydrogen monoxide!’ joke that apparently violated the Twitter self-harm policy. Also, my Android just tried to autocorrect self-harm to self-care.

  5. Maybe Mastodon is worth a look. It’s social networking software that’s kind of like Twitter, but decentralized, non-commercial, and part of the even bigger and more diverse Fediverse, “an ensemble of federated (i.e. interconnected) servers that are used for web publishing (i.e. social networking, microblogging, blogging, or websites) and file hosting, but which, while independently hosted, can communicate with each other.” (Wikipedia)

    Mastodon uses community-based moderation, where every single server can set its own code of conduct. “Mastodon’s founder Eugen Rochko believes that small, closely related communities deal with unwanted behaviour more effectively than a large company’s small safety team.” (Wikipedia)

    Thus, if your posts get moderated on one server, you can just move to another whose moral values are more in line with your own.

    Just a thought. I know that Twitter, as flawed as it may be, has a far greater reach for publications.

  6. It seems “small people” are much more at the whim of algorithms or political interventions, whereas media corporations get a free pass. Even media organisations that are actually not that important on the internet, but are owned by powerful people, get preferential treatment.

    Meanwhile, everyone works for free, essentially, to make the content, and is additionally exploited by massive data theft, where none of us even knows how much our privacy is worth.

  7. I got a Facebook suspension earlier this year. The photo I commented on was in a guitar group and showed Jeff Beck, Jimmy Page, (I think) Tony Iommi, and Steven Tyler standing together, side by side: a buddy shot. A viewer (referring to Tyler) posted, “What’s HE doing there?” I replied, “Dude Looks Like A Lady?” Apparently that is offensive under the LGBTQ rules, despite a song of that title being one of Aerosmith’s biggest hits, sung by that same Steven Tyler.
