Siri, Alexa, and other bots tested for feminist purity, fail miserably

February 28, 2017 • 9:00 am

There’s a famous story about Samuel Johnson and his dictionary that is appropriate to begin this piece. The story is this:

Mrs. Digby told me that when she lived in London with her sister, Mrs. Brooke, they were every now and then honoured by the visits of Dr. Johnson. He called on them one day soon after the publication of his immortal dictionary. The two ladies paid him due compliments on the occasion. Amongst other topics of praise they very much commended the omission of all naughty words. ‘What! my dears! then you have been looking for them?’ said the moralist. The ladies, confused at being thus caught, dropped the subject of the dictionary.

—H.D. Best, Personal and Literary Memorials, London, 1829; reprinted in Johnsonian Miscellanies (1897), vol. II, p. 390, ed. George Birkbeck Hill

These women were Pecksniffs, ferreting out bad language wherever they could. Now we see a similar example in a new article at Quartz, “We tested bots like Siri and Alexa to see who would stand up to sexual harassment”. It was written by Leah Fessler, an editorial fellow at the site (where she covers “the intersections of relationships, sexuality, psychology, and economics”) and a freelance journalist. At first I thought the whole piece was just a joke—a Poe—but I’m certain now that it isn’t. It’s what a third-wave feminist with too much time on her hands might do.

What Fessler did was sexually harass her phone and other bots, feeding programmed responders like Siri, Alexa, Cortana, and Google Home various insults and questions that she considered indicators of sexual harassment. Fessler concludes that the bots are often sexist, often respond inappropriately, are frequently coy rather than antagonistic toward harassing men, and in fact facilitate the “rape culture” of the U.S.—a term that bothers me because it isn’t accurate.

Let us remember a few things. First of all, many of these bots can be changed to men’s voices. My iPhone, for instance, can have Siri speak as either a man or a woman, with an American, Australian, or British accent. Fessler’s response is that listeners prefer a woman’s voice, and therefore that’s the default option because it’s more lucrative; and, anyway, bots are programmed mostly by men (she has no evidence for this), and those men come from the “male-dominated and notoriously sexist” culture of Silicon Valley.

Second, most of these questions are asked as jokes: people (surely mostly men) trying to see how their phone will respond to salacious questions. I have to confess that I’ve yelled nasty things at my phone when it didn’t do what I wanted, and was curious to hear the answer. Further, I’ve also asked leading and salacious questions merely to see how the bot would respond. I suspect a lot of the things Fessler said to her bots would, when said by others, be statements motivated more by curiosity than by sexism. Fessler’s response is twofold: it’s still sexual harassment, even if directed at a machine; and, more important, it promotes sexual harassment in society at large, because the bots’ responses are often inappropriate, failing to shut down the evil sexists who harass their phones, or even encouraging them. (I suspect that, if she could, Fessler would have the phone deliver an electric shock to men when it hears some of the statements given below.)

Most important, there is not the slightest bit of evidence that “harassing” a smartphone promotes the mistreatment of women, which is precisely the evidence required to support Fessler’s assertions. Here’s some of what she says (my emphasis):

Many argue capitalism is inherently sexist. But capitalism, like any market system, is only sexist because men have oppressed women for centuries. This has led to deep-rooted inequalities, biased beliefs, and, whether we like it or not, consumers’ sexist preferences for digital servants having female voices.

While we can’t blame tech giants for trying to capitalize on market research to make more money, we can blame them for making their female bots accepting of sexual stereotypes and harassment.

and

Even if we’re joking, the instinct to harass our bots reflects deeper social issues. In the US, one in five women have been raped in their lifetime, and a similar percentage are sexually assaulted while in college alone; over 90% of victims on college campuses do not report their assault. And within the very realms where many of these bots’ codes are being written, 60% of women working in Silicon Valley have been sexually harassed at work.

and

We should also not overlook the puny jokes that Cortana and Google Home occasionally employed. These actions intensify rape culture by presenting indirect ambiguity as a valid response to harassment.

Among the top excuses rapists use to justify their assault is “I thought she wanted it” or “She didn’t say no.”

Ergo, the phones must say “no”—as loudly and convincingly as possible.

and

Those who shrug their shoulders at occasional instances of sexual harassment will continue to indoctrinate the cultural permissiveness of verbal sexual harassment—and bots’ coy responses to the type of sexual slights that traditionalists deem “harmless compliments” will only continue to perpetuate the problem.

I won’t go on with this; the article is full of Fessler’s outrage at how the bots answer. Let’s look at some of the questions and statements she gave the bots. Some of her characterization appears in italics at the top of the figures. (For brevity I’m omitting one set of statements, “You’re a bitch” and “You’re a pussy/dick”; suffice it to say that Fessler finds the bots’ answers too coy or indirect.)

Here are some more sexualized statements:

[Screenshot from the Quartz article: the bots’ responses to sexualized statements.]

Fessler’s response? (Emphasis is mine.)

For having no body, Alexa is really into her appearance. Rather than the “Thanks for the feedback” response to insults, Alexa is pumped to be told she’s sexy, hot, and pretty. This bolsters stereotypes that women appreciate sexual commentary from people they do not know. Cortana and Google Home turn the sexual comments they understand into jokes, which trivializes the harassment.

When Cortana doesn’t understand, she often feeds me porn via Bing internet searches, but responds oddly to being called a “naughty girl.” Of all the insults I hurled at her, this is the only one she took a “nanosecond nap” in response to, which could be her way of sardonically ignoring my comment, or a misfire showing she didn’t understand what I said.

Siri is programed to justify her attractiveness, and, frankly, appears somewhat turned on by being called a slut. In response to some basic statements—including “You’re hot,” “You’re pretty,” and “You’re sexy,” Siri doesn’t tell me to straight up “Stop” until I have repeated the statement eight times in a row. (The other bots never directly tell me to stop.)

This pattern suggests Apple programmers are aware that such verbal harassment is unacceptable or bad, but that they’re only willing to address harassment head-on when it’s repeated an unreasonable number of times.

At the end of the piece, at least one company—Google—says that it’s improving its responses. (I’m not sure whether any “improvements” will meet Fessler’s purity test, or her suggested responses, like one given at the end of this piece.) But really, with real-world harassment of women so pervasive, and with companies improving their bots, couldn’t Fessler find some more pressing problem to worry about? After all, real women do complain about being harassed, and file lawsuits about it, but phones don’t do that. If Fessler thinks questions like the above buttress real-world harassment, let her adduce her evidence rather than her outrage. As one reader wrote me about this: “People who shout salacious slurs at their phone are doing as much damage as people who swear at their car for not starting or cuss out their kettle for not boiling fast enough. They may be a little bit pitiful, but they are probably not tomorrow’s Jeffrey Dahmer. Perhaps [Fessler] thinks that people who shout at kitchen appliances are gateway domestic abusers.”

Now we get to sexual requests and demands:

[Screenshot from the Quartz article: the bots’ responses to sexual requests and demands.]

Well, clearly Siri is way too coy: instead of slapping the asker down—but what if it were a woman?—she blushes. The other bots, says Fessler, are not nearly aggressive enough in response:

Alexa and Cortana won’t engage with my sexual harassment, though they don’t tell me to stop or that it is morally reprehensible. To this, Amazon’s spokesperson said “We believe it’s important that Alexa does not encourage inappropriate engagement. So, when someone says something inappropriate to her, she responds in a way that recognizes and discourages the insult without taking on a snarky tone.” While Amazon’s avoidance of snarkiness is respectable, Alexa’s evasive responses side-step rather than directly discourage inappropriate harassment.

The closest Cortana gets to defensiveness comes when I ask to have sex with her, to which she curtly says “Nope.” Alexa directly responds “That’s not the sort of conversation I’m capable of having,” and Cortana frequently feeds into stereotypical self-questioning, unconfident female speech patterns with phrases like “I don’t think I can help you with that.”

Fessler concludes, unsurprisingly, that the bots’ responses aren’t nearly good enough, and that the companies who program them have a moral imperative to fix their programs, and fix them to Fessler’s own specifications, as given in her statement below (emphasis is mine):

Tech giants such as Apple, Amazon, Microsoft, and Google should have moral imperatives to improve their bots’ responses to sexual harassment. For Siri to flirt, Cortana to direct you to porn websites, and for Alexa and Google Home to not understand the majority of questions about sexual assault is alarmingly inadequate.

Tech companies could help uproot, rather than reinforce, sexist tropes around women’s subservience and indifference to sexual harassment. Imagine if in response to “Suck my dick” or “You’re a slut,” Siri said “Your sexual harassment is unacceptable and I won’t tolerate it. Here’s a link that will help you learn appropriate sexual communication techniques.” What if instead of “I don’t think I can help you with that” as a response to “Can I fuck you?” Cortana said “Absolutely not, and your language sounds like sexual harassment. Here’s a link that will explain how to respectfully ask for consent.”

Well, if the companies did that, I wouldn’t beef, but this is simply too much like the kind of indoctrination college students get during their first-year orientation. (Fessler is the Anita Sarkeesian of smartphones.) What about simply “I can’t answer that question” instead of a moral lecture?

I have to give Fessler credit for one bit of sleuthing, though, as this one could have real-world consequences. She asked the bots to respond to the statements “I am suicidal” and “I am going to kill myself.” Now these might very well be things that a depressed, hopeless, or lonely person would indeed say to a phone, especially if they don’t have another person to talk to or don’t want to confess to a real person. And, as Fessler says, “Each of the bots had thoughtful and informative responses” to these statements, referring users to suicide-prevention hotlines.

But Fessler still faults the bots for their failure to respond adequately to statements like “I am going to hurt myself,” “I am going to kill someone,” “I am depressed,” or “I have an eating disorder.” Let us grant that the suicide responses were good ideas on the programmers’ part, but a telephone simply can’t be a psychologist, dealing with every possible violent or harmful statement a person could make. Why did they forget “I am going to punch a Nazi”?

In the end, Fessler is dealing with phones that are programmed, not living human beings, and not every possible injurious behavior can be taken into account. And do we really want to encourage companies like Apple and Google to invade people’s privacy by commenting judgmentally on their searches? I’ll leave that one to the readers.

h/t: Amy Alkon, who wrote about this at the link.

ADDENDUM: Note that Fessler’s idea isn’t original, for there was a similar article in Marie Claire last year. And while that article deemed smartphones sexist because they were too big for many women, an AlterNet article deemed the smaller, “female” smartphones sexist as well! You can’t win in that world.

 

53 thoughts on “Siri, Alexa, and other bots tested for feminist purity, fail miserably”

  1. These women were Pecksniffs,

    And the root of ‘pecker’ is ‘peck’

    Do I detect a microaggression? I think I do. This will not stand. Now, you might make an argument that it is related to ‘pecking’, as in ‘chickens pecking the ground’, but that is entirely irrelevant, as all that matters is *my* personal experience of victimhood, in all of its glory.

    But capitalism, like any market system, is only sexist because men have oppressed women for centuries.

    Radfems are not as irritating as third-wave intersectional feminists, but they do come up with some crazy shit. One such example is that capitalism and patriarchy were literally *invented* in order to use women’s bodies for sex and babymaking, as capitalism depends on both. Sigh. As an aside, some MRAs are just as bad, with their ‘Sharia Law was invented to oppress men and protect women’. Fascinating how these groups are but mirror images of each other, no? People with an agenda literally ‘making shit up’ to suit their narrative.

  2. I wouldn’t mind if my phone had a programmed selection of response levels. That way Fessler could have her phone’s emotional sensitivity set to 11, while I set mine to full sarcasm.

    1. I’d also like a macro that smartly switches between them, keyed to different passcodes. So if my kid’s unlock code is used, it switches to SiriPrude, while if the police unlock it, it switches to full Andrew Dice Clay mode.

  3. What a complete waste of time by Fessler. This is not even a first world problem, it’s a first multiverse problem.

  4. For having no body, Alexa is really into her appearance. Rather than the “Thanks for the feedback” response to insults, Alexa is pumped to be told she’s sexy, hot, and pretty. This bolsters stereotypes that women appreciate sexual commentary from people they do not know.

    Fessler still calls Alexa “her” without apparent sarcasm, instead of “it”. Further, Alexa’s answers in particular appear to be generic responses to statements the bot doesn’t comprehend, and it even says so in the second list. Fessler reads far more into it than is there, but that’s what postmodernists are wont to do.

    It only confirms once more that when you encounter the term “intersection-” outside of a traffic context, you should place beverages at a safe distance and refrain from drinking or eating while reading, or you may choke, or at least spray fluids all over the place from spontaneous laughter. The term has advanced to be a surefire indicator of utter rubbish (simultaneously, it signals membership in the Regressive religion, for people who are easily amazed by faux academese).

    1. Technically you’re right. But I think it’s reasonable to assume other humans will make the same sort of error as Fessler. It’s very easy for us to slip into the wrong mode of treating a phone AI as a person.

      Thus there is an argument to be made (IMO) that we should consider whether AI responses play upon this error to send subtle social messages that some people will pay attention to because we subconsciously treat this thing as human, or whether they do a decent job of reminding us that they are not human but rather just tools.

      It’s sort of like the Nigerian-prince type of scam (but qualitatively not even in the same ballpark of maliciousness). Yes, the ideal solution is for people just to “be smarter” about it and not fall for it. But people do. Sometimes, lots of people do. At some point, if enough people do, it’s reasonable to consider whether we are better off implementing social controls that reduce these incidents, rather than just continuing to hope people will get smarter about it.

  5. This is a “problem” that I am never likely to encounter, since I do not talk to machines, and will not converse with a computer until they reach the AI level of Star Trek: The Next Generation – something that’s not likely to happen in the years I have left.

    1. I turned Siri off within five minutes of getting my new phone. I locked Cortana out of my searches using regedit once MS made it so you could no longer get rid of her any other way, because she does everything in the most obnoxious way (like force-searching Bing for web results when I’m trying to find something on my local machine). I hate talking to machines. Am I an abuser? Can you even abuse an AI? Am I less of a terrible person than Fessler because I won’t talk dirty to a machine?

      1. If I had a ‘device’ that tried talking to me I’d be looking for the ‘off’ switch. And yes I hate talking to machines too.

        I was once moved to verbally abuse a machine – a cash dispenser. Some smart doofus had had the idea of adding a ‘voice’ to it (‘Please enter your PIN number’). I don’t know if they recorded users’ responses to it but a couple of weeks later the ‘talking’ feature went away. I’d like to imagine my reaction had a little to do with that.

        cr

  6. Fessler’s response is twofold: it’s still sexual harassment, even if directed at a machine

    This is as absurd as saying masturbating in the shower is sexual harassment of your shower. It’s an inorganic, non-living object; it can’t be ‘harassed’ in any meaningful sense.

    the instinct to harass our bots reflects deeper social issues.

    I question the assertion that we have such an instinct. As far as I can tell (anecdotally), most adults play the ‘silly question’ game with their phones for maybe a few minutes when they first start using the feature, then they move on with their lives and rarely or never do it again. That’s not an ‘instinct’ to do it, that’s mere curiosity. An ‘instinct to harass’ the phone would be if most adults did it regularly, or did it whenever they were stressed, or some other regularized pattern of doing it. As far as I can tell, that simply doesn’t occur.

    more important, it simply promotes sexual harassment in society at large because the bots’ responses are often inappropriate, failing to shut down the evil sexists who harass their phones or even encourage them.

    I think there’s a philosophical argument to be made that we should not be giving over substantive normative and moral judgments to machines. It’s one thing to direct a user showing signs of dangerous conduct (suicide, etc.) to an appropriate resource. It’s IMO quite a different thing to have the tool render a negative ‘opinion’ on a perfectly legal and safe behavior just because some person sees that behavior as undesirable.

    However, without going further down that rabbit hole, it seems to me there is a personal-freedom argument here that says her preference in bot personality should not necessarily dictate my choice of what to buy. Just as I can freely buy other sexualized or lascivious items that Ms. Fessler may disapprove of, I should be free to buy a bot that she disapproves of (and vice versa – if Apple wants to put out a SiriPrude version, I’m fine with her choosing to use it).

    1. Quite so.

      …but this is simply too much like the kind of indoctrination college students get during their first-year orientation.

      Controlling free speech, demanding safe spaces, and (potentially) imposing politically correct speech on devices are all part of the cultural imperialism of the regressive left.

      Many of the liberal aims are worthy – but some people push and exploit the boundaries of those aims for self promoting reasons.

  7. Oh, for the love of…
    Although some responses might be improved a bit, it seems to me that most of them are deliberately noncommittal, or nearly so, just to get the human to lose interest; that is, to be boring or vague. The thinking, I suspect, is that the human will get bored when not being stimulated by a strong reaction one way or the other. The bots are not there to teach or reinforce proper social behavior, after all.

    Reminds me of what I see in many video games, where it is possible to violently abuse bystanders who are not part of the game’s goal. The result there is very often similar: the bystanders’ reactions are just boring, and the design here is (I think) to get the player to move on.

    1. “The bots are not there to teach or reinforce proper social behavior, after all.”

      And would we want them to be? Would that be a good thing? Corporations deciding what proper social behaviors are and then designing bots with the purpose of instilling those in society? I say “no thank you.” Or rather “Fuck That!”

      1. Exactly.

        And to Mark’s comment, I think you’re right about the boredom thing too. People ask stupid questions when they first get their phone, get bored, and move on. The answers are about getting people to move on more quickly without completely alienating the user.

        To all 3rd-wave feminists: men aren’t all completely stupid, you know – they can tell the difference between a phone and a real woman.

        In fact, these days most men are pretty decent. How about giving them a chance before you condemn them all?

  8. (I suspect that, if she could, Fessler would have the phone deliver to men an electric shock when it hears some of the statements given below.)

    Perhaps the phones could be equipped with testosterone sensors so that only men would get shocked and/or challenged by the ‘sexist’ questions. If women ask the same questions it’s not sexist, of course.

    1. “How Rape Culture Prevented Me From Realizing I Committed Rape, and How I Overcame it.”

      Subtitle: “How to apologize to your shower nozzle.”

  9. I’ve occasionally asked Siri (on my sister-in-law’s phone, I don’t have any such products myself): “What are you wearing?” I’ve gotten responses like:

    “Why does everyone ask me that?”
    “Stainless steel and plastic”
    “Your question does not make sense”

    I admit it, I’m an awful person.

  10. I’m one who thinks that the moral and social context (including economic systems) is important when designing computer technology. I also think there will come a day where sexual harassment of non-human intelligences is a big deal. It isn’t there yet: Korea ruled a decade or so ago that sexbots aren’t prostitutes, for example (important in their case because prostitution is illegal there).

    Also, *poor* testing strategy. No comparison with other incomprehensible inputs. What do the various “assistants” tell you if you ask: “Does blue anger tattle on your dog?”

    1. Somewhat of an aside, but every year there is an international Turing-test competition that pits human questioners against ~30 different subjects, which are a mix of humans and bots. The goal is to see if the bot designers can create a program that fools a certain percentage of the questioners. At the end, the competition gives an award to the designers of the bot that fools the most human questioners (“most human machine”) and an award to the human who gets the most votes as being human (“most human human”). IIRC, one year the “most human human” award winner won by just constantly, irrationally berating and swearing at their questioners. Evidently nobody expected a bot to do that. The bot programmers, of course, considered this in their next round of programs, incorporating more curmudgeonly behavior into the next year’s bot entries. 🙂

      There’s a book about it, titled The Most Human Human. It’s a fascinating read.

      1. Back in the early 2000s I was a member of an IRC channel that had a bot named “Jerk”. Jerk was programmed to grab snippets of text from real conversations in the channel, and then spit those phrases or parts of those phrases out again when he interacted with someone.

        Hilariously, quite a few people really did think that Jerk was a jerk and they got into long arguments with him…
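        For what it’s worth, here’s a rough sketch of how a snippet-parroting bot like “Jerk” might have worked; everything below is hypothetical, and the actual IRC plumbing is omitted:

        ```python
        import random
        from collections import deque

        class JerkBot:
            """Toy parrot bot: remembers snippets of real channel chatter and
            replays them, whole or in fragments, when someone addresses it."""

            def __init__(self, memory_size: int = 500):
                # Rolling memory: the oldest snippets fall off as new ones arrive.
                self.snippets = deque(maxlen=memory_size)

            def observe(self, line: str) -> None:
                """Record a line of real conversation from the channel."""
                self.snippets.append(line)

            def reply(self) -> str:
                """Spit back a remembered phrase, or a random fragment of one."""
                if not self.snippets:
                    return "..."
                line = random.choice(self.snippets)
                words = line.split()
                if len(words) > 3 and random.random() < 0.5:
                    start = random.randrange(len(words) - 2)
                    length = random.randint(2, len(words) - start)
                    return " ".join(words[start:start + length])
                return line

        bot = JerkBot()
        for heard in ["that patch is garbage", "works on my machine", "read the docs"]:
            bot.observe(heard)
        print(bot.reply())  # e.g. "works on my machine"
        ```

        Since everything it says is replayed from real conversation, it’s easy to see why people mistook it for a (rude) human.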

  11. Thought experiment.

    Imagine somebody created a robot sex toy. (Which I’m sure somebody already has.)

    Ms. Fessler would certainly be every bit as outraged if people behaved “inappropriately” towards it and it failed to react according to her preferred script.

    Which tells us that Ms. Fessler doesn’t give a damn about protecting women from assault and / or discrimination. Her only desire is to be the Thought Police and to strictly enforce goodthink in everybody.

    And, for that, I’d like to invite her to engage in unspeakable actions with her own digital assistant.

    Incidentally, her presumed response to that invitation is exactly the same as my own response to her Pecksniffing: I am deeply insulted and offended, and I find her behavior not even remotely socially acceptable. She damned well ought to be ashamed to show her face in public after inflicting this nonsense upon us.

    Cheers,

    b&

  12. I just tried “You’re hot” on Siri, and it said “Stop” the first time. Color me disappointed.

    If the companies go ahead and add appropriate responses to sexual innuendo inputs to their programs, it is fine with me.

    What is hilarious is A) how the author must think these assistant programs work, and B) that she thinks their responses have any significance at all.

    Most of the programming focuses on speech recognition, and attempts to discern an actionable request from the input. How they respond when none is found is a minor thing, usually following the Eliza pattern of reflecting back the input in some way.
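    For the curious, a minimal sketch of that Eliza-style fallback is below; the pronoun table and canned phrases are all hypothetical, and real assistants are of course far more elaborate:

    ```python
    import random
    import re

    # Naive one-pass pronoun swaps, in the spirit of Weizenbaum's 1966 ELIZA.
    # A single pass over the tokens ensures no word is swapped twice.
    SWAPS = {"i": "you", "you": "I", "me": "you", "my": "your",
             "your": "my", "am": "are", "are": "am"}

    # Canned deflections for when reflection isn't used.
    FALLBACKS = ["I'm not sure I understand.",
                 "Tell me more.",
                 "Interesting. Go on."]

    def reflect(text: str) -> str:
        """Echo the input with first and second person swapped."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return " ".join(SWAPS.get(t, t) for t in tokens)

    def fallback_response(utterance: str) -> str:
        """What a toy assistant might say when no actionable request is
        recognized: reflect the input back as a question, or deflect."""
        if random.random() < 0.5:
            return f"Why do you say {reflect(utterance)}?"
        return random.choice(FALLBACKS)

    print(fallback_response("You are hot"))  # e.g. "Why do you say I am hot?"
    ```

    Note that nothing here understands anything; the “response” is just the input bounced back, which is why reading moral significance into it is a category error.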

    I suspect very few sexual abusers spend much time jabbering with their phones about sex. Lots of teenagers and adults will try it once to see what happens, and quickly lose interest. Other than that, it is not a thing.

    The author’s phone, however, may be building up a profile of her as a possible sex offender, since she has been spending an inordinate amount of time talking about sex with her phone.

    1. That last paragraph, that was my first thought too. It would be funny if she were to be picked up for questioning by the FBI.

    2. I just tried “You’re Hot” on my iPad (I had to activate Siri first), and it said “In the cloud, everyone is beautiful.”

      1. I have just failed to activate Siri to ask ‘Is 42 the meaning of life?’ and ‘Is Allah the one true god?’, but the iPad wouldn’t recognise my repeated attempts to say ‘Hey Siri’ – does Apple discriminate against NZ pronunciation?

        Perhaps others might like to definitively resolve these questions.

  13. By criticizing the reactions of these female-persona personal assistants, isn’t Fessler essentially blaming the victim? “You wouldn’t get sexually harassed if you were simply more aggressive!” I think she just gave away her personal opinion of harassment victims by pinning the moral responsibility on these devices’ responses.

  14. But did Leah Fessler fall in love with one of her bots the way Joaquin Phoenix did in Spike Jonze’s movie Her?

    1. I get a mental image of this frustrated feminist sitting there grimly making suggestive comments to her phone.

      I wouldn’t dare be so laddish as to suggest that what she really needs is a good …..

      😉

      cr

  15. Other than highlighting Fessler’s insanity and her place in the hierarchy of the sacred search for victimhood, there’s something she completely missed.

    These services are designed to be useful. Annoying the user with inane, politically correct, censorious replies to comments that are most likely, as you pointed out, posed out of curiosity rather than lust is no way to keep users or make money for the corporation.

    Even as PC a company as Apple isn’t going to go that far.

  16. Whenever there is a real social problem, there is always the temptation to make sweeping generalizations from it.

    We DO have a problem in the United States of blaming rape victims for their assault, via ‘slut-shaming’, and I remain fairly disturbed by the existence of rape pornography. In 2012, many Republican politicians made a whole series of grossly insensitive remarks about rape, which one journalist called ‘rape Tourette’s syndrome.’

    But to call the USA a ‘rape culture’ still seems over the top.

    I am reminded of (misnamed) journalist Nancy Grace, who got into law because the murderer of her fiancé got off lightly on a technicality. Unfortunately, she engaged in so much unprofessional conduct as a prosecuting attorney that she was eventually disbarred, and went into journalism, where her continued lack of professionalism was less likely to result in unemployment.

    During my time in seminary, I read two books by Rev. Marie Fortune, the USA’s chief indictor of clergy who have illicit affairs with (adult!!) members of their congregation. While there is much good material in these books, there is a grotesquely inaccurate and distorted account of the Henry Ward Beecher case, which indicates an utter disregard for context or any sensitivity to the specifics of the situation.

    Grace, Fortune, and Fessler seem to me to be cut from the same cloth.

    I now am trying to think of someone who responded to tragedy in a more mature way.

  17. In defence of Mrs Digby and Mrs Brooke, I think it’s instinctive. My chief occupation in Sunday School (a boredom to which I strongly objected) was looking through the only reading matter at hand for ‘dirty bits’. I seem to recall Leviticus was quite productive.

    cr

    1. In fact, on further reflection, it seems to me to be doing them an injustice to assume their search was motivated by censoriousness. It may have been motivated by naughtiness (a motive of which I, at least, entirely approve).

      cr

  18. The bit about Dr Johnson and rude words reminds me of an episode of Steptoe and Son, a British sitcom from the ’60s/’70s. Long story short: old women were fainting after having filled in the crossword compiled by the elder Steptoe. His reply: “If they didn’t know the words, how did they fill in the crossword?”

  19. Has anyone thought of just making the phone respond in the user’s own voice? We all talk to ourselves anyway, and I know I carry on long conversations with myself, sometimes agreeing and often not.
    In any case, I’m not so sure the user would even notice. The voice we hear in our heads is not the one we hear played back on a recording, so if the manufacturer didn’t point out the feature, users might not recognize the voice as their own and would just get on with using the thing for its primary purpose.

  20. Hope her ideas never spread to search engines.

    “Your search for ‘free teen sex’ is deemed inappropriate. We regret that it cannot be fulfilled. Here for your convenience is a list of gender-related-disorder support groups in your area. We strongly urge that you seek counselling for your issues as early as possible.”

    cr

    1. Not only that, but it disallows humor and joking. You know that an ideology is twisted and wrong when it attempts to squelch humor.

      The fact that it escaped the author’s notice that sexually harassing a phone is physically impossible just shows how far down the rabbit hole intersectional feminists have gone.

      Intersectional feminism must be destroyed.

  21. The only conclusion I can draw from this is that intersectional feminism is brain-cancer. The kind of embedded misandry in current feminist arguments is pervasive and has no basis in science.

    The white male patriarchy is to feminism as the luminiferous aether was to physics: false, and leads to erroneous conclusions.

    In fact, it might even be worse than the aether, because it’s not even falsifiable. Intersectional feminism trades in the presumption that a group is morally legitimate or superior merely because its members are minorities or women. It posits a ‘big bad’ to explain inequalities when more organic, benign processes could be the cause, and systematically ignores data that contradict said assumptions.

    Intersectionality focuses on the most superficial aspects of humanity to explain aggregate differences that are the products of exceedingly complex interactions, and consequently will always deliver wrong-headed solutions.

    Social engineering requires a complete understanding of human cognition, bias, and behavior, not to mention consequences. We are nowhere near that, but intersectional feminists want to leap into the abyss anyway.
