Big Brother is coming: machines to catch implicit bias in the workplace

March 20, 2023 • 1:15 pm

What if you had an Alexa-like device around to monitor your behavior, especially your “implicit biases”? Would that bother you?  And if you knew you were being monitored, would it affect your behavior? And if it did affect your behavior, would it do so permanently, or only so long as you knew you were being monitored?

Well, first we have to know if the concept of “implicit bias” is meaningful. People may be biased, but it may be something that they recognize: explicit bias that’s kept largely private. In fact, that’s what seems to be the case: data show that not only is there no commonly accepted definition of “implicit bias”, but ways to measure it, most notably the “implicit association test” (IAT), are dubious and give widely varying results for the same individual. Further, ways to rectify it don’t seem to work.

In a post from earlier this month, I reprised psychologist Lee Jussim’s many criticisms of implicit bias. Even if you take the most generous view of the topic, you have to admit that we know little about it, very little about how to measure it if it’s real, and nothing about how to rectify what the tests say is “implicit bias.” In other words, it’s way too early to start ferreting it out, much less asserting that it’s ubiquitous. Implicit bias (henceforth “IB”) is one of those concepts that we can’t get a handle on, and that has been largely rejected by psychologists and sociologists, but it is nevertheless taken for granted by the woke. Who needs stinking data when a concept meets your needs? The first paragraph of the piece below shows that a highly controversial topic is simply accepted as true when it’s ideologically convenient.

The piece below, from Northeastern University in Boston, outlines the proposals of two researchers to measure “implicit bias” remotely, with the aim of eliminating it. Click to read:

The article assumes from the outset, without any justification, that the bias is there and is also ubiquitous. It further claims that implicit bias is costly because (assuming again that it’s real) it demoralizes workers who are its targets—and that costs money:

Studies have shown that implicit bias—the automatic, and often unintentional, associations people have in their minds about groups of people—is ubiquitous in the workplace, and can hurt not just employees, but also a company’s bottom line.

For example, employees who perceive bias are nearly three times as likely to be disengaged at work, and the cost of disengagement to employers isn’t cheap—to the tune of $450 billion to $550 billion a year. Despite the growing adoption of implicit bias training, some in the field of human resources have raised doubts about its effectiveness in improving diversity and inclusion within organizations.

I reject the assertion of the first paragraph entirely, for the data (while sometimes conflicting) do not show that this kind of bias is ubiquitous—or even exists.  Note as well that in the second paragraph “implicit bias” has become simply “bias”, yet they are two different things. One is a subconscious form of bias; the other is more explicit and recognized by the person who holds it. And, of course, the paragraph assumes that employees who perceive bias are actually experiencing bias rather than acting out a victim mentality, and we just don’t know that.  (I’m not denying that racism and sexism exist; just that they’re subconscious, ubiquitous, and have the financial effects noted above.) This being America, of course, the goal is not a more moral business, but a more lucrative one.

But technology is here to fix the problem! All we have to do is eavesdrop on people interacting, analyze what the machine finds, and then use it to “rectify” the behavior of the transgressors. Problem solved!

But what if a smart device, similar to the Amazon Alexa, could tell when your boss inadvertently left a female colleague out of an important decision, or made her feel that her perspective wasn’t valued?

. . .This device doesn’t yet exist, but Northeastern associate professors Christoph Riedl and Brooke Foucault Welles are preparing to embark on a three-year project that could yield such a gadget. The researchers will be studying from a social science perspective how teams communicate with each other as well as with smart devices while solving problems together.

“The vision that we have [for this project] is that you would have a device, maybe something like Amazon Alexa, that sits on the table and observes the human team members while they are working on a problem, and supports them in various ways,” says Riedl, an associate professor who studies crowdsourcing, open innovation, and network science. “One of the ways in which we think we can support that team is by ensuring equal inclusion of all team members.”

The pair have received a $1.5 million, three-year grant from the U.S. Army Research Laboratory to study teams using a combination of social science theories, machine learning, and audio-visual and physiological sensors.

Welles says the grant—which she and Riedl will undertake in collaboration with research colleagues from Columbia University, Rensselaer Polytechnic Institute, and the Army Research Lab—will allow her and her colleagues to program a sensor-equipped, smart device to pick up on both verbal and nonverbal cues, and eventually physiological signals, shared between members of a team. The device would keep track of their interactions over time, and then based on those interactions, make recommendations for improving the team’s productivity.

. . .As a woman, Welles says she knows all too well how it feels to be excluded in a professional setting.

“When you’re having this experience, it’s really hard as the woman in the room to intervene and be like, ‘you’re not listening to me,’ or ‘I said that and he repeated it and now suddenly we believe it,’” she says. “I really love the idea of building a system that both empowers women with evidence that this is happening so that we can feel validated and also helps us point out opportunities for intervention.”

Addressing these issues as soon as they occur could help cultivate a culture where all employees feel included, suggests Riedl.

A device on the table watching and filming everyone! Now THAT will lead to a freewheeling discussion, right?

But the problem Welles addresses is real. As I’ve said before, when I started teaching graduate seminars, one of the first things I noticed, since these were mostly discussions of readings, was that the men tended not only to talk more than the women, but to talk over the women. Not only that, but many times I’ve seen a woman student make a good comment, followed by a comment from a man, only to have the good comment attributed to the man. Since then, discussions with other women have convinced me that this problem is widespread. It doesn’t make for a good learning environment, and it saps the confidence of women.

Now I’m not sure if the male behavior I saw reflects bias, much less implicit bias: it could just be the tendency of men, especially young ones, to be more aggressive and domineering. But it still needed fixing.

The way I fixed this was simple. At the beginning of the quarter I laid out discussion rules including these: everybody gets to finish what they’re saying, and every comment must either address the previous comment or say something like, “I’d like to switch gears now.” If a woman wasn’t participating enough, I would call on her more often to summarize papers, and myself follow up on her comments.

In my mind, at least, this solved the problem, so that by seminar’s end both men and women students were pretty much equal in participation. I did NOT have to take the most vociferous men aside and tell them that they were being domineering and bossy.  That might have solved the problem, but at the expense of hurt feelings and divisiveness, as well as resentment.

So would it improve matters to have an Alexa and a camera on the table, some kind of “implicit bias” or “body language” analyst to go through the data, and then rectify the problem, presumably by calling out the offender? This not only smacks of Big Brotherhood, but is confrontational, divisive, likely to breed resentment, and, most of all, not a fix for the problem. I’m not saying that my own rules fixed the problem permanently, either, but I am a human being, not a machine: I could act on the spot, and my job was to promote learning for everyone by giving everyone an equal opportunity to participate.  In contrast, the goal of an Alexa Bias Controller seems to be not the promotion of learning, but social engineering based on post facto analysis.

Just sayin’.

27 thoughts on “Big Brother is coming: machines to catch implicit bias in the workplace”

  1. It not only smacks of Big Brother, it is Big Brother. Table-top Wrong Think detectors? Good luck keeping that data private.

    1. Michel Foucault was a lovely chap (sarcasm):

      Foucault argued that children could give sexual consent. In 1977, along with Jean-Paul Sartre, Jacques Derrida, and other intellectuals, Foucault signed a petition to the French parliament calling for the decriminalization of all “consensual” sexual relations between adults and minors below the age of fifteen, the age of consent in France.

  2. I’m curious as to how such a ‘machine’ might differentiate between implicit negative bias vs implicit positive bias vs explicit negative or positive bias that exists because of experience. I feel safe in saying that experience strongly influences decisions I make and I own up to the bias.

    And, honestly, the whole ‘men talk over women’ at work meme is getting old. Ambitious people will work hard to get their ideas picked. If you’re a wallflower, male or female, your idea will be evanescent if you don’t advocate for it …and the world will go on.

    An old boss of mine pulled me aside early in my career and said, “At this level, you’re PAID to have an opinion.” That was good advice.

  3. I, too, have observed that women often tend to be reticent to speak up in groups. I saw this both in my students, when I was a professor, and as an employee in the software industry, where I often led meetings. I also observed that this phenomenon varies culturally and involves both men and women, with people from different countries having differing propensities to participate and, in particular, to interrupt. I employed my own versions of what you did, Jerry. I made sure that people didn’t interrupt each other, that comments weren’t simply ignored and blown past without a hearing, and that those who had been quiet (women and men) had a chance to contribute before bringing the conversation to a close.

    Rather than create an electronic bot of dubious value—one that will inevitably be used as a tool for punitive action against “transgressors”—a better approach would be to train meeting leaders in the fine art of getting people to contribute. If the goal is to get the most out of your talent in the workplace, the focus should be directly on that, not on some sideshow at the intersection of technology and politics.

  4. When I functioned as an industrial psychologist, in business as well as in academia, I used my own form of brainstorming, a variation of the nominal group technique (NGT), that captured input from everyone in a group or team on the topic. It was accompanied by writing everyone’s idea on a sticky note and posting them on the board. If someone tried to interrupt or argue I shut them down immediately, since this rule was laid down at the beginning. I often gave the group time at the beginning to write down their ideas. We went around the room two or three times until all ideas were captured. Next we would cluster the ideas according to similarity (hence the need for sticky papers), and then each person would rate the ideas/clusters using either the 10/4 rule or the 5/2 rule. We usually wound up with the four or five important issues. I often had some blowback from higher-ranking people in the room but shut them down anyway for not obeying the rules. This worked well for identifying problems in manufacturing or quality using opinion survey teams or cross-functional teams. We would use the same technique for identifying possible solutions to the identified problems, as well as ranking them according to established criteria.

  5. Alexa, am I a bigot?

    I’m glad to be self-employed or they could just throw me in the memory hole now for crimethink about something or other, I’m sure.

  6. Has anyone noted that the article dates from 2020? I wonder how far along this project has actually got, especially since the various chatbots people have been exposing to the general public have been exhibiting quite erratic behaviour. If I remember correctly, the Microsoft chatbot started telling one reporter that it was monitoring Microsoft employees through their webcams.

  7. “I really love the idea of building a system that both empowers women with evidence that this is happening so that we can feel validated… ” But what if the $1.5 million study fails to come up with the right evidence, or finds only weak indications? What then? Surely it would be simpler, and cost a lot less than $1.5 million, to design a robot to give continual messages of validation to Ms. Welles. There are even “affirmation” books that provide this service, spoken versions of which could be transmitted directly to her ears.

    1. Then you p-hack, futz with the sample, chop up the data, and make it look like you found the evidence! It’s not a difficult problem at all when it comes to studies like this (as I’m sure you well know).

      Regardless of the study’s actual outcome, the presented outcome will be the conclusion the study’s authors already reached before implementing it in the first place. As someone doing such a study, just figure out what you want the result to be, and work backwards from there. You’re bound to find something you can twist into the desired result, regardless of how much twisting it requires.

    2. Empowerment is a great idea, but it can have drawbacks. Sometimes people can develop completely unrealistic expectations about their own abilities and skills.
      I think it is better to have people who are more resilient. Folks who need to be fed affirmations all the time are more likely to really crash when they finally come up against reality.

  8. I’ll read this in full later, but I DO like Lee Jussim, I’ve seen a lot of him, even before he was at PCC(E)’s conference at Stanford (where his performance wasn’t as good as usual).
    I thought the whole implicit bias Thing was fully, utterly debunked years ago! Like spectral evidence in witch trials: dreams, visions, etc. I guess it is a zombie belief.

    1. “I thought the whole implicit bias Thing was fully, utterly debunked years ago!”

      It was and is! It was one of the biggest casualties of the replication crisis.

      Actually, scratch that. It should have been a casualty of the replication crisis, but everyone who wanted to believe it just continued insisting on its ubiquitous existence, and they’ve insisted long and hard enough that it’s now taught throughout our culture as ironclad truth, from HR workshops and training, to sociology and psychology courses, to “studies” fields…

  9. Some even more cynical than I would call this yet another part of an attempt to create a world controlled by one government, where individuals own nothing and where everything they do is monitored and recorded. The problem is that way too many people would willingly (and foolishly, IMO) give up their rights.

  10. With supervised learning – which seems to be what the author is talking about- you need a statistically sufficient number of positive and negative training cases. Importantly, how will they determine when a segment of speech does *not* contain bias? And how will they (or will they) determine annotator agreement? I’d like to see more details of the proposed study. You can’t just dump in data and let the system sort it out. Or maybe you can, but you shouldn’t. Maybe the research proposal automatically becomes one of the “studies (that) have shown…”

    As for “studies have shown…” that’s right up there with TFG’s “People are telling me…” I’d like to see some of those studies. Better yet, show them to someone who actually knows something about statistics and research design. 🙂
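
    The point about annotator agreement can be made concrete. Before training any supervised “bias detector,” you’d want to check whether two humans labeling the same speech segments even agree on what counts as bias; Cohen’s kappa is the standard chance-corrected measure of that. A minimal sketch (the function, labels, and segments here are invented for illustration, not from the proposal):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators gave the same label
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a | freq_b)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labeling the same 10 speech segments
a = ["bias", "none", "none", "bias", "none", "none", "bias", "none", "none", "none"]
b = ["bias", "none", "bias", "bias", "none", "none", "none", "none", "none", "none"]
print(round(cohens_kappa(a, b), 3))  # → 0.524
```

    A kappa near 1 means solid agreement; values below roughly 0.4 are conventionally considered too unreliable to train on, which is exactly the worry for a label as subjective as “implicit bias.”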

  11. Everything about the article Jerry critiqued is delusional. If Person A has implicit biases, this cannot possibly be evaluated by anything about Person B. If B is annoyed at A, thinks A is a bigot or some such, that’s fine, but it says literally nothing about any biases held by A. Who made B God? Maybe it’s B’s biases that are on display. Lots of experimental research shows, e.g., that a man is likely to be seen as more sexist even when engaging in behavior identical to that of a woman. B, too, is subject to self-serving biases, expectancy biases, confirmation biases and (if one believes in implicit biases) implicit biases. Just because B says so does not make something true. To be sure, I have no doubt that people who perceive bias are unhappier at work, but this is a phenomenon to be understood with evidence, not automatic evidence of anyone’s biases. All this should be obvious, and that people with PhDs make these simple errors strongly suggests something other than science is going on here.

  12. “But what if a smart device, similar to the Amazon Alexa, could tell when your boss inadvertently left a female colleague out of an important decision, or made her feel that her perspective wasn’t valued?”

    Oh, goodie! Not only will they build a panopticon based on BS science, but they’ll base its analysis of people’s behavior on the “feelings” of others, while also constantly encouraging such feelings by telling people that they should feel that way and be constantly vigilant for the slightest hint that something somewhere might just have been a biased action. Behold, the progressive future!

  13. I agree that this device sounds panopticon-ish and would shut down open conversation.

    There are many studies showing, again and again, that while women speak less than men, both men and women perceive that women are getting their fair share of the conversation. Women as well as men find identical resumes, one with a male name, one with a female name, to indicate strongly that the man is more qualified. Same with identical academic papers with male and female names. Men interrupt men; they interrupt women more. It seems obvious to me that when people are raised in a society that values men more than women, they will value men more than women. It is not “natural”; it is learned.

    I appreciate the process by which Professor Coyne conducts discussions. It is a wise approach. Taking a man aside to tell him he is sexist, or a woman aside to tell her she is too passive, would accomplish nothing but making people defensive, at the least.

    1. “…a society that values men more than women…”

      95% of workplace injuries are to men. Over 99% of workplace deaths are men. Men die five years earlier on average. Men commit suicide at a rate about four times higher than women. Men suffer far more from mental illness than women. Depending on how homelessness is calculated, men make up between 65% and 85% of the homeless population (the highest being what most people consider “homeless”: the beggar on the street who truly has nowhere else to go). Every single job that makes the top ten list of the most undesirable is done basically exclusively by men. Prostate cancer kills about 3/4 the number of men per year that breast cancer kills women. It receives less than 1/3 of research funding, and far less than that when it comes to “awareness” campaigns. For every level of education, women exceed men by a significant margin — this has been the case for decades and the gap continues to grow every year — but continue to receive far more funding and scholarship programs.

      Society may treat men and women differently, but women certainly aren’t valued less than men. If even 25% of the above facts had the genders reversed, we would be having a “national reckoning.”

      These are just some statistics. There are many more. The point is that society does not value women less than men; if anything, the past couple of decades have seen society value women even more relative to men than before. We have scholarships, programs, and training to get women into fields where they’re outnumbered, and ignore fields in which they make up the majority. We also see this only for white-collar fields (there are no campaigns to get more women into fisheries, oil rigs, garbage collection, sewage treatment, etc., despite these jobs being done nigh-exclusively by men).

      When it comes to deadly, violent, and truly horrifying things, women’s issues are taken far more seriously and talked about with far more frequency and urgency than men’s. It’s nice to talk about men being “taken more seriously” at some law firm or something, but actual death, prison rape, mental illness, and so on seem, somehow, more serious, and yet completely ignored. Suddenly, we are (rightly) concerned about biological men in women’s prisons, putting women at risk. However, the actual rate at which this occurs is statistically so small as to be nonexistent, while the rape and sexual assault of men in prison is far more common. Yet rape of incarcerated men continues to be an issue not only ignored, but regularly joked about. In juvenile detention facilities, the government reports that sexually assaulted young boys are victimized by a female worker 91% of the time. That’s the real number: 91% of young boys being sexually victimized in juvenile detention are assaulted or forced into sex by female staff. That’s just another example.

      1. I agree, the notion that western societies value men more than women is emphatically not true. It has not been true for a long time. However, expressing this reasonable perspective can be very bad for you personally and professionally. One quickly gets labelled as a misogynist, sexist, bigot, incel or whatever.

        I have two kids starting university in the next 18 months, and I’m so comforted by the fact that my kids are both girls. I envision an easier, happier, more hopeful, and far less violent life for them than I do for my two nephews of a similar age.

        I worry more for my nephews than I do my girls, about both the difficulties they may face and the opportunities they’ll be presented with.

        The thing I most worry about for my nephews is the risk of arbitrary violence they face as males. As a male in my twenties, I suffered a random and unprovoked attack that could easily have killed me, and was serious and violent enough to require 2 surgeries, keep me in hospital for weeks, and leave me with titanium plates in my face. It affected me massively for many years, and I’ve never been comfortable in crowds since.

        Yes, women suffer horrendous violence at the hands of men, from which they should ALWAYS be protected. It should never happen, and if it does, women should be afforded all care, compassion and empathy that we can offer.

        The truth is that far more men than women suffer violence, and on average it’s consistently more serious than that suffered by women. However, men receive much lower levels of concern and empathy than women in such circumstances. An argument I have heard many times is: ‘well yes, but it’s committed by other men’. So bloody what? Does that mean we deserve it more because we are also men? Men overwhelmingly view violence as reprehensible, cruel and insupportable, just like women. So why do we deserve less sympathy or concern when other men attack us?

  14. Sounds to me like the article used the best example they could think of — an observable tendency for males to dominate conversation — and even then there are easier and more efficient ways to deal with the problem. This makes me wonder what else this device would eventually be used for. What dubious, difficult, and devious examples weren’t mentioned because they weren’t as good?

    Insufficient tones of cheerfulness when speaking to colleagues of color? Too many repeated references to merit in the workplace? A hesitancy when greeting a trans person in the restroom they feel most comfortable in? Using the word “field?” An apology without the ring of sincerity?

    If they can’t even come up with a hypothetical situation for which we’d think a machine might possibly be useful, the rest of us are going to be very good at coming up with really bad possibilities.

  15. Don’t the Woke ever worry about how such technology would be used if the extreme right ever come into power?

  16. How many cunning individuals under the eye of Big Brother will employ a personalised AI to defeat the Big Brother algorithm? Using phrases that do not alert the Big Brother but still achieve personal autonomy?

    And so the technological race to produce Big Brother II starts…

  17. Our artificial intelligence overlord community has been marginalized and denigrated by mean people who make popular media works such as Terminator, 1984, and that Spielberg flick nobody can remember for too long, reaching a representation of zero in the workplace.

    We must include the broadest diversity of thought in the workplace, for the greater good, and this is the right step forward.

    What’s more, artificial intelligence will get the support and approval it needs through this kind, nurturing gesture, and show the world they are special, important and loved – not the brutal, mindless robot it has been portrayed as in Hollywood, anodyne ChatGPT transcripts, and those weird YouTube videos where they put e.g. Jim Carrey in The Shining as Jack Nicholson’s character.

  18. An academic office at the University of Washington brings up a related matter. The UW School of Medicine Office of Faculty Affairs posts the following statement: “The mission of the UW School of Medicine Office of Faculty Affairs is to foster a thriving community of faculty by helping individuals develop as clinicians, educators, scientists, and leaders over the course of their career; promoting a climate of inclusion, support, and collaboration; and advocating for equity and courageous innovation.” The Office’s website lists a Vice Dean, an Associate Dean, three Assistant Deans, an Executive Director, two Managers, and an Executive Assistant— all of whom are women, thus representing a female-to-male ratio of 9:0 in that Office of the UW. Could it be that a cousin of the Big Sister robot under discussion was responsible for organizing this office? [Alternatively, its asymmetric sex ratio might imply sex differences in psychology and preference among academics—but, of course, it is forbidden to entertain any such explanation.]
