What if you had an Alexa-like device around to monitor your behavior, especially your “implicit biases”? Would that bother you? And if you knew you were being monitored, would it affect your behavior? And if it did affect your behavior, would it do so permanently, or only so long as you knew you were being monitored?
Well, first we have to know whether the concept of “implicit bias” is even meaningful. People may be biased, but in a way they recognize: explicit bias that’s kept largely private. In fact, that’s what seems to be the case: the data show not only that there is no commonly accepted definition of “implicit bias”, but that the ways of measuring it, most notably the “implicit association test” (IAT), are dubious and give widely varying results for the same individual. Further, attempts to rectify it don’t seem to work.
In a post from earlier this month, I reprised psychologist Lee Jussim’s many criticisms of implicit bias. Even if you take the most generous view of the topic, you have to admit that we know little about it, very little about how to measure it if it’s real, and nothing about how to rectify what the tests say is “implicit bias.” In other words, it’s way too early to start ferreting it out, much less asserting that it’s ubiquitous. Implicit bias is one of those concepts that we can’t get a handle on, that has been largely rejected by psychologists and sociologists, but that is nevertheless taken for granted by the woke. Who needs stinking data when a concept meets your needs? The first paragraph of the piece below shows how a highly controversial claim is simply accepted as true when it’s ideologically convenient.
The piece below, from Northeastern University in Boston, outlines two researchers’ proposal to measure “implicit bias” remotely, with the aim of eliminating it:
The article assumes from the outset, without any justification, that the bias is there and is ubiquitous. It further claims that implicit bias is costly because (again assuming that it’s real) it demoralizes the workers who are its targets, and that costs money:
Studies have shown that implicit bias—the automatic, and often unintentional, associations people have in their minds about groups of people—is ubiquitous in the workplace, and can hurt not just employees, but also a company’s bottom line.
For example, employees who perceive bias are nearly three times as likely to be disengaged at work, and the cost of disengagement to employers isn’t cheap—to the tune of $450 billion to $550 billion a year. Despite the growing adoption of implicit bias training, some in the field of human resources have raised doubts about its effectiveness in improving diversity and inclusion within organizations.
I reject the assertion of the first paragraph entirely, for the data (while sometimes conflicting) do not show that this kind of bias is ubiquitous, or even that it exists. Note as well that in the second paragraph “implicit bias” has become simply “bias”, yet they are two different things: one is a subconscious form of bias, the other more explicit and recognized by the person who holds it. And, of course, the paragraph assumes that employees who perceive bias are actually experiencing bias rather than acting out a victim mentality, and we just don’t know that. (I’m not denying that racism and sexism exist; just that they are subconscious, ubiquitous, and have the financial effects noted above.) This being America, of course, the goal is not a more moral business but a more lucrative one.
But technology is here to fix the problem! All we have to do is eavesdrop on people interacting, analyze what we find, and then use it to “rectify” the behavior of the transgressors. Problem solved!
But what if a smart device, similar to the Amazon Alexa, could tell when your boss inadvertently left a female colleague out of an important decision, or made her feel that her perspective wasn’t valued?
. . .This device doesn’t yet exist, but Northeastern associate professors Christoph Riedl and Brooke Foucault Welles are preparing to embark on a three-year project that could yield such a gadget. The researchers will be studying from a social science perspective how teams communicate with each other as well as with smart devices while solving problems together.
“The vision that we have [for this project] is that you would have a device, maybe something like Amazon Alexa, that sits on the table and observes the human team members while they are working on a problem, and supports them in various ways,” says Riedl, an associate professor who studies crowdsourcing, open innovation, and network science. “One of the ways in which we think we can support that team is by ensuring equal inclusion of all team members.”
The pair have received a $1.5 million, three-year grant from the U.S. Army Research Laboratory to study teams using a combination of social science theories, machine learning, and audio-visual and physiological sensors.
Welles says the grant—which she and Riedl will undertake in collaboration with research colleagues from Columbia University, Rensselaer Polytechnic Institute, and the Army Research Lab—will allow her and her colleagues to program a sensor-equipped, smart device to pick up on both verbal and nonverbal cues, and eventually physiological signals, shared between members of a team. The device would keep track of their interactions over time, and then based on those interactions, make recommendations for improving the team’s productivity.
. . .As a woman, Welles says she knows all too well how it feels to be excluded in a professional setting.
“When you’re having this experience, it’s really hard as the woman in the room to intervene and be like, ‘you’re not listening to me,’ or ‘I said that and he repeated it and now suddenly we believe it,’” she says. “I really love the idea of building a system that both empowers women with evidence that this is happening so that we can feel validated and also helps us point out opportunities for intervention.”
Addressing these issues as soon as they occur could help cultivate a culture where all employees feel included, suggests Riedl.
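Just to make concrete what such a gadget would actually be computing, here’s a minimal sketch of the bookkeeping involved. This is purely my own illustration, not the researchers’ system: it assumes you already have diarized speech segments (who spoke, from when to when), and the names, numbers, and the 15% “participation threshold” are all made up.

```python
# Hypothetical sketch only -- not the Northeastern system. Given diarized
# speech segments (speaker, start_sec, end_sec), compute each person's
# share of the total talk time and count the times a speaker began
# talking before the previous speaker had finished (a crude proxy
# for "interruptions").

from collections import defaultdict

def participation_report(segments, low_share=0.15):
    """segments: list of (speaker, start, end) tuples, times in seconds."""
    talk_time = defaultdict(float)
    for speaker, start, end in segments:
        talk_time[speaker] += end - start
    total = sum(talk_time.values())

    interruptions = defaultdict(int)
    ordered = sorted(segments, key=lambda seg: seg[1])  # sort by start time
    for (a, _, a_end), (b, b_start, _) in zip(ordered, ordered[1:]):
        if b_start < a_end and a != b:  # b started while a was still talking
            interruptions[b] += 1

    for speaker, t in sorted(talk_time.items(), key=lambda kv: -kv[1]):
        share = t / total
        flag = "  <- below participation threshold" if share < low_share else ""
        print(f"{speaker}: {share:.0%} of talk time, "
              f"{interruptions[speaker]} interruption(s){flag}")

# A made-up meeting: two speakers dominate, a third barely gets a word in.
participation_report([
    ("Alice", 0, 40), ("Bob", 35, 90),    # Bob starts before Alice finishes
    ("Alice", 90, 150), ("Carol", 148, 155),
    ("Bob", 153, 200),                    # Bob talks over Carol
])
```

Even this toy version shows how blunt the signal is: Carol, the participant who barely speaks, gets tallied as “interrupting” because she started talking two seconds before Alice finished. Whether the real system’s machine learning does any better is exactly what we don’t know.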
A device on the table watching and filming everyone! Now THAT will lead to a freewheeling discussion, right?
But the problem Welles addresses is real. As I’ve said before, when I started teaching graduate seminars, which were mostly discussions of readings, one of the first things I noticed was that the men tended not only to talk more than the women but also to talk over them. Beyond that, many times I saw a woman student make a good comment, followed by a comment from a man, only to have the good comment attributed to the man. Since then, discussions with other women have convinced me that this problem is widespread. It doesn’t make for a good learning environment, and it saps the confidence of women.
Now I’m not sure whether the male behavior I saw reflected bias, much less implicit bias: it could simply be the tendency of men, especially young ones, to be more aggressive and domineering. But it still needed fixing.
The way I fixed it was simple. At the beginning of the quarter I laid out discussion rules, including these: everybody gets to finish what they’re saying, and every comment must either address the previous comment or be prefaced with something like, “I’d like to switch gears now.” If a woman wasn’t participating enough, I would call on her more often to summarize papers, and would follow up on her comments myself.
In my mind, at least, this solved the problem: by the seminar’s end, the men and women were participating roughly equally. I did NOT have to take the most vociferous men aside and tell them they were being domineering and bossy. That might have worked too, but at the expense of hurt feelings, divisiveness, and resentment.
So would it improve matters to have an Alexa and a camera on the table, with some kind of “implicit bias” or “body language” analyst to go through the data and then rectify the problem, presumably by calling out the offender? This not only smacks of Big Brotherhood, but is confrontational, divisive, likely to breed resentment, and, most of all, no fix for the problem. I’m not saying that my own rules fixed the problem permanently either, but unlike a machine, I was a human being who could act on the spot, and my job was to promote learning by giving everyone an equal opportunity to participate. In contrast, the goal of an Alexa Bias Controller seems to be not the promotion of learning but social engineering based on after-the-fact analysis.