I’m back to reading a lot during the pandemic, as I’m simply tired of looking at the Internet as a distraction. And so I finished two books this week: one excellent and one so-so. Let’s start with the good one, which I read so long ago that it seemed new to me. Reading The Plague is especially apposite at the moment, as it can be read in the context of the pandemic. Can it illuminate our current experience? The answer is yes and no.
And it was this old edition that I read (click to go to the Amazon site):
At about 280 small pages, those who shy away from big books will find this one doable. It’s one of the novels that won Camus the Nobel Prize in Literature, and deservedly so. The Plague (La Peste in the original French) is considered an “existentialist” novel, and I suppose that’s because one could construe it as the fictional story of men laboring to fight a meaningless but fatal pestilence: a bubonic plague that struck the city of Oran in Algeria in the 1940s. The protagonist, Dr. Rieux, is an atheist, and realizes the senselessness of what is happening—despite the local priest’s attempt to find meaning in the epidemic—but still labors to exhaustion, seven days a week, to help the stricken. Rieux doesn’t do this because he sees it as the “moral” thing to do, but believes that relieving suffering is an aspect of human love, the only worthwhile thing he sees in our existence.
I won’t give away the plot or the spoiler (i.e., who the narrator is), but it’s worth rereading in light of the coronavirus pandemic. There are parallels (quarantines, lots of death), but also differences (no mask wearing, even though some of the plague is pneumonic, no lockdowns of businesses, and none of the peevishness that limns our behavior). But the big parallel is humanity being at the mercy of an invisible microbe, which takes lives randomly and senselessly. If that’s existentialism, so be it.
The novel rises to a climax with the narrator’s “analysis” at the end after the plague has lifted, which contains some of the book’s best writing. My favorite bit, which I’ve mentioned before, is the ending, which is wonderful even in translation. And it’s also about the futility of fighting the plague, which, though it can be temporarily conquered, will always return:
And, indeed, as he listened to the cries of joy rising from the town, Rieux remembered that such joy is always imperiled.
He knew what those jubilant crowds did not know but could have learned from books: that the plague bacillus never dies or disappears for good; that it can lie dormant for years and years in furniture and linen chests; that it bides its time in bedrooms, cellars, trunks, and bookshelves; and that perhaps the day would come when, for the bane and enlightening of men, it would rouse up its rats again and send them forth to die in a happy city.
Lots of nice alliteration there, and the last bit, “when, for the bane and enlightening of men, it would rouse up its rats again and send them forth to die in a happy city,” is sheer genius. Bane and enlightening indeed!
I read this one on—as I recall—the recommendation of a reader here. But perhaps not. At any rate, I was drawn by the topic: Daum’s disillusionment with wokeness and her discovery of “IDW” members like John McWhorter, Glenn Loury, Christina Hoff Sommers, and Bret Weinstein. This is a journey that many of us have taken, and I wanted to see what Daum had to say about it.
I didn’t find the book absorbing, but perhaps that’s because I already share Daum’s intellectual criticism of wokeness and had undergone the political changes that she describes at length, embroidering them with details about her crumbling marriage and her disillusionment with feminism. In my view, Daum provided too little meat and tried way too hard to be clever, throwing in personal information that didn’t enhance her thesis—if she has a thesis. Daum is a big fan of Joan Didion’s writing, but doesn’t have the chops to emulate her, nor Didion’s ability to make the personal sufficiently impersonal to be interesting to the reader.
It’s a solipsistic book that I don’t think would enlighten many of us. Read it at your own peril.
What next? Below is a book that came highly recommended from an expert: literary critic James Wood of the New Yorker. Having met James in Cambridge, MA (he teaches at Harvard) and discussed with him the question of whether literature is a “way of knowing” (I won’t divulge his take), I wrote him asking if I should read a copy of Donna Tartt’s The Goldfinch that I found in a free book box.
The Goldfinch won the Pulitzer Prize in 2014, and I was about to start it when Wood replied and said that he much preferred a wonderful 2006 novel, translated from the German by Anthea Bell in 2015, that he had extolled several years ago in The New Yorker. All for Nothing is clearly one of Wood’s favorite modern novels. He warned me not to read his review before I read the book, as he gave spoilers. So I haven’t, but will start this book today:
So that is my latest reading. Your turn: what books have you liked lately?
Imagine my delight when, purely by accident, I came upon a Wikipedia entry called “List of books considered the worst,” with the explanation, “The books listed below have been cited by many notable critics in varying media sources as being among the worst books ever written.” [Their emphasis.] The list includes only books written in the nineteenth and twentieth centuries, and I’ve put below just the former, eliminating all the snark and explanation that make each entry hilarious. I did leave in the entire entry for A. N. Wilson’s dreadful book on Darwin, because, to my extra delight, I found they quoted me (my original review was in the Washington Post, not Dawn).
This is Wikipedia’s list, not mine, and I find that I’ve read only a few of their choices: Mein Kampf (yes, boring, but I read it dutifully, to know Hitler’s mind), The Da Vinci Code (as my excuse, I was spending a week in a rental cottage in Dorset and that was the only book they had), Naked Came the Stranger (a popular book when I was in college), Fifty Shades of Grey (I didn’t really read it, but flipped through it in a bookstore to see what the fuss was about), and, of course, Wilson’s book on Darwin. Yes, they’re all dreadful, but Hitler’s book is on the list not because it’s bad but because it’s characterized as “evil”.
I couldn’t really make my own list of the worst books ever written, because if I find that a book doesn’t engage me, or is poorly written, I don’t finish it. But there is one book I’ve read that is a glaring omission from the list below: a book whose prose is truly awful, and yet became a best-seller and a popular movie. I don’t have it at hand, but here’s Coyne’s choice for the worst fiction book of the 20th century:
The Bridges of Madison County (Robert James Waller, 1992). I can’t remember when I read this rancid crock of tripe, but it was similar to the circumstances in which I read The Da Vinci Code: I was in a house where there was only one book to read. I need to read like a tiger needs meat, so I read that one. (I used to read the cereal boxes at breakfast when I was a kid.) All I can remember is that the prose was absolutely awful: a rank amateur attempting a love story.
The leopard swept over her, again and again and yet again, like a long prairie wind, and rolling beneath him, she rode on that wind like some temple virgin towards the sweet, compliant fires marking the soft curve of oblivion.
“It’s clear to me now that I have been moving toward you and you toward me for a long time. Though neither of us was aware of the other before we met, there was a kind of mindless certainty humming blithely along beneath our ignorance that ensured we would come together. Like two solitary birds flying the great prairies by celestial reckoning, all of these years and lifetimes we have been moving toward one another.
. . . . “It already smells good,” he said, pointing toward the stove. “It smells… quiet.” He looked at her.
“Quiet? Could something smell quiet?” She was thinking about the phrase, asking herself. He was right. After the pork chops and steaks and roasts she cooked for the family, this was quiet cooking. No violence involved anywhere down the food chain, except maybe for pulling up the vegetables. The stew cooked quietly and smelled quiet.
. . . “He was an animal. A graceful, hard, male animal who did nothing overtly to dominate her yet dominated her completely, in the exact way she wanted that to happen at this moment.”
. . . “The human heart has a way of making itself large again even after it’s been broken into a million pieces.”
The thing is, I read the book after I saw the 1995 movie, directed by Clint Eastwood and starring him and Meryl Streep as the star-crossed lovers. I thought that the movie was very good, and the performances convincing. It was in fact a tearjerker, and once in my life I even experienced a cars-going-opposite-ways-at-an-intersection parting similar to that below when, in the pouring rain, Robert’s truck goes right and Francesca, in the car with her husband, goes left. (It was a scene that was filmed magnificently.) Imagine, then, my depression when I read the book, and found it was infinitely worse than the movie. (Movies from books are usually worse than the source.) Whoever turned that steaming dung pile of a book into a screenplay—and the acting of course was a major plus—did a magnificent job.
Here’s the parting, which always breaks my heart. This is the last time they see each other:
Anyway, the book sucks big time.
Here’s Wikipedia’s list for the last 120 years. Do add your own, or, if you’ve read any of the books below, feel free to agree or disagree.
I’m not generally a fan of political books, but I may have to break down and get Barack Obama’s new memoir, A Promised Land.
I don’t know if the New Yorker article below is free (I subscribe), but it was reading that excerpt from Obama’s book that made me think about getting the whole thing (click on the first screenshot to go to the Amazon page). It’s only volume 1 and is a daunting 768 pages, but the reviews have been uniformly favorable. Further, it’s #1 among all books on Amazon, and a very cheap $23.96 in hardback on the site:
Below: the excerpt. What I liked about it was that it dealt not only with policy (the “toughest fight” was about Obamacare), but also the day-to-day doings and feelings of a President—what it is like to be President. And it’s extraordinarily well written for a man who is not a professional writer but a politician. He’s a natural.
Here’s the ending of the New Yorker piece, which gives you an idea of the mix of political and personal, conveyed in a folksy style that isn’t cloying (you can hear Obama’s voice in these words). It’s this mix that made the excerpt—and, according to the reviewers, the book—so appealing:
It wasn’t just that criticism from friends always stung the most. The carping carried immediate political consequences for Democrats. It confused our base (which, generally speaking, had no idea what the hell a public option was) and divided our caucus. It also ignored the fact that all the great social-welfare advances in American history, including Social Security and Medicare, had started off incomplete and had been built upon gradually, over time. By preëmptively spinning what could be a monumental, if imperfect, victory into a bitter defeat, the criticism contributed to a potential long-term demoralization of Democratic voters—otherwise known as the “What’s the point of voting if nothing ever changes?” syndrome—making it even harder for us to win elections and move progressive legislation forward in the future.
There was a reason, I told my adviser Valerie Jarrett, that Republicans tended to do the opposite—that Ronald Reagan could preside over huge increases in the federal budget, the federal deficit, and the federal workforce and still be lionized by the G.O.P. faithful as the guy who successfully shrank the federal government. They understood that, in politics, the stories told were often as important as the substance achieved.
We made none of these arguments publicly, though for the rest of my Presidency the phrase “public option” became a useful shorthand inside the White House anytime Democratic interest groups complained about us failing to defy political gravity and securing less than a hundred per cent of whatever they were asking for. Instead, we did our best to calm folks down, reminding disgruntled supporters that we would have plenty of time to fine-tune the legislation when we merged the House and Senate bills. Harry kept doing Harry stuff, including keeping the Senate in session weeks past the scheduled adjournment for the holidays.
As he’d predicted, Olympia Snowe braved a blizzard to stop by the Oval and tell us in person that she’d be voting no. But it didn’t matter. On Christmas Eve, after twenty-four days of debate, with Washington blanketed in snow and the streets all but empty, the Senate passed its health-care bill, titled the Patient Protection and Affordable Care Act—the A.C.A.—with exactly sixty votes. It was the first Christmas Eve vote in the Senate since 1895.
A few hours later, I settled back in my seat on Air Force One, listening to Michelle and the girls discuss how well Bo was adjusting to his first plane ride as we headed to Hawaii for the holiday break. I felt myself starting to relax just a little. We were going to make it, I thought. We weren’t docked yet—not even close, it would turn out—but thanks to my team, thanks to Nancy, Harry, and a whole bunch of congressional Democrats who’d taken tough votes, we finally had land within our sights.
According to the review below, the first volume apparently ends with a taut and gripping account of the hunt for Osama bin Laden, seen from the White House.
In fact, when I saw that the conservative Spectator had a very positive review—though faulting the book for “humblebrag” and “schmalz”—it almost sealed the deal. I think the chances are about 75% that I’ll get the book. But that of course commits me to getting the second volume. Click on the screenshot:
An excerpt from the Spectator review:
But under all that hopey changey stuff, and where the long sections about wrangling policy through Congress really come into their own, is a superbly engaging study in realpolitik. He was famous for his windy rhetoric; but to get anything done in office required a steely political operator. Obama, the centrist dad’s centrist dad, is again and again confronted by the hard arithmetic of the caucus at home, and of tangled interests abroad. He really shows you how the sausage is made — and his cool, conscientious, covering-all-the-angles pragmatism, more than his optimism, is the real fascination in this book. If first-term Obama has an arch-nemesis, it’s not Osama bin Laden or Donald Trump: it’s the Senate filibuster. And there’s a wry sense of the absurd. On the campaign trail in Iowa, he secures the endorsement of the ‘Butter Cow Lady’, ‘who at the state fair each year sculpted a life-sized cow out of salted butter’, and blasts statewide the prerecorded call announcing her support. ‘She later created,’ he says proudly, an Iowan Ozymandias: ‘a 23-pound butter bust of my head.’
He delivers crisp little put-downs, too. As a candidate, when a do-gooding ice-cream company called on him to defund the Pentagon, he recalls wearily: ‘I had to call either Ben or Jerry — I don’t remember which.’ Nicolas Sarkozy is a ‘bantam cock’ (that’s surely at least half right) whose conversation
‘swooped from flattery to bluster to genuine insight, never straying far from his primary barely disguised interest, which was to be at the center of the action and take credit for whatever it was that might be worth taking credit for.’
The older I get, the greater the percentage of nonfiction in what I read, but my tolerance for long books has also decreased. There were days when I could breeze through Robert Caro’s 4-volume biography of LBJ—one of the greatest nonfiction “books” of our time—for several hours a day, every day, until I finished each volume. Now I struggle with such a length. I’m not sure whether this is age or simply the anxiety that comes with the pandemic. So 700+ pages seem daunting, and, truth be told, politics usually bores me. But Caro didn’t bore me, and Obama’s book seems to have the appealing Caro-esque mix of the man and his job.
Has anybody read it yet? There are over ten thousand reviews on Amazon, 94% of them giving the work five stars—for a book that came out on November 17!
Reading the latest edition of The Chicago Maroon, our student newspaper, I saw an op-ed about self-care by Ada Palmer, an associate professor of History. I’m not going to write about that; her piece is pretty straightforward and empathic towards our students, who will be having a rather stressful semester. Rather, when I looked Palmer up, I saw that she’d written a review two years ago in Harvard Magazine of Steve Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism and Progress. Always interested in how my colleagues regard Pinker and his arguments for empiricism and rationality, and intrigued by the title of her piece, I read it. You can, too, by clicking on the screenshot below.
It turns out that Dr. Palmer likes Steve’s book, but has two reservations. The first is that Steve argues that humanism, which is a handmaiden of atheism, is the way forward, and that religion has only been an impediment to moral and material progress. I think he’s pretty much right on that one. But Palmer doesn’t like the atheism bit:
Pinker reviews what he sees as humanism’s intellectual adversaries, such as those who caricature it as cold utilitarianism, those who suggest that humans have an innate need for spiritual beliefs, and the classic accusation, ubiquitous in the Renaissance and Enlightenment, that there cannot be good or virtue without God. For some readers, it will be frustrating that 350 pages of useful and cheering data, the majority of which one could call faith-neutral, culminate in the declaration that only triumphant atheism can ensure that scientific progress will help instead of harm. But Pinker’s secular humanism is less militant than that of many contemporary atheist voices; he focuses on the benefits of caring about the earthly world, rather than on condemning religion. His conclusion, that progress simply requires us to value life over death, health over sickness, abundance over want, freedom over coercion, happiness over suffering, and knowledge over superstition, is one numerous theisms can and have embraced.
Thank God he’s not as militant as Dawkins! God forbid that anyone should condemn religion.
Yes, but of course many theisms have impeded science, reason, and morality, and continue to do so (I’m looking at you, Vatican), while atheism hasn’t impeded those things one bit. After all, atheism is simply a lack of belief in gods. The lucubrations above look like either religion osculation or accommodationism. I doubt that anyone could argue cogently that science would be more advanced if everyone became religious. Palmer also mentions “secular evidence” below, as if there were some kind of “nonsecular evidence” for science.
But the main problem with her piece is a recurrent trope that we see among those who wish to minimize the importance of science. It’s the claim that reason itself, or logic, or science itself, cannot prove that science can actually help us understand the universe in a useful way. For philosophers and some in the humanities, the lack of a priori justification that reliance on empirical methods will work is somehow an indictment of science. Here’s how Palmer goes at it:
Pinker briefly reviews efforts to value other factors—love, passions, feeling—above reason, but declares such efforts self-defeating: as soon as they attempt to justify themselves, the very act of providing reasoned arguments for their beliefs admits that reasoned arguments are the strongest grounds for belief. Yet, as I reflect on this argument, I am reminded how science, during a critical moment in its history, was self-defeating in much the same way.
Why was it self-defeating? Because there was no a priori justification for going ahead with empirical observation, hypothesis-making and -testing, and so on as a way to understand nature:
Progress in the modern sense, as an intentional and human-driven process, was first fully articulated by Francis Bacon early in the seventeenth century, when he suggested that a collaborative community of empirical inquiry would uncover useful truths that would radically transform human civilization and make each generation’s experience incrementally better than that of the generation before. This was not the easy sell it seems, since Bacon had no evidence that this unprecedented project could wield such power—and even if he had found evidence, one can’t use reasoned evidence to prove that reasoned evidence can prove things. New discoveries were frequent—the moons of Jupiter, the magnification of insects, the circulation of the blood—but practical benefits were slow in coming.
Well, that’s not exactly true, because people had been using what I call “science broadly construed” to understand nature for millennia. I was impressed, on reading Beryl Markham’s West With the Night, how local trackers used scientific observation to find game: the depth of the tracks, how dry they were, where waterholes were, and so on. There was in fact every reason to think that empirical inquiry would lead to understanding, while prayers and revelation, which any chowderhead would know didn’t help much, weren’t a good way to find animals or decide which plants were edible vs. poisonous.
As for the “practical benefits being slow in coming,” well, I take issue with that. Is improved understanding of the world “practical”? Maybe it won’t make you richer or healthier, but it makes you wiser and more appreciative of the marvels of nature.
In the end, though, I don’t care if you can’t use reason to prove that reason and empiricism “can prove things”. (Actually, they can’t: science doesn’t speak of “proof” but of more or less confirmed hypotheses.) What’s important is that, as Richard Dawkins said pungently, “Science works, bitches!” The justification of empiricism, reason, and science is in its results: we find out what makes people sick, how to get to the Moon, how to cure disease, and so on. Only somebody hogtied with the strictures of philosophy could see a lack of a priori justification as an argument against the methods and validity of science. Yet we hear this all the time—often from theologians.
Palmer goes on:
Yet Bacon did succeed in awakening a groundswell of enthusiasm (and funding) for reason and science, through an argument that often surprises my students: he appealed to the personality of God, arguing that a good Maker would not send humans out into the wilderness without the means to achieve the desires implanted in us. Thus, because reason is God’s unique gift to humankind, it must be capable of all we desire.
From time to time, particularly in the aftermath of the French Revolution, champions of secularized science have been embarrassed by this comment from Bacon—worrying what would happen if their atheist followers realized that science, at its inception, had no secular evidence to support its own faith in the power of evidence.
Well, the important thing is that nobody’s embarrassed by this argument any more, for the majority of scientists, and nearly all “elite” ones, neither believe in gods nor worry about “the lack of secular evidence” to support the power of evidence. As I noted above, long before Bacon we knew that we could understand things without needing “divine evidence.”
Palmer makes one more dig at atheism:
But with Pinker’s entire book in hand, Bacon would also have felt the tension between two arguments running through it: the inclusive argument that reason, science, humanism, and progress have made our present better than our past, and can make our future better still; and the less inclusive argument, however eloquently and intelligently presented, that the humane and empathetic humanism capable of turning our powers to good and away from evil must be secular.
Frankly, I don’t care what Bacon would think about the lack of need for “divine” as opposed to secular evidence for science, or about the power of humanism. There’s not an iota of evidence that religion makes people behave better, and often it makes them behave palpably worse. (Remember Steve Weinberg’s dictum: “With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion.”) And of course the more atheistic a country, the better off it is—by nearly any measure: gender equality, happiness, prosperity, well being, and so on.
But it doesn’t matter, for her main argument, which she reprises in her last paragraph, is both philosophical and a non-starter. Note what I see as a snarky bit in the following (I’ve bolded it):
Pinker is no more successful than Bacon at justifying science and reason without a recursive appeal to science and reason. Yet for those already confident in the persuasive force of evidence, it would be hard to imagine a more encouraging defense than Pinker’s of the reality and possibilities of progress.
What? Is there a large segment of humanity that isn’t confident in the persuasive force of evidence? If so, they shouldn’t be trusting any court decisions, or even their own observations, much less taking planes or swallowing antibiotics. In my view, nearly everyone is confident in the persuasive force of evidence about most things, though some fraction of humans are confident in things that lack evidence. They include religious people, conspiracy theorists, and cranks. (Oh, and Donald Trump.)
Why does this argument against science keep coming up? It’s worthless!
The book: I just finished this book for the second time (I read an earlier edition without the introduction; click on image for Amazon link):
This is the memoir of pilot, horse trainer, and adventurer Beryl Markham (1902-1986), recounting her years in Africa with the Happy Valley set, which included Karen Blixen (author of Out of Africa), her estranged husband Baron von Blixen, and Denys Finch Hatton, Blixen’s lover (and, as I discovered, also Markham’s). The book wasn’t that popular when it came out in 1942, and went out of print, only to be rediscovered—thanks to a letter of Ernest Hemingway’s extolling it—and brought back into print in 1982. Markham, who returned to Africa at fifty, then enjoyed a few years of literary fame before she died.
She deserves that fame based on this book, as it’s a wonderful and beautifully written memoir—a perfect complement to Out of Africa, published five years earlier. Both describe the same part of Africa (Kenya) at the same time, but one from the vantage point of a coffee-farm owner and the other from that of an aviator. They both approach the memoir not as a seamless narrative, but as a series of incidents, each illuminating one moment of time. And both describe women who refused to accept the subordinate status afforded to females at the time, and thus are, as they say, “empowering.” Both women were brave and admirable, and you need to read both books, especially if you love good prose.
If I had to pick a favorite, it would be Out of Africa. That’s because at times Markham’s prose becomes a bit—well, not exactly purple—but strained as she strives for literary effect. I’m not sure if she was trying to match the graceful writing of Blixen (I’m not even sure if Markham had read Out of Africa), but she didn’t have the tools to do what Blixen did. For example, she could not have matched what I consider one of the finest set pieces in English literature: the description in Out of Africa of Finch Hatton’s grave, which I reproduced here.
That said, the book is still well above most memoirs, and deservedly a classic. Read it soon.
Nearly two hours long, it’s still not long enough, for Williams contained multitudes. It’s full of clips showing the man’s quicksilver mind and riotous humor, and is embellished with remembrances from his wives and friends, especially Billy Crystal. I thought Williams was always “on”, but it becomes clear that when he was with his family, he was very quiet and withdrawn, perhaps recharging. As we know, he killed himself at 63. Many say this was inexplicable, but he’d been diagnosed with Parkinson’s and was furious at not being able to control his thoughts; as he said, “I need to reboot my mind.”
The one omission here is that Williams’s movies are given short shrift, and they were an important part of how many people remember him: Good Will Hunting, Mrs. Doubtfire, Good Morning, Vietnam, Dead Poets Society, The Fisher King, Awakenings, and so on. That alone is a wonderful c.v., but then there was also the comedy onstage and on television. The guy was sui generis, fizzing with energy, and, as they say, we won’t see his like again. But you can see it in this wonderful HBO documentary.
Upcoming movie. I haven’t seen the new movie Ammonite yet (it comes out in the U.S. November 13), but reader Kurt sent me a five-star review from the BBC. You’ve likely heard of paleontologist Mary Anning, and this tells her story, embellished with a fictionalized lesbian romance. Wonderful casting: Anning is played by Kate Winslet, and her protégée and later lover Charlotte Murchison (a real person who was friends with Anning) is played by the great young actor Saoirse Ronan. I’ll reserve judgment until I see it, of course, but I will see it. Here’s a trailer:
I’ve just finished reading Steve Stewart-Williams’s recent book The Ape That Understood the Universe (Cambridge University Press, revised edition 2019). I recommend it highly as a good way to get not only an introduction to evolutionary psychology, but also to see why the discipline is worthwhile and why its detractors are often misguided. Click on the screenshot if you want to buy it from Amazon US.
I have to hedge my encomiums a bit, because while most of the book—the first part that deals with evolutionary psychology—is excellent, the second bit, only the last 64 pages, is weaker. That’s the bit that deals with memes, the popular but, I think, misguided view that we can understand human cultural evolution by assuming it’s propelled by memes, “units of culture” first dreamed up by Dawkins in The Selfish Gene. While memetics sounds good at first glance, and “meme” has entered popular jargon as a term for an item that’s gone viral on the Internet, I have always questioned its value as a way to understand how ideas and objects spread in human culture—indeed, supposedly creating human culture. I explained my criticisms in a 1999 Nature book review of Susan Blackmore’s book The Meme Machine, and won’t reiterate them at length here.
But the biggest part of the book is well worth reading, particularly because Left-wing biologists have denigrated evolutionary psychology at length, calling it not only worthless, but meaningless. I won’t name these miscreants, but suffice it to say that their motivations are largely ideological: they think that if human behavior—particularly behavioral differences between groups and especially between men and women (but also behavioral “universals”)—are partly instilled in our genome by natural selection, then that will justify xenophobia, misogyny, and all kinds of bigotry.
This claim isn’t true, of course. As I’ve mentioned repeatedly, to say that our evolutionary past justifies how people should treat others, or how we should construct a morality, is deeply misguided: “the naturalistic fallacy.” And to accept that natural selection has molded human bodies and physiology, and has done so within the last 10,000 years (see here), but then to deny that natural selection has affected human behaviors, including differences between the sexes that sometimes parallel those seen in animals, is a nonsensical and unparsimonious view.
Further, evolutionary psychology as a discipline is not worthless, unproductive, or tautological. After describing how natural selection operates on genes (including kin selection and the production of cooperative behaviors), Stewart-Williams takes up some topics in evolutionary psychology and shows that the discipline has indeed produced testable and confirmed hypotheses, particularly those involving aspects of human sexual behavior as well as behavior toward kin and group-mates ("altruism").
Stewart-Williams is no uncritical booster of evolutionary psychology, readily admitting that some of its advocates have gone overboard. But you don’t throw out the baby with the bathwater, and in an appendix called “How to win an argument with a Blank Slater”, Stewart-Williams takes up and rebuts some of the most common criticisms of the discipline (e.g., “evolutionary psychology is the latest incarnation of genetic determinism”, “hypotheses in evolutionary psychology are either just-so stories or are unfalsifiable”, and so on). Hypotheses in the discipline are often testable and falsifiable, and one of the strongest parts of this book is the description of data that support hypotheses about the evolution of behavior, as well as some description of tests that have failed. Like Darwin, Stewart-Williams is always anticipating readers’ queries and criticisms, and addresses them throughout the book.
The discussion of human “altruism”, always a puzzling topic, is also quite good, with Stewart-Williams lucidly describing the various ways what we think of as “selfless behavior” could evolve (kin selection, small-group tit-for-tat strategies, and group selection, which he considers unlikely). All in all, I strongly recommend you read at least the first 218 pages on evolutionary psychology, as well as Appendix A on arguing with Blank Slaters.
You should also read the last chapter on memetics (“The Cultural Animal”), but do so with an especially critical eye. Although Stewart-Williams’s aim in the book is to explain human behavior and society as a result of both biological and cultural evolution, he’s much more successful with the former than the latter. That’s not to say that he doesn’t have good insights into cultural evolution, for he does. It’s just that the addition of “memes” doesn’t, in my view, add much.
I’ll give just one example. Most of us love apple pie and ice cream, and Stewart-Williams considers this a meme whose spread needs explanation. The classical explanation of memetics is that ideas spread when they parasitize human brains and have features that are good for the memes themselves to spread, though those features may not be adaptive for individuals or society (he uses smoking as one example). So it goes with apple pie à la mode:
. . . the ultimate criterion which determines whether a meme will spread is not whether it benefits us or our groups, but whether it benefits the meme itself.
Two examples will illustrate the point. [JAC: I give just the first.] The first is apple pie and ice cream. The apple-pie-and-ice-cream meme has prospered in human societies because it powerfully activates the brain’s pleasure centers—more powerfully, in fact, than anything in our natural environment. Eating too much of the stuff isn’t good for us, but that’s irrelevant. The meme proliferates, not because it’s good for us but purely because it’s good for itself—purely, that is, because it’s good at proliferating. To be clear, it doesn’t want to proliferate or know what’s good for it, any more than genes do. It’s apple pie! The idea is simply that if we want to understand which memes come to predominate in a culture, then rather than looking at how memes affect our fitness or the fitness of our groups, we need to look at how they affect their own chances of being passed on.
But to assert that the apple-pie-à-la-mode meme has properties that make it good at proliferating is simply tautological, and not in the way that the spread of genes is said to be tautological. What, exactly, about this meme helps it spread itself among Americans or Brits? What makes it good for itself? As far as I can see, nothing. What makes it spread is simply that apple pie and ice cream taste good, and taste better than alternatives like, say, donuts and ice cream. While Stewart-Williams admits that this dessert "activates the brain's pleasure centers," the real explanation for why this dessert "meme" is popular would involve understanding why it tastes better than alternatives. As I wrote in my Nature review of Susan Blackmore's book:
. . . Blackmore’s enterprise has two fatal flaws. First, she has got the chain of causation backwards. The claim that memes created major features of humanity is equivalent to the claim that the main force driving the development of better computers has been the self-propagation of software. In reality, computers are usually designed for speed and capacity, which then permits the development of new software. Similarly, the self-replication of memes does not mould our biology and culture; rather, our biology and culture determine which memes are created and spread. What a world of human psychology is obscured by Blackmore’s mantra, “If a meme can get itself successfully copied it will”! To me, memetics boils down to the following obvious theory: ideas tend to spread if they cater to our desires to have love, comfort, pleasure, power, sex, the attention and admiration of others, a meaningful life and a way to evade the awful fact of mortality.
This brings us to the biggest problem: memetics seems completely tautological, unable to explain why a meme spreads except by asserting, post facto, that it had qualities enabling it to spread. One might as well say that aspirin relieves pain because of its pain-relieving properties. The most interesting question — why some memes spread and not others — is completely neglected. Why did Christianity take hold during the waning days of the Roman Empire? You won’t find the answer, or any way to attain it, in memetics. (This, by the way, makes memetics utterly unlike biological evolution. The spread of genes through natural selection is not tautological because one can predict their fate through their known effects on replication and the reproduction of their carriers.)
Nothing is gained in understanding the spread of apple pie and ice cream by considering it a “brain parasite,” which Stewart-Williams does.
I think Stewart-Williams recognizes this problem with memetics, for he deals with it in Appendix B: "How to win an argument with an Anti-Memeticist". Here we find the following passage, which starts with a criticism of memetics in italics and then gives his rebuttal in plain text.
The hallmark of a good scientific theory is that it generates research: it makes novel predictions about the world, which lead scientists to make otherwise unexpected discoveries. Memetics, however, has been woefully unsuccessful on this front. Indeed, the field's flagship journal, The Journal of Memetics, had to close its doors because it didn't get enough submissions.
[Stewart-Williams's answer]: This is the "if you're so smart, why aren't you rich?" criticism. Of all the criticisms on offer, it's probably the one that worries me most. In the end, though, I think it fails. It is certainly true that memetics has yet to deliver much in the way of new research. It's also true that many specific meme-based explanations have yet to be adequately tested. However, when it comes to evaluating a theory or explanation, what we ultimately want to know is not how many publications it's generated, or how many surprising discoveries, but rather something more basic: whether or not the theory is true. That, in the final analysis, is what science is all about. And despite the current research shortfall, there's good reason to believe that, at least in its general outline, memetics is indeed a true and accurate theory.
But how can we know if a theory is true if it doesn't propose tests or potential falsifications? Stewart-Williams tentatively accepts the truth of memetics because he says it makes sense: cultural entities appear "designed to benefit themselves", even if they harm individuals or groups. (Yes, too much pie is bad for you, or, as the jolly Almus Pickerbaugh said in Arrowsmith, "too much pie makes pyorrhea.") But how, exactly, does the apple-pie-and-ice-cream "meme" benefit itself? Can you predict this in advance simply from the existence of the combination? Well, perhaps you could predict it if you knew how its gustatory constituents interacted with human brains, but that's a psychological explanation that has nothing to do with the "self-spreadability" of the pie-and-ice-cream meme.
In the end, you have to judge whether a theory is true based not on intrinsic plausibility but on whether it survives empirical tests. In my view, the empirical “tests” of memetics boil down to post facto explanations of why something spread based on some characteristic of the cultural unit itself. And here memetics has, as Stewart-Williams admits, failed. It doesn’t explain much and doesn’t seem falsifiable because memeticists always seem able to confect a reason why something had to spread, independent of human tastes, needs, or psychology. It is intrinsically an unfalsifiable theory.
I’ve written a lot about memes because it’s a bugbear of mine, not because it’s the major topic of Stewart-Williams’s book. It isn’t. And so I recommend that you read the book, if for no other reason than to see why the critics of evolutionary psychology are largely misguided. But you’ll also learn a lot about how natural selection works, and how it’s forged an appreciable part of human behavior.
It's been roughly four years since I wrote about Elaine Ecklund's efforts to show that religion and science aren't in conflict, and also that scientists are more religious than one might suspect (see posts here). A sociologist at Rice University, Ecklund has been funded, as far as I can see, nearly continuously by various Templeton grants, as their sub-organizations love her message of harmony between science and faith. And Ecklund's analyses designed to show that harmony have involved, in my view, a sometimes disingenuous presentation of the data—data that often don't support her conclusions (read some of my earlier posts to find out how).
In the June issue of Free Inquiry, philosopher Russell Blackford reviews Elaine Ecklund et al.’s new book (screenshot of review and book below). The article is paywalled, but I’ve gotten permission to send Russell’s manuscript in Word, which is apparently nearly identical to what was published, to those who are interested (don’t ask unless you want to read it!):
The book, with seven authors (and, as you see, with Ecklund clearly the senior one), came out July 2 and was published by Oxford University Press. Click on the cover below to go to the Amazon site:
Part of the acknowledgments:
I haven’t yet read it, so you can use Russell’s review as a guide for whether you want to read it yourself. He’s quite critical, but, in the end, doesn’t think the book is completely worthless. After taking it apart for several thousand words, he does add an encomium at the end:
Finally, although I have emphasized what I see as an obvious pro-religious bias – and a certain amount of wishful thinking – throughout Secularity and Science, the large amount of money that went into the book from Templeton’s coffers was not entirely wasted. This book does provide important information for scholars to pore over and consider. Secularity and Science is a resource, among many others, and I’m not sorry to have had the opportunity to read it. I certainly intend to make further use of its extensive information, notes, and bibliography. It just has to be read with a critical mind, and its conclusions should be taken with a grain of salt.
The book is based on interviews with 600 individual scientists at "elite" universities in several countries: the US, the UK (not including Northern Ireland), France, Turkey, Italy, India, Hong Kong, and Taiwan, most of which get their own chapter.
Ecklund’s conclusions, some of which she’s published before in papers (see my earlier posts) are predictable, and Russell summarizes them at the outset:
Secularity and Science offers numerous conclusions about the countries that were studied. With the US, for example, the conclusions are, first, that American scientists are often hostile to religion because of an exaggerated sense of the fundamentalism of the American religious public, and, second, that discrimination against religious scientists undermines American science. But these claims are, to say the least, impressionistic and conjectural. In particular, no worthwhile evidence is presented for the second claim, which would be explosive if it were true. As we’ll see, American scientists are markedly less religious than the general public in the US, and that would have been the most obvious conclusion to report.
The book also offers four overall conclusions, not relating to any particular country:
“Around the world, there are more religious scientists than we might think.”
“Scientists – even some atheist scientists – see spirituality in science.”
“The conflict perspective on science and religion is an invention of the West.”
“Religion is not kept out of the scientific workplace.”
Little of this is helpful if we hope to deepen our understanding of the relationship between science and religion. . . .
Russell’s three big beefs are these. First, Ecklund’s most important claim is that “there are more religious scientists than we might think”, but “the authors fail to produce any evidence as to what ‘we’ might, or actually do, think.” That conclusion, then, is little more than wishful thinking to soothe accommodationists and Templeton.
The second involves Ecklund's claim above that "The conflict perspective on science and religion [i.e., that they're in conflict] is an invention of the West." Blackford calls this a sleight of hand with the word "invention" because:
Why not call the conflict model a discovery of the West, rather than an invention, since nothing in Secularity and Science demonstrates that the perception of conflict is actually false? Or why not look for a more neutral way of making the point?
For all Ecklund and her collaborators tell us, some degree of conflict, or at least tension, between science and religion might be almost inevitable. This might be a genuine problem for the ongoing viability of religious faiths, even if it was first identified in Western countries and has, so far, received little recognition from scientists in Asia.
Russell then goes on to demonstrate, as I did in Faith Versus Fact, that science and religion have different epistemologies and ways of obtaining "knowledge"; that religious methods, in contrast to science's, haven't led to reliably true claims about the universe, and indeed often conflict with scientific claims; and that scientific investigation has continually eroded religious belief and the idea of the supernatural. I would call that a conflict, and I define what I mean by "conflict" at the beginning of my own book.
Finally, despite the claims above, the book demonstrates, as Russell shows clearly, that scientists throughout the world are less religious—often much less religious—than are the citizens of their own countries. There is no discussion of this in the book, nor of why the general populations of most of these countries are much less religious than they were, say, a century ago. This is an important question, but of course ignoring it is in keeping with Ecklund's career-long narrative as well as with Templeton's agenda of science/religion harmony. To be sure, Russell notes that these topics weren't within the scope of their project.
Perhaps they weren’t, but surely this question should at least have been brought up. There are several reasons why scientists in general might be less religious than the general populace, including the enrichment of science with people who weren’t believers at the outset, as well as the loss of religious faith for those working in science. (I suspect both factors are in play.) But surely, as I mention in Faith Versus Fact, the huge disparity in religiosity between scientists and their lay fellow citizens bespeaks some kind of conflict between religion and science.
I wouldn’t bet that Ecklund will investigate this important question in the future.
Oy! I had barely started reading Robin DiAngelo's White Fragility, to the detriment of my digestive system, when I learned that there's another equally well known antiracist book out there, one that's just been reviewed by John McWhorter at Education Next. To be sure, he says it is "the better of the two big antiracism bestsellers," but he hardly gives it a ringing endorsement. But I suppose that all of us who are liberals, committed to equal opportunity for all, and eager to understand the antiracist currents of society that have gone ballistic since the murder of George Floyd, should read both of them.
Click on the screenshot to read McWhorter’s review, and you can find Kendi’s book on Amazon here. (For some reason the paperback, which comes in large print only, costs ten bucks more than the hardcover.) You can read more about Ibram X. Kendi here.
Unlike DiAngelo, who asserts that all whites, even if they don't realize it, are racists and complicit in structural racism, Kendi admits that whites can be antiracist. But his is still a Manichaean book in another way:
Kendi, like Hume, would seem to have it all figured out: We are divided simply between racists and antiracists. Racists are bigots and allow a status quo under which black people are not doing as well as whites. Antiracists are committed to working against that imbalance. For reasons Kendi seems to think obvious but are not, there is nothing in between these two categories—not to be actively working, or at least speaking, against the imbalance leaves one in the racist class. There is no such thing as someone simply “not racist.”
One trait that marks you as a racist, says Kendi, is to deny the claim that all disparities between races are due to racism. This is equivalent to saying that someone's a misogynist or misandrist if they deny that disparities in representation of the sexes in jobs or achievements are due to sexism. In the case of the sexes, an alternative hypothesis is sex differences in preferences, be they cultural, genetic, or both. In the case of racism, says McWhorter, the alternative hypothesis for blacks and whites is that the cultures of the races differ, and for blacks the culture differs in a way that leads to underachievement. Here McWhorter, as an African-American, can get away with saying stuff like the following:
In 1987, a rich donor in Philadelphia “adopted” 112 black 6th graders, few of whom had grown up with fathers in their home. He guaranteed them a fully funded education through college as long as they did not do drugs, have children before getting married, or commit crimes. He also gave them tutors, workshops, after-school programs, kept them busy in summer programs, and provided them with counselors for when they had any kind of problem. Yes, this really happened.
The result? 45 never made it through high school. Of the 67 boys, 19 became felons. Twelve years later, the 45 girls had had 63 children, and more than half had become mothers before the age of 18. Part of what makes How to Be an Antiracist a simple book is its neglect of cases like this, or the assumption that they easily trace to “racism.” What held those poor kids back was that they had been raised amidst a different sense of what is normal than white kids in the ‘burbs. That is, yes, another way of saying “culture,” and it means that through no fault of their own, it was not resources, but those unconsciously internalized norms, that kept them from being able to take advantage of what they were being offered.
Kendi’s taxonomy would classify what I just wrote as “racist,” but to qualify as coherent, this charge would have to come with a more careful defense than Kendi seems accustomed to engaging. For example, if that Philly story a generation past the Great Society is just a fluke, what about what was happening in Kansas City around the same time? Twelve new schools were built to replace crummy ones black students had been mired in for decades. The effort cost 1.4 billion dollars. The new schools included broadcast studios, planetariums, big swimming pools, and fencing lessons. Per-pupil spending was doubled, while class size was halved to about 25 students a class. Elementary school students all got their own computers, and there were now 53 counselors for them when before there had been none.
Fade out, fade in: dropout rates doubled, the achievement gap between white and black students sat frozen, and the schools ended up needing security guards to combat theft and violence. The reason for this was nothing pathological about the kids: the story of how black inner cities got to the state they were in by the 1980s is complex and has nothing to do with blame. However, to say that the revolution in schooling offered to these kids was not a major antiracist effort, in Kendi’s terms, would be willfully resistant to empiricism.
To wit: antiracism, under Kendi's definition, only explains so much. Racism quite often leaves cultural legacies that render black people unable to take advantage of antiracist policies. Concerned people devote careers trying to figure out what to do about this, and they should. But consulting Kendi, they will encounter a proton/neutron contrast between "racist" and "antiracist" that blinds them to the nature of problems in the real world.
The reason McWhorter's statement is anathema in current discourse is the "progressive" assumption that, on average, different groups are basically identical not just in talents and preferences, but in those cultural features that lead to success in society. But, at least for the latter, this can't be true, at least for those who favor ethnic diversity in colleges and institutions as a way to increase "viewpoint diversity"—and not just about racism. (My own favoring of diversity and affirmative action derives not from seeing diversity as an inherent good that improves education—the Bakke rationale—but as a form of reparations to try to make good on generations of racial discrimination. I simply don't know if different groups have, on average, different ways of thinking that can improve university education.)
And indeed, what really rankles McWhorter about Kendi's book is the suggestion not just that there are cultural differences between races, but that those cultural differences lead to different but equal skill sets:
[Kendi’s] philosophy founders especially on education in this way. Kendi subscribes to the notion getting around these days, from the contingent fascinated with white privilege, that things like close reasoning, the written word, and objectivity are “white” practices, the imposition upon black people of which is “racist.” Hence another passage that many readers will find stirring, but that others will find disturbing and even, in Kendi’s terms, “racist”:
What if different environments lead to different kinds of achievement rather than different levels of achievement? What if the intellect of a low-testing Black child in a poor Black school is different from – and not inferior to – the intellect of a high-testing White child in a rich White school? What if we measured intelligence by how knowledgeable individuals are about their own environments? What if we measured intellect by an individual's desire to know?
But what does this mean, as counsel from Kendi, who is the head of Boston University’s Center for Antiracist Research? Just how would we measure “desire to know”? What student would deny “wanting to know”? And just what would “wanting to know” yield in terms of skills or reasoning power?
More to the point, if it’s “racist” that there are so few black professors pursuing careers in science, technology, engineering and math—a common opinion it is reasonable to assume Kendi espouses—then how does suggesting we assess black people’s intelligence via their street smarts, capacity for emotional empathy, and “spunk”—which is essentially what Kendi and others mean with suggestions like these—help solve that problem? None of those traits will be of much use in laboratory work or higher mathematics. George Washington Carver’s miracles with the peanut were not driven by some kind of “authentic” alternate science—he worked within the conventional scientific method he learned at Iowa State. The snazzy-looking little View-Master of our memories was designed by a black man, Charles Harrison. He used the same skills as white designers of his time; savory black spontaneity and in-touch-ness would have done nothing to help him.
This comes close to the claim that white and black cultures are different in ways that don’t reward black people in American society, presumably because our society has privileged “white” traits over black ones as prerequisites for success (see the famous Smithsonian poster controversy).
In the end, though, McWhorter gives Kendi's book a stronger endorsement than DiAngelo's, though the endorsement is one of faint praise.
Kendi’s is, in the end, a simple book. One senses little interest in engaging questions. The text works in basic colors, not shades; splashes, not brushstrokes — perhaps because he thinks the roots of all black problems in white perfidy are too clear to require complexity. But his directness, pragmatism, and societal focus is certainly preferable to White Fragility’s psychological torture sessions in the guise of sociopolitical commitment.
. . . it is worth finding the value in it that we can. In truth, if How to Be an Antiracist increases the number of Americans committed to activism that makes life better for black people who need help, its substance becomes a background matter. Out doing the real work, people will, as have generations of concerned people before them, immediately encounter and seek their way through the complexities that Kendi cannot perceive.
But I shall have to read it. Given the currents in American society, it behooves us all to essay at least the most widely-read antiracist books.
Robin DiAngelo’s book White Fragility has enjoyed a tremendous resurgence of popularity since George Floyd’s murder. It’s appearing on many college reading lists, and is even a recommended resource of the Society for the Study of Evolution, which has now gone uber woke and has a full page of resources that will teach the guilt-ridden evolutionist “how to be an anti-racist”, including a list of places where you can give money. (Why a society dedicated to promoting the study of evolution needs such a page is beyond me.)
I suppose that, given its popularity, I need to read DiAngelo’s book. Not willing to pay for it, I see that it’s online at my University library, and I suppose I will essay it. But I’m looking forward to it about as much as I looked forward to my hernia operation. That is, I know it’s necessary but also that it will be painful.
This, at least, is my preliminary conclusion from having read two negative reviews of the book by people I respect. One is John McWhorter, whose Atlantic review is here (I discussed it in a recent post). The other is Godless Spellchecker (Stephen Knight), who has a review on his own website. Click the screenshot below to read it.
It turns out that Knight and McWhorter come to pretty much identical conclusions, even both comparing DiAngelo’s form of anti-racism to a religion. I’ll put Knight’s quotes in indented Roman type, and McWhorter’s in italics.
I make a habit of reading books and articles that I expect to find disagreeable. This serves to test my convictions as I bounce them off opposing views and discover whether or not they survive the collisions. Moreover, the willingness to seek out alternate views invariably teaches you something that you did not know. In fact, sometimes you actually learn that your understanding of the issue was completely wrong altogether.
The fact that Robin DiAngelo’s ‘White Fragility’ did not manage to be informative or useful on any level is an achievement in and of itself. I’ve never encountered a book so intellectually vapid as to make me worry that reading it may have actually subtracted some knowledge.
DiAngelo has convinced university administrators, corporate human-resources offices, and no small part of the reading public that white Americans must embark on a self-critical project of looking inward to examine and work against racist biases that many have barely known they had.
I am not convinced. Rather, I have learned that one of America’s favorite advice books of the moment is actually a racist tract. Despite the sincere intentions of its author, the book diminishes Black people in the name of dignifying us. This is unintentional, of course, like the racism DiAngelo sees in all whites. Still, the book is pernicious because of the authority that its author has been granted over the way innocent readers think.
White anti-racism can never win because white supremacy is forever entrenched:
We are told that ‘a positive white identity is an impossible goal. White identity is inherently racist. White people do not exist outside of the system of white supremacy’.
That is a pretty strong charge to make against people who, according to DiAngelo, don’t even conceive of their own whiteness. But if you are white, make no mistake: You will never succeed in the “work” she demands of you. It is lifelong, and you will die a racist just as you will die a sinner.
DiAngelo’s doctrine is akin to a religion:
There are many ways in which her arguments mimic the structure of fundamental religion. The book is awash with unfalsifiable claims and contains a plethora of conflicting dogmas and injunctions that are impossible to satisfy. The author leans heavily on the idea that you are born sick—sorry, 'privileged'—and must seek to absolve your sins—sorry, 'whiteness'—for as long as you walk the earth. Also, if you disagree with the tenets of this scripture, that is in itself evidence that the devil—sorry, 'white fragility'—is working through you.
. . . Also, no religion would be complete without its martyrs. A role which DiAngelo also steps up to. ‘Because I am seen as somewhat more racially aware than other whites, people of colour will often give me a pass’ she writes. She momentarily reveals how virtuous she is compared to us mere troglowhites, before heroically declaring how she refuses to let her own woke brilliance go to her head. She informs us that this sort of ‘black acceptance’ only serves to ‘stunt her path of racial growth’ apparently and therefore the black individual praising her is actually guilty of colluding with her racism.
She operates from the now-familiar concern with white privilege, aware of the unintentional racism ever lurking inside of her that was inculcated from birth by the white supremacy on which America was founded. To atone for this original sin, she is devoted to endlessly exploring, acknowledging, and seeking to undo whites’ “complicity with and investment in” racism. To DiAngelo, any failure to do this “work,” as adherents of this paradigm often put it, renders one racist. [See also McWhorter’s “sinner” comment above and “prayer book” and “cult” comments below.]
DiAngelo’s schema is a watertight edifice that can’t be refuted:
And if you find it somewhat irksome to be accused of white supremacy (as any non-racist would), that too is evidence of your ‘white fragility’. Checkmate, said the pigeon.
. . . In a modern twist on experiments concerning the buoyancy of witches, DiAngelo argues that feeling ‘outraged’ by accusations of racism levelled at you is simply further proof of your ‘white fragility’. When faced with accusations of racism, no matter ‘how/when/why’ they occur, you must not only accept them, but be grateful to receive them. The idea that some accusations of racism will be false and/or irrational is simply not even entertained as a possibility by the author. If you don’t think you are a racist, that’s simply because you are a racist, obvs.
DiAngelo also writes as if certain shibboleths of the Black left—for instance, that all disparities between white and Black people are due to racism of some kind—represent the incontestable truth. This ideological bias is hardly unique to DiAngelo, and a reader could look past it, along with the other lapses in argumentation I have noted, if she offered some kind of higher wisdom. The problem is that White Fragility is the prayer book for what can only be described as a cult.
. . . If you object to any of the “feedback” that DiAngelo offers you about your racism, you are engaging in a type of bullying “whose function is to obscure racism, protect white dominance, and regain white equilibrium.”
DiAngelo has no workable solutions for racism:
But the tragedy of all this is that ‘White Fragility’ is nothing more than a pseudo-intellectual misdirection masquerading as compassionate activism. In reality, this performative, humble-bragging white guilt will do nothing to help alleviate inequality or improve the material needs of black people. It’s a lazy way for white racists to alleviate their guilty consciences and continue to avoid doing anything useful for the people they claim to care about—except for treating them like children. Just so long as you are willing to admit how awful and privileged you are (i.e., talk about yourself incessantly), then you don’t have to talk about the real issues facing black communities or think too hard about a complicated issue and its difficult questions.
And herein is the real problem with White Fragility. DiAngelo does not see fit to address why all of this agonizing soul-searching is necessary to forging change in society. One might ask just how a people can be poised for making change when they have been taught that pretty much anything they say or think is racist and thus antithetical to the good. What end does all this self-mortification serve? Impatient with such questions, DiAngelo insists that “wanting to jump over the hard, personal work and get to ‘solutions’” is a “foundation of white fragility.” In other words, for DiAngelo, the whole point is the suffering.
DiAngelo’s book is itself racist:
Throughout this book I was also taken aback by how often the author reveals how little she thinks of black people in general. She doesn’t consider black people in terms of the individual. In fact, she doesn’t accept the notion of ‘the individual’ at all, warning that individualism is a harmful ‘white’ idea proposed to avoid acknowledging the unique evils of ‘whiteness’. The entire book reads as though a white supremacist feels guilt for their prejudices, and confesses in the hope that we will be inspired to do the same, seemingly unaware that normal people do not think about black people and their own skin colour the way she does.
For example, the author argues that a desire to have discussions in a ‘respectful environment’ of ‘non-conflict’ is a very white-centric notion of what it means to be ‘respectful’. Therefore imposing these ‘white’ norms of civility creates a ‘hostile environment’ for non-white people. This is what the racism of low expectation looks like. She goes on to say that ‘feedback on white racism is difficult to give [and] how I am given the feedback is not as relevant as the feedback itself’.
Of course something can be true whether it is argued in a logical and calm manner or whether it is screamed in your face whilst you are trying to eat your lunch. However, the idea that the former approach comes more naturally to white people frames black people as belligerent infants, incapable of attaining the white gold standard of rationality and civility. It’s as offensive as it is infantilising. This is a wall of white supremacy glossed over with a thick coat of guilt.
White Fragility is, in the end, a book about how to make certain educated white readers feel better about themselves. DiAngelo’s outlook rests upon a depiction of Black people as endlessly delicate poster children within this self-gratifying fantasy about how white America needs to think—or, better, stop thinking. Her answer to white fragility, in other words, entails an elaborate and pitilessly dehumanizing condescension toward Black people. The sad truth is that anyone falling under the sway of this blinkered, self-satisfied, punitive stunt of a primer has been taught, by a well-intentioned but tragically misguided pastor, how to be racist in a whole new way.
Now I’m not at all implying that the authors have copied each other; rather, I see this as two keen minds (one in a black body, the other in a white) coming up with similar conclusions. I expect I’ll have my own conclusions after I read the book, but the odds are that I’ll agree with both McWhorter and Knight.
If you’ve already read the book, do weigh in below.
It’s past time for a “what are we reading” thread. I’ve just finished one book and am about 40% through another.
The one I just finished was inspired by my love of the Beatles, and I had high hopes for it because it goes through every Beatles song, analyzing it musically, explaining its roots, and judging it (I love someone judging music!). It’s this one (click to go to the Wikipedia article about it), written by music critic “Ian MacDonald”, whose real name is Ian MacCormick. (He committed suicide in 2003 at age 54.)
I’ll let Wikipedia summarize the format:
The book’s main section comprises entries on every song recorded by the group, in order of first recording date, rather than date of release. Each entry includes a list of the musicians and instruments present on the track, the song’s producers and engineers, and the dates of its recording sessions and its first UK and US releases. MacDonald provides musicological and sociological commentary on each song, ranging in length from a single sentence for “Wild Honey Pie” to several pages for tracks such as “I Want to Hold Your Hand”, “Tomorrow Never Knows” and “Revolution 1”.
The book also contains the essay “Fabled Foursome, Disappearing Decade”, MacDonald’s analysis of the Beatles’ relationship to the social and cultural changes of the 1960s. Later editions of the book added further commentary: the preface to the first revised edition discusses the British art school scene that spawned the Beatles and some of the differences between British and US culture that affect the two nations’ respective views of the group; and the second covers subjects such as the Beatles’ continued popularity into the 21st century, criticism of their lyrics, and the death of George Harrison. The book concludes with a month-by-month chronology of the 1960s (consisting of a table listing events in the Beatles’ career alongside significant events in UK pop music, current affairs and culture), a bibliography, a glossary, a discography, and an index of songs and their keys.
The book was critically acclaimed, though the article notes that Paul McCartney has taken issue with some of the factual statements. I found the introductory essay a bit turgid, for, at least in this book, MacDonald is not an engaging writer. That essay reads more like an academic article than part of a trade book, but it’s still well worth reading. But I found the main section, going through the Beatles’ songs in chronological order of recording, to be fascinating. If nothing else, it astonishes you with the ability of the group (mainly, of course, Lennon and McCartney) to turn out ever-changing songs, from rockers (“Back in the USSR”) through lovely ballads (“Yesterday,” “In My Life”) to songs that had no obvious precursors in rock (“A Day in the Life,” “Penny Lane”). MacDonald’s expertise as a music critic is quite useful in his analyses of the songs, and in understanding why he thinks they’re either good (most of them) or lame. I found myself repeatedly putting down the book to listen to the tracks discussed, and you will, too. It certainly supported my view that the Beatles were, by far, the best rock group that ever put out a song.
The final essay is a splenetic but excellent disquisition on why the Beatles represented the height of rock, which, in MacDonald’s take, peaked in 1966 and has gone downhill ever since. He has no truck with much later rock, and he also says that jazz, too, reached its peak decades ago and is pretty much worthless. Since I agree with these opinions, I recommend the book!
And here’s a book I just got from the publisher, who hoped I’d review it (I may do so here, but I doubt any newspaper or mainstream website will ask). Click on screenshot to go to the Amazon link:
Based on knowing the authors’ work, I expected an enjoyable takedown of wokeness and cancel culture, but instead got something better: a trenchant and scholarly analysis of postmodernism and how it gave rise to Critical Theory: of Race, Gender, Colonialism, and so on. So although I didn’t just have my own biases buttressed (yes, the authors are pretty much on my side), I learned something about philosophy and history. And that is better. I’m only about 40% of the way through so far, deep into the analysis of Critical Race Theory, so I can’t summarize what’s in the last half. But if you want to understand why wokeness is as it is, and where it came from, this book is essential. It’s well written, too.