Are you a racist if you like big butts?

January 8, 2023 • 1:15 pm

How can you not like Kat Rosenfield when she's named "Kat," is Jewish, and has the ability to write a trenchant but also funny review of a book on how white people are not allowed to either like big butts or (if a woman) have one, for it's a form of racist cultural appropriation? Twerking is out too.

The book under consideration is called Butts: A Backstory, and you can click on the cover to go to the Amazon link (yes, the title and graphics are clever):

 

You can read Kat’s Unherd review by clicking on the screenshot:

Kat gives the book a mixed review. The bit about the documented racialization of oversized derrieres, particularly in the 18th and 19th centuries, is pretty horrifying, especially the story of the South African black woman Sarah Baartman, who was exhibited as an inferior specimen of humanity for her rear pulchritude. But if big butts were racially denigrated then, author Heather Radke says that they're welcome now among some white people, and gives examples like Kim Kardashian, Jennifer Lopez, and so on. "Twerking," too, has been taken up by whites. And it's the white appropriation of the butt fetish that is, well, problematic:

Rosenfield:

To be fair, it surely is not Radke’s intention to inculcate racial anxiety in her reader: Butts feels like a passion project, deeply researched and fun to read, offering a deep dive into the history and culture of the human rear end, from the Venus Callipyge (from whose name the word “callipygian” is derived) to Buns of Steel to Sir Mix-A-Lot’s seminal rap celebrating all things gluteal. It is a topic ripe for well-rounded analysis, so to speak. But having been written in the very particular milieu of 2020s America, Butts unfortunately falls victim to the contemporary vogue for viewing all matters of culture through a racial lens. The result is a work that not only flattens the butt, figuratively, but makes the book feel ultimately less like an anthropological study and more like an entry into the crowded genre of works which serve to stoke the white liberal guilt of the NPR tote bag set.

At this point I was starting to fall in love with Rosenfield, but she kept on stoking the ardor with her insistent anti-wokeness:

The concept of cultural appropriation has always struck me as both fundamentally misguided and historically illiterate, arising from a studied incuriosity about both the inherent contagiousness of culture and the mimetic nature of human beings. But when it comes to the remixing of things such as textiles, hairdos or fashion trends across cultures, the appropriation complaints seem at least understandable, if not persuasive: there's a conscious element there, a choice to take what looked interesting on someone else and adorn your own body in the same way. Here, though, the appropriated item is literally a body part — the size and shape of which we rather notoriously have no control over. And yet Radke employs more or less the same argument to stigmatise the appropriation of butts as is often made about dreadlocks or bindis.

The book is insistent on this front: butts are a black thing, and liking them is a black male thing, and the appreciation of butts by non-black folks represents a moral error: cultural theft or stolen valour or some potent mix of the two. Among the scholars and experts quoted by Radke on this front is one who asserts that the contemporary appreciation of butts by the wider male population is “coming from Black male desire. Straight-up, point-blank. It’s only through Black males and their gaze that white men are starting to take notice”. To paraphrase a popular meme: “Fellas, is it racist to like butts?”

But if it’s racist to like big butts, why are so many white people either getting butt implants or taking pride in their derrieres? It seems that Radtke is conflating racism with cultural approprition. Both are “moral errors”, but really it’s only the first.

First of all, buttophilia is not a new thing; there has always been a subset of men who like an ample bottom, and there's nothing wrong with that, for there's a subset of men who favor any given female body part. (Rosenfield notes the theory that the bustles of earlier times were designed as a supernormal stimulus to appeal to those who favor large rears.)

But there’s also no doubt that there’s recent cultural appropriation, as in white rapper Iggy Azalea’s astounding increase in bum girth, one suspects through surgery. If a love of big butts is racist, there’s an awful lot of white people who favor them!  Again, things that are really considered racist are not culturally appropriated, no matter who appropriates them.  I suppose that Radtke’s thesis, although she mentions cultural appropriation, is that women who strive for big butts are, à la Rachel Dolezal, trying to be black, and that is somehow a form of racism. But that doesn’t explain why some white men like big butts.

It’s all a mystery, but Rosenfield still writes well about it:

By the time Butts comes around to analysing the contemporary derriere discourse, its conclusions are all but foregone: the political is not just personal, but anatomical. The book calls multiple women, including Jennifer Lopez, Kim Kardashian, and Miley Cyrus, to account for their appropriation of butts, which are understood to belong metaphorically if not literally to black women. The most scathing critique is directed at the then-21-year-old Cyrus, whose twerking at the VMAs is described as “adopting and exploiting a form of dance that had long been popular in poor and working-class Black communities and simultaneously playing into the stereotype of the hypersexual Black woman”. The mainstreaming of butts as a thing to be admired, then, is the ultimate act of Columbusing: “The butt had always been there, even if white people failed to notice for decades.”

There is also the curious wrinkle in Radke’s section on the history of twerking, which credits its popularisation to a male drag queen named Big Freedia. The implicit suggestion is that this movement style is less offensive when performed by a man dressed as a woman than by a white woman with a tiny butt.

On the other hand, now that the fad is “healthy at any size,” how can there be an ideological stigma against large bottoms?

. . . Ironically, the author of this book is herself a white woman with a large backside, a fact of which she periodically reminds the reader. And yet, Butts thoroughly subsumes its subject matter into the cultural appropriation discourse in a way that implicitly impugns all the non-black women who look — at least from behind — a hell of a lot more like Nicki Minaj than Kate Moss, women who perhaps hoped that their own big butts might be counted among those Sir Mix-a-Lot cannot lie about liking. It is worth noting, too, that the women hung out to dry by this argument are the same ones who other progressive identitarian rhetoric almost invariably fails to account for: the more it indulges in the archetype of the assless willowy white woman, the more Butts excludes from its imagination the poor and working class — whose butts, along with everything else, tend to be bigger. It fails to account, too, for those from ethnic backgrounds where a bigger butt — or, as one of my Jewish great-grandmothers might have said, a nice round tuchus — is the norm.

And the last paragraph is great:

All told, Butts offers an interesting if somewhat monomaniacal look back at the cultural history of the derriere. But as for how to view our backsides moving forward — especially if you happen to be a woman in possession of a big butt yourself — the book finds itself at something of a loss. Those in search of body positivity will not find it here; Radke is firm on this front, that white women who embrace their big butts are guilty of what Toni Morrison called “playing in the dark”, dabbling thoughtlessly with a culture, an aesthetic, a physique that doesn’t really belong to them. The best these women can hope for, it seems, is to look at their bodies the way Radke does in the final pages, with a sort of resigned acceptance: her butt, she says, is “just a fact”. On the one hand, this is better than explicitly instructing women to feel ashamed of their bodies (although implicitly, one gets the sense that shame is preferable to the confident, twerking alternative). But after some 200 pages of narrative about the political, sexual, cultural, historical baggage with which the butt is laden, it feels a bit empty, a bit like a cop-out. It could even be said — not by me, but by someone — that Butts has a hole in it.* [see below]

In the end, it seems as if Radke's message is that it's not really racist to like big butts if you're white, but you'd better not get one or engage in twerking.  That's reserved for ethnicities whose women naturally have large rumps; in other words, whites of a callipygian bent are engaging in cultural appropriation, and that's wrong. But I've never seen a form of cultural appropriation that I'd criticize, and this one is no exception. Let a thousand butts twerk!

The evolution of Iggy Azalea's rear, from an article in (of course) The Sun:

Rosenfield's last line reminds me of a semi-salacious joke that my father used to tell me when, as a young lad, I was tucked in (he always had a witticism at bedtime):

"Jerry, there's a good movie on. The ad says 'Mein Tuchus in two parts. Come tomorrow and see the whole!'"

h/t: Luana

A short book review and two short movie reviews

January 1, 2023 • 1:00 pm

Since I wasn’t able to be in Poland over the holidays, I read books and watched movies. One book I recommend highly is Beartown, loaned to me by a friend (image below links to Amazon site). It’s the first book of a trilogy by Swedish writer Fredrick Backman, and this one’s about the way high-school hockey takes over a small Swedish town and then tears it apart. The language is spare but lovely, especially when the author becomes more philosophical near the end. It starts off with a simple narrative about the local hockey team, but then becomes very dark very fast. I won’t give away the pivotal event of the story.

It’s engrossing, was a best-seller in Sweden and then in the U.S. The theme is about community and loyalty, and I’m considering continuing on to the last two novels of the trilogy. I’d recommend this one highly. It’s not a world classic or a masterpiece, but it’s an absorbing and disturbing read. (Disturbing books are the best books.)

I didn’t go to the movies much last year because of the pandemic, and the University movie series, Doc Films, had a pared-down schedule. I’m catching up online now, and here are two that I watched and liked. I found them because they both appeared on at least two “best films of 2022” lists.

I watched “The Worst Person in the World” because, though I hadn’t heard of it, it was named as the best movie of the year on Esquire’s tally of the 35 best. Here’s what the authors have to say:

Granted, it was only February when I saw this, but director Joachim Trier's wonderfully humane Norwegian import and nominee for last year's Best Foreign Language Film Oscar is still, hands-down, the best film of 2022. I've blown hot and cold on some of Trier's earlier films, but this one is an instant classic in large part due to Renate Reinsve's luminous performance as Julie—an aimless Oslo woman on the cusp of 30 who's trying to figure herself out in ways that are so funny, sad, and realistically messy that it feels like we're spying on someone we've known for years. The title might give you the impression that Julie is trouble, leaving chaos and broken hearts in her wake. But the title actually isn't about her. Plus, she's far more complex than that implies anyway. Told in 12 chapters plus a prologue and an epilogue, The Worst Person in the World is anything but neat and orderly. Like life, it's complicated, unpredictable, bittersweet and indecisive. It's also brimming with so much empathy for Trier's female lead that you can't help but fall in love with her even when you know she's making mistakes. After all, who are we to judge? Trier tracks Julie's relationships with men, but it's far more interested in getting inside of her head and figuring out what makes her tick, which is a rarity in Hollywood films. We'll see if anything in the coming months can match Trier and Reinsve's masterpiece, but they've set an incredibly high bar.

That pretty much says it all, but I wouldn't rate this as the best even among the few movies I've seen this year (that would be "Tár"). Renate Reinsve turns in a creditable performance as the main character, Julie, but I don't understand all the critics' hullabaloo. (It was rated 96 by the critics and 86 by the audience on Rotten Tomatoes.) Julie is aimless, flaky, and lovable, and makes a mess of her life, especially when dealing with men, but that aimlessness itself, and the attendant sadness and tragedy, don't carry the picture.  To my mind, Julie wasn't sufficiently developed to be absorbing, and the reviewers seemed to conflate flakiness and confusion with complexity and depth. I would rate this as a good+ movie, but the best? No way. But watch it for yourself. Here's a trailer:

"Kimi", directed by Steven Soderbergh, was better, and though also not a classic, it is clever, absorbing, and a crime thriller to boot. Kimi is an AI device like Alexa, made by a company that employs the protagonist Angela, played very well by Zoë Kravitz. Angela is an extremely introverted and agoraphobic woman who almost never leaves her flat, but her job can be done from home: she listens in on requests to Kimi to figure out how to improve the AI device. By accident she hears a crime being committed, and it's her attempts to report the crime, and the opposition she faces from a criminal conspiracy, that make for an edge-of-your-seat experience.  I'm surprised I liked this better than the one above, as I usually like long, slow movies with character development and not that much action.  This movie gets a "very good" from me, and I recommend that you see it if you get the chance.

I also watched a movie that was on many lists as a "best of 2022": "Everything Everywhere All At Once", starring Michelle Yeoh, but I found it tricked out and tedious, and stopped watching 45 minutes in. (It's about the multiverse.) Many of my friends liked it, so I'll just say, "Go see it and report in", or report below if you've already seen it.

Now it’s your turn: which movies did you like best that were made last year?

The Atlantic recommends six long books

December 28, 2022 • 12:15 pm

If I like a book, I want it to be LONG. A thousand pages means nothing to me if the book is a good one. On the other hand, I know that many people beef about long books—an attitude I fail to understand. If the book is absorbing, or a good story, then why would you want it to end so soon? It’s like A. J. Liebling’s explanation of why he was a gourmand and not a gourmet: if you like food, you will like a LOT of food.

Well, I know I’m in the minority here, but I just found an article in The Atlantic that recommends LONG books (I also just remembered that I’ve had an online subscription to the magazine for five months, and had forgotten about it!)

Click to read (I don’t know if it’s paywalled):

It turns out that I’ve already read four of these. Guess which of the six I haven’t read?

Here’s Masad’s list, but first the intro:

Literature should not be something we approach out of a sense of duty. But many lengthy, complex, and well-known books really are that good. Like taking a long hike or following a tricky recipe, engaging with writing that challenges you can be deeply satisfying. Each of the books below is demanding in its own way, and reading or rereading them can be a fascinating, beautiful, and rewarding experience.

The Tale of Genji, by Murasaki Shikibu (translated by Dennis Washburn)

Moby-Dick, by Herman Melville

Vanity Fair, by William Makepeace Thackeray

Middlemarch, by George Eliot

Almanac of the Dead, by Leslie Marmon Silko

Infinite Jest, by David Foster Wallace

Of the four I’ve read (and this gives one away), I found Middlemarch the best, but all the ones I’ve read are good.

But why do they leave out Ulysses, or Anna Karenina, or The Brothers Karamazov? (Actually Masad did read and enjoy Joyce's novel, though he said he initially read it out of a sense of duty.)

Here’s a book (or rather, a bunch of books) that I tried to read out of a sense of duty, and couldn’t get through even the first volume: Remembrance of Things Past.  It was simply too fricking turgid!  Of course that means I can never enter a “Summarize Proust” contest (first five minutes of the Python episode below):

Two (actually five) excellent books to consider

December 15, 2022 • 12:30 pm

I’m working on a long writing piece, so have had to curtail posts here for a few days. We’ll be back to normal tomorrow.

I don’t think I’ve mentioned this book before, but it’s in a tie for second place among all the fiction I’ve read this year. I’ve reviewed the other two books before: All the Light We Cannot See, a 2014 book by Anthony Doerr that’s the best new fiction I’ve read in several years (my review here), and the book that ties with the one below as second-best, Hamnet, a 2020 novel by Maggie O’Farrell(my review here).  Either of those, or Horse, below, will serve you very well. But if you can read only one book, it must be the one by Doerr, which won the fiction Pulitzer Prize in 2020.

Horse, which came out this spring, is a terrific work of the imagination, weaving past and present around a single famous (and real) nineteenth-century racehorse, Lexington. The theme is anti-black bigotry, which in the "past" section involves Jarret, the enslaved black trainer of the racehorse, and in the "present" section involves a woman zoologist, Jess, and her black boyfriend Theo. Theo finds a discarded painting of the horse in a Washington, D.C. trash heap, figures out which horse it depicts, and uses it for his Ph.D. thesis in art history. That's how he meets Jess, who in fact is the keeper of the horse's bones at the Smithsonian, but didn't know that those bones belonged to Lexington.

And so the past interweaves with the present in short, compelling chapters, with nearly all the fictional characters based on real ones. The story is mesmerizing, though it swerves a bit into wokeness and preachiness at the end, and there's a tragedy that I won't reveal. The horse, by the way, is an almost humanlike character in the book, since his personality is described in great and winsome detail.

You don’t have to be a fan of horses to love this book, and if you want to read more, go to the laudatory review at the Washington Post.  Click on the screenshot below to go to the Amazon site.

Below is the real Lexington—one of the few existing photos. As Wikipedia notes, “Lexington (March 17, 1850 – July 1, 1875) was a United States Thoroughbred race horse who won six of his seven race starts. Perhaps his greatest fame, however, came as the most successful sire of the second half of the nineteenth century; he was the leading sire in North America 16 times, and broodmare sire of many notable racehorses.”

The second book is not in my top three, but is still an excellent read:  Empire of Pain, by Patrick Radden Keefe. This book came out in April of last year, and was, like Horse, recommended to me by my editor at Viking/Penguin, who has impeccable taste in books. It is the story of the Sackler family and the trio of Jewish brothers who founded Purdue Pharma, the company that devised and manufactured OxyContin, a synthetic opioid that, as you may know, got gazillions of Americans addicted to painkillers. Many of them died. The Sacklers, who made billions on this one pill, were also philanthropists who gave millions to museums and universities, always insisting that the family name go on the donated wing or building. They—and especially the eldest brother Arthur—were hard-driving and arrogant, but also somewhat polymathic (Arthur, for instance, became an autodidactic expert in Chinese art and built up a huge collection).

The family advertised OxyContin in ways that they knew would get people addicted, and ignored the damage caused by their drug. They were able to fight off lawsuits for years, but finally took a hit (not a big hit given the family's billions) and declared bankruptcy. But all the Sacklers wound up fine, living it up in mansions around the world. The message: there was no justice.

An immense amount of reportage went into this book. It reminds me a lot of Bad Blood, the fantastic book by reporter John Carreyrou that exposed the duplicity of Elizabeth Holmes and her partner Sunny Balwani, the pair who ran the bogus startup Theranos. (Both will be going to jail for a long time, partly because Carreyrou's book revealed their perfidy.)  Bad Blood began with Carreyrou's reporting in the Wall Street Journal, which eventually became a mesmerizing piece of nonfiction (read it!). Empire of Pain started with an article as well: a 2017 piece by Keefe in The New Yorker. It's extremely well written, and will introduce you to a family you may not have heard of, and to the enormous damage caused by their greed.

Click the screenshot to go to the Amazon site.

Now, as General Patton said, “You know what to do.”  Let us know what you’re reading, how you like it, and what books you’d especially recommend.

Julian Baggini reviews a new book on agency that ignores the issue of free will

December 7, 2022 • 1:00 pm

Philosopher and author Julian Baggini, a nonbeliever who wrote the Very Short Introduction to Atheism for Oxford University Press (and about 20 other books), has a nice review in the Wall Street Journal of a new book on cognitive and behavioral autonomy, with a thesis that touches on issues of free will. The book is Freely Determined: What the New Psychology of Self Teaches Us About How to Live, by Kennon M. Sheldon.

You can read Baggini’s review by clicking on the screenshots below; if it’s paywalled, I’ll quote enough to give you the tenor of the piece:

Sheldon himself is a professor of psychological sciences at the University of Missouri in Columbia, and uses the book to push his own pet theory: Self-Determination Theory (SDT). SDT has been around for about fifty years, and is about people’s motivations and self-determination. Sheldon asserts that your wellbeing is higher if you think you are the agent who produces your own actions, and, apparently, that you have the ability to freely will your actions, or to will one of several possible actions. The latter, of course, bears on contracausal (“I-could-have-done-otherwise”) free will.

As Baggini notes:

Mr. Sheldon’s interest in free will is rooted in his work in Self-Determination Theory, which he calls “the world’s most comprehensive and best-supported theory of human motivation.” A core tenet of SDT is that “people need to experience themselves as the causal source and origin of their own behavior rather than feeling controlled and determined by external forces.” When people feel autonomous, they are more content and successful. When they feel they have no control, they become morally cynical. After all, if we’re not in control of what we do, how can we be blamed for wrongdoing?

Most of Mr. Sheldon’s 10 chapters constitute a compelling and clear introduction to what SDT teaches us about nurturing a sense of autonomy. The theory gives us a rich and powerful understanding of motivation—how to harness it and avoid undermining it. Most notably, the theory points to the importance of intrinsic motivation: the desire to do something for its own reward, not for any instrumental benefit.

And indeed, Sheldon may be right: we may do best if we feel that we are deciding our own actions rather than being compelled by the desires of other people or, ultimately, by forces beyond our control.  If you're a hard determinist, like me (and I think Baggini, though I'm not sure), you realize that we aren't really able to decide one course of action versus another: that is decided for us by the laws of physics. Still, I have no beef with the idea that we feel better entertaining the notion that we can indeed choose one course rather than another. Indeed, natural selection may have favored, for various reasons that I lack the will, time, and space to discuss, the feeling that we are making free choices. But what makes us feel good isn't necessarily true; we can have our feeling of agency and feel better for it, even if that agency is illusory.

The problem, which Baggini homes in on, is that author Sheldon seems to think that SDT and the denial of contracausal free will are incompatible. That is, if you feel that you have agency, then it must be true that you have agency:

When it comes to the metaphysical realm, Mr. Sheldon’s mistake is to think that SDT and the philosophical denial of ultimate free will are incompatible. That is only true of the most popular, if simplistic, threat to his model of human freedom: the extreme reductionism claiming that reality can be completely described in the language of physics; that consciousness is just the humming of the neural machine; and that everything is strictly predetermined.

Mr. Sheldon sees off this crude challenge with skill and clarity. A key to his argument is the idea of a “grand hierarchy of human reality”—a scale of human understanding and its modes, from micro to macro. Physics sits at the bottom, with the sciences of chemistry, microbiology and neuroscience stacked above it. Every time you ascend a level you encounter reality at a different order of organization. As Mr. Sheldon writes: “There is a kind of ‘functional autonomy’ at each new level, which builds upon what is given below. This means that each new level affects the world in a way that is partially independent of the levels below.”

Ascend the hierarchy further and you get to the social sciences: varieties of psychology, sociology and anthropology. (He might have added philosophy.) You can’t understand human societies, he observes, without reference to their values, or human actions without desires and intentions. The reductionist assumption—that thoughts and feelings are causally inert—is invalid.

The issue is that invoking "reductionism" doesn't touch the question of libertarian free will. A "grand hierarchy" must still show that each level is compatible with the one below it, even if it couldn't be predicted from the one below it. And at the bottom sit the hard laws of physics, which ramify upwards to produce psychology and anthropology. Just because you can't predict how human societies work from the laws of physics doesn't mean that those societies aren't the ineluctable results of the laws of physics. It's a fundamental error to deny reductionism just because we can't predict how phenomena ramify. What would overturn reductionism is the observation that new phenomena arise at higher levels that aren't COMPATIBLE with what's going on at lower levels. And to a determinist, this just doesn't happen.

And so, argues Baggini—and I agree—this palaver about the benefits of feeling empowered to decide (which is a real feeling and may be beneficial), combined with the denial of reductionism, leads Sheldon not to confront the problem of libertarian free will but to ignore it completely.

This is all persuasive, but it leaves the deeper metaphysical problem of free will untouched. Human beings may make choices that are not predictable or even completely determined. [JAC: I presume that Baggini’s referring here to a fundamental indeterminacy, as far as we know, of quantum mechanics.] The hard question of free will is whether, at the time of making a choice, we could have done otherwise (leaving aside randomness or chance). The most popular position in philosophy today is compatibilism: It says that, although we can’t do other than what we do, we still have a valuable form of free will that allows us to maintain ideas of autonomy, control, responsibility and blame. In short, we may not be as free as we think we are, but we are free enough.

Note that Baggini explains that compatibilism accepts the fact that we lack contracausal free will. What is "compatible" in compatibilism is the absence of "true" (what I call "contracausal") free will and some other definition of free will confected by whichever philosopher is pushing compatibilism. But Sheldon doesn't even mention compatibilism, though he alludes to a form of it—a form that, to me, seems to deny contracausal free will entirely:

In “Freely Determined,” compatibilism doesn’t get a single mention. Instead Mr. Sheldon leans heavily on a recent book by Christian List (“Why Free Will Is Real”) in which it is argued that free will requires three capacities: considering the possibilities for action; forming an intention; and acting on a chosen possibility. But whereas Mr. List delves into the complexities behind these seemingly simple check-boxes, Mr. Sheldon merely helps himself to the comforting conclusion that, since human beings possess all three capacities, we are free.

In the end, Baggini recommends the book but criticizes the author for evading a truth that really bothers people: our inability to decide or behave other than the way we do:

Fortunately, what Mr. Sheldon has to tell us about motivation and human action remains valuable, however we resolve the philosophical problem of free will. Readers will get a lot out of his book—as long as they recognize that it doesn’t so much solve the problem as deftly swerve around it.

Ironically, what first got me thinking about determinism, and ultimately rejecting contracausal free will, was a paper by biochemist Anthony Cashmore in PNAS, written as a freebie when he was elected to the National Academy of Sciences. I think it's well worth reading, and it has the word "swerve" in it, referring to the Lucretian swerve. Read "The Lucretian swerve: The biological basis of human behavior and the criminal justice system." (It's free online.)

A critique of Yuval Harari’s popular-science writing

October 26, 2022 • 10:00 am

I’ve never read any of Yuval Harari’s wildly popular books (Sapiens, Homo Deus, and 21 Lessons fo the 21st Century), so I can’t judge those. Surely many readers have read one or more, however, so I’ll call your attention to this critique of his work in Current Affairs, and if you’ve read his work, do weigh in below.

The critical reviewer is Darshana Narayanan, who describes herself as a neuroscientist and journalist, adding “evolutionary biologist” in the piece below.  Her beef with Harari is that his books aren’t fact-checked, and so they’re full of howlers and misinformation. Since important people take Harari seriously, she sees this as a significant problem, especially when he tries to predict where humanity is going.

I’m pretty sure that his books, like nearly all popular-science books, are not fact checked. None of mine were, but when you sign the contract with the publisher you accept responsibility for any errors. Thus it’s up to the author’s sense of responsibility to ensure that a book is factually accurate.  Many publishers, however, do get “trade” (i.e. “popular”) science books reviewed by outside experts. I do this regularly.  That ensures that at least one set of expert eyes have vetted the manuscript, but we do this for free and there’s not much impetus to do fact-checking or line editing unless something sticks out as an egregious error. The job of such a reviewer is to give a broad take on the book, and point out places that need changing.

My take on Narayanan’s critique—and again, I haven’t read Harari’s books—is that it’s a mixed bag. Some of the false statements she highlights are trivial, while others are more serious. Yet she herself, despite being a biologist, makes some scientific mistakes, which I’ll highlight below.  This, then, is a review of a review. Click the screenshot below to read Narayanan’s piece:

Narayanan begins by noting both Harari’s popularity and his influence.

. . . consider: among Harari’s flock are some of the most powerful people in the world, and they come to him much like the ancient kings to their oracles. Mark Zuckerberg asked Harari if humanity is becoming more unified or fragmented by technology. The Managing Director of the International Monetary Fund asked him if doctors will depend on Universal Basic Income in the future. The CEO of Axel Springer, one of the largest publishing houses in Europe, asked Harari what publishers should do to succeed in the digital world. An interviewer with The United Nations Educational, Scientific and Cultural Organization (UNESCO) asked him what effect COVID would have on international scientific cooperation. In favor of Harari’s half-formed edicts, each subverted their own authority. And they did it not for an expert in any one of their fields, but for a historian who, in many ways, is a fraud—most of all, about science.

That’s a strong claim, and rather than saying he’s a “fraud” (based on what Narayanan says), I would say he’s “careless”. But he’s sold a lot of books, and if he gets $1 in royalties per copy, Harari’s a rich man:

It scares me that, to many, this question appears to be irrelevant. Harari’s blockbuster, Sapiens, is a sweeping saga of the human species—from our humble beginnings as apes to a future where we will sire the algorithms that will dethrone and dominate us. Sapiens was published in English in 2014, and by 2019, it had been translated into more than 50 languages, selling over 13 million copies. Recommending the book on CNN in 2016, president Barack Obama said that Sapiens, like the Pyramids of Giza, gave him “a sense of perspective” on our extraordinary civilization. Harari has published two subsequent bestsellers—Homo Deus: A Brief History of Tomorrow (2017), and 21 Lessons for the 21st Century (2018). All told, his books have sold over 23 million copies worldwide. He might have a claim to be the most sought-after intellectual in the world, gracing stages far and wide, earning hundreds of thousands of dollars per speaking appearance.

She coins a term to describe Harari: a “science populist”, which is pejorative:

Yuval Harari is what I call a “science populist.” (Canadian clinical psychologist and YouTube guru Jordan Peterson is another example.) Science populists are gifted storytellers who weave sensationalist yarns around scientific “facts” in simple, emotionally persuasive language. Their narratives are largely scrubbed clean of nuance or doubt, giving them a false air of authority—and making their message even more convincing. Like their political counterparts, science populists are sources of misinformation. They promote false crises, while presenting themselves as having the answers. They understand the seduction of a story well told—relentlessly seeking to expand their audience—never mind that the underlying science is warped in the pursuit of fame and influence.

Note the psychologizing and the reasons for his sloppiness: his desire to present a simplified but appealing narrative. I generally stay away from psychologizing, but it’s hard to imagine that Harari doesn’t enjoy his sudden fame. Whether that’s the reason for his two subsequent books, his lectures, or his discussions with Notables I cannot say.

Narayanan decided to fact-check Sapiens, Harari's first book, and, to put it mildly, gives it a low grade. Yes, she finds errors, but they range from trivial to serious.  Errors are errors, but were I Narayanan, I'd concentrate on the serious ones: ones that make a difference in what people think. I've classified the errors she highlights as "serious", "middling", or "trivial". I'll take them in reverse order.

Trivial errors. 

Here’s a statement about top predators that, Narayanan says accurately, is not exactly correct and is confusing to boot. Her writing is indented:

Consider “Part 1: The Cognitive Revolution,” where Harari writes about our species’ jump to the top of the food chain, vaulting over, for example, lions.

“Most top predators of the planet are majestic creatures. Millions of years of dominion have filled them with self-confidence. Sapiens by contrast is more like a banana republic dictator. Having so recently been one of the underdogs of the savannah, we are full of fears and anxieties over our position, which makes us doubly cruel and dangerous.” 

Harari concludes that, “many historical calamities, from deadly wars to ecological catastrophes, have resulted from this over-hasty jump.”

As an evolutionary biologist, I have to say: this passage sets my teeth on edge. What exactly makes for a self-confident lion? A loud roar? A bevy of lionesses? A firm pawshake? Is Harari’s conclusion based on field observations or experiments in a laboratory? (The text contains no clue about his sources.) Does anxiety really make humans cruel? Is he implying that, had we taken our time getting to the top of the food chain, this planet would not have war or man-made climate change?

Yes, Harari speculates here beyond what we know, but this is a purple passage of little import, or so I think.

Here’s another error that a biologist shouldn’t have made, because it’s careless and false. But it’s also not a major error:

My scientific colleagues take issue with Harari as well. Biologist Hjalmar Turesson points out that Harari’s assertion that chimpanzees “hunt together and fight shoulder to shoulder against baboons, cheetahs and enemy chimpanzees” cannot be true because cheetahs and chimpanzees don’t live in the same parts of Africa. “Harari is possibly confusing cheetahs with leopards,” Turesson says.

This is wrong but doesn’t affect Harari’s point.

Middling errors.

Here’s one:

But [Harari’s] errors unfortunately extend to our species as well. In the Sapiens chapter titled “Peace in our Time,” Harari uses the example of the Waorani people of Ecuador to argue that historically, “the decline of violence is due largely to the rise of the state.” He tells us the Waorani are violent because they “live in the depths of the Amazon forest, without army, police or prisons.” It is true that the Waorani once had some of the highest homicide rates in the world, but they have lived in relative peace since the early 1970s. I spoke to Anders Smolka, a plant geneticist, who happens to have spent time with the Waorani in 2015. Smolka reported that Ecuadorian law is not enforced out in the forest, and the Waorani have no police or prisons of their own. “If spearings had still been of concern, I’m absolutely sure I would have heard about it,” he says. “I was there volunteering for an eco-tourism project, so the safety of our guests was a pretty big deal.” Here Harari uses an exceedingly weak example to justify the need for our famously racist and violent police state.

This is a middling error because the Waorani were certainly once more violent than they are now, though we are not told whether they're still more violent than an "average" society. As Steve Pinker pointed out in Better Angels, violence has declined worldwide in the past few hundred years, certainly in part because of organized law enforcement and punishment accompanying a better morality. Narayanan devalues her criticism by making the last statement about Harari's justification "for our famously racist and violent police state." Is that in fact what he's doing? And one could certainly argue whether this characterization of modern law enforcement is accurate or, indeed, worse than the kind of crime and punishment in indigenous "tribal" societies.

Narayanan admits that these errors are individually "inconsequential" but argues that they add up to a crumbling edifice of a book:

These details could seem inconsequential, but each is a crumbling block in what Harari falsely presents as an inviolable foundation. If a cursory reading shows this litany of basic errors, I believe a more thorough examination will lead to wholesale repudiations.

Serious errors.

I consider these more serious not just because anybody with the least expertise could spot them, but also because they are misleading about the status and fate of humans in nature. She quotes Harari on languages and then takes the statement apart:

Next, take the issue of language. Harari claims that “[many] animals, including all ape and monkey species, have vocal languages.”

Anybody who knows how human speech differs from that of other species will recognize the qualitative difference between human symbolic language and the non-symbolic language of other species, as does Narayanan:

Yet, in spite of all their similarities to humans, monkeys cannot be said to have a “language.” Language is a rule-bound symbolic system in which symbols (words, sentences, images, etc.) refer to people, places, events, and relations in the world—but also evoke and reference other symbols within the same system (e.g., words defining other words). The alarm calls of monkeys, and the songs of birds and whales, can transmit information; but we—as German philosopher Ernst Cassirer has said—live in “a new dimension of reality” made possible by the acquisition of a symbolic system.

Scientists may have competing theories on how language came to be, but everyone—from linguists like Noam Chomsky and Steven Pinker, to experts on primate communication like Michael Tomasello and Asif Ghazanfar—is in agreement that, although precursors can be found in other animals, language is unique to humans.

‘Tis true.

Here’s an error Narayanan put in one of the three footnotes to her article:

A similar excerpt from Harari's 2017 book Homo Deus: A Brief History of Tomorrow: "Once it becomes possible to amend deadly genes, why go through the hassle of inserting some foreign DNA, when we can just rewrite the code and turn a dangerous mutant gene into a benign version? Then we might start using the same mechanism to fix not just lethal genes, but also those responsible for less deadly illnesses, for autism, for stupidity and for obesity."

We can’t just “rewrite the code”, as opposed to the CRISPR method of changing the sequence of a gene, because the genetic code is the dictionary telling us what amino acid in a protein is coded for in the DNA’s RNA product by each three-base-pair codon. That code is fixed, and I don’t understand how Hararis says we can “rewrite that code.” There’s no way that this seems possible.

And one more: Harari playing expert about pandemics:

Now here’s what Harari had to say about pandemics in his 2017 book Homo Deus: A Brief History of Tomorrow.

“So in the struggle against calamities such as AIDS and Ebola, scales are tipping in humanity’s favor. … It is therefore likely that major epidemics will continue to endanger humankind in the future only if humankind itself creates them, in the service of some ruthless ideology. The era when humankind stood helpless before natural epidemics is probably over. But we may come to miss it.”

I wish we had come to miss it. Instead, over 6 million of us have died of COVID as per official counts, with some estimates putting the true count at 12-22 million. And whether you think SARS-CoV-2—the virus responsible for the pandemic—came directly from the wild, or through the Wuhan Institute of Virology, we can all agree that the pandemic was not created in “service of some ruthless ideology.”

Harari could not have been more wrong; yet, like a good science populist, he continued to offer his supposed expertise by appearing on numerous shows during the pandemic. He appeared on NPR, talking about "how to tackle both the epidemic and the resulting economic crisis." He went on Christiane Amanpour's show to highlight the "key questions emerging from the coronavirus outbreak." Then it was on to BBC Newsnight, where he offered "a historical perspective on the coronavirus." He switched things up for Sam Harris's podcast, where he told us about "the future implications" of COVID. Harari also found time to make an appearance on Iran International with Sadeq Saba, on the India Today E-Conclave Corona Series, and a slew of other news channels around the world.

Well, I wouldn’t really call that a serious ERROR, but I would call it an unfounded prognostication. After all, many people would have—and probably did have—the same opinion. What’s potentially harmful is his appearing as an expert on the covid pandemic, particularly if he didn’t admit that he was wrong and say why he was wrong. It becomes most serious if, when Harari went on those shows, he said stuff that was dangerous, but Narayanan doesn’t give us an example of any serious misinformation imparted by Harari.

In general, then, Narayanan singles out some errors in Sapiens, but not a lot of them, and most of them appear relatively inconsequential. Whether they are harmful or not depends on whether they could affect people’s well being, which is an issue separate from simply conveying misinformation. But Narayanan says a couple of dubious things in her critique, too:

Narayanan’s own errors. 

Narayanan has a beef with the "gene-centric" view of evolution most famously promulgated by Richard Dawkins, and this leads to some errors of her own. First, her animus:

Harari’s speculations are consistently based on a poor understanding of science. His predictions of our biological future, for instance, are based on a gene-centric view of evolution—a way of thinking that has (unfortunately) dominated public discourse due to public figures like him. Such reductionism advances a simplistic view of reality, and worse yet, veers dangerously into eugenics territory.

I don’t think Harari comes close to eugenic territory in his speculations below—after all, he’s talking about what is possible, not what we should be doing—and Narayanan herself gets things a bit wrong (she’s also wrong about the weakness of the gene-centric view of evolution: see below):

In the final chapter of Sapiens, Harari writes:

“Why not go back to God’s drawing board and design better Sapiens? The abilities, needs and desires of Homo sapiens have a genetic basis. And the sapiens genome is no more complex than that of voles and mice. (The mouse genome contains about 2.5 billion nucleobases, the sapiens genome about 2.9 billion bases, meaning that the latter is only 14 percent larger.) … If genetic engineering can create genius mice, why not genius humans? If it can create monogamous voles, why not humans hard-wired to remain faithful to their partners?”

It would be convenient indeed if genetic engineering were a magic wand—quick flicks of which could turn philanderers into faithful partners, and everyone into Einstein. This is sadly not the case. Let’s say we want to become a nonviolent species. Scientists have found that low activity of the monoamine oxidase-A (MAO-A) gene is linked to aggressive behavior and violent offenses—but in case we are tempted to “go back to God’s drawing board and design better Sapiens” (as Harari says we can), not everyone with low MAO-A activity is violent, nor is everyone with high MAO-A activity nonviolent. People who grow up in extremely abusive environments often become aggressive or violent, no matter what their genes. Having high MAO-A activity can protect you from this fate, but it is not a given. On the contrary, when children are raised in loving and supportive environments, even those with low MAO-A activity very often thrive.

I’m not an expert on the effect of low MAO-A, but mutations in the gene that cause this are associated with a number of behavioral and neurological disorders, including aggressive behavior.  But who is claiming that there can be variations in the degree of association of low levels of this enzyme and behavior.  On average, lowering the level through engineered mutations would create a number of these disorders, but perhaps not in everyone, or, when present, not to the same degree. Nor is anyone claiming that there are not environmental causes of aggression unassociated with MAO-A levels.  Here Narayanan is attacking a straw theory: the view that everyone with a mutation (designed or not) has the same phenotype (here, level of aggression). Of course there are other causes of behavioral changes beyond MAO-A levels.

Here’s another of Narayanan’s misleading statements about genetics:

Our genes are not our puppet masters, pulling the right strings at the right time to control the events that create us. When Harari writes about altering our physiology, or “engineering” humans to be faithful or clever, he is skipping over the many non-genetic mechanisms that form us.

For example, even something as seemingly hardwired as our physiology—cells dividing, moving, deciding their fates, and organizing into tissues and organs—is not engineered by genes alone. In the 1980s, scientist J.L. Marx conducted a series of experiments in Xenopus (an aquatic frog native to sub-Saharan Africa) and found that “mundane” biophysical events (like chemical reactions in the cells, mechanical pressures inside and on the cells, and gravity) can switch genes on and off, determining cell fate. Animal bodies, he concluded, result from an intricate dance between genes, and changing physical and environmental events.

This is a common error, but one that should be caught by any biologist.  And it is this: even the responses by which environmental cues turn genes on and off are themselves coded by genes. After all, all of development involves both the external environment and the internal environment—the latter a network of evolved changes coded by genes—turning genes on and off. Ergo, stuff like the assortment of chromosomes, the fate of cells, biochemical reactions, and so on, are orchestrated by natural selection so that the "internal environment" produces a vehicle—an organism—that itself is adaptive. Yes, the external environment can turn genes on and off—the changing color of Arctic mammals or the length of your cat's fur as winter comes are examples—but it's a mistake to argue that there's a strict difference between "hardwired physiology" and "changes in the internal environment". We have evolved so that genes produce products that, acting in the body, turn other genes on and off, and in an adaptive way. Unless you're a Lamarckian, plasticity largely comes down to gene effects, even if activated by changes in the external environment. The whole system is evolved, and that means that genes affect each other through "mundane" biophysical events.

In short, Narayanan’s attack on “hardwiring”—which may be ideologically based on from her “puppet master” comments and denigration of the gene-centric view of evolution—is misguided. The gene-centric view of evolution is in fact the most sensible way to look at evolution, and doesn’t require denying any effects of the environment. After all, the going definition of evolution is “changes om the proportions of gene variants in a population over time.” But even Richard Dawkins is not going to say, and hasn’t, that we’re puppets on genetic strings, with everything determined entirely by our DNA.

The rest of Narayanan’s article involves stuff that bores me to tears: artificial intelligence, the view of organisms as algorithms, and the growing group of “Dataists”: people who “perceive the entire universe as flows of data.”  This is simply my own predilection, as I know others are keenly interested in such things.

As for Narayanan’s criticisms of Harari’s tendency to predict things without the necessary basis for prediction, well, I can’t speak to that as I haven’t read his book.  But based on Narayanan’s critique, I do see that Harari appears to have been remiss in fact-checking. Whether that laxity is serious or not I cannot tell. But perhaps readers will below.

Here’s Narayanan reprising her critiques of Harari in a 38-minute interview with R. J. Eskow. There’s a lot of duplication with the stuff in her article:

“Hamnet”—a brief review

October 16, 2022 • 12:15 pm

Hamnet, the ninth book by Maggie O'Farrell, was recommended by more than one reader the last time we had our occasional book go-round, and I'm glad it was mentioned.  I've just finished it, and recommend it very highly. (Click the screenshot below to see the Amazon site.)  I'd give it an A, and though it's not quite the tour de force of the last reader-recommended book I essayed (Anthony Doerr's All the Light We Cannot See, a Pulitzer winner that you MUST read), I'd certainly place Hamnet above nearly all the contemporary fiction I've read lately. I'm surprised I hadn't heard of the book, though it did win the National Book Critics Circle Award in 2020.

In 16th century England the names “Hamnet” and “Hamlet” were interchangeable, and the Hamnet of the title is in fact the real name of William Shakespeare’s son, who died of unknown causes at age 11 in 1596. The real Hamnet had a twin sister named Judith, who died years later, in 1662.  Anne Hathaway, Shakespeare’s wife, also had an older girl, Susanna, who lived to be 66. The bard himself died at age 52—tragically early.

Here is the baptismal record of Hamnet and Judith in 1585, and below that the death record of Hamnet in 1596:

We know very little about Shakespeare or his family, but these are the facts upon which O'Farrell builds her novel, a novel fantastic in both senses.  While William hangs heavily over the novel, his name is never mentioned: he's referred to as "Hamnet's father", or "Mary's son".  This is deliberate, for O'Farrell centers the novel on his family. Its center is the playwright's wife Agnes (renamed from Anne), a sprite of a woman who has the gift of divination and healing with medicinal herbs, and a woman who prefers to crawl into the forest to birth her three children alone rather than stay at home with a midwife. The life of the extended family unreels while William is mostly off stage, having moved early to London to do what he really wants but can't do in Stratford: write and put on plays.

O’Farrell did substantial research for the book, so that, for example, we see Shakespeare’s father as the glove-maker he was, with the nascent bard continually pressured to learn that trade. (The book leaps backwards and forwards in time.) The twins are inseparable—until the plague takes Hamnet, leaving his entire family inconsolable (William is in London, four days away, and doesn’t get home in time for his son’s death. (The short section on how the flea that bit Hamnet came from Italy is a novel in itself.) Hamnet’s death, with its own supernatural overtones, is heartbreaking; its effect on Agnes is the complete evisceration of her character, so that she can do nothing but lie abed and ponder her dead son.  Judith is hard hit as well, but we don’t learn the depths of the father’s despair until he writes a play with the title being a variant of Hamnet’s name.

It is when Agnes learns of this play and its title that the novel reaches its apogee, but I won't reveal what happens lest I spoil the narrative. All I can say is that this is a book in which you can run the gamut of emotions on nearly every page. Someone asked me if the book was a "sad one", and I didn't know what to answer. It is sad in the way life is sad, leavened with moments of wonder and joy, and the sadness is redeemed in the end.  Yes, there's a bit of the supernatural in Agnes's divinations (she can tell a lot about someone merely by pinching the web of skin between their thumb and forefinger), but by and large it's about life in 16th-century rural England, all overshadowed by an absent father scribbling away in London.

I loved the book and recommend it highly, just a notch in quality behind All the Light We Cannot See, but I still give it an A. I’m surprised that it hasn’t been made into a movie, for it would lend itself well to drama. I see now that in fact a feature-length movie is in the works, and I hope they get good actors and a great screenwriter.

I am now reading another book recommended by a reader, and the comments on posts like this one have been a valuable source of reading material—both fact and fiction. If you please, do tell us below what you’re reading now, or what you’ve read lately, and how highly you recommend it. Don’t forget any books you want to warn us off of!

Did The Selfish Gene damage public understanding of biology?

October 9, 2022 • 9:30 am

This morning Matthew sent me a tweet by "The Dialectical Biologist" (TDB), which astounded me. I don't know who TDB is, but he/she identifies as "Biologist. Anti-hereditarian. Lewontin fan." The anti-hereditarian bit explains some of the critical tone in the tweet below, and it's worth noting that Lewontin himself gave The Selfish Gene a very critical review in Nature in 1977 (free with the legal Unpaywall app).

Here’s the tweet (the second part is the important claim), and two of the six subsequent tweets explaining why TDB sees The Selfish Gene as “the most damaging popular science book of all time.”

I was, of course, Dick Lewontin’s Ph.D. student, and I loved and admired the man. But I have to add that his Marxist politics, which included a view of human behavior as almost infinitely malleable, did affect his science, and I think his review of Dawkins’s book is marred by that ideology. If you read Dick’s review, you’ll see that, like TDB above, Lewontin objects to the lack of discussion of genetic drift, and to Dawkins’s supposed claim (one that he didn’t actually make) that every aspect of every organism was installed by natural selection, accompanied by untestable “adaptive stories” about how it arose. (Lewontin calls this “vulgar Darwinism”.)

In short, Lewontin’s review was an abridged version of his paper with Steve Gould, “The Spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme.” That paper was valuable in correcting the excesses of hyperselectionism, pointing out reasons besides selection for the appearance of organismal traits and behaviors, and implicitly demanding data instead of fanciful stories as support for natural-selection explanations. (There are many traits, however, like extreme mimicry, for which there is no plausible explanation beyond natural selection on bits of DNA.)
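For readers unfamiliar with drift, a toy simulation makes the alternative concrete. Here is a minimal sketch of Wright–Fisher sampling in Python (the population size, starting frequency, and run length are arbitrary values chosen purely for illustration, not anything from the paper): no allele has any fitness advantage, yet frequencies wander and eventually fix or disappear by chance alone.

```python
import random

# Wright-Fisher genetic drift: each generation, n_copies gene copies are
# drawn at random (binomial sampling) from the previous generation's pool.
# Neither allele has a fitness advantage, yet the frequency wanders and
# eventually hits 0 (loss) or 1 (fixation) by chance alone.
def drift(p, n_copies, generations, seed):
    rng = random.Random(seed)
    for _ in range(generations):
        p = sum(rng.random() < p for _ in range(n_copies)) / n_copies
        if p in (0.0, 1.0):  # absorbed: the allele is lost or fixed
            break
    return p

# Five replicate populations, all starting at frequency 0.5
for seed in range(5):
    print(f"population {seed}: final frequency = {drift(0.5, 100, 2000, seed):.2f}")
```

Run it and the replicate populations end up at different frequencies, some at 0 and some at 1, with no selection anywhere in the model; that is the phenomenon Gould and Lewontin accused hyperselectionists of ignoring.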

It is misguided to fault Dawkins’s book for not dealing in extenso with genetic drift or the San Marco alternatives. The Selfish Gene is essentially a book about how natural selection really works. It’s not important that it doesn’t define “gene” in the way that TDB wants; in fact, biologists haven’t yet settled on a definition of gene! It’s sufficient, when regarding the phenomenon of natural selection, to define a gene as “a bit of DNA that affects the properties of an organism”. If those properties enhance the reproduction of the carrier (the “vehicle”), then the gene gets overrepresented in the next generation compared to the alternative gene forms (“alleles”). These selected bits of DNA act as if they were selfish, “wanting” to dominate the gene pool. That is a very good metaphor, but one that has been widely misunderstood by people who should be thinking more clearly.
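To see how quickly a “selfish” bit of DNA can come to dominate a gene pool, here is a minimal sketch of deterministic haploid selection (the 5% fitness advantage and 1% starting frequency are illustrative values of my own choosing, not figures from the book):

```python
# Deterministic haploid selection: allele A has relative fitness 1 + s,
# the alternative allele a has fitness 1. Each generation, A's frequency p
# is re-weighted by fitness and renormalized over the whole gene pool.
def next_freq(p, s):
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p, s = 0.01, 0.05  # rare allele with a 5% reproductive advantage (illustrative)
for gen in range(301):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: freq(A) = {p:.3f}")
    p = next_freq(p, s)
```

Even that modest advantage carries the allele from 1% to near-fixation in a few hundred generations. Nothing “wants” anything, but the arithmetic is what makes Dawkins’s metaphor apt.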

The value of the book lies in its clear explanation of how natural selection acts largely (but not entirely) at the level of the gene rather than the organism, the group, the population, or the species; in its distinction between “replicators” (bits of DNA subject to natural selection) and “vehicles” (the carriers of replicators, whose reproductive output can be affected by those replicators); in its demonstration that “kin selection” is, in essence, nothing different from natural selection acting on the genes of an individual; and in its argument that, contrary to a naive “selfish gene” view, altruism can result from natural selection. Finally, it clearly explains the thesis (earlier adumbrated by G. C. Williams) that group selection (selection on populations) is not a major source of adaptation in nature. (See Steve Pinker’s wonderful essay on the inefficacy of group selection, published ten years ago in Edge.)
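The kin-selection point can be reduced to a one-line inequality, Hamilton’s rule, which Dawkins discusses in the book; the worked numbers below are mine, chosen only for illustration:

```latex
% Hamilton's rule: a gene for altruism can spread by ordinary natural
% selection on individual genes when the benefit to relatives, discounted
% by relatedness, exceeds the cost to the actor:
\[
  r\,b \;>\; c
\]
% r = coefficient of relatedness (e.g., 1/2 for a full sibling),
% b = reproductive benefit to the recipient,
% c = reproductive cost to the altruist.
% Worked example: helping a full sibling (r = 1/2) is favored whenever
% b > 2c, i.e., the sibling gains more than twice what the helper loses.
```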

The Selfish Gene is the clearest explanation I know of how natural selection works, as well as an exposition of ideas like kin selection that were fairly new at the time of the book’s publication.  It also introduces the idea of “memes”, which I think is a distraction that has led almost nowhere in the understanding of culture, but that is just a throwaway notion at the end of the book. (You can see my critique of the meme framework in a review of Susan Blackmore’s book The Meme Machine that I wrote for Nature; access is free.)

Think of the book as an explanation for the layperson of how natural selection really works, and you’ll recognize its value. As for the charge that it “damaged” the popular understanding of science, that accusation is grossly misguided. By explicating how natural selection really works, explaining some of its variants (like kin selection), and dismissing widespread but largely erroneous ideas about selection on groups, The Selfish Gene did the public an enormous service. While popularity is not always an index of a science book’s quality, in this case it is: many laypeople have written about how they finally understood natural selection after reading it.

I could, in fact, argue that the San Marco paper by Gould and Lewontin was damaging too, by overly restricting the domain of natural selection and failing to adduce traits, like mimicry, for which drift and pleiotropy are not sufficient explanations, so that natural selection remains the most parsimonious one. (In the latter part of his career, it was hard to get Steve Gould even to admit that selection was important, much less ubiquitous.) But “San Marco” was itself valuable in dampening hyper-Darwinism, and in the main was a good contribution to evolutionary biology. The Selfish Gene was, however, a much better contribution.

I asked Matthew, someone who of course knows the ins and outs of evolutionary genetics, if he agreed with TDB’s negative assessment of The Selfish Gene. His reply:

Given I am giving a lecture tomorrow in which I tell 600 students they should all read it, I think not…

When I asked permission to reproduce his quote above, he said “sure” and also sent me the slide he’s showing his 600 students:

. . .and added this:

FWIW I also show them three views in the levels/units of selection debate (a philosopher who says it has to be genes as they are the only things that are passed down, Dick who says we can’t really know and Hamilton who says it’s complicated and it depends what you look at).
The next section of the lecture deals with social behaviour (hence the final line)
I invite those readers who have read The Selfish Gene to weigh in below with their opinion.

Free BBC broadcast: Three biologists (including Matthew) on their new science books

September 26, 2022 • 9:15 am

I can’t imagine NPR putting on a program like this: it’s long, science-y (without jokes), and intelligent. The moderator is not a radio announcer but a scientist. What we have are three scientists discussing their new (or upcoming) books about genetics and evolution in a BBC panel moderated by geneticist and science journalist Adam Rutherford. You probably know that Adam himself has written several books on genetics.

The show is 42 minutes of discussion with 8 minutes of live audience questions. Here are the three participants and their new works:

Our own Matthew Cobb, Professor of Zoology at the University of Manchester. Matthew’s talking about his new book on genetic engineering, The Genetic Age: Our Perilous Quest to Edit Life. In the U.S. it’s called As Gods: A Moral History of the Genetic Age (out here November 15). I’ve previously highlighted some positive reviews.

Alison Bashford, Laureate Professor of History at the University of New South Wales and Director of the Laureate Centre for History & Population. Her new book is An Intimate History of Evolution: The Story of the Huxley Family, and deals with both Thomas Henry Huxley and his grandson Julian Huxley. A positive review of her book is at the Guardian.

Deborah Lawlor, a professor of epidemiology at the University of Bristol, is working on a book about the inheritance of diabetes in pregnant Bradford women of both British and Asian descent. She’s also from Bradford, where the show was filmed, and so is a local in two respects.

I recommend listening to it all, but if you want to hear just Matthew, he describes his book beginning at 27:43. But then you’d miss Bashford’s eloquent description of the Huxleys and their contributions. One fact that I didn’t know was that both T. H. and Julian Huxley suffered from depression (it was called “melancholia” then), which led Julian to think about a genetic basis for their condition.

Click below to go to the show’s main page, where you can download the podcast.

And click below to listen to the show. Do it soon if you want to listen, as the BBC doesn’t keep its shows up long.

h/t: Anne

“All the Light We Cannot See”, a fantastic book

August 25, 2022 • 1:00 pm

All the Light We Cannot See, by Anthony Doerr, was published in 2014, and I hadn’t heard of it until an old friend recommended it two weeks ago—as did his wife. It turned out to be the best book of fiction I’ve read this year, and perhaps in the last several years. It is a tour de force: mesmerizing, unbelievably inventive, and the kind of book I can’t read before I go to sleep because I just want to keep reading. I finished the book last night and my thoughts are still full of it and of the scenes as I imagined them. (Like everyone, I form a picture in my mind’s eye as each place or character is introduced.)

The book interweaves the stories of two characters, jumping back and forth in time from the beginning of WWII until 1974, with no straight temporal sequence—except that the two stories unfold in parallel, and in very short chapters.

One of the two subjects is a blind French girl, Marie-Laure, daughter of the key-keeper in a big Parisian museum, and the other a German boy, Werner, an orphan ultimately coerced into becoming a German soldier. They’re surrounded by a rich panoply of other characters, including Werner’s sister Jutta and Marie-Laure’s great uncle Etienne. 

The two protagonists are connected at the beginning only because, when Marie-Laure and Werner were children, Etienne broadcast the writings of his own brother on a homemade radio transmitter in the French seaside town of Saint-Malo. Way over in Germany, Jutta and Werner (an amateur radio fanatic) could hear those broadcasts on a homemade receiver, enthralled by the beautiful science scenarios described by Etienne. (Science, by the way, plays a big role in this book, from Marie-Laure’s fascination with mollusks to Werner’s fascination with radios. Even Darwin makes a few appearances in Etienne’s broadcasts.)

I can see that this will become way too long if I even try to summarize the plot without spoilers, so let me be brief. As the Germans enter Paris, Marie-Laure and her father flee the city, the latter entrusted with a priceless diamond from the museum and tasked with keeping it out of Nazi hands. They wind up in Saint-Malo, taking refuge with Etienne and his housekeeper, Madame Manec.

As the war proceeds, Werner becomes an expert in locating covert radio transmissions, but he’s not your average Nazi—he is empathic and kind and arrives at his job through coercion. (The mixture of good and bad in people is one of the book’s themes.) When the Allies invade France, Werner and Marie-Laure’s stories begin to converge as the German is sent to Saint-Malo to pinpoint covert Resistance radio broadcasts made by Marie-Laure. The war ends, people die, and the story fast-forwards to 1974, when only one protagonist is left. The theme of loss plays out on multiple levels, including that of the diamond, and one closes this book with a bittersweet feeling and perhaps an extra bit of moisture in the eye.

The writing is beautiful—simple but evocative—and the story, while complex, never fails to be convincing.

If you like fiction at all, you must read this book. It won the Pulitzer Prize for Fiction in 2015, and deservedly so. I believe one reader also recommended it here, while another didn’t like it. But I can’t imagine disliking this book—not if you read fiction.

It’s worthwhile reading the Wikipedia page on this book to see its genesis (a man cursing his cellphone) and why it took ten years to write. Doerr explicitly wanted to write a story that, while set in war, was not about war. The research behind the story is immense. There’s also a mention that Netflix is making the novel into a four-part series.

Here’s Saint-Malo, a place I’ve never been. I couldn’t envision the city as I read the book, and am glad to see it now. The town has been rebuilt, as much of it was destroyed by bombing during the war.

Photo by Sabine de Villeroy