Dennis Prager in The Free Press: Morality can come only from God, so we should at least act as if He exists

February 20, 2026 • 9:45 am

With this article by Dennis Prager, the Free Press officially raises its flag as “We are totes pro-religion!”  In article after article, the site has touted the benefits of religion as a palliative for an ailing world, but you’ll never read a defense of atheism or nonbelief.  Here Dennis Prager, conservative podcaster and founder of an online “university,” touts religion as the only “objective” source of morality. I suspect the “we love religion” mantra of the FP ultimately comes from founder Bari Weiss, who is an observant Jew.

But Prager is wrong on two counts. First, religion is not the only source of morality—or even a good one. Second, there is no “objective” morality. All morality depends on subjective preferences. Granted, many of them are shared by most people, but in the end there is no “objective” morality that one can say is empirically “true”. Is abortion immoral? How about eating animals? What is wrong with killing one person and using their organs to save the lives of several dying people?  Can you push a man onto a trolley to save the lives of five others on an adjacent track?  If these questions have objective answers, what are they?

First, the FP’s introduction:

If you were to name the defining figures of the 21st-century conservative movement, Dennis Prager would surely rank near the top of the list. A longtime radio host and founder of digital educational platform PragerU, he is one of the world’s best-known public intellectuals, publishing more than a dozen books on religion, morality, and the foundations of Western civilization.

His latest book, “If There Is No God: The Battle Over Who Defines Good and Evil,” hits shelves next week. Drawn from a weekend-long lecture Prager delivered to 74 teenagers in 1992, it is a full-throated defense of objective, biblical morality at a time, he says, when more people dispute its existence than ever before. Though rooted in an earlier moment, the book holds new weight: In 2024, Prager suffered a catastrophic fall that paralyzed him from the waist down.

“A certain percentage of this book,” he reveals in the introduction, “was written by dictation and editing from my hospital bed. Were it not for Joel Alperson, who also organized and recorded the entire weekend, the book would not have been finished. We completed the book together. It is a testament to how important we both consider this work.”

Next week, our Abigail Shrier will interview Prager from his hospital room, so stay tuned for their full conversation. And below, we bring you an exclusive excerpt from his book, answering a question that many of us ask every day: In a world where profoundly evil things happen, how do we raise good people? —The Editors

I’m hoping that Abigail Shrier won’t throw softballs at Prager, and that she’ll press him about “objective” morality and his evidence for God. But I’m betting she won’t: one doesn’t harass a man recently paralyzed from the waist down, and Shrier is employed by the Free Press.

Click, read, and weep.

At the beginning, Prager raises one of these moral questions, and argues that yes, there’s an objective answer—one that comes from the Bible (bolding is mine):

One of my biggest worries in life is that people these days are animated more by feelings than by values.

Let me explain what I mean. Imagine you are walking along a body of water—a river, lake, or ocean—with your dog, when suddenly you notice your dog has fallen into the water and appears to be drowning. About 100 feet away, you notice a stranger, a person you don’t know, is also drowning. Assuming your dog can’t swim, and also assuming that you would like to save both your dog and the stranger, the question is: Who would you try to save first?

If your inclination is to save your dog, that means you were animated by feelings. Your feelings are understandable, and as I own two dogs, I fully relate. You love your dog more than the stranger, and I do, too.

But the whole point of values is to hold that something is more important than your feelings. There is no ambivalence in the Bible about this. “Thou shalt not murder” is not for one group alone. “Thou shalt not steal” is not for one group alone. It is for every human being. Human beings are created in God’s image. Therefore, human life is sacred and animal life is not. You should save the stranger.

Unfortunately, those universal values are not what we’re teaching people today. . . .

What? You can’t murder a dog? What if the drowning person is Hitler?  And aren’t five human lives on the trolley track worth more than one? What would Jesus do?

And what other Biblical values should we take literally? Should we levy capital punishment for homosexuality? Is it okay to have slaves so long as you don’t beat them too hard? Was it “moral” for the Israelites to kill all the tribes living on their land? Is it okay for God to allow children to die of cancer?  (Of course, sophisticated theologians have made up answers to these questions so that, in the end, they find nothing immoral in Scripture.)

When Prager says that our big problem is that feelings have replaced values, I wonder where those “values” come from. Apparently they come from God. But that raises an ancient question: is something good because God dictates it, or did God dictate it because it was good? (This is Plato’s Euthyphro Dilemma.) And if the latter is true, then there is a standard of morality that is independent of God’s dictates.

This is not rocket science. But Prager sticks to the first interpretation, adhering to the “Divine Command Theory”:

In fact, the Bible repeatedly warns people not to rely on their hearts. If you want to know why so many people reject Bible-based religions, there it is: Most people want to be governed by their feelings and not have anyone—be it God or a book—tell them otherwise.

The battle in America and the rest of the Western world today is between the Bible and the heart.

And Prager sticks to his guns, arguing that atheists and agnostics have no guidelines for morality:

Millions of people today are atheist or agnostic. If you are one of them, my goal is not to convince you that God exists. But I am asking you to live as if you believe God exists, and by extension, as if you believe objective good and evil exist.

Why? Because for a good society to maintain itself, we need objective morality. What would happen to math if it were reduced to feeling? There would be no math. Likewise, if we reduce morality to feeling, there would be no morality. In other words, if values and feelings are identical, there would be no such thing as a value.

Imagine a child in kindergarten who sees a box of cookies meant for the whole class and takes them all for himself. Most people would acknowledge that the child has to be taught that this is wrong. But if values were derived from feelings, this child would keep all the cookies on the basis of his personal value that whoever gets to the cookies first gets to keep them. It’s not as though this philosophy is without precedent. It has been the way many of the world’s societies have looked at life: “Might makes right.”

Again, this palaver appears in the Free Press, which apparently thought it worth publishing.

What Prager doesn’t seem to realize is that an atheist can give reasons for adhering to a certain morality, even if in the end those reasons are directed towards confecting a society that (subjectively) seems harmonious.  For example, John Rawls used the “veil of ignorance” as a way to structure a moral society. Others, like Sam Harris, are utilitarians or consequentialists, arguing that the moral act is one that most increases the “well being” of the world.  But even these more rational moralities have issues, some of which I raised in my questions above. The systems adhere largely to what most people see as “moral”, but they are not really “objective”. They are subjective.

But adhering to the word of the Bible, and twisting it when it doesn’t fit your Procrustean bed of morality, is palpably inferior to reason-based morality. Indeed, the fact that theologians must twist parts of the Bible so that seemingly immoral passages turn out to be “really” moral shows that there’s no objective morality in scripture.

Does Prager even know his Bible? Have a gander at what he writes here:

That’s precisely why the Ten Commandments outlaw stealing. Because stealing is normal. The whole purpose of moral and legal codes is to forbid people from acting on their natural feelings.

Consider another example, this one far more serious. In virtually every past society, a vast number of women and girls have been raped. In wartime, when victorious armies could essentially do what they wanted, rape was the norm, with few exceptions, such as the American, British, and Israeli armies. Only men whose behavior is guided by values rather than feelings do not rape in such circumstances.

Both of these vastly different examples prove the same thing: To lead good lives, people must first learn Bible-based values, mandated when they are children.

Has he read Numbers 31? Here’s a bit in which, under God’s orders, Moses and his acolytes not only butcher a people, but save the virgin women for sexual slavery (my bolding, text from King James version):

And the Lord spake unto Moses, saying,

Avenge the children of Israel of the Midianites: afterward shalt thou be gathered unto thy people.

And Moses spake unto the people, saying, Arm some of yourselves unto the war, and let them go against the Midianites, and avenge the Lord of Midian.

Of every tribe a thousand, throughout all the tribes of Israel, shall ye send to the war.

So there were delivered out of the thousands of Israel, a thousand of every tribe, twelve thousand armed for war.

And Moses sent them to the war, a thousand of every tribe, them and Phinehas the son of Eleazar the priest, to the war, with the holy instruments, and the trumpets to blow in his hand.

And they warred against the Midianites, as the Lord commanded Moses; and they slew all the males.

And they slew the kings of Midian, beside the rest of them that were slain; namely, Evi, and Rekem, and Zur, and Hur, and Reba, five kings of Midian: Balaam also the son of Beor they slew with the sword.

And the children of Israel took all the women of Midian captives, and their little ones, and took the spoil of all their cattle, and all their flocks, and all their goods.

And they burnt all their cities wherein they dwelt, and all their goodly castles, with fire.

And they took all the spoil, and all the prey, both of men and of beasts.

And they brought the captives, and the prey, and the spoil, unto Moses, and Eleazar the priest, and unto the congregation of the children of Israel, unto the camp at the plains of Moab, which are by Jordan near Jericho.

And Moses, and Eleazar the priest, and all the princes of the congregation, went forth to meet them without the camp.

And Moses was wroth with the officers of the host, with the captains over thousands, and captains over hundreds, which came from the battle.

And Moses said unto them, Have ye saved all the women alive?

Behold, these caused the children of Israel, through the counsel of Balaam, to commit trespass against the Lord in the matter of Peor, and there was a plague among the congregation of the Lord.

Now therefore kill every male among the little ones, and kill every woman that hath known man by lying with him.

But all the women children, that have not known a man by lying with him, keep alive for yourselves.

I suppose that Prager thinks that not only do atheists and agnostics lack moral standards, but so do all the non-Christians of the world, since morality not based on the Bible is evanescent at best:

Again, you don’t need to believe in God. But deciding between right and wrong is essentially impossible without a value system revealed by God. If there isn’t a God who says pushing little kids down—or raping women—is wrong, then all we have to go by are feelings, and then doing whatever you feel like doing isn’t wrong at all.

We’re not talking about theory. We’re living in a country where every few minutes a woman is raped, every minute a car is stolen, and every few hours a human being is murdered. The people committing these crimes don’t act on the basis of biblical values; they act on the basis of feelings.

This is not a wholesale indictment of feelings. Feelings are what most distinguish humans from robots. Feelings make us feel alive. Without feelings, life wouldn’t be worth living. But feelings alone are morally unreliable. Guided by feelings, every type of behavior is justifiable: If you feel like shoplifting and act on your feelings, you’ll shoplift. If a man is sexually aroused by a woman, he will rape her. And, of course, if you have deeper feelings for your pet than for a stranger, you’ll save your dog and let the stranger drown.

If we rely solely on feelings, everything is justifiable. And a society that justifies everything stands for nothing.

So much for Hindus, Buddhists, and Muslims, who march along with us atheists thinking that nothing is immoral.

This is not only stupid, but it’s not new, either. It was Ivan Karamazov in Dostoevsky’s novel who said, “Without God, everything is permitted.”  Prager (and by extension, the Free Press) is making a Swiss cheese of an argument here, one that’s full of holes. If Abigail Shrier doesn’t dismantle it in her interview, I’ll be very disappointed, for I’m a big admirer of her work. And she’s way too smart to buy into Prager’s nonsense.

Here’s Prager’s new book:

Another critique of Agustín Fuentes’s claim of a sex spectrum in humans and other species

February 1, 2026 • 11:20 am

Although the view that sex is a spectrum, and that there are more than two biological sexes in humans and other species, is still prevalent among the woke, others are realizing that sex in humans (and nearly every other species of plant and animal) is indeed a binary, with a tiny fraction of exceptions in humans. These include individuals with “differences of sex development” (DSD) and vanishingly rare true hermaphrodites. Estimates of exceptions in our species range from 0.02% to 0.005%.

The rise of the “sex is a spectrum” notion is due solely to the rise of gender activism and to people who identify as nonbinary or transgender.  But gender is not the same thing as biological sex: the former is a subjective way of feeling, while the latter is an objective fact of biology based on a binary of gamete types.

I personally don’t care if someone identifies as a member of a nonstandard gender, but I do care when people like Steve Novella, who should know better, argue that biological sex is not a binary but a spectrum. In fact, there are far more people born with more or fewer than 20 fingers and toes than are born as true intersexes, yet we do not say that “digit number in humans is a spectrum.”

It’s a shame that many of those who claim that sex is a spectrum are biologists, people who surely know about the sex binary and its many consequences, like sexual selection. The misguided folks include the three main scientific societies studying evolution, which issued a statement that biological sex was a spectrum and, further, that this was a consensus view. (Their original statement is archived here.) The societies then took down their claim when other biologists pointed out its inanity (see here, here, and here). And it’s not only biologists who recognize the ideology behind the claim that sex is a spectrum; the public does, too.  NBC News reported this in 2023 (note the conflation of sex and gender):

A new national poll from PRRI finds Americans’ views on gender identity, pronoun use and teaching about same-sex relationships in school deeply divided by party affiliation, age and religion.

Overall, 65% of all Americans believe there are only two gender identities, while 34% disagree and say there are many gender identities.

But inside those numbers are sharp differences. Fully 90% of Republicans say there are just two genders, versus 66% of independents and 44% of Democrats who believe the same.

Sadly, if you’re on the side of truth in this debate, at least as far as the number of sexes goes, you’re on the side of Republicans. So it goes. Further, Americans and sports organizations themselves are increasingly adopting the view that trans-identified men (“transwomen,” as they’re sometimes called) should not compete in sports against biological women. This is from a 2025 Gallup poll.

Sixty-nine percent of U.S. adults continue to believe that transgender athletes should only be allowed to play on sports teams that match their birth sex, and 66% of Americans say a person’s birth sex rather than gender identity should be listed on government documents such as passports or driver’s licenses.

Thus, although wokeness is like a barbed porcupine quill, easy to get in but hard to remove, I’m pretty confident that the claim of a biological sex spectrum will eventually decline even more. But there are still some ideologues who twist and misrepresent the facts to argue that there are more than two sexes. (The argument centers on humans, of course.)  One of these is Princeton anthropologist Agustín Fuentes, who has written several papers and a recent book arguing for a human sex spectrum. I’ve pushed back on his arguments many times (see here), and I wrote a short review of his book Sex Is a Spectrum, which should be read with a beaker of Pepto-Bismol by your side. There’s another, better critical review of Fuentes’s book by Tomas Bogardus, here, which Bogardus has turned into his own new book, The Nature of the Sexes: Why Biology Matters.

This post is just to highlight another critical review of Fuentes’s book and his views on sex, one written by Alexander Riley and appearing at Compact. You can get to a paywalled version by clicking on the title below, but a reader sent me a transcript, and I’ll quote briefly from that below.

A few quotes (indented). I don’t know how readers can access the whole review without subscribing:

Fuentes, an anthropologist who has extensively studied macaques, begins with a primer on the evolution of sexual reproduction in life on the planet. To show how “interesting” sex is, he offers the example of the bluehead wrasse, a fish species in which females can turn into males in given ecologies. The example, he says, is “not that weird” in biology.

But the reality is that species like this one most definitely are weird, not only in the animal kingdom, but even among fish, who are among the most sexually fluid animals. Among fish, the number of species that are sexually fluid in this way is perhaps around 500 … unless you know that there are approximately 34,000 known fish species. In other words, even in the most sexually fluid animals, transition between male and female by one individual can happen in only 1.5 percent of the total species. What Fuentes describes as “not that weird” is certainly highly unusual. [JAC: note that switching from male to female or vice versa does not negate the sex binary.]

This sleight of hand is typical of Fuentes’s handling of evidence. He attacks a classic argument in evolutionary biology that differences in male and female gametes (sperm and eggs, respectively) explain many other differences between the two sexes. In short, because eggs are much costlier to make than sperm, females have evolved to invest more energy in the reproductive chances of each gamete compared to males. This bare fact of the gamete difference means, according to the Bateman-Trivers principle, males and females typically develop different mating strategies and have different physical and behavioral profiles.

The distortion below is typical of ideologues who promote Fausto-Sterling’s data even when they know it’s incorrect:

Fuentes notes that what he calls “3G human males and females,” that is, those individuals who are unambiguously male or female in their genitalia, their gonads (the gland/organ that produces either male or female gametes), and genes, do not make up 100 percent of human individuals. He goes on to suggest that at least 1 percent of humans, and perhaps more, do not fit the 3G categories. This is a claim unsupported by the facts. The citation he links to this claim is an article by biology and gender studies professor Anne Fausto-Sterling. The claim made by Fausto-Sterling about the percentage of those who are intersex has been thoroughly debunked. She includes a number of conditions in her category of intersex (or non-3G) that are widely recognized as not legitimately so classified. One such condition (Late Onset Congenital Adrenal Hyperplasia, or LOCAH, a hormonal disorder) makes up fully 90 percent of Fausto-Sterling’s “intersex” category. Individuals with LOCAH are easily classed as either male or female according to Fuentes’ 3Gs, and nearly all of them are able to participate in reproduction as normal for their sex. The percentage of those who are actually outside 3G male or female classes is likely around 0.02 percent, which means that 9,998 out of every 10,000 humans are in those two groups.
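The figures quoted in these passages are easy to verify yourself. Here’s a minimal arithmetic check in Python, using only the numbers given in the review (roughly 500 sex-changing species out of about 34,000 known fish species, and an intersex estimate of 0.02% of humans):

```python
# Sanity-check the percentages cited in the quoted review.

# Sex-changing fish: ~500 species out of ~34,000 known fish species.
fish_fraction = 500 / 34_000
print(f"Sex-changing fish species: {fish_fraction:.1%}")  # about 1.5%

# Intersex estimate: ~0.02% of humans fall outside the "3G" classes,
# leaving 9,998 of every 10,000 people unambiguously male or female.
intersex_rate = 0.02 / 100
per_ten_thousand = 10_000 * (1 - intersex_rate)
print(f"3G-classifiable people per 10,000: {per_ten_thousand:.0f}")  # 9998
```

Both numbers check out: the “not that weird” fish are about 1.5 percent of known species, and the 0.02 percent intersex figure does indeed leave 9,998 of every 10,000 people in the two 3G classes.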

What’s below shows that trans-identified men do not become equivalent to biological women when they undergo medical transition:

Transwomen are much more likely to exhibit behaviors of sexual violence and aggression than women. A 2011 study showed clearly that even male-to-female transsexuals who had undergone full surgical transition, and who therefore had undergone hormone therapy to try to approximate female hormonal biology, still showed rates of violent crime and sexual aggression comparable to biological males. They were almost twenty times more likely to be convicted of a violent offense than the typical female subject. This is reason enough to keep individuals who have male hormonal biology out of spaces in which they interact closely with semi-clad girls and women.

And Riley’s conclusion:

The fact that Fuentes can make such ill-founded claims without fearing serious pushback is an indication of how captured academic culture is by the ideology behind this book. A healthy academic culture would not so easily acquiesce to political rhetoric masquerading as science.

Yes, anthropology has been captured—especially cultural anthropology—and, as I said, even some biologists have gone over to the Dark Side. I have nothing but contempt and pity for those who know that there are two sexes but twist and mangle the facts to conform to the woke contention that the sexes can be made interchangeable. But I should add the usual caveat: except in a few areas like sports and prisons, transgender people should be given the same rights as everyone else.

Another sign that people are rejecting the “sex is a spectrum” claim is that Fuentes’s book didn’t sell well. Despite coming out less than a year ago, it’s now #301,447 on Amazon’s sales list, and it has only 25 customer ratings, averaging 3.8 out of 5 stars. It didn’t exactly fly off the shelves.

Here are two Amazon reviews by savvy readers (note: none of the reviews on Amazon are by me):

 

Michael Shermer on free will

January 28, 2026 • 10:15 am

Michael Shermer’s new book is out, and in the video below, 55 minutes long, he gives an oral summary of its contents (a link to the book is at the bottom). The video was sent to me by reader Barry, who called my attention to the section on free will, and I’ve started the video at the 45-minute mark—right when Shermer discusses the intractability of the “hard problem” of consciousness and then segues to free will. Here are the YouTube notes.

In this episode, Michael Shermer walks through the core ideas behind his new book Truth: What It Is, How to Find It, and Why It Still Matters, breaking down how humans confuse meaning with reality, stories with facts, and confidence with correctness.

I’ve put a few remarks about Shermer’s view of free will, which seems to me confused, below the video.

Shermer avers that he’s a compatibilist: someone who accepts both determinism and free will. As Wikipedia puts it under “compatibilism”:

Compatibilism is the belief that free will and determinism are mutually compatible and that it is possible to believe in both without being logically inconsistent.

And yet Shermer says he’s not a determinist, although he does define free will as “libertarian, could-have-done-otherwise” free will.  Shermer rejects libertarian free will because he says it’s dualistic, drawing a distinction between mind and matter, and here he’s absolutely right.

But then he argues that “determinists are wrong”! Why? He doesn’t say, but makes a confusing argument that the “could-have-done-otherwise” notion of free will is bogus because it involves replaying a tape of what happens when an instant of “choice” occurs.  Shermer says that if this is the contention, then of course you will do the same thing when you replay that instant, but argues that this is simply because you’re replaying a tape that already has a known consequence, like replaying a record. But if he thinks that, then what does he mean by saying that libertarian free will, which is the contention that replaying the tape could yield a different consequence, is wrong? He says that replaying the tape will always give the same result because it’s a tape. But that is not the argument that physical determinists make. The argument is that you are starting a fresh tape at the moment of choice, but it will always give the same result—absent any quantum effects (see below).

Shermer contends that “the past is determined, but the future isn’t,” though he doesn’t explain why. Here again I agree that the future is not absolutely determined, and I’ll supply the reason Shermer doesn’t: the future is not completely determined only insofar as fundamentally unpredictable physical effects occur—that is, quantum effects, which as far as we know defy absolute predictability. We know quantum effects applied at the Big Bang, so at that moment the future of the universe was not predetermined.

But do quantum effects apply to human behavior and “choice”?  Perhaps; we just don’t know. Maybe an electron in a neuron in your brain will jump at the moment you’re ordering dinner, so you order fish instead of a hamburger.  If that could happen—and again we don’t know if it does—then yes, you could have done something other than what you did. However, because there’s no mind/body dualism, there is no way that you had any agency in moving that electron; it just happened. Is that what Shermer means by “free will”? If so, it’s a lame kind of free will, because the average person who believes in free will thinks in a dualistic way. Although they don’t say this explicitly, they contend that they have agency that can affect their neurons, brains, and behavior.

I’ve written before about how predictability doesn’t equate to determinism, and by determinism I mean physical determinism, defined by Anthony Cashmore this way (this paper is what made me a determinist):

I believe that free will is better defined as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature.

Cashmore adds that the environment is still “chemistry”, which of course is also “physics”:

Here, in some ways, it might be more appropriate to replace “genetic and environmental history” with “chemistry”—however, in this instance these terms are likely to be similar and the former is the one commonly used in such discussions.

In other words, to Cashmore (and to me) this form of free will involves dualism. It’s woo. Cashmore, who admits that unpredictable quantum effects can lead to a universe where pure predictability is impossible, adds that that still does not give us free will as defined above—free will not governed by the laws of physics.

We know now that on a macro level, predictability is quite good: we can predict, using classical mechanics, when solar eclipses will occur, where the planets will be in ten years, and we can also use classical mechanics to put people on the Moon. But since classical mechanics is simply a reification on a large scale of quantum mechanics, the future is not completely predictable as quantum effects accumulate. I’ve used as an example the possibility that genetic mutations could be quantum phenomena in some way. If that’s the case, then we can’t predict at a given moment what mutations will occur, and if that is the case, then the raw material for evolution is unpredictable, which further means that evolution is unpredictable.

Nevertheless, because our behaviors are still controlled by the laws of physics, if there is no mind/body dualism then there is no “agency” as most people believe it, and thus there’s no libertarian free will.

But Shermer, as an avowed compatibilist (he appears to be strongly influenced by Dan Dennett), thinks that we do have a form of “free will”, and supports it by using as an example his ability to affect his own future by making preparations for tomorrow’s morning bicycle ride, even if he doesn’t want to ride. He puts his bike in the trunk, he lays out his bicycle clothes for the morrow, and so on. As he says:

“I can choose to do certain things now to make my future different than what it was in the past. That’s freedom; that’s volition; that’s choice. That’s free will.  That’s as good as it gets. So all the determinists, they’re wrong; they’re just simply wrong; they’re assuming we live in a universe that we don’t live in: a predetermined universe.”

It’s sure not choice the way most people mean it, and believe me, I’ve had this argument any number of times. People are not physical determinists but dualists, just like the saxophone player who nearly attacked me when I told him that at the moment he decided to play an improvised jazz solo, that solo was not something he could alter by thinking.  People are not sophisticated enough to draw a distinction between free will and physical determinism, nor to see that the only physical phenomenon that could ultimately change a behavior is quantum indeterminacy.

Shermer contends that “In the real universe, determinists don’t exist.” He says he’s never met one. Well, Mr. Shermer, meet Mr. Coyne and Mr. Sapolsky, both physical determinists.  We don’t distort the notion of “free will” just so we can say people have it. (Dennett thought that belief in determinism would erode society, and that’s why he wrote two books redefining free will for the masses.)

Finally, Shermer tells us why he doesn’t think there are true determinists: it’s because we act as if we have free will.  He says that some people who pretend to be determinists take pride in the books they write. As he says, “Why would you take pride in your books? You didn’t do anything; it was all determined at the Big Bang.”  Well, I don’t have to respond to that; Shermer knows better. We may well have evolved to think we have agency; we certainly do think it, though I don’t know whether natural selection itself produced that frame of mind. Regardless, we can’t help taking pride in our accomplishments, or looking down on people who do bad things, because that’s the way our brains are configured. That does not mean that physical determinism should not affect our views of punishment and reward: it should, especially with regard to the justice system. But I’ve discussed this many times before.

The last thing I want to say is that some atheist writers whom I admire greatly—people like Shermer, Pinker, and Dawkins—seem to shy away from the free-will problem. I am not sure why; perhaps they realize that if you deny libertarian free will, people will think you’re crazy. You tell me!

Here’s Michael’s book, which came out yesterday from the Johns Hopkins Press.  I haven’t yet read it, but surely will. If you click on the cover you’ll go to the Amazon site:

Short takes: An excellent movie and a mediocre book

January 21, 2026 • 11:30 am

In the last week I’ve finished watching an excellent movie and reading a mediocre book, both of which were recommended by readers or friends. I rely a lot on such recommendations because, after all, life is short and critics can help guide us through the arts.

The good news is that the movie, “Hamnet,” turned out to be great. I had read the eponymous book by Maggie O’Farrell in 2022 (see my short take here), and was enthralled, saying this:

I loved the book and recommend it highly, just a notch in quality behind All the Light We Cannot See, but I still give it an A. I’m surprised that it hasn’t been made into a movie, for it would lend itself well to drama. I see now that in fact a feature-length movie is in the works, and I hope they get good actors and a great screenwriter.

They did. Now the movie is out, and it’s nearly as good as the book. Since the book is superb, the movie is close to superb: excellent, but perhaps not an all-time classic, though it will always be worth watching. Author O’Farrell co-wrote the screenplay with director Chloé Zhao, guaranteeing that the movie wouldn’t stray too far from the book. As you may remember, the book centers on Agnes, another name for Shakespeare’s wife Anne Hathaway, a woman who is something of a seer (the book has a bit of magical realism). The story covers the period from the meeting of Shakespeare and Agnes until Shakespeare writes and performs “Hamlet,” a play that O’Farrell sees as inspired by the death from plague of their only son Hamnet (another form of “Hamlet”; apparently names were variable in England). I won’t give away the plot of the book or movie, which are the same, save to say that the movie has a bit less magic and a little more of Shakespeare’s presence. (He hardly shows up in the book.)

The movie suffers a bit from overemotionality; in fact, there’s basically no moment in the movie when someone is not suffering or in a state of high anxiety. But that is a quibble. The performances, with Jessie Buckley as Agnes and Paul Mescal as Shakespeare, are terrific. Buckley’s is, in fact, Oscar-worthy, and I’ll be surprised if she doesn’t win a Best Actress Oscar this year. The last ten minutes of the movie focus on Agnes as she watches the first performance of “Hamlet” in London’s Globe theater, and the gamut of emotions that crosses her face in close-up is a story in itself. Go see this movie (bring some Kleenex for the end), but also read the book. Here’s the trailer:

On to the book. Well, it was tedious, though as I recall Mother Mary Comes to Me, by Indian author Arundhati Roy, was highly praised. Roy’s first novel, The God of Small Things, won the Booker Prize and I loved it; her second, The Ministry of Utmost Happiness, was not as good. I read Mother Mary simply because I liked her first book and try to read all highly touted writing from India: I’ve been there many times, I love to read about the country, and Indian writers are often very good.

Sadly, Mother Mary was disappointing. There’s no doubt that Roy has had a tumultuous and varied life, and the autobiography centers on her relationship with her mother (Mary, of course), a teacher in the Indian state of Kerala. The two have a stormy connection that, no matter how many times Roy flees from Kerala, is always on her mind. It persists through Roy’s tenure in architecture school, her marriage to a rich man (they had no children), her later discovery of writing, and her entry into Indian politics, including time spent with Marxist guerrillas and campaigning for peaceful treatment of Kashmiris.

The book failed to engage me for two reasons. First, Mother Mary was a horrible person, capable of being lovable to her schoolchildren one moment and a nasty witch the next. She was never nice to her daughter, and the book never explained (to me, at least) why the daughter loved such a hateful mother. There’s plenty of introspection, but nothing convincing. Since the central thread of the memoir seems to be this abiding mother/daughter relationship, I was left cold.

Further, there’s a lot of moralizing and proselytizing, which is simply tedious. Although Roy presents herself as self-effacing, she comes off as a hidebound and rather pompous moralist, something that takes the sheen off a fascinating life. Granted, there are good bits, but overall the writing is bland. I would not recommend this book.

Two thumbs down for this one:

Of course I write these small reviews to encourage readers to tell us what books and/or movies they’ve encountered lately, and whether or not they liked them. I get a lot of good recommendations from these posts; in fact, it was from a reader that I found out about Hamnet.

Michael Shermer interviews Matthew Cobb on his Crick biography

January 18, 2026 • 9:45 am

Here we have an 83-minute interview of Matthew Cobb by Michael Shermer; the topic is Francis Crick as described in Matthew’s new book Crick: A Mind in Motion. Talking to a friend last night, I realized that the two best biographies of scientists I’ve read are Matthew’s book and Janet Browne’s magisterial two-volume biography of Darwin (the two-book set is a must-read, though Princeton will issue a one-volume condensation in June).

At any rate, if you want an 83-minute summary of Matthew’s book, or want to see whether you should read it (you should), have a listen to Matthew’s exposition at the link below. I recommend his and Browne’s books because they’re not only comprehensive but eminently readable, and you can get a sense of Matthew’s eloquence from his off-the-cuff discussion with Shermer.

Click below to listen.

I’ve put the cover below because Shermer mentions it at the outset of the discussion:

My brief interview of Matthew Cobb about his new biography of Francis Crick

January 7, 2026 • 11:00 am

Matthew Cobb’s new biography of Francis Crick has been out for only a short time, but I’ve yet to see a review that is less than enthusiastic (check out this NYT review). I finished it last week and was also enthusiastic, finding it one of the best biographies of a scientist I’ve ever read. It concentrates on Crick’s science, but his accomplishments were inseparable from his personality, which embraced not only science but also poetry (the book begins and ends with a poet), drugs, women, and philosophy (he was, by the way, a hardcore atheist and determinist).

But I digress. I really recommend that if you have any interest in the man and his work, which of course includes helping reveal the structure of DNA, you get this book and read it. It is a stupendous achievement, based on tons of research, sleuthing, and interviews, and only a geneticist could have written it. But it’s not dull at all: Matthew has always written lively and engaging prose. Crick is also a good complement to Matthew’s previous book, Life’s Greatest Secret, about how the genetic code was cracked.

Relatedly, a biography of Jim Watson by Nathaniel Comfort is in the works, but hasn’t yet been published.

After I finished the book, I had a few questions about Crick and his work, and asked Matthew if I could pose them to him and post his answers on this site. He kindly said “yes,” and so here they are. My questions are in bold; Matthew’s answers are in plain text. Enjoy:

What one question would you ask Crick if he could return from the dead? (Perhaps something that you couldn’t find out about him from your research.)

I think I would probably ask him about his view of the state of consciousness research. His key insight, with Christof Koch, was that rather than trying to explain everything about consciousness, researchers should look for the neural correlates of consciousness – neurons that fired in a correlated manner with a visual perception – and ask what (if anything) was special about how they fired, their connections, and the genes expressed within them. Since his death, we have obtained recordings from such neurons, but far from resolving the issue, consciousness studies have lost their way, with over 200 different theories currently being advanced. What did he think went wrong? Or rather, is it time to use a more reductionist approach, studying simpler neural networks, even in animals that might not be thought to be conscious?


Why did it take ten years—until the Nobel prize was awarded—for people to appreciate the significance of DNA?

Most people imagine that when the double helix was discovered it immediately made Watson and Crick globally famous and the finding was feted. That was not the case, mainly because the actual evidence that DNA was the genetic material was restricted to Avery’s 1944 work on one species of bacterium (this was contested) and a rather crappy experiment on bacteriophage viruses (this was the famous paper by Hershey and Chase from 1952; the experiment was so messy that Hershey did not believe that genes were made solely of DNA). So although the structure of DNA was immediately obvious in terms of its function – both replication and gene specificity, as it was called, could be explained by reciprocal base pairs and the sequence of bases – there was no experimental proof of this function. Indeed, the first proof that DNA is the genetic material in eukaryotes (organisms with a nucleus, including all multicellular organisms) did not appear until the mid-1970s! Instead, people viewed the idea that DNA was the genetic material as a working hypothesis, which became stronger through the 1950s as various experiments were carried out (e.g., Meselson and Stahl’s experiment on replication) and theoretical developments were made (e.g., Crick’s ideas about the central dogma). It’s notable that the Nobel Prize committee awarded the prize in 1962, just after the first words in the genetic code were cracked and the relation between DNA, RNA, and protein had been experimentally demonstrated.


A lot of the latter part of the book is on Crick’s work on neuroscience (and, later, consciousness). You claim that he made enormous contributions to the field that really pushed it forward. Could you tell us a bit about what those contributions were?

Although he did not make a great breakthrough, he helped transform the way that neuroscience was done and the ideas and approaches it used. From the outset – a 1979 article in a special issue of Scientific American devoted to the brain – he focused attention on one particular aspect of brain function (he chose visual perception), the importance of theoretical approaches rooted in neuroanatomy, the need for detailed maps of brain areas, and the promise of computational approaches to neural networks. All these things shaped subsequent developments – in particular the work on neural networks, in which he played a fundamental part, and which gave rise to today’s Large Language Models (he worked with both Geoffrey Hinton and John Hopfield, who shared the 2024 Nobel Prize in Physics for their work on this in the 1980s). And, of course, he made the study of consciousness scientifically respectable, taking it out of the hands of the philosophers who had been tinkering with the problem for three thousand years and hadn’t got anywhere. Later, in a perspective article he published on the last day of the old millennium, he reviewed recent developments in molecular biology and predicted that three techniques would become useful: classifying neurons not by their morphology but by the genes expressed in them, using genetic markers from the human genome to study the brains of primates (the main experimental system he advocated using), and controlling the activity of neurons with light by using genetic constructs. All three techniques – now called RNAseq, transcriptional mapping, and optogenetics – are used every day in neuroscience labs around the world. Indeed, within a few months of the article appearing, Crick received a letter from a young Austrian researcher, Gero Miesenböck, telling him that his lab was working on optogenetics and the results looked promising.
During his lifetime, Crick’s decisive leadership role was well known to neuroscientists; now it has largely been forgotten, unfortunately.


Is there anything a young scientist could learn from Crick’s own methods that would be helpful, or was he a one-off whose way of working cannot be imitated?

I think the key issue is not so much Crick as the times in which he worked. As he repeatedly acknowledged, he was amazingly lucky. From 1954 to 1977 he worked for the Medical Research Council in the UK. He did no teaching, no grading, and was not involved in doctoral supervision (I’m not even clear how many PhD students he technically supervised – 4? 3? 5? – which highlights that even if he had his name on a bit of paper, he had little to do with any of them). Apart from a couple of periods, he had no administrative duties, and held only one major leadership post, at the Salk, which nearly killed him. He wrote one major grant application at the Salk (the only one he ever wrote), but basically he was funded sufficiently well to simply get on with things. And what did he do? ‘I read and think,’ he said. Try getting that past a recruitment or promotions panel today! In a way, the onus for creating more Cricks does not lie with young researchers but with established scientists – they need to allow young people the time to ‘read and think’, and to value failure. Most ideas will turn out to be wrong; that’s OK. Or at least, it was to Crick. Many senior researchers (and funders) don’t see things that way. However, even without such changes, young scientists can adopt some of Crick’s habits. Here’s my attempt to sum up what I think were the lessons of his life and work:

  • Read widely and avidly, even engaging with ideas that might seem eccentric or pointless, as ‘there might be something in it’ (one of his favourite phrases).
  • Talk through your ideas with your peers – try to find the weak spots in each other’s arguments.
  • At least in the initial stages of research, don’t get bogged down in the details that might counter your interpretation/theory – Crick and Brenner called this the ‘don’t worry’ approach. They figured that unconnected contrary data points might not undermine their ideas, and would eventually turn out to have specific, varied explanations.
  • Write down your ideas in the form of memos or short documents (keep them short). Writing helps you clarify your ideas and shapes your mind – do not use AI to do this! You can then share your writing with peers as a target for discussion and debate.
  • Master the art of clear writing. Avoid jargon, keep your ideas straightforward. Again, the only way to develop this skill is to write – badly at first. So rewrite, edit, recast your writing – it will improve your thinking.
  • Above all, make sure that the science you do is *fun*. That was a word that Crick repeatedly used, and he genuinely got great pleasure from doing science and thinking about it. Seek out an area in which you can have fun and aren’t bogged down by drudgery.

Click below to get the book on Amazon:

A book recommendation: Ian McEwan’s “What We Can Know”

November 26, 2025 • 11:00 am

I decided when I read the NYT review of Ian McEwan’s latest (and 18th) novel, What We Can Know, that I had to read the book.  (Click the screenshots to read the review if you have NYT access, or find the review archived here.)  I quote some of the encomiums from the review:

Ian McEwan’s new novel, “What We Can Know,” is brash and busy — it comes at you like a bowling ball headed for a twisting strike. It’s a piece of late-career showmanship (McEwan is 77) from an old master. It gave me so much pleasure I sometimes felt like laughing.

McEwan has put his thumb on the scale. This is melodramatic, storm-tossed stuff. There is murder, a near kidnapping, a child hideously dead of neglect, multiple revenge plots, buried treasure and literary arson. Writers treat other writers’ manuscripts and reputations the way Sherman treated Georgia. No one is a moral paragon.

. . . I’m hesitant to call “What We Can Know” a masterpiece. But at its best it’s gorgeous and awful, the way the lurid sunsets must have seemed after Krakatau, while also being funny and alive. It’s the best thing McEwan has written in ages. It’s a sophisticated entertainment of a high order.

I had to get it via interlibrary loan, and since it’s new it took some time. But I did get it, and read the 300-page book in a week. And yes, it’s excellent.


I’m a fan of McEwan, and especially like his novels Atonement (made into a terrific movie) and the Booker-winning Amsterdam. This one also does not disappoint. The NYT gives a plot summary, but I’ll just say that it’s a novel about a poem, and the action takes place in two years more than a century apart: 2014 and 2119. A well-known British poet named Francis laboriously pens a “corona” poem for his wife Vivien on her 53rd birthday. It would be hard to write a normal corona, much less one that, like this one, is said to be a masterpiece. Here’s how Wikipedia describes the form:

A crown of sonnets or sonnet corona is a sequence of sonnets, usually addressed to one person, and/or concerned with a single theme. Each of the sonnets explores one aspect of the theme, and is linked to the preceding and succeeding sonnets by repeating the final line of the preceding sonnet as its first line. The first line of the first sonnet is repeated as the final line of the final sonnet, thereby bringing the sequence to a close.

Imagine how hard that would be to write, as the first lines also have to form a stand-alone, properly rhyming sonnet when put in sequence at the end! To see an example, go here, though that corona comprises only 12 rather than 14 sonnets. At any rate, Francis’s poem gains a national reputation even though Francis won’t let it be reproduced or published; it is read aloud on Vivien’s birthday to a dozen guests and then given to her, handwritten on vellum. But only Vivien ever sees the written text.

Over a hundred years later, with the world devastated by nuclear exchanges, global warming, and skirmishes, a scholar named Thomas Metcalfe, specializing in poetry of the early 2000s, decides to track down the corona to see why it was so renowned despite being unpublished (a nostalgia for the past pervades the 22nd century). As he searches for the work, the story flips back and forth between the 21st and 22nd centuries, giving us two casts of characters, both of which engage in adultery and, in the earlier century, crime.  These intrigues determine the fate of the poem, but I won’t give away the ending. The novel starts a bit slowly, but builds momentum to a roller-coaster finish.  And yes, it’s the best novel of McEwan’s I’ve read since Atonement.

This one I recommend highly. I keep hoping that McEwan, like Kazuo Ishiguro, will win a Nobel Prize, for he’s pretty close to that caliber. (I tend to lump the two authors together for some reason.) Do read it if you like good fiction, and even more so if you like dystopian fiction. Two thumbs up!

By the way, the novel makes constant references to things going on in 2014: cellphones, social media, and people prominent today. I was surprised to find on p. 282 (near the end) a reference to Steve Pinker. In the earlier century, the pompous poet Francis and his wife invite a couple over to dinner, and the man, named Chris, who is relatively uneducated, uses the word “hopefully” in a sentence to mean “I hope.” That was (and is to me) a faux pas, and Francis rebukes the speaker at the dinner table, saying that he doesn’t want to hear that word in his house again. (What a twit!) But at a later dinner Chris, rebuked again for the same word, takes Francis apart, showing how he used the word properly; in addition, a bloke named Pinker said it was okay (I presume this is in Pinker’s book The Sense of Style). Here’s the passage on p. 282, in which Chris explains how he discovered that it’s okay to say “hopefully”:

“I don’t know a thing. First time Francis jumped down my throat, I look on Harriet’s shelves. She pointed me towards Burchfield’s Fowler and a bloke called Pinker. Seems like some ignorant snob years back picked on hopefully, and a mob of so-called educated speakers got intimidated and joined in and scared each other into never using the word and crapping on anyone who did. Pathetic!”

Below is the book with a link to the publisher. Read it. And, of course, my reviews hopefully will prompt readers to tender their own recommendations. If you have such a book, please name it and tell us why you liked it in the comments below.