More Sophisticated Theology: a religious scholar ponders whether Neanderthals had immortal souls

August 10, 2023 • 8:30 am

Lest you think that Sophisticated Theology™ has fallen on hard times, here we have an article pondering at great and tedious length the immensely important question, “Did Christ die for Neanderthals?” That can be rephrased, according to author Simon Francis Gaine, as “Did the Neanderthals have immortal souls?” (The “OP” after his name stands for Ordinis Praedicatorum, “of the Order of Preachers”: the Dominican order of Catholicism.)

And he gets paid to write stuff like this; his biography gives his bona fides, including a degree from Oggsford:

Fr Simon is currently assigned to the Angelicum, Rome, where he teaches in the Theology Faculty of the Pontifical University of St Thomas. He lectures on the Theology of Grace and Christian Anthropology, and oversees the Faculty’s Doctoral Seminar.

Fr Simon holds the Pinckaers Chair in Theological Anthropology and Ethics in the Angelicum Thomistic Institute, of which he is also the Director. He is a member of the Advisory Board of Blackfriars’s Aquinas Institute, the Pontifical Academy of St Thomas, Rome, and the Vatican’s International Theological Commission.

He studied theology at Oxford, and completed his doctorate in modern Catholic theology before joining the Dominican Order in 1995.

Click on the screenshot for a paradigmatic example of Sophisticated Theology™. The paper appeared in 2020 in New Blackfriars, a Wiley journal that’s apparently peer reviewed.

Here’s the Big Question:

 I have no expertise in any of these sciences, but have tried as best I can to understand what they have to say, in order to take account of what they have to say within a theological framework. Today I am going to look at the Neanderthals and their relationship to us from a theological perspective in the Catholic tradition, asking what a disciple of St Thomas Aquinas should make of them. Are they to be counted among the humanity God created in his image and likeness and which fell into sin, or are they to be counted instead among the other animal species of our world represented in the first chapter of Genesis? Or are they something else? While creation itself is to be renewed through Christ at the last, according to Christian faith Christ is said to die for our trespasses, for our sins. So did Christ die for Neanderthals?

This comes down to the question, says Gaine, of whether Neanderthals had immortal souls, so we have to look for evidence of that. If they did, then they could be saved by Jesus, though since the Neanderthals’ demise antedated the appearance of Jesus by about 40,000 years, their souls must have lingered somewhere like Purgatory (along with the souls of Aztecs and other pre-Christian believers) for millennia. Gaine does not take up the question of whether other hominins, like H. erectus or H. floresiensis, much less the Denisovans, also had souls.

Since we have no idea whether Neanderthals had immortal souls (indeed, we can’t be sure that anybody else has an immortal soul since, like consciousness, it can’t be observed directly), we have to look for proxies for souls. The question is complicated by the fact that Neanderthals interbred with “modern” Homo sapiens, so that most of us carry a few percent of Neanderthal genes in our genome.

To answer his question of whether Neanderthals are “theologically human” (i.e., whether they had immortal souls), Gaine turns to his hero Aquinas:

So were Neanderthals theologically human or not? I think the only way we can approach this question is to ask whether or not Neanderthals had immortal souls, as we do. But, apart from Christian teaching, how do we know that we even have such souls? We cannot just have a look at our immaterial souls, and Aquinas thought that we only know the character of our souls through what we do. Aquinas argues from the fact that we make intellectual acts of knowledge of things abstracted from their material conditions, to the immateriality of the intellectual soul. Our knowledge is not just of particulars but is universal, enabling pursuits like philosophy and science, and the potential to be elevated by God to supernatural knowledge and love of him. If human knowing were more limited to a material process, Aquinas does not think our souls would be such subsistent, immaterial souls. Finding evidence of intellectual flights throughout the history of sapiens is difficult enough, however, let alone in Neanderthals.

. . .  What we need to look for in the case of Neanderthals is evidence of some behaviour that bears the mark of an intellectual soul such as we have.

And so an “intellectual soul” then becomes a proxy for the immortal soul, which is itself the proxy for whether you can be saved by Christ. Did Neanderthals have these? Gaine uses several lines of evidence to suggest that they did.

  • Neanderthals buried their dead (religion!)
  • Language. We don’t know if Neanderthals could speak, but they had a vocal apparatus similar to that of modern H. sapiens. Gaine concludes that they had language, though of course that’s pure speculation. But since when have Sophisticated Theologians™ balked at unsupported speculation?
  • Neanderthals made cave paintings and may have adorned themselves with feathers and jewelry: signs of a “material culture” similar to H. sapiens.

And so he concludes, without saying so explicitly, that Neanderthals had immortal souls and were save-able by Christ. This supposedly allows us to use science to expand theology:

How though does any of this make a difference to theology in the tradition of Aquinas? If Neanderthals were created in God’s image and saved by Christ, this must expand our understanding of Christ’s ark of salvation and raise questions about how his saving grace was made available to them. Because the Church teaches that God offers salvation through Christ to every person in some way, theologians have often asked in recent times how this offer is made to those who have not heard the Gospel, members of other religions, and even atheists. It seems to me that, just as modern science has enlarged our sense of the physical universe, the inclusion of Neanderthals in theological humanity must somehow expand our sense of human salvation, given that it was effected in the kind of life Neanderthals lived.

. . . But even if Neanderthal inclusion does not pay immediate theological dividends, at least for apologetic reasons it seems necessary for theology to take account of their discovery. Unless theologians do, they risk the appearance of leaving faith and science in separately sealed worlds, as though our faith cannot cope with advancing human knowledge, leaving it culturally marooned and seemingly irrelevant to many. That is exactly the opposite of the attitude of Aquinas, who, confident that all truth comes from God, in his own day confirmed Christian wisdom by integrating into it what he knew of human science.

But why stop at Neanderthals when you’re “expanding your faith through science”? There are lots of other hominins that must be considered (see below). Can we rule most of these out because they might not have had language?

From the Encyclopedia Britannica

And what about other mammals? In 2015 the great Sophisticated Catholic Theologians™ Edward Feser and David Bentley Hart argued about whether dogs can go to Heaven. (Hart said “yes,” while Feser said “no”, both of them furiously quoting Church Fathers like Aquinas to support their positions.)

These are tough questions, and of course to answer them theologians have to confect arguments based on casuistry. What amazes me is that people get paid to corrupt science with such ridiculous theological questions. It is unsupported speculation about unevidenced empirical assertions.

h/t: David

The biology of quitting: when you should hold ’em and when you should fold ’em

April 20, 2023 • 12:30 pm

Someone called this Big Think piece to my attention because some quotes from me are in it. And they are, but that’s not the important part, which is the evolutionary biology of giving up, and I guess I’m the Expert Evolutionist in this take. The piece is by Julia Keller, a prolific author and journalist who won a Pulitzer Prize for feature writing in 2004, and this is an excerpt from her new book Quitting: A Life Strategy: The Myth of Perseverance and How the New Science of Giving Up Can Set You Free, which came out April 18.

Although I had some association with Julia when she wrote for the Chicago Tribune (I think she helped me get a free-speech op-ed published), I don’t remember even speaking to her on this topic, but it must have been quite a while back. At any rate, I certainly want to be set free from my maladaptive compulsions, which include persisting when I should give up, so I’ll be reading her book.

Click on the screenshot to read:

The science involved is largely evolutionary: it pays you to give up when you leave more offspring by quitting than by persisting. Or to couch it more accurately, genes that enable you to assess a situation (consciously or not) and give up at the right point—right before the relative reproductive gain from persisting turns into a relative loss compared to other gene forms affecting quitting—will come to dominate over the “nevertheless she persisted” genes.  Keller engages the reader by drawing at the outset a comparison between Simone Biles halting her gymnastics performance at the 2021 Tokyo Olympics and a honeybee deciding whether or not to sting a potential predator of the nest.

If the bee does sting, she invariably dies (her innards are ripped out with the sting), and can no longer protect the nest. But if that suicidal act drives away a potential predator, copies of the “sting now” gene are saved in all the other workers in the nest, who are her half sisters. (And of course they’re saved in her mother—the queen, the only female who can pass on her genes.) If a worker doesn’t sting, every copy of that gene might be lost if the nest is destroyed, for if the nest goes, so goes the queen, and every gene is lost.  On the other hand, a potential predator might not actually prey on a nest, so why give up your life if it has no result? You have to know when stinging is liable to pay off and when it isn’t.

Inexorably, natural selection will preserve genes that succeed in this reproductive calculus by promoting stinging at the right time and place—or, on the other hand, refraining from stinging if it’s liable to have no effect on colony (ergo queen) survival.  And in fact, as you see below, while honeybees surely don’t do this calculus consciously, they behave as if they do, and they do it correctly.  Often natural selection favors animals making “decisions” that cannot be conscious, but have been molded by selection to look as if they were conscious.
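To make that calculus concrete, here’s a minimal sketch (in Python) of the kind of inclusive-fitness bookkeeping involved. All the numbers are invented for illustration (none come from Schmidt’s paper), and the relatedness of 0.75 assumes a singly mated queen; real honeybee queens mate with many drones, which lowers it:

```python
# Purely illustrative numbers, not data from Schmidt's paper. A worker is
# effectively sterile, so both sides of the ledger are indirect fitness:
# gene copies saved in relatives vs. the future help a dead worker can't give.

def net_gain_from_stinging(p_real_threat, p_sting_repels, offspring_saved,
                           relatedness, future_help_if_alive):
    """Expected change in copies of a 'sting now' allele if the worker stings."""
    benefit = p_real_threat * p_sting_repels * offspring_saved * relatedness
    cost = relatedness * future_help_if_alive   # help forgone because she dies
    return benefit - cost

# Predator at the hive entrance, queen still laying: stinging pays.
print(net_gain_from_stinging(0.9, 0.5, 500, 0.75, 20))    # positive -> sting

# Unlikely threat far from a depleted colony: standing down ("quitting") pays.
print(net_gain_from_stinging(0.05, 0.5, 50, 0.75, 20))    # negative -> don't sting
```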

As for Simone Biles, well, you can read about her. Her decision was clearly a conscious one, but also bred in us by selection—selection to avoid damaging our bodies, which of course can severely limit our chance to pass on our genes. This is why we usually flee danger when there is nothing to gain by meeting it. (She did have something to gain—gold medals—which is why she’s like the bees.)

Why do young men race their cars on the street, a dangerous practice? What do they have to gain? Well, risk-taking is particularly prevalent in postpubescent males compared to females, and I bet you can guess why.

I’ll first be a bit self-aggrandizing and show how I’m quoted on evolution, and then get to the very cool bee story. It’s a short piece, and you might think of other “quitting vs. not quitting” behaviors of animals that could have evolved. (Hint: one involves cat domestication.)

“Perseverance, in a biological sense, doesn’t make sense unless it’s working.”

That’s Jerry Coyne, emeritus professor at the University of Chicago, one of the top evolutionary biologists of his generation. [JAC: a BIT overstated, but I appreciate it.] I’ve called Coyne to ask him about animals and quitting. I want to know why human beings tend to adhere to the Gospel of Grit—while other creatures on this magnificently diverse earth of ours follow a different strategy. Their lives are marked by purposeful halts, fortuitous side steps, canny retreats, nick‑of‑time recalculations, wily workarounds, and deliberate do‑overs, not to mention loops, pivots, and complete reversals.

Other animals, that is, quit on a regular basis. And they don’t obsess about it, either.

In the wild, Coyne points out, perseverance has no special status. Animals do what they do because it furthers their agenda: to last long enough to reproduce, ensuring the continuation of their genetic material.

We’re animals, too, of course. And despite all the complex wonders that human beings have created—from Audis to algebra, from hot-fudge sundaes to haiku, from suspension bridges to Bridgerton—at bottom our instincts are always goading us toward the same basic, no‑nonsense goal: to stick around so that we can pass along little copies of ourselves. [JAC: note how this is an individual-centric view rather than the correct gene-centric one, but it’s good enough.] It’s axiomatic: the best way to survive is to give up on whatever’s not contributing to survival. To waste as few resources as possible on the ineffective. “Human behavior has been molded to help us obtain a favorable outcome,” Coyne tells me. We go for what works. We’re biased toward results. Yet somewhere between the impulse to follow what strikes us as the most promising path—which means quitting an unpromising path—and the simple act of giving up, something often gets in the way. And that’s the mystery that intrigues me: When quitting is the right thing to do, why don’t we always do it?

Well, who ever said that every aspect of human behavior was molded by natural selection? Please don’t think that I was implying that it was, as we have a cultural veneer on top of the behaviors conditioned by our genes. In this piece Keller doesn’t get to the subject of why we don’t quit when we should. I’m sure that’s in the book.

Now the very cool bee story:

Justin O. Schmidt is a renowned entomologist and author of The Sting of the Wild, a nifty book about a nasty thing: stinging insects. Living creatures, he tells me, echoing Coyne, have two goals, and those goals are rock-bottom rudimentary: “To eat and not be eaten.” If something’s not working, an animal stops doing it—and with a notable absence of fuss or excuse-making. . . .

. . . For a honeybee, the drive to survive carries within it the commitment to make sure there will be more honeybees. And so she defends her colony with reckless abandon. When a honeybee stings a potential predator, she dies, because the sting eviscerates her. (Only the females sting.) Given those odds—a 100 percent mortality rate after stinging—what honeybee in her right mind would make the decision to sting if it didn’t bring some benefit?

That’s why, Schmidt explains to me from his lab in Tucson, sometimes she stands down. When a creature that may pose a threat approaches the colony, the honeybee might very well not sting. She chooses, in effect, to quit—to not take the next step and rush forward to defend the nest, at the cost of her life.

His experiments, the results of which he published in 2020 in Insectes Sociaux, an international scientific journal focusing on social insects such as bees, ants, and wasps, reveal that honeybees make a calculation on the fly, as it were. They decide if a predator is close enough to the colony to be a legitimate threat and, further, if the colony has enough reproductive potential at that point to warrant her ultimate sacrifice. If the moment meets those criteria—genuine peril (check), fertile colony (check)—the honeybees are fierce fighters, happy to perish for the greater good.

But if not… well, no. They don’t engage. “Bees must make life‑or‑death decisions based on risk-benefit evaluations,” Schmidt tells me. Like a gymnast facing a dizzyingly difficult maneuver that could prove to be lethal, they weigh the danger of their next move against what’s at stake, measuring the imminent peril against the chances of success and the potential reward. They calculate odds.

And if the ratio doesn’t make sense, they quit.

That’s a bit oversimplified, for the calculus is not only unconscious (I doubt bees can weigh threats this way), but has also been molded by competition over evolutionary time between different forms of genes with different propensities to sting or to give up. Further, individual worker bees are sterile, and so what’s at stake is the number of gene copies in the nest as a whole—and especially in the queen. The asymmetrical relatedness between the queen, her workers, and their useless drone brothers (produced by unfertilized eggs) makes the calculus especially complicated.
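For readers who want the numbers behind that asymmetry, here is a back-of-the-envelope sketch of the standard haplodiploid relatedness coefficients. These are textbook values that assume a queen mated to a single drone; because real honeybee queens mate multiply, average worker-to-worker relatedness is actually lower, which is part of what makes the real calculus so messy:

```python
# Relatedness under haplodiploidy (textbook values, single-drone queen assumed).

# A worker gets her father's entire haploid genome plus a random half of her
# mother's. Two full-sister workers therefore share all paternal genes and,
# on average, half of their maternal halves:
r_sister = 1.0 * 0.5 + 0.5 * 0.5   # = 0.75
# A worker shares half of her genome with her diploid mother, the queen:
r_mother = 0.5
# A drone brother develops from an unfertilized egg and carries only maternal
# genes, so a worker shares on average a quarter of her genome with him:
r_brother = 0.25

print(f"worker -> sister: {r_sister}, -> queen: {r_mother}, -> brother: {r_brother}")
```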

On the other hand, explaining the gene calculus to lay readers is hard, and it might be better to read the seminal work on how this all operates: Dawkins’s The Selfish Gene. 

Here’s Schmidt’s short paper (click to read; if it’s paywalled, ask for a copy). He died just this February.

Svante Pääbo nabs Medicine and Physiology Nobel

October 3, 2022 • 7:30 am

I had totally forgotten that it’s Nobel Prize season, and the first one, the Medicine or Physiology Prize, was awarded today—to the human evolutionary geneticist Svante Pääbo, a Swede. The reader who sent me the news had these immediate reactions:

  • Highly unusual that there is a single winner nowadays
  • How often has the prize gone to an evolutionary scientist (of any shape or form) ?
  • Probably being Swedish helped a bit!

Yes, the last “solo” prize was given in this field in 2016 to Yoshinori Ohsumi for his work on lysosomes and autophagy. As for the evolutionary biology, I’m not aware of anybody working largely on evolution who has won a Nobel Prize. The geneticist Thomas Hunt Morgan won one, but it was his students who became evolutionary geneticists.  (I also remember that when I entered grad school, my Ph.D. advisor Dick Lewontin was helping prepare a joint Nobel Prize nomination for Theodosius Dobzhansky and Sewall Wright, but Dobzhansky died in 1975 before it could be submitted, and posthumous Prizes aren’t given.)

Of course, Pääbo has worked on the evolution of the genus Homo, and a human orientation helps with the Prize, but his substantial contributions fully qualify him for the Big Gold Medal.  As for his being Swedish, I don’t know if there’s some national nepotism in awarding prizes, but again, Pääbo’s work is iconic and, no matter what his nationality, he deserves one. And of course I’m chuffed that an evolutionary geneticist—one of my own tribe—won the Big One.

Click on the Nobel Committee’s press release or the NYT article below to read about Pääbo or go to his Wikipedia page.

NYT:

Pääbo is the leader of a large team, and has had many collaborators, but it’s clear that, if fewer than four people were to get the prize for work on human evolution, Pääbo would stand out as the main motive force, ergo his solo award.  Sequencing the Neanderthal genome and estimating the time of divergence from “modern” H. sapiens (about 800,000 years)? That was Pääbo and his team. Finding the Denisovans, a separately-evolved group from Neanderthals? Pääbo and his team.  Discovering that both of these groups interbred with our own ancestors, and we still carry an aliquot of their genes? Pääbo and his team. Learning that some of the introgressed genes from Denisovans have conferred high-altitude adaptations to Tibetans? Pääbo and his team. And that some Neanderthal genes confer modern resistance to infections? Pääbo and his team.

The man can truly be seen as the father of human paleogenetics—and he’s five years younger than I? Oy!

Although born in Sweden, Pääbo works mostly in Germany. Here’s his bio from the Nobel Prize Committee:

Svante Pääbo was born 1955 in Stockholm, Sweden. He defended his PhD thesis in 1986 at Uppsala University and was a postdoctoral fellow at University of Zürich, Switzerland and later at University of California, Berkeley, USA. He became Professor at the University of Munich, Germany in 1990. In 1999 he founded the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany where he is still active. He also holds a position as adjunct Professor at Okinawa Institute of Science and Technology, Japan.

A prize for work in evolutionary genetics! Well done, Dr. Pääbo!

Svante Pääbo

And a bit of biography from the NYT article:

Dr. Pääbo has a bit of Nobel Prize history in his own family: In a 2014 memoir, “Neanderthal Man,” he wrote that he was “the secret extramarital son of Sune Bergstrom, a well-known biochemist who had shared the Nobel Prize in 1982.”

It took some three decades of research for Dr. Pääbo to describe the Neanderthal genome that won him his own prize. He first went looking for DNA in mummies and older animals, like extinct cave bears and ground sloths, before he turned his attention to ancient humans.

“I longed to bring a new rigor to the study of human history by investigating DNA sequence variation in ancient humans,” he wrote in the memoir.

It would be no easy feat. Ancient genetic material was so degraded and difficult to untangle that the science writer Elizabeth Kolbert, in her book “The Sixth Extinction,” likened the process to reassembling a “Manhattan telephone book from pages that have been put through a shredder, mixed with yesterday’s trash, and left to rot in a landfill.”

Lactase persistence in populations that drink milk: a classic story of human evolution re-evaluated

July 29, 2022 • 9:15 am

The classic tale of “gene-culture coevolution” in humans—the notion that cultural changes in behavior changed the selection pressures that impinged on us—is the evolution of “lactase persistence” (LP) over the past four thousand years.  LP is a trait that allows you to consume, as an adult, lots of milk or dairy products without suffering the side effects of indigestion, flatulence, or diarrhea.

Young children are able to tolerate milk while nursing, of course, but after weaning many of them no longer tolerate milk—they are lactose intolerant (LI). The ability to digest lactose goes away after weaning because the gene producing the necessary enzyme gets turned off.

The gain of LP, which enables you to drink milk and eat dairy products into adulthood without ill effect, rests on single mutations in the control region of the gene producing lactase, an enzyme that breaks down the milk sugar lactose.  These mutations have arisen independently several times, but only after humans began “pastoral” activities: drinking milk from domesticated sheep, goats, and cows. And the mutations act to keep lactase turned on even after weaning. (Why humans turn off the gene after weaning isn’t known, but it presumably involves the metabolic cost of producing an enzyme that wasn’t used in our ancestors, who didn’t drink milk after weaning until about 10,000 years ago—when farming and animal domestication began.)

Based on analysis of fossil DNA, the LP mutations began spreading through Europe (starting from what is now Turkey) about 4000 years ago. And so the classic story—one that I taught my evolution classes—is that humans began drinking milk from captive herds, and that gave an advantage to retaining the ability to digest milk even after weaning. Ergo, natural selection for the nutritional benefits of milk led to the spread of LP mutations, as their carriers may have had better health (ergo more offspring) than individuals who turned off the enzyme at weaning.

This leads to the “coevolution” that is the classic evolutionary tale: a change in human behavior (raising animals for milk) led to selection for the persistence of the milk-digesting enzyme, and thus to genetic evolution. The “coevolution” part is the speculation that being able to digest milk without side effects would cause humans to raise even more dairy animals and drink even more milk, intensifying the selection for LP, and so the gene for LP would keep increasing in frequency.

A new paper in Nature, which is being touted all over social media, argues against this classic story, suggesting that it’s more complex than previously envisioned.  Although the new results are touted as overturning the earlier story, they really don’t overturn it. There is still human genetic evolution promoted by a change in culture, and there’s still a reproductive advantage in drinking milk.

The new part of the story is simply that that reproductive advantage comes not constantly (as previously envisioned), but only during times of famine and disease, when those who couldn’t digest lactose were at a severe disadvantage because the diarrhea caused by lactose intolerance would contribute to the death of diseased or malnourished individuals. This is a twist on the main story, but doesn’t overturn it completely. There’s still the connection between culture and human evolution, and there’s still a reproductive advantage to LP that leads to natural selection and genetic evolution of our species.  What’s different is how and when the selection acts (see “the upshot” at the bottom).

Click the title screenshot below to read, or you can download the pdf here. The full reference is at the bottom, and Nature deemed this worthy of two News and Views pieces in the same issue (here and here).

First, the authors show the spread of dairy use in the figure below (the redder the color, the more milk usage over time in Eurasia). This was estimated by looking at the frequency of potsherds that had milk residue (click to enlarge). By 1500 BC, milk use was widespread.

Caption (from Nature): Interpolated time slices of the frequency of dairy fat residues in potsherds (colour hue) and confidence in the estimate (colour saturation) using two-dimensional kernel density estimation. Bandwidth and saturation parameters were optimized using cross-validation. Circles indicate the observed frequencies at site-phase locations. The broad southeast to northeast cline of colour saturation at the beginning of the Neolithic period illustrates a sampling bias towards earliest evidence of milk use. Substantial heterogeneity in milk exploitation is evident across mainland Europe. By contrast, the British Isles and western France maintain a gradual decline across 7,000 years after first evidence of milk about 5500 BC. Note that interpolation can colour some areas (particularly islands) for which no data are present.

One reason the authors doubt the classical story is that while dairying and milk-drinking by adults began about 10,000 years ago, the gene for LP (determined from sequencing “fossil DNA”) didn’t spread widely until about 4,000 years ago.  Why is that? The mutation for LP is dominant, which means it could have spread widely very quickly, as even carriers of one copy would have a reproductive advantage. This temporal disparity is what led the authors to propose their alternative hypotheses for the spread of the LP alleles (there are several).
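To see why dominance matters for speed, here’s a minimal deterministic sketch of one-locus selection on a dominant allele. The starting frequency and the selection coefficient are arbitrary assumptions for illustration, not estimates from the Nature paper; the point is only that, at 25–30 years per generation, 4,000 years is on the order of 150 generations, plenty of time for a dominant, advantageous allele to go from rare to common:

```python
# Deterministic single-locus selection; the LP allele is dominant with an
# arbitrary illustrative advantage s (not a value estimated in the paper).

def next_freq(p, s):
    """One generation of selection on a dominant advantageous allele."""
    q = 1.0 - p
    w_dom = 1.0 + s                                 # fitness of LP/LP and LP/lp carriers
    mean_w = (p * p + 2 * p * q) * w_dom + q * q * 1.0
    return p * w_dom / mean_w                       # new frequency of the LP allele

p, s = 0.01, 0.05                                   # rare allele, 5% advantage (assumed)
for gen in range(301):
    if gen % 50 == 0:
        print(f"generation {gen:3d}: LP frequency = {p:.3f}")
    p = next_freq(p, s)
```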

Further, when the authors tried to correlate the frequencies of the LP allele with the frequency of milk use (the classical explanation), they found no correlation—that pattern was indistinguishable from a general rise in frequency over Europe regardless of milk use.

One other set of data led to the new hypothesis. That is the observation that LI people in both Britain and China can still drink lots of milk without suffering any measurable health or reproductive effects (milk drinking has recently proliferated in China).  Of course, things are different now from 4000 years ago, but one of the differences led to the authors’ two hypotheses: the spread of the LP allele was promoted especially strongly in prehistoric times by the prevalence of famine and of disease—with the latter coming often from animals, either domesticated or those that hang around settlements. (As the authors note: “about 61% of known and about 75% of emerging human infectious disease today come from animals”).

So the authors erected two hypotheses, the crisis mechanism and the chronic mechanism. I’ll let them describe the hypotheses that they tested (my emphases):

Given the widespread prehistoric exploitation of milk shown here and its relatively benign effects in healthy LNP individuals today, we propose two related mechanisms for the evolution of LP. First, as postulated in ref. 24, the detrimental health consequences of high-lactose food consumption by LNP individuals would be acutely manifested during famines, leading to high but episodic selection favouring LP. This is because lactose-induced diarrhoea can shift from an inconvenient to a fatal condition in severely malnourished individuals and high-lactose (unfermented) milk products are more likely to be consumed when other food sources have been exhausted. This we name the ‘crisis mechanism’, which predicts that LP selection pressures would have been greater during times of subsistence instability. A second mechanism relates to the increased pathogen loads—especially zoonoses—associated with farming and increased population density and mobility. Mortality and morbidity due to pathogen exposure would have been amplified by the otherwise minor health effects of LNP in individuals consuming milk—particularly diarrhoea—due to fluid loss and other gut disturbances, leading to enhanced selection for LP. We name this the ‘chronic mechanism’, which predicts that LP selection pressures would have increased with greater pathogen exposure.

In other words, the reproductive advantage of having the LP allele came from the reproductive disadvantage (through death) of lactose-intolerant people during times of famine and disease.

They tested the two hypotheses by correlating the LP allele-frequency trajectory with indices of famine and of disease deduced from archaeological and paleontological evidence:

Crisis mechanism: “Subsistence instability”, or famine, was assessed by prehistoric fluctuations in population size, which, the authors say, are correlated with the likelihood of famine (they provide no evidence for the latter supposition). But a model using this index gives a significantly better fit to the pattern of LP allele frequency than just assuming uniform selection over time and space.

Chronic mechanism:  The authors hypothesized that the frequency of disease would correlate with the likelihood of “zoonoses” (diseases caught from animals), which itself would correlate with temporal variation in settlement densities.  These data, which to me would be correlated with “prehistoric fluctuations in population size” above, also explained LP allele frequencies better than an assumption of uniform selection.

Of course, there’s no reason (and the authors say this) that both mechanisms couldn’t operate together. Curiously, though, indices of the density of domestic animals did not support the “chronic mechanism” though measurements of the proportion of wild animals around humans did.  This implies that, if the “chronic mechanism” is correct, people were getting sick not from their horses, dogs, cattle, or sheep, but from wild animals (perhaps from eating them).

Other hypotheses that the authors mention but didn’t test include “drinking milk as a relatively pathogen-free fluid”, allowing “earlier weaning and thus increased fertility.” I would add that if diseases are causal here, they could come not from being around animals but from drinking contaminated water, giving an advantage to those who prefer milk. But there’s no way of assessing that from the archaeological record.

The upshot: On the last page of the paper the authors say that they’ve debunked the prevailing narrative:

The prevailing narrative for the coevolution of dairying and LP has been a virtuous circle mechanism in which LP frequency increased through the nutritional benefits and avoidance of negative health costs of milk consumption, facilitating an increasing reliance on milk that further drove LP selection. Our findings suggest a different picture. Milk consumption did not gradually grow throughout the European Neolithic period from initially low levels but rather was widespread at the outset in an almost entirely LNP population. We show that the scale of prehistoric milk use does not help to explain European LP allele frequency trajectories and thus it also cannot account for selection intensities. Furthermore, we show that LP status has little impact on modern milk consumption, mortality or fecundity and milk consumption has little or no detrimental health impact on contemporary healthy LNP individuals.

Instead, they say that they find support for the increase of LP alleles through both famine and pathogen exposure.

Well, the data are the data, and their indices comport better with those data than does the classical hypothesis—the “prevailing narrative.” I’m still not convinced that their proxies for famine or disease are actually correlated with famine and disease themselves, but other researchers will undoubtedly dig into that.

What I want to emphasize is that if the work of Evershed et al. is accurate, it still does not overturn the story of gene-culture “coevolution”.  The “coevolution” is still there, the fact that a change in human culture influenced our evolution is still there, and the fact that drinking milk conferred higher reproductive fitness is still there. What has changed is only the nature of selection. Granted, that’s a significant expansion in understanding the story, but to listen to the media—social or otherwise—you’d think that the “classical narrative” is completely wrong. It isn’t. It’s still correct in the main, but the way selection acted may be different from what we used to think. The media love “evolution scenarios are wrong” tales, and that seems to be the case with at least some stuff I’ve seen in the news and on social media.

___________________________

Reference: Evershed, R.P., Davey Smith, G., Roffet-Salque, M. et al. 2022. Dairying, diseases and the evolution of lactase persistence in Europe. Nature. https://doi.org/10.1038/s41586-022-05010-7

Once again: are “races” social constructs without scientific or biological meaning?

July 19, 2022 • 9:20 am

Every day, it seems, I hear that “races have no biological reality or meaning; they are purely social constructs.” And that statement is somewhat misleading, for even the crudely designated races of “white, black, Hispanic, and East Asian” in the U.S. are, as today’s paper shows, biologically distinguishable to the point where if you look at the genes of an unknown person, you have a 99.86% chance of diagnosing their self-identified “race” as one of the four groups above. That is, if you ask a person how they self-identify as one of the four SIRE groups (SIRE: “self-identified race/ethnicity”), and then do a fairly extensive genetic analysis of each person, you find that the groups fall into multivariate clusters.

More important, there’s little deviation between one’s SIRE and which genetic cluster they fall into. Over 99% of people in the sample from this paper can be accurately diagnosed as to self-identified race or ethnicity by looking at just 326 regions of the genome.

This in turn means that there are biological differences between different SIREs, so race cannot be simply a “social construct.” This is in direct contradiction to the extreme woke view of “race”, as expressed in the Journal of the American Medical Association, a statement I discussed in an earlier post:

Race and ethnicity are social constructs, without scientific or biological meaning.

Nope, and we’ve known that statement is wrong for nearly 20 years. Of course, if you take “biological meaning” as “data show that there are a small number of sharply distinct groups with huge genetic differences”, then it is a correct statement. But nobody thinks that any more except for racists or those ignorant of modern population genetics in humans.

The meaning of the biological reality adduced in papers like the one we’re discussing today is this: genes can be used to diagnose biological ancestry, which is surely involved in one’s SIRE. And therefore “races” or “ethnicities” aren’t just made-up groups, but say something about the evolutionary origin of group members.

As I said, the “old concept” of races as a small number of genetic groups that differ strongly in their genes is dead. But there are still groups, and there are groups within groups, and groups within groups within groups. Thus genetic variation in our species is hierarchical, as expected if variation among groups evolved in geographically isolated populations, between which there was some but not complete mixing.

This view of human variation leads me to abandon the use of the word “race” in general and use “ethnicity” instead. I’ll use “race” in this article, though, as I’m addressing the JAMA statement above, and also using individuals’ own diagnosis of their own “race”.

I’ve emphasized this before—in August of last year. There I cited the 2002 paper of Rosenberg et al. reporting that “one can show by using data from many genes and gene sites, and clustering algorithms, that humanity can be shown to form genetic clusters that correspond to geography (different continents or subcontinents), which of course correspond to evolutionary history.” As I also said then,

. . . the paper of Rosenberg et al. . . . shows that the genetic endowment of human groups correlates significantly with their geographical location (for example, if you choose to partition human genetic variation into five groups (how many groups you choose is arbitrary), you get a pretty clear demarcation between people from Africa, from Europe, from East Asia, from Oceania, and from the Americas). (To show further grouping, if you choose six groups, the Kalash people of Asia pop up). This is one reason why companies like 23andMe stay in business.

This association of location with genetic clustering (and these geographic clusters do correspond to old “classical” notions of race) is not without scientific meaning, because the groupings represent the history of human migration and genetic isolation. That’s why these groups form in the first place. Now you can call these groups “ethnic groups” instead of “races”, or just “geographic groups” (frankly, you could call them almost anything, though, as I said, I avoid “race”), but they show something profound about human history. The statement in bold above could be used to dismiss that meaning, which is why I consider that statement misleading.

The Rosenberg et al. paper was published two decades ago, and since then we have become able to look at more genes (potentially the entire genome of individuals) and use bigger samples over smaller areas. When we do that, we’re able to see the clusters within clusters. Here’s a reference to a 2008 paper:

Even within Europe, a paper by Novembre et al. reported that, using half a million DNA sites, 50% of individuals could be placed within 310 km of their reported origin and 90% within 700 km of their origin. And that’s just within Europe (read the paper for more details). Again, this reflects a history of limited movement of Europeans between generations.

I wanted to delve a bit into the 2005 paper of Tang et al. (mentioned in my earlier post), because it concentrates on self-reported race or ethnicity, not geographic origin, but also looks at variation over geography. Click on the title below to read the paper (pdf here and reference at bottom).

Tang et al. got their data from a study of hypertension in which individuals gave blood and also indicated their self-identified race as one of the four groups mentioned above. Then, each of the 3,636 individuals (taken from 15 geographic locales in the U.S., and three from Taiwan) was analyzed for 326 “microsatellite” markers—short repeated segments of DNA. (These segments may not all be independent because of genetic linkage, but certainly a lot of them are independent. The authors don’t discuss this issue, which is relevant but not invalidating.)

Tang et al. then determined whether the microsatellite data fell into clusters, using multiple loci and the clustering algorithm “structure”—the method also used by Rosenberg et al. to show that ethnic variation was correlated with geography. Remember, the Tang et al. study took place mostly in American populations, with each SIRE sampled from several places. But the geographic sampling within the U.S. was limited (e.g., “Hispanics” came from only one place in Texas), and this is a potential problem.
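The “structure” program itself fits a Bayesian admixture model, but the basic idea (cluster multilocus genotypes without using the labels, then see whether the clusters match them) can be illustrated with a much cruder stand-in. Everything below, including the allele frequencies, the number of loci, and the use of scikit-learn’s KMeans rather than structure, is an assumption made for illustration, not Tang et al.’s actual pipeline:

```python
# A toy stand-in for the analysis described above: simulate multilocus
# genotypes for three populations, cluster them WITHOUT the labels, then
# check how well cluster membership matches the hidden population of origin.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_loci, n_per_pop, n_pops = 300, 200, 3

# Hypothetical allele frequencies: each population is shifted slightly from a
# shared baseline, so any single locus is only weakly informative.
base = rng.uniform(0.2, 0.8, n_loci)
freqs = [np.clip(base + rng.normal(0, 0.1, n_loci), 0.01, 0.99)
         for _ in range(n_pops)]

genotypes = np.vstack([rng.binomial(2, f, size=(n_per_pop, n_loci))
                       for f in freqs])                # 0/1/2 allele counts
labels = np.repeat(np.arange(n_pops), n_per_pop)       # kept aside, never given to KMeans

clusters = KMeans(n_clusters=n_pops, n_init=10, random_state=0).fit_predict(genotypes)

# Clusters are unlabeled, so score each one against its most common true label.
matches = sum((labels[clusters == k] == np.bincount(labels[clusters == k]).argmax()).sum()
              for k in range(n_pops))
print(f"cluster membership matches population of origin for {matches / len(labels):.1%}")
```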

Tang et al. did indeed find clustering using multivariate analysis: here are the clusters for all sites and SIRE combinations. Note that there are four clusters: one each for self-identified Caucasians from 6 populations (upper left), East Asians from 7 populations (middle right), African-Americans from 4 populations (lower left), and self-identified Hispanics from a single location (“K” from Starr County, Texas). Clearly we need more data from self-identified Hispanics from other areas, especially because “Hispanic” can denote many diverse ancestries.

The clusters are pretty distinct. Not only are they distinct, but they match almost perfectly an individual’s self-identified race or ethnicity. As the authors note:

Of 3,636 subjects of varying race/ethnicity, only 5 (0.14%) showed genetic cluster membership different from their self-identified race/ethnicity. On the other hand, we detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population.

As I said earlier, “there is almost perfect correspondence between what “race” (or ethnic group) Americans consider themselves to be and the genetic groups discerned by cluster algorithms.” Because these are Americans, who move around more, the genetics reflect ancestry more closely than geography, though, as Novembre et al. found, in Europe geographic origin is also important. Americans move around more than Europeans do!

In other words, individuals within a cluster are more geographically dispersed than what Novembre et al. found, so that membership in a cluster indicates ancient ancestry, not geographic origin. For example, members of the “East Asian” cluster come from Taiwan, Hawaii, and Stanford.

But to show that there are clusters within clusters, so that “East Asian” can’t be considered a “race” in the old sense, the authors repeated the cluster analysis using only the East Asian sample, and found that those of Chinese ancestry formed a cluster distinct from those of Japanese ancestry.  This is expected if self-identified ethnicity still reflects genetic differences that evolved in Asia. You would doubtless find similar relationships if you dissected Caucasians or African-Americans by the location of their ancestors.

What this shows, then, is that in the US, and in a limited sample of populations whose members self-identified their “race” into one of four groups, those groups can be differentiated using multiple segments of the genome. Not only that, but the differentiation is substantial enough that if you had an individual’s genetic information without knowing anything about them, you could diagnose their “self identified race/ethnicity” with 99.86% accuracy.

The take-home message:

In the U.S.—and in the world if you look at the Rosenberg study—one’s self-identified race, or the race (again, I prefer “ethnicity”) identified by investigators, is not a purely social construct. Ethnicity or race generally says something about one’s ancestry, so that members of the same self-identified race tend to group together in a multigenic analysis.

Note that this does not mean that there is extensive genetic differentiation between self-identified races. The old conclusion from my boss Dick Lewontin that there is more variation within an ethnic group than between ethnic groups remains true. But there is enough genetic difference on average that, if you lump all the genes together, the small differences accumulate sufficiently to allow us to diagnose a person’s self-declared race. Remember, these are “self-declared” groupings, so you can’t say they are imposed on the data by investigators. (That of course doesn’t mean that they aren’t social constructs. They may be in some sense, but they’re also social constructs that contain scientific information.)
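A quick way to see how Lewontin’s point and near-perfect classification can both be true is to simulate it. The sketch below is a toy, not Tang et al.’s data or method: two populations whose allele frequencies differ only modestly at any one locus, classified by summing weak log-likelihood evidence over a few hundred loci (the number of loci and the size of the per-locus difference are arbitrary assumptions):

```python
# Toy illustration: individually uninformative loci, near-perfect multilocus
# classification. Not Tang et al.'s data; all parameters are made up.

import numpy as np

rng = np.random.default_rng(1)
n_loci, n_ind = 326, 1000
delta = 0.1                        # modest per-locus allele-frequency difference

pA = rng.uniform(0.3, 0.7, n_loci)                                   # population A
pB = np.clip(pA + rng.choice([-delta, delta], n_loci), 0.01, 0.99)   # population B

def classify(genos, pA, pB):
    """Assign individuals to A or B by summed per-locus log-likelihood ratios."""
    llr = genos * np.log(pA / pB) + (2 - genos) * np.log((1 - pA) / (1 - pB))
    return np.where(llr.sum(axis=1) > 0, "A", "B")

genos_A = rng.binomial(2, pA, size=(n_ind, n_loci))    # simulated 0/1/2 genotypes
genos_B = rng.binomial(2, pB, size=(n_ind, n_loci))

accuracy = (np.sum(classify(genos_A, pA, pB) == "A") +
            np.sum(classify(genos_B, pA, pB) == "B")) / (2 * n_ind)
print(f"multilocus classification accuracy: {accuracy:.1%}")   # ~99% with these toy numbers
```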

So, the big lesson is that the JAMA statement was wrong: if races/ethnic groups can be diagnosed with over 99% accuracy by using information from many bits of the genome, then the statement “Race and ethnicity are social constructs, without scientific or biological meaning” is simply wrong. Race and ethnicity, even when diagnosed by individuals themselves, do have scientific and biological meaning: namely, they tell us about an individual’s ancestry and where their ancestors probably came from. This is true in the U.S. (this paper) or worldwide (the Rosenberg et al. paper). Further, if you look on a finer scale, as Novembre et al. did, you can even diagnose what part of Europe a European’s ancestors came from (it’s not perfect, of course, but it’s pretty good).

This is not a new conclusion, and the papers I’ve cited are older ones.  There may be newer ones I haven’t seen, but I’d be willing to bet that their results would be pretty much the same as that above. Though genetic differentiation between groups is not large, it’s sufficient to tell us where they came from, confirming that geographic origin (reflecting ancient geographic isolation) is the source of what we call ethnic or racial differences.

Just remember this: when you hear that human race/ethnicity is a purely social construct, and doesn’t say anything about biology or evolution, that’s just wrong.

I shouldn’t have to point out that these genetic differences in no way buttress racism, for we don’t even know what they mean in terms of individual traits. But they do give us insights into evolutionary history. And that is something of scientific and biological meaning.

________________________

Reference: Tang H, Quertermous T, Rodriguez B, Kardia SL, Zhu X, Brown A, Pankow JS, Province MA, Hunt SC, Boerwinkle E, Schork NJ, Risch NJ. Genetic structure, self-identified race/ethnicity, and confounding in case-control association studies. Am J Hum Genet. 2005 Feb;76(2):268-75.

Brian Greene: We don’t have free will: one idea in a wide-ranging book

July 8, 2022 • 9:20 am

Physicist Brian Greene published the book below in 2020, and it appears to cover, well, just about everything from the Big Bang to consciousness, even spirituality and death. Click image to go to the Amazon site:

Some of the book’s topics are covered in the interview below, and its breadth reminds me of Sean Carroll’s book The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. I’ve read Sean’s book, which was good (though I did disagree with his free-will compatibilism), but I haven’t yet read Greene’s. If you have, weigh in below.

I’ll try to be brief, concentrating on Greene’s view of free will, which is that we don’t have it, we’re subject only to the laws of physics, and our idea of free will is an illusion stemming from our sense that we have a choice. The interview with Greene is in, oddly, the July 1 issue of Financial Review, and is paywalled, but our library got me a copy. (Judicious inquiry may yield you one, too.) You might be able to access it one time by clicking below, but otherwise ask or rely on my excerpts:

Greene also dwells on the fact that we’re the only creatures that know that we’re going to die, an idea that, he says, is “profoundly distressing” and in fact conditions a lot of human behavior. More on that below. Here are a few topics from the interview:

Free will:  Although Greene, as I recall, has floated a form of compatibilism before (i.e., our behaviors are subject to natural laws and that’s all; we can’t have done otherwise by volition at any given moment, but we still have free will), this time he appears to be a rock-hard determinist, which I like because I’m one, too. Excerpts from the interview are indented:

What’s more, beyond thoughts of death, my colleagues, according to Greene, are mistaken in their belief they are making their own choices to change their lives. Thoughts and actions, he argues, are interactions between elementary particles, which are bound by the immutable laws of mathematics. In other words, your particles are doing their thing; we are merely followers.

“I am a firm believer,” he says, “that we are nothing but physical objects with a high degree of order [remember these words, “high degree of order” – we’ll circle back to that], allowing us to have behaviours that are quite wondrous, allowing us to think and feel and engage with the world. But our underlying ingredients – the particles themselves – are completely, and always, governed by the law of physics.”

“Free will is the sensation of making a choice. The sensation is real, but the choice seems illusory. Laws of physics determine the future.”

So then, free will does not stand up against our understanding of how the universe works.

“I don’t even know what it would mean to have free will,” he adds, “We would have to somehow intercede in the laws of physics to affect the motion of our particles. And I don’t know by what force we would possibly be able to do that.”

Do you and I have no more options than say, a fish, in how we respond to the world around us?

“Yes and no,” says Greene. “All living systems, us included, are governed by the laws of physics, but the ways in which our collection of particles can respond to stimuli is much richer. The spectrum of behaviours that our organised structure allows us to engage in is broader than the spectrum of behaviours that a fish or a fly might engage in.”

He’s right, and there’s no attempt, at least in this interview, to be compatibilistic and say, well, we have a form of free will worth wanting. 

Death: From the interview:

“People typically want to brush it off, and say, ‘I don’t dwell on dying, I don’t think about it,’” says Greene via Zoom from his home in New York, where he is a professor at Columbia University. “And the fact that we can brush it off speaks to the power of the culture we have created to allow us to triumph over the inevitable. We need to have some means by which we don’t crumble under the weight of knowing that we are mortal.”

. . . Greene believes it is this innate fear of death twinned with our mathematically marching particles that is driving my colleagues to new horizons, and driving my decision to write this story, and your choice to read it, all bolstered by Charles Darwin’s theory of evolution.

Greene’s view appears to be that a substantial portion of human behavior is driven by a combination of two things: the “naturalism” that deprives us of free will, combined with our learned (or inborn) knowledge and fear of death. The death part is apparently what, still without our volition, forces us into action. I’m not sure why that’s true, as the explanation’s not in the interview but perhaps it’s in the book. After all, some people argue that if you’re a determinist doomed to eternal extinction, why not just stay in bed all day? Why do anything?  If we do things that don’t enhance our reproduction, it’s because we have big brains and need to exercise and challenge them. Yes, we know we’re mortal, but I’m not sure why this makes me write this website, write books, read, or do science. I do these things because they bring me pleasure. What does mortality have to do with it?

Natural selection:  According to the writer and interviewer Jeff Allen (an art director), Greene thinks that the knowledge of our mortality, as well as much of our communication, is promulgated through storytelling, which has been instilled into our species by natural selection. Things get a bit gnarly here, as the interview becomes hard to follow. I’m sure Greene understands natural selection better than Allen, but Greene’s views are filtered through the art director:

Natural selection is well known for driving physical adaptation, yet it also drives behavioural change, including complex human behaviours such as language and even storytelling. Language is a beneficial attribute that helps us as a species succeed, as is the ability to tell stories, which prepare the inexperienced with scenarios that may benefit them in the future.

“Evolution works by tiny differentials in adaptive fitness, over the course of long timescales. That’s all it takes for these behaviours to become entrenched,” says Greene. “Storytelling is like a flight simulator, that safely allows us to prepare ourselves for various challenges we will face in the real world. If we fail in the simulator, we won’t die.”

Darwin’s theory of evolution is one of the recurring themes of Greene’s book.

Note in the first paragraph that evolved language and storytelling “helps us as a species succeed”. That’s undoubtedly true—though I’ve yet to be convinced that storytelling is anything more than an epiphenomenon of evolved language—but whatever evolved here did so via individual (genic) selection and not species selection. Traits don’t evolve to enable a species to succeed; they evolve (via selection) because they give their bearers a reproductive advantage. I’m sure Greene knows this, but Allen balls things up by throwing in “species success”.

Consciousness: If you’re tackling the Big Issues that deal with both philosophy and science, you can’t avoid consciousness, defined by Greene (and me) as both self-awareness and the presence of qualia, or subjective sensations (Greene calls it “inner experience”).  I’ve written about this a lot, and don’t propose to do more here. We have consciousness, we don’t know how it works, but it’s certainly a physical property of our brains and bodies that can be manipulated by physical interventions. The two issues bearing on Greene’s piece are where it came from and how we will figure out how it works. (Greene implicitly rejects panpsychism by asking, “How can particles that in themselves do not have any awareness, yield this seemingly new quality?” That will anger Philip Goff and his coterie of panpsychists.)

I’m not sure about the answer to either. We may never know whether consciousness is an epiphenomenon of having a big brain or is partly the result of natural selection promoting the evolution of consciousness. I suspect it’s partly the latter, since many of our “qualia” are adaptive.  Feeling pain is an aversive response that protects us from bodily damage; people who lack the ability to feel pain usually accumulate substantial injuries. And many things that give us pleasure, like orgasms, do so because they enhance our reproduction. But this is just speculation.

Greene also thinks that natural selection has something to do with human consciousness, but it’s not clear from the following whether he sees consciousness as an epiphenomenon of our big brain and its naturalistic physical properties, or whether those properties were molded by natural selection because consciousness enhanced our reproduction:

“My gut feeling,” says Greene, “Is that the final answer will be the Darwinian story. Where collections of particles come together in a certain kind of organised high order ‘brain’, that brain is able to have particle motions that yield self-awareness. But it’s still a puzzle at this moment.”

Where Greene and I differ is in what kind of work might yield the answer to how consciousness comes about. Greene thinks it will come from work on AI, while I think it will come, if it ever does, from neurological manipulations. Greene:

“That’s perhaps the deepest puzzle we face,” says Greene. “How can particles that in themselves do not have any awareness, yield this seemingly new quality? Where does inner experience come from?”

Greene’s suspicion is that this problem will go away once we start to build artificial systems, that can convincingly claim to have inner awareness. “We will come to a place where we realise that when you have this kind of organisation, awareness simply arises.”

In June this year, Google engineer Blake Lemoine said an AI he was working on, named LaMDA (Language Models for Dialogue Applications), got very chatty and even argued back.

I suppose this is a version of the Turing test, but it will be very, very hard to determine if an AI bot has “inner awareness”.  Hell, I don’t even know if my friends are conscious, since it depends on self-report! Can you believe any machine that says it has “inner experiences”?

With that speculation I’ll move on. Greene also muses on the origin and fate of the universe, and whether it might “restart” after it collapses, but cosmology is above my pay grade, and I’ll leave you to read about that yourself.

h/t: Ginger K.

Richard Leakey dies at 77

January 2, 2022 • 12:22 pm

This just in: Richard Leakey, the well-known paleoanthropologist, conservationist, and politician, has died at 77. (Two of his team’s finds are H. rudolfensis and “Turkana Boy,” placed in H. ergaster.) A brief bio from the France 24 website:

World-renowned Kenyan conservationist and politician Richard Leakey, who unearthed evidence that helped to prove humankind evolved in Africa, died on Sunday at the age of 77, the country’s president said.

“I have this afternoon… received with deep sorrow the sad news of the passing away of Dr Richard Erskine Frere Leakey, Kenya‘s former Head of Public Service,” said Kenyan President Uhuru Kenyatta in a statement late Sunday.

Leakey, the middle son of famed paleoanthropologists Louis and Mary Leakey, had no formal archaeological training of his own but led expeditions in the 1970s that made groundbreaking discoveries of early hominid fossils.

His most famous find came in 1984 with the uncovering of an extraordinary, near-complete Homo erectus skeleton during one of his digs; it was nicknamed Turkana Boy.

In 1989, Leakey was tapped by then President Daniel arap Moi to lead the national Kenya Wildlife Service (KWS), where he spearheaded a vigorous campaign to stamp out rampant poaching for elephant ivory.

In 1993, his small Cessna plane crashed in the Rift Valley. He survived but lost both legs.

He also tried his hand at politics, ran civil society institutions, and briefly headed Kenya’s civil service.

In 2015, despite ailing health, he returned to the helm of the KWS for a three year term at the request of Kenyatta.

Here’s Leakey in 2010:

And Turkana Boy (1.5-1.6 million years old), the most complete early hominin skeleton found to date:

A nice lecture from Matthew on genetics and human evolution

October 15, 2021 • 12:45 pm

Here’s a virtual lecture on genetics and evolution that Matthew gave the other day to Cardiff University’s School of Medicine. It was intended for the general public, was just posted on YouTube, and I’ve listened to it. I have been most enlightened, and unless you already know this stuff you will be, too—it’s an up-to-date explication of what we know about the evolution of the genus Homo and what genetics tells us about our past. Of course, this field changes rapidly, and more surprises are in store. And mysteries remain about what we do know: why, for example, did the Neanderthals disappear?

At the end, Matthew considers the question, “What does it mean to be human?” and reprises the lessons and implications of recent genetic studies in anthropology. You can see how Matthew’s knowledge of the topic and his enthusiasm for conveying it have made him a popular lecturer and garnered him teaching awards.

The formal lecture ends at 54:00, and then Matthew answers the viewers’ questions.

Fire use by hominins: an example of rapid cultural evolution?

July 26, 2021 • 9:15 am

Yesterday we discussed the possibility of cultural evolution (dissemination of a behavior or skill through imitation and learning) in cockatoos, which attracted a lot of attention, probably because of its parallel with human cultural evolution. (The cockatoos seem to have learned to open garbage bins by watching each other.) And in our species there are a gazillion examples, especially since transportation allowed innovations to be spread quickly and widely. You can think of lots of cases: blue jeans, cuisines from other places, music, and, earlier than that, printing, the wheel (some cultures never got it) and even religion.

The new paper in Proc. Nat. Acad. Sci. below, however, suggests what may have been the very first behavior that spread through species of Homo (not only H. sapiens, but perhaps Neanderthals, which some consider a different species) through movement of individuals: the use of fire. Click on the screenshot to read the article (free) below, or get the pdf here. The reference is at the bottom.

Fire, of course, has many uses: besides cooking meat and tubers, it can be used to harden wood to make spear points, change the quality of stone to make it easier to flake, and to keep yourself warm. Other uses are given in the Wikipedia article “Control of fire by early humans.”

The MacDonald et al. paper collects evidence of fire use from species of Homo, concluding that it got started about 400,000 to 350,000 years ago and then spread rapidly throughout the species. The rapidity of the spread then led them to propose what kind of social structure was present in humans at that time. This contradicts speculations that H. erectus controlled the use of fire about 1.5 million years ago; the authors find that evidence unconvincing.

The problem is to distinguish anthropogenic (“human caused”) fire from natural wildfires. But there are ways of doing this, as the article summarizes. Hearths and charred animal bones are one way. Here’s another bit of evidence: a fire-hardened wooden spear, coincidentally about 380,000-400,000 years old, part of a group of artifacts found in Germany:

I can’t evaluate the quality of the evidence, but the authors summarize a lot of data to conclude that regular fire use began about 400,000 years ago, and spread quickly throughout the Old World, with evidence coming from Portugal, Spain, France, Israel, and Morocco. Two quotes:

. . . a review by Roebroeks and Villa identified a clear pattern for Europe: there the record strongly suggests that anthropogenic fire use was very rare to nonexistent during the first half of the Middle Pleistocene, as exemplified by the absence—bar a few dispersed charcoal particles—of fire proxies in deeply stratified archaeological karstic sequences, such as the Atapuerca site complex in Spain or the Caune de l’Arago at Tautavel (France), as well as from such prolific open-air sites as Boxgrove in the United Kingdom. In contrast, the record from 400 ka onward is characterized by an increasing number of sites with multiple fire proxies (e.g., charcoal, heated lithics, charred bone, heat-altered sediments) within a primary archaeological context.

. . . The spatiotemporal pattern of the appearance in the archaeological record of an innovation provides evidence relevant for identifying how the innovation came to be widely distributed: that is, through independent innovation, demic processes, cultural diffusion, or genetic processes. The fact that regular fire use appeared relatively quickly across the Old World and in different hominin subpopulations strongly suggests that the behavior diffused or spread from a point of origin rather than that it was repeatedly and independently invented.

Since fire appeared in both warm and cold places around the same time, the authors suggest that its inception was not correlated with “environmental pressures” (e.g., cold). And because the spread was so rapid, the authors claim, correctly, that the spread throughout the Old World was very unlikely to have been caused by the diffusion of genes producing the tendency to create fire, which would spread only very slowly. Likewise, the near-simultaneity makes it seem unlikely that the use of fire was invented independently by several groups.
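To get a feel for why a genetic spread looks too slow, here is a rough back-of-the-envelope sketch. It is not from the paper; every number in it is my own illustrative assumption. It compares a classic Fisher advancing-wave model for an advantageous gene with simple group-to-group cultural transmission:

```python
import math

# Back-of-the-envelope comparison of gene spread vs. cultural spread.
# Every parameter below is an illustrative assumption, NOT a value from
# MacDonald et al. (2021).

# 1) Genetic spread: Fisher's advancing-wave speed, v = 2 * sqrt(s * D),
#    with s = selective advantage per generation and D = dispersal
#    variance (km^2 per generation).
s = 0.01            # assumed advantage of a hypothetical "fire-use" allele
sigma_km = 10.0     # assumed parent-offspring dispersal per generation (km)
gen_years = 25.0    # assumed generation time (years)

D = sigma_km ** 2
v_per_gen = 2 * math.sqrt(s * D)       # km per generation (~2 here)
v_per_year = v_per_gen / gen_years     # km per year (~0.08 here)

span_km = 10_000    # rough west-east span of the Old World sites discussed
print(f"Gene wave: ~{span_km / v_per_year:,.0f} years to cover {span_km:,} km")

# 2) Cultural transmission: a skill can hop between neighboring groups by
#    contact rather than descent. Assume one ~50 km hop every ~5 years.
hop_km, hop_years = 50.0, 5.0
print(f"Cultural hops: ~{span_km / (hop_km / hop_years):,.0f} years "
      f"to cover {span_km:,} km")
```

Under these made-up numbers a gene wave would take on the order of 100,000 years to cross the Old World, while group-to-group copying takes a few thousand. The precise values don’t matter; what matters is the orders-of-magnitude gap, which is why a purely genetic spread is implausible on the relatively short timescale the authors document.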

If fire use did spread through imitation and learning, then, what does that say about the social structure of early humans? If we were divided up into groups of xenophobic hunter-gatherers who didn’t interact, that would not facilitate the spread of fire. Why would a group give the skill to a competitor group? There are two alternatives.

The first, “demic diffusion,” is that a “deme” (a cohesive population of hominins) spread rapidly, taking with it the fire use it had invented. This seems unlikely, given that the spread was more rapid than the migration of a single population could plausibly account for.

The alternative comprises groups that tolerated each other and were at least somewhat friendly. As the authors suggest, there was a more “fluid social structure with multiple levels of clustering in social networks”. In other words, perhaps hominins were more interactive than we thought.

Well, we have no direct evidence for that, and it would be hard to come by. And I’ll let the physical anthropologists judge the “simultaneous spread” hypothesis. But I wanted to bring this up because the scenario is at least plausible, and it may be the first evidence for cultural evolution in our genus.

There’s one other trait they add to the mix as a behavior that spread by cultural evolution: the “Levallois technology” for knapping stone (striking flakes off a stone like flint to make weapons and other implements). This, say the authors, can be learned only through “close and prolonged observation combined with active instruction.” Here’s the Levallois method, which involves preparing a flint core in such a way that sharp flakes, useful for tools, can be easily struck off:

The authors posit that this technology also originated in one place, but about 100,000 years later than fire (and surely in a different place), and then spread rapidly among groups in a similar way: non-hostile group interactions in a multi-level social network.

I’ll close with the authors’ final paragraph, summarizing their views:

We hypothesize that around 400 ka, cultural processes supported change in technology across wide areas. This indicates, at a minimum, a degree of social tolerance for individuals from different groups, and suggests the less minimal but still plausible hypothesis that more intensive cooperative interactions within larger-scale networks were already in place, occasionally crossing the boundaries between what we usually infer to have been different biological populations within the wider hominin metapopulation. [JAC: I think they’re referring to movement between “modern” H. sapiens and Neanderthals. After all, these groups did mate with each other.] We conclude that the spatial and temporal pattern of the appearance of regular Middle Pleistocene fire use documented in the archaeological record signals more than the advent of an important tool in the hominin toolbox: the presence of cultural behavior more like that of humans today than of our great ape relatives. We suggest that long before the cultural florescence associated with the late MSA/Middle Pleistocene and to a greater extent LSA/Upper Paleolithic periods, hominins were beginning to develop the capacities for complexity, variability, and widespread diffusion of technology and behavior that we tend to associate only with H. sapiens.

__________________________

MacDonald, K., F. Scherjon, E. van Veen, K. Vaesen, and W. Roebroeks. 2021. Middle Pleistocene fire use: The first signal of widespread cultural diffusion in human evolution. Proceedings of the National Academy of Sciences 118:e2101108118.

Sex with a stranger? Evolutionary psychology and sex differences in behavior

June 6, 2021 • 9:15 am

In the early days of evolutionary psychology—that is, when it was just beginning to be applied to humans—I was rather critical of the endeavor, though not so much about “sociobiology”, the application of evolutionary principles to animal behavior. A lot of the early evo psych stuff on humans was weak or overly speculative.

Since then, I’ve mellowed somewhat in light of replicated research findings about human behavior that show phenomena predicted by or very consistent with the theory of evolution. Not only are the phenomena predicted and replicated, but they are in line with what other animals show. Further, researchers have also falsified some alternative explanations (“culture” and “patriarchy” are the most common ones).

I’ll add here that the disturbingly common claim that evolutionary psychology is “bogus” or “worthless” as an entire field is ridiculous, both in principle and in practice. In principle, why should human behavior, or behavioral differences between the sexes, be the one area that is exempt from evolutionary influence, especially given that we evolved in small hunter-gatherer groups for at least five million years, on top of which is overlaid a thin veneer (about 20,000 years) of modern culture? That position—that all differences between men and women, say, are due to cultural influence—is an ideological and not an empirical view. If physical differences, both between sexes and among groups, are the result of evolution, why not mental ones? After all, our brain is made of cells just like our bodies!

In practice, there are several types of human behavior that, using my mental Bayes assessment, I consider likely to reflect at least some of the workings of evolution, past and present, although culture may play a role as well. There will be an upcoming paper on these fairly solid evo-psych behaviors (I’m not an author), but I’ll highlight it when it’s published.

In the meantime, this 2017 article from Areo Magazine describes a “universal human behavior” involving sex differences, one that’s likely to reflect our evolutionary heritage. Although the article is four years old, it’s worth reading. The author, David P. Schmitt, has these bona fides:

David P. Schmitt, PhD, is Founding Director of the International Sexuality Description Project, a cross-cultural research collaboration involving 100s of psychologists from around the world who seek to understand how culture, personality, and gender combine to influence sexual attitudes and behaviors.

See also his Wikipedia page, which describes him as “a personality psychologist who founded the International Sexuality Description Project (ISDP). The ISDP is the largest-ever cross-cultural research study on sex and personality.”

The article, which I recommend you read, is chock-full of data. Click on the screenshot for a free read:

 

The behaviors Schmitt discusses in this longish but fascinating and readable piece are summarized in the first two paragraphs (there are lots of references should you want to check his claims):

Choosing to have sex with a total stranger is not something everyone would do. It probably takes a certain type of person. Quite a bit of evidence suggests, at least when it comes to eagerly having sex with strangers, it might also take being a man. Let’s look at the evidence.

Over the last few decades almost all research studies have found that men are much more eager for casual sex than women are (Oliver & Hyde, 1993; Petersen & Hyde, 2010). This is especially true when it comes to desires for short-term mating with many different sexual partners (Schmitt et al., 2003), and is even more true for wanting to have sex with complete and total strangers (Tappé et al., 2013).

Of course this is “common wisdom” in American culture: it is the heterosexual guy who does the pursuing, and does so without many criteria beyond the lust object having two X chromosomes, and he’s still often rejected, while women are far choosier about who they mate with.

There are many studies, described and cited by Schmitt (usually using lab experiments or good-looking students on campus approaching members of the opposite sex), that show the same thing. An attractive man propositioning a woman for sex is accepted about 0% of the time, while, in the opposite situation, far more than half the men accept a sexual proposition from an attractive female stranger. Here are two studies, but there are more:

In a classic social psychological experiment from the 1980s, Clark and Hatfield (1989) put the idea of there being sex differences in consenting to sex with strangers to a real life test. They had experimental confederates approach college students across various campuses and ask “I’ve been noticing you around campus, I find you to be very attractive, would you go to bed with me tonight?” Around 75 percent of men agreed to have sex with a complete stranger, whereas no women (0 percent) agreed to sex with a complete stranger. In terms of effect size, this is one of the largest sex differences ever discovered in psychological science (Hyde, 2005).

Twenty years later, Hald and Høgh-Olesen (2010) largely replicated these findings in Denmark, with 59 percent of single men and 0 percent of single women agreeing to a stranger’s proposition, “Would you go to bed with me?” Interestingly, they also asked participants who were already in relationships, finding 18 percent of men and 4 percent of women currently in a relationship responded positively to the request.

This of course jibes with the behavior of many animals (in my flies, for example, males will court almost any female, even wooing pieces of dust or small blobs of wax), while females repeatedly reject males. It’s true of primates in general, and of many animal species. And it makes evolutionary sense. If a male mates with five females instead of one, he’s likely to have five times as many offspring. In the reverse situation, though, a female who mates with five males in a short period will have roughly the same number of offspring as if she had mated just once. That’s because she makes a huge investment in eggs and (in some species, like ducks) maternal care, and so she should be selected to be choosy about her mates, looking for a male who is fit and healthy, may have good genes, and, if there’s parental care, will be an attentive father. Since the male has far less to lose, and far more to gain, by repeatedly mating with different females, this explains the “wanton male versus choosy female” pattern of sexual preference. These are likely to be evolved sexual behaviors.
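To make the arithmetic behind that asymmetry concrete, here is a minimal toy model (my own illustration with made-up numbers, not something from Schmitt’s article): a male’s expected offspring scale with his number of mates, while a female’s are capped by the eggs she can produce.

```python
# Toy illustration of Bateman's principle; the clutch size is an
# arbitrary assumption, chosen only to show the asymmetry.
EGGS_PER_FEMALE = 10

def male_offspring(n_mates: int) -> int:
    # Each additional mate adds roughly another female's worth of offspring sired.
    return EGGS_PER_FEMALE * n_mates

def female_offspring(n_mates: int) -> int:
    # One mate suffices to fertilize all her eggs; extra matings add nothing.
    return EGGS_PER_FEMALE if n_mates >= 1 else 0

for mates in (1, 2, 5):
    print(f"{mates} mate(s): male ~{male_offspring(mates)}, "
          f"female ~{female_offspring(mates)} offspring")
```

Mating with five partners quintuples the male’s output but leaves the female’s essentially unchanged, and that selective asymmetry is what’s argued to favor eager males and choosy females.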

This of course is a generalization. There are certainly picky men and women who are less choosy about their partners. But it’s a generalization that holds up not only in the “choice” studies I just mentioned, but in other aspects as well. Psychological studies show that (here I quote Schmitt, bolding is his)

. . . men have more positive attitudes towards casual sex than women, have more unrestricted sociosexuality than women, and generally relax their preferences in short-term mating contexts (whereas women increase selectivity, especially for sexual attractiveness).

. . . Cognitively and emotionally, men are more likely than women to have sexual fantasies involving short-term sex and multiple opposite-sex partners, men perceive more sexual interest from strangers than women, and men are less likely than women to regret short-term sex or “hook-ups.”

Considering sexual fantasies, men are much more likely than women to report having imagined sex with more than 1,000 partners in their lifetime (Ellis & Symons, 1990).

Behaviorally, men are more likely than women to be willing to pay for short-term sex with (male or female) prostitutes, men are more likely than women to enjoy sexual magazines and videos containing themes of short-term sex and sex with multiple partners, men are more likely than women to actually engage in extradyadic sex, men are more likely than women to be sexually unfaithful multiple times with different sexual partners, men are more likely than women to seek one-night stands, and men are quicker than women to consent to having sex after a very brief period of time (for citations, see Buss & Schmitt, 2011).

Here’s a table reproduced in the Areo paper taken from Buss and Schmitt (2011), where you can find the original references. Click to enlarge.

These patterns hold in nearly all studies in different parts of the world. That in itself suggests that culture may play an insignificant role in the difference I’m discussing.

Now if you’re thinking hard, you can think of at least four non-evolutionary explanations for these behaviors (I’ve combined disease and pregnancy in #3 below). All of them, however, have been shown to be unlikely to be the major explanation for the sex difference in choosiness.

1.) Patriarchy: These could be cultural differences enforced by patriarchy and socialization. Why a patriarchy exists may itself be evolutionary (e.g., males are stronger and thus can control females more easily than the other way around), but male dominance itself is not the explanation we’re testing here. Schmitt explains why (beyond the observed cultural universality) this is unlikely to explain the entire behavioral difference (all emphases are the author’s):

For instance, Schmitt (2015) found sex differences in the sociosexuality scale item “I can imagine myself being comfortable and enjoying ‘casual’ sex with different partners” were largest in nations with most egalitarian sex role socialization and greatest sociopolitical gender equity (i.e., least patriarchy, such as in Scandinavia). This is exactly the opposite of what we would expect if patriarchy and sex role socialization are the prime culprits behind sex differences in consenting to sex with strangers.

How can this be? Why are these sex differences larger in gender egalitarian Scandinavian nations? According to Sexual Strategies Theory (Buss & Schmitt 1993), among those who pursue a short-term sexual strategy, men are expected to seek larger numbers of partners than women (Schmitt et al., 2003). When women engage in short-term mating, they are expected to be more selective than men, particularly over genetic quality (Thornhill & Gangestad, 2008). As a result, when more egalitarian sex role socialization and greater sociopolitical gender equity “set free” or release men’s and women’s mating psychologies (which gendered freedom tends to do), the specific item “I enjoy casual sex with different partners” taps the release of men’s short-term mating psychology much more than it does women’s. Hence, sex differences on “I enjoy casual sex with different partners” are largest in the most gender egalitarian nations.

Overall, when looking across cultures, reducing patriarchy doesn’t make these and most other psychological sex differences go away, it makes them larger (Schmitt, 2015). So much for blaming patriarchy and sex role socialization.

2.) Fear of injury. In general, men are stronger than women (this is almost surely the result of evolution acting on competition for mates). Perhaps women are leery of accepting propositions from unknown men because they might get hurt, as do many prostitutes. But several studies show that safety alone cannot be the whole explanation:

Clark (1990) was among the first to address the issue of physical safety. He had college-aged confederates call up a personal friend on the phone and say “I have a good friend, whom I have known since childhood, coming to Tallahassee. Joan/John is a warm, sincere, trustworthy, and attractive person. Everybody likes Joan/John. About four months ago Joan/John’s five year relationship with her/his high school sweetheart dissolved. She/he was quite depressed for several months, but during the last month Joan/John has been going out and having fun again. I promised Joan/John that she/he would have a good time here, because I have a friend who would readily like her/him. You two are just made for each other. Besides she/he has a reputation as being a fantastic lover. Would you be willing to go to bed with her/him?” Again, many more men (50%) than women (5%) were willing to have sex with a personally “vouched for” stranger. When asked, not one of the 95% of women who declined sex reported that physical safety concerns were a reason why.

3.) Fear of pregnancy and/or disease. Since venereal diseases can be passed in both directions, I’m not sure that disease is a good explanation, though perhaps women are more likely to get serious disease than are men. As far as pregnancy is concerned, there’s at least one study showing it can’t be the sole factor:

Surbey and Conohan (2000) wondered whether worries of safety, pregnancy, stigma, or disease were what was holding women back from saying yes to sex with a stranger. In a “safe sex” experimental condition, they asked people “If the opportunity presented itself to have sexual intercourse with an anonymous member of the opposite sex who was as physically attractive as yourself but no more so (and who you overheard a friend describe as being a well-liked and trusted individual who would never hurt a fly), do you think that if there was no chance of forming a more durable relationship, and no risk of pregnancy, discovery, or disease, that you would do so?” On a scale of 1 (certainly not) to 4 (certainly would), very large sex differences still persisted with women (about 2.1) being much less likely to agree with a “safe sex” experience with a stranger compared to men (about 2.9).

So, sex differences in agreeing to sex with strangers are not just a matter of safety issues, pregnancy concerns, slut-shaming stigma, or disease avoidance. Controlling for all of that, researchers still find large sex differences in willingness to have sex with a stranger.

There’s a lot more in this paper, including Schmitt’s critique of the two papers cited widely as disproving the “pickiness” hypothesis. Both papers, however, suffer from extreme methodological flaws, and in both cases the results support the “pickiness” hypothesis when the flaws are corrected.

You can read the hypothesis and judge for yourselves, but I think this is one of the best examples we have of evolutionary psychology explaining a difference between men and women in behavior*. As I said, it’s shown up throughout the world in different cultures, it’s paralleled in many species of animals, alternative explanations fail to explain the data, other, unrelated data support at least a partial evolutionary basis of the choice difference, and the few papers that claim to disprove it wind up actually supporting it.

Aside from “universal” behavior like sleeping, eating, or wanting to reproduce, which are surely instilled in us by evolution (and nobody questions those), we shouldn’t ignore differences between groups, especially the sexes, as having an evolutionary origin. It’s likely that morphological differences between geographic populations, like the amount of melanin in the skin, are adaptive responses to natural selection, so why is behavior the one trait that is always off limits to evolutionary explanation?  It’s ideology, Jake.

h/t: Steve Stewart-Williams

 

*As a reader points out below, an even more obvious evolutionary difference is that the vast majority of men are sexually attracted to women, and vice versa. That would be hard to explain as a result of patriarchy or socialization.