How apes (including humans) lost their tails

March 1, 2024 • 9:45 am

One of the most striking differences between monkeys and other primates on the one hand and apes on the other is that—with a few exceptions—the former have tails but apes don’t.

A new paper in Nature, which is really cool, investigates the genetic basis for the loss of tails in apes. (The phylogeny below shows that the primate ancestor had a tail, and it was lost in apes.)

Why did apes lose their tails? We don’t know for sure, but it may be connected with the fact that apes are mostly ground-dwellers, and a tail would be an impediment to living on the ground and moving via knuckle-walking or bipedal walking. (The gibbon, an ape whose lineage branched off early after apes diverged from monkeys, is an exception, as it’s mostly arboreal. Gibbons move by swinging from branch to branch, yet they have no tails. However, this form of locomotion, called brachiation, really doesn’t require a tail for grasping or balance.) I suspect that because apes that move in these ways don’t need tails, having one becomes disadvantageous: it’s metabolic energy wasted on an appendage that you don’t need, and one that could get injured. Thus natural selection likely favored the loss of the tail.

Regardless, the new Nature paper, which you can access below (pdf here, reference at bottom), involves a complex genetic analysis that pinpoints one gene, called Tbxt, as a key factor in tail loss.  By engineering the ape (tail-loss) form of the gene into mice, the authors found that the engineered mice had either very short tails or no tails at all. But I’m getting ahead of myself.

Click to read:

First, here’s a phylogeny of the primates from the paper. Apes diverged from monkeys (or rather “other monkeys”, since apes can be considered a subgroup of monkeys) about 25 million years ago. The tailless apes are shown in blue, with the common ancestor of Old World monkeys and apes marked at roughly that time.


How can you find the genes involved in tail loss in apes? The best way to do it, which the authors used, is to first look for genes whose mutations cause loss or shortening of the tail in primates, and then see whether the forms of those genes differ between apes and monkeys.  Xia et al. looked at 31 such genes but didn’t find any whose forms were concordant with tail loss.

They then went on to mice, looking at another 109 genes associated with tail loss or reduction in the rodents.  Here they found one gene, Tbxt, that had an unusual form in all apes that was lacking in other primates.  Tbxt, by the way, is a transcription factor: a gene that produces a protein that itself controls the action of other genes, regulating how and whether they are transcribed, that is, how these other genes make messenger RNA from the DNA. (Messenger RNA, as you know, is then “translated” into proteins.)

And this transcription factor had an unusual feature in apes but not in other primates: it contained a small sequence called Alu, about 300 base pairs long, that was inserted into the DNA of the Tbxt gene, but in a noncoding region (“intron”) separating the coding regions of Tbxt that make the transcription-factor protein. (Genes often consist of coding segments, or exons, separated by introns, and the exons are spliced together into one string before the mRNA goes off to make protein.)

Only primates have Alu elements; they formed by a genetic “accident” about 55 million years ago and spread within genomes. We humans have about one million Alu elements in our genomes, and sometimes they move around, which gives them the name “jumping genes.”  They are often involved in gene regulation, but can also cause mutations when they move, since they seem to move randomly.

Here’s a diagram of a monkey Tbxt gene on the left and the human version on the right. Note that in both groups the gene has coding regions, which are spliced together when mRNA is made to produce the full transcript.  But note that in humans there is an Alu element, “AluY,” stuck into the gene between Exon 6 and Exon 7. I’ve put a red circle around it. This inserted bit of DNA appears to be the key to the loss of tails. (Note the nearby Alu element AluSx1 in both groups.)

(From paper) Schematic of the proposed mechanism of tail-loss evolution in hominoids. Primate images in a and c were created using BioRender (https://biorender.com).

Here’s why the authors singled out the Tbxt gene as a likely candidate for tail loss. This is from the paper:

Examining non-coding hominoid-specific variants among the genes related to tail development, we recognized an Alu element in the sixth intron of the hominoid TBXT gene (Fig. 1b). This element had the following notable combination of features: (1) a hominoid-specific phylogenetic distribution; (2) presence in a gene known for its involvement in tail formation; and (3) proximity and orientation relative to a neighbouring Alu element. First, this particular hominoid-specific Alu element is from the AluY subfamily, a relatively ‘young’ but not human-specific subfamily shared among the genomes of hominoids and Old World monkeys. Moreover, the inferred insertion time—given the phylogenetic distribution (Fig. 1a)—coincides with the evolutionary period when early hominoids lost their tails. Second, TBXT encodes a highly conserved transcription factor crucial for mesoderm and definitive endoderm formation during embryonic development. Heterozygous mutations in the coding regions of TBXT orthologues in tailed animals such as mouse, Manx cat, dog and zebrafish lead to the absence or reduced forms of the tail, and homozygous mutants are typically non-viable.

In other words, the element’s distribution matches the distribution of tails or their absence, the insertion is about as old as the split between apes and Old World monkeys (~25 myr), the gene’s function at least suggests the potential to affect tail length, and, finally, mutations of the gene in other animals (mice, dogs, and zebrafish) result in reduced or missing tails, including producing MANX CATS. Here’s a tailless Manx male.

Karen Weaver, CC BY 2.5 <https://creativecommons.org/licenses/by/2.5>, via Wikimedia Commons

But the real key to how this form of the gene causes tail loss rests on another speculation: there is another Alu element (“AluSx1” in both figures) that is inserted backwards in the same gene, lying between exons 5 and 6. The new AluY element has a sequence similar to the old one, but in reverse orientation. So, when the Tbxt gene’s transcript is being processed into mRNA, the two Alu elements pair up, forming a loop in the transcript, and what lies between them (including exon 6) is simply spliced out of the mRNA.

Here’s a diagram of that happening. Note the loop formed at top right by the pairing of the two Alu elements (red and dark gray), a loop that includes a functional part of the gene (exon 6 in royal blue). When the transcript of this gene is made, the code from exon 6 is simply cut out of the mRNA. This produces an incomplete protein product that could conceivably affect the development of the tail.
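To make the proposed mechanism concrete, here’s a toy sketch in Python (mine, not the authors’; the sequences, segment names, and exon boundaries are all invented) of how an exon flanked by two repeats in opposite orientations can be looped out and dropped from the mature transcript:

```python
# Toy illustration (not the paper's code): how an exon flanked by two
# inverted repeats can be skipped during splicing. All sequences and
# segment boundaries here are invented for demonstration.

def revcomp(seq: str) -> str:
    """Reverse-complement a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

ALU = "GGCCGGGCGCGGTGGCTCAC"             # stand-in for an Alu-like repeat
pre_mrna = [                             # (label, sequence) in gene order
    ("exon5",  "ATGGCTGAAC"),
    ("AluSx1", revcomp(ALU)),            # older element, inserted in reverse
    ("exon6",  "GGTACCTTGA"),
    ("AluY",   ALU),                     # hominoid-specific insertion, forward
    ("exon7",  "TTCAGGCTAA"),
]

def splice(segments, allow_loop=True):
    """Return the mature transcript (exons only). If allow_loop is True, any
    segment lying between two repeats that are reverse complements of each
    other is looped out and skipped, a cartoon of the proposed stem-loop."""
    skipped = set()
    if allow_loop:
        for i, (_, s1) in enumerate(segments):
            for j in range(i + 2, len(segments)):
                if segments[j][1] == revcomp(s1):      # the two repeats can pair
                    skipped.update(range(i + 1, j))    # everything between is looped out
    return "".join(seq for k, (name, seq) in enumerate(segments)
                   if name.startswith("exon") and k not in skipped)

print(splice(pre_mrna, allow_loop=False))   # exon 5 + exon 6 + exon 7: full transcript
print(splice(pre_mrna, allow_loop=True))    # exon 5 + exon 7: exon 6 is skipped
```

In the real gene the pairing happens in the RNA transcript rather than in a list of strings, and (as I read the paper) both full-length and exon-skipped messages get made; the cartoon just captures the logic of the loop.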

But does it work that way?

The authors did two tests to show that, in fact, removal of exon 6 in mice does shorten their tails, and in some cases can remove them completely.

The first experiment simply involved inserting a copy of Tbxt missing exon 6 into mice (they did this without the complicated loop-removal mechanism posited above).  Sure enough, mice with one copy of this exon-missing gene showed various alterations of the tail, including no tails, short tails, and kinked tails.

This shows that creating the putative product of the ape loop-formation process, a Tbxt gene missing exon 6, can reduce the tail of mice.

But then the authors went further, because they wanted to know whether putting both the Alu elements AluSx1 and AluY into mice in the same positions they have in primates could produce reduced tails in mice via loop formation.  They did this using a combination of CRISPR genetic engineering and crossing, because carrying two copies of the loop-forming Tbxt gene that excises exon 6 turns out to be lethal.  Viable mice have only one copy of the loop-forming gene.
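To see why crossing enters into it, here’s a minimal Mendelian sketch (my toy example, not the authors’ actual breeding scheme; the allele symbol “E” for the engineered, loop-forming copy is made up). If E/E is lethal, a cross of two carriers yields only wild-type and single-copy mice among the survivors:

```python
# Toy Punnett square (not the authors' breeding scheme): "E" is a made-up
# symbol for the engineered, loop-forming allele; "+" is wild type.
# If E/E embryos die, viable offspring of an E/+ x E/+ cross carry at most one copy.
from itertools import product
from collections import Counter

parent1 = ("E", "+")     # heterozygous carrier
parent2 = ("E", "+")     # heterozygous carrier

conceptions = Counter("/".join(sorted(pair)) for pair in product(parent1, parent2))
viable = {g: n for g, n in conceptions.items() if g != "E/E"}   # E/E is lethal

print(dict(conceptions))   # {'E/E': 1, '+/E': 2, '+/+': 1}
print(viable)              # {'+/E': 2, '+/+': 1}; carriers outnumber wild type 2:1
```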

And when they engineered mice having one copy of the normal Tbxt gene and one engineered copy with the two Alu elements whose pairing eliminated exon 6 (they showed this by sequencing), lo and behold, THEY GOT TAILLESS MICE!  Here’s a photo of the various mice they produced. The two mice on the right have a single copy of the engineered gene with reversed Alu elements that produces a transcript missing exon 6. They are Manx mice! They have no tails! They are bereft of caudal appendages!

f, (from paper) Representative tail phenotypes across mouse lines, including wild type, Tbxt^insASAY/insASAY, Tbxt^insRCS2/insRCS2 and Tbxt^insRCS2/Δexon6. Each included both male (M) and female (F) mice.

This complicated but clever combination of investigation and genetic engineering suggests pretty strongly that tail loss in apes involved the fixation of a mutant Tbxt gene that reduced tails via snipping out of an exon.  This is not a certainty, of course, but the data are supportive in many ways.

So is this likely one mutation that caused apes, over evolutionary time, to lose their tails? (We have only a small tail, the coccyx, consisting of 3-5 fused caudal vertebrae, shown below in red in the second picture; both pictures are from Wikipedia.)

DrJanaOfficial, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

 

Our tail, in red:

The author and licenser of the contents is “BodyParts3D, © The Database Center for Life Science licensed under CC Attribution-Share Alike 2.1 Japan.”

Now if this gene was indeed involved in the evolutionary loss of tails in apes, it would constitute a form of “macromutation”: a character change of large effect due to a single mutation. But surely more genes were involved as well. For one thing, even a single copy of this gene causes neural-tube defects, so any advantage of a smaller tail would have to outweigh the disadvantage of possibly producing a defective embryo or adult. Also, even if this gene is responsible for the missing or tiny tails of apes, there are likely other genes that evolved to further reduce the tail and to mitigate any neural-tube problems that would arise. (Evolution by selection is always a balance between advantageous and deleterious effects: it was advantageous for us to become bipedal, but that came with the bad side effects of bad backs and hernias.)

I really like this paper and have no substantial criticisms. The authors did everything they could to test their hypothesis, which stood up well under phylogenetic, temporal, and genetic analysis.  We can’t, of course, be absolutely sure that the insertion of the AluY element helped the tailed ancestor of apes lose its tail, but I’d put my money on it.

What’s further appealing about this paper is that the genetic underpinning of the tail loss was completely unpredictable: the function of a gene was changed (and its phenotype as well) simply by the insertion of a “jumping gene” into a noncoding part of a functional gene.  That insertion formed a loop that caused part of the gene’s transcript to be cut out and that, ultimately, affected tail formation. Apes with smaller tails presumably had a reproductive advantage over their bigger-tailed confrères, but the genetics of it is complex, weird, and wonderful.

h/t: Matthew

Reference: Xia, B., Zhang, W., Zhao, G. et al. 2024. On the genetic basis of tail-loss evolution in humans and apes. Nature 626, 1042–1048.

Gene flow from Neanderthals and Denisovans to “modern” humans, and vice versa

February 26, 2024 • 10:45 am

Today I’ll try to summarize another paper that is difficult for one reason: figuring out how the authors (including Nobel laureate Svante Pääbo) sussed out which genes originated in Neanderthals and Denisovans (both lineages now extinct) and which in their sister lineage: the separate lineage leading to modern humans.  (All three lineages shared a common ancestor.) As we know, the modern human genome contains a small percentage of genes that originated in Neanderthals, and this is also true for genes that originated in Denisovans (the latter are found more often in modern populations from Oceania and Asia, and in native North Americans).

As for the two extinct lineages, both derived from a single ancestor that, according to the paper below, left Africa for Eurasia about 600,000 years ago. That traveling lineage then split at an uncertain time to give rise to the Neanderthals, who went extinct about 35,000 years ago, and to the Denisovans (known from but a handful of teeth and bones), who went extinct around the same time as did Neanderthals. In the meantime, the lineage that gave rise to us—”modern” humans—stayed in East Africa, leaving for  Eurasia about 60,000 years ago, and some individuals in this group lived near the already-present Neanderthals and Denisovans.

Although human paleobiologists, who love to identify new species, call the Denisovans and Neanderthals species different from modern humans (i.e., different from “Homo sapiens”), I’m stubborn and consider all three groups members of the same biological species. That’s because there’s evidence of gene flow among all the groups: from Neanderthals and Denisovans to modern humans, from modern humans to Neanderthals, and even from Denisovans to Neanderthals and vice versa. Because these archaic genes persist in modern humans, the hybrids between the lineages must have been fertile to allow such backcrossing. Since we have populations that lived at least partly in the same area and produced fertile hybrids, they can be considered a single biological species, though perhaps one comprising incipient species in statu nascendi.

Here’s a diagram of the three lineages from the paper whose title is below. The arrows show the direction of gene exchange and some examples of variants transferred by hybridization.

(from paper): Figure 1. Schematic illustration of the history of archaic and modern humans and DNA sequence evolution Derived mutations are highlighted. The occurrence of gene flow between groups is illustrated by arrows. The archaic groups contributed both derived and ancestral variants to modern humans. Note that the extent, number of gene flow events, and when they occurred are only partially known.

You can read the paper below by clicking on the title. The object of the paper is to answer this question:

Which genes were transferred between the lineages, and what effect did they have on the individuals who carried them?

Now, of course, the first question is “How do we know which genes actually originated in one of the three lineages after it split off from the others, and how do we know in which direction a gene was transferred to another lineage?”  This is not an easy question, and I asked my friend Phil Ward, an entomologist and systematist who works at UC Davis. I’ve put his explanation below the fold in case you want to know.

I trust the authors’ determinations of which genes originated where, and which ones were introduced into a given lineage by hybridization; after all, this is Pääbo and his group!  So I’ll just give a list of a few of the genes transferred, which way they went, and what they appear to do in modern populations. Let me add two things. First, while it’s easy to find out what a gene does (it can be tested in cell culture or by inserting it in mice), it’s not so easy to determine what effect it has on the phenotype or reproduction of modern humans.

Second, the paper is quite “adaptationist”, with the authors suggesting reasons why a transfer might have been adaptive. (If it was really bad for the carrier, it would have disappeared from the population.) However, very few of the transferred genes are present at very high frequency in modern humans or Neanderthals, and so, if they had a really beneficial effect on reproduction or survival, one would expect them to be “fixed” (present in every individual) or at least at high frequency.  Since that’s not usually the case, the authors float hypotheses that transferred genes are good in some populations but not others. This seems to some extent like post facto rationalization based on a diehard adaptationist viewpoint.
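To see why that expectation is reasonable, here’s a back-of-the-envelope sketch (mine, not from the paper; the selection coefficients, starting frequency, and generation time are invented, and genetic drift is ignored) of how fast even a modestly beneficial allele should rise under simple additive selection over the tens of thousands of years since admixture:

```python
# Back-of-the-envelope sketch (not from the paper): deterministic rise of a
# beneficial allele under additive selection. The starting frequency (a few
# percent, roughly what introgression would leave), the selection coefficients,
# and the generation time are invented; genetic drift is ignored.

def allele_frequency_after(p0: float, s: float, generations: int) -> float:
    """Fitnesses 1 (aa), 1+s (Aa), 1+2s (AA); return the frequency of A
    after the given number of generations of deterministic selection."""
    p = p0
    for _ in range(generations):
        q = 1.0 - p
        w_bar = p*p*(1 + 2*s) + 2*p*q*(1 + s) + q*q        # mean fitness
        p = (p*p*(1 + 2*s) + p*q*(1 + s)) / w_bar          # next-generation frequency
    return p

# ~50,000 years at ~25 years per generation is roughly 2,000 generations.
for s in (0.0, 0.001, 0.01):
    print(f"s = {s:<5}: frequency after 2,000 generations = "
          f"{allele_frequency_after(0.02, s, 2000):.3f}")
```

Even a 1% advantage takes the allele from 2% to near fixation in that time, while a tenfold weaker advantage leaves it at only modest frequency; so variants that have stayed rare for tens of millennia are at least consistent with little or no net benefit.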

On to the genes! Indented sections are mine except for doubly-indented sections, which are excerpts from the paper:

Mutations in the Neanderthal lineage that got into modern humans via hybridization.

Genes affecting fatty acid and lipid metabolism.  These genes appear to increase the risk of type 2 diabetes, so they’re not good for us.

Genes increasing sensitivity to pain (a sodium-channel gene in the nervous system). This gene appears to have a salubrious effect, because it’s associated with an increase in lifespan (genes that decrease pain sensitivity can allow you to suffer injuries and infections without noticing them, which is why sufferers from Hansen’s disease, lacking pain sensitivity, lose digits and other body parts). But if the gene is so good for us, why is it present in so few of us (0.4% of people in the UK)?

Genes affecting gestation. We received a progesterone-receptor variant that mutated in Neanderthals and that actually increases the frequency of premature births.  Here’s how the authors explain its persistence at high frequencies in both Neanderthals and modern human populations (some of the latter have a frequency of the gene of 20% or higher):

Since [this variant] is associated with an increased risk of premature births in present-day humans, it has been suggested to represent an evolutionary disadvantage to Neandertals, especially in the absence of modern medical care.  However, the Neandertal variants are also associated with an approximately 15% decreased risk for bleeding and miscarriages early in pregnancy as well as with having more siblings.  It is therefore tempting to speculate that it represents an evolutionary trade-off where the Neandertal variants rescues pregnancies that would otherwise have resulted in miscarriages, but the price paid is that some of these pregnancies result in premature births. Notably, two different versions of the Neandertal progesterone receptor gene have been contributed to modern humans, and both have risen in frequency, as shown by an increase in their occurrence in skeletal remains of individuals over the past 10,000 years.  Both Neandertal versions result in higher expression of the progesterone receptor and may thus mediate a higher progesterone effect during pregnancies. This is compatible with the finding that progesterone administration lowers miscarriage rates in women who previously experienced miscarriages  and suggests that increased progesterone effects mediated either by higher hormone levels or by higher receptor levels may protect at-risk pregnancies.

It’s okay to speculate, but perhaps there are other effects of the gene that we don’t know about; and are the bad effects of premature births really overcome by the beneficial effects on bleeding and reduced early miscarriage? What we have here is a reflection of the authors’ view that the transferred genes must in general have a net positive effect on reproduction, even if they can’t demonstrate it.

Genes affecting the immune system. Many of the genes we got from Neanderthals appear to interact with viruses and are at high frequencies in humans; the authors thus speculate that they spread to ward off infections and still do so in modern populations.  Also, some of the variants have big differences in frequency between populations, which the authors attribute to population-specific infections. Further, some of the variants may cause autoimmune disease (again, we know little about their effects on modern humans, which may be small).  But they speculate that the existence of so many Neanderthal variants in modern humans strongly suggests that they spread in our lineage via selection for disease resistance.

Mutations in the Denisovan lineage that got into modern humans via hybridization.

Genes affecting adaptation to high altitude. Here, taken from the paper, is the best example of a gene entering the modern human genome that is likely to have spread by natural selection. This example is pretty well known.

High altitude adaptation

One striking example of Denisovan influence on present-day populations is a 33-kb Denisovan DNA segment on chromosome 2 that occurs at an allele frequency of over 80% among Tibetans, while being absent or very rare in other Asian populations. It encodes EPAS1, a transcription factor induced by hypoxia that is involved in adaptation to low oxygen levels. Denisovans were present on the Tibetan high plateau; some of them may thus have been adapted to life at high altitudes and presumably contributed this genetic predisposition to modern humans as they arrived in the region.

We also received genes involved in producing adaptation to low temperature by “inducing brown fat”: these too seem to have spread in cold-climate populations by natural selection:

Cold adaptation and facial morphology:

Another example of a Denisovan genetic contribution is a 28-kb segment on chromosome 1, carrying the genes WARS and TBX15. It is present in almost 100% of Greenlandic Inuit and several other populations. The Denisovan variants affect the expression of genes that may influence adaptation to low temperatures, possibly by inducing brown fat.

At the end, the authors discuss genes that emerged in modern humans and found their way into Neanderthal lineages, including variants affecting purine (nucleotide) biosynthesis, protection against oxidative stress, gene splicing, and chromosome segregation.  The authors then present a “combinatorial view” of the modern human genome, noting that we’re likely to contain a variety of variants coming from our now-extinct ancestors, but different modern individuals have different combinations. Here’s that view from the paper’s abstract:

We propose that the genetic basis of what constitutes a modern human is best thought of as a combination of genetic features, where perhaps none of them is present in each and every present-day individual.

_________

Reference: Zeberg H, Jakobsson M, Pääbo S. The genetic changes that shaped Neandertals, Denisovans, and modern humans. Cell. 2024 Feb 14:S0092-8674(23)01403-4. doi: 10.1016/j.cell.2023.12.029. Epub ahead of print. PMID: 38367615.

Click “read more” to see the method for determining where a mutation originated and which way it was transferred:


Did humans evolve in water?

November 27, 2023 • 9:30 am

On my post the other day about a new PNAS paper, “Censorship and science: a new paper and analysis,” I received the comment below from reader “Stephen”, which was held up because it was his/her first. I decided to make the comment a post because it might be educational.  Stephen argues that one important example of scientific censorship is the “aquatic ape” hypothesis.

You’ve probably heard of this hypothesis, first broached in a Darwinian manner by British marine biologist Alister Hardy (later Sir Alister), and made public only in 1960, when he wrote this in New Scientist:

“My thesis is that a branch of this primitive ape-stock was forced by competition from life in the trees to feed on the sea-shores and to hunt for food, shellfish, sea-urchins etc., in the shallow waters off the coast.”

Reader Stephen thinks this hypothesis is not only “undeniable when the data are analyzed objectively,” but has been “censored out of existence by the scientific establishment.” Here’s his comment:

Not all scientific censorship can be understood through a left/right prism. In palaeoanthropology, for example, the idea that human ancestors may have gone through a period where underwater foraging was important, undeniable when the data are analysed objectively, has been censored out of existence by the scientific establishment. Peer review, which acts to reinforce established paradigms, is great for keeping unfounded ideas from taking up valuable scientific space, but for paradigm shifting breakthroughs it fails utterly.

The reason the hypothesis appealed to many people is that it superficially seemed to explain a number of features of humans that distinguish us from other apes to which we’re related. These features include our lack of body hair and the presence of subcutaneous fat (hair is useless in water and fat keeps us warm). As Scientific American wrote in 2016:

Hardy put forward all sorts of features which could be explained as “aquatic adaptations”: our swimming ability—and our enjoyment of it; loss of body hair, as well as an arrangement of body hair that he supposed may have reduced resistance in the water; curvy bodies; and the layer of fat under our skin. He even suggested that our ability to walk upright may have developed through wading, with the water helping to support body weight.

Note that this “aquatic phase” was supposed to have occurred during a specific time period when we lacked transitional fossils between our common ancestor with apes and creatures fully on the hominin side of the tree (my bolding). More from Sci Am:

For Hardy, this aquatic phase would have occupied the gap in the fossil record that then existed—between around 4m and 7m years ago. He sensibly concluded his paper saying that this was all only speculation—a “hypothesis to be discussed and tested against further lines of evidence”.

In the 50-odd years since the presentation of this hypothesis, it has enjoyed a certain fame—or perhaps notoriety. The writer Elaine Morgan championed it in her book The Aquatic Ape, and developed the hypothesis further, marshalling a seemingly impressive range of characteristics to support it, including breath control and diet. It seems such a tantalising and romantic idea—but a closer look at the evidence reveals it to be little more than that.

Other features supposedly suggesting that we went through an aquatic phase are “stretched hindlimbs, voluntary respiration, and dilute urine.” Those features were suggested by Belgian biologist M. J. Verhagen, who also posited that our evolution occurred this way (my bolding):

 The Aquatic Ape Theory states that our ancestors once spent a significant part of their life in water. Presumably, early apes were plant and fruit eaters in tropical forests. Early hominids also ate aquatic food; at first mainly weeds and tubers, later sea shore animals, especially shellfish. With the Pleistocene cooling, our ancestors returned to land and became bipedal omnivores and scavengers and later hunters of coastal and riverside animals.

Unfortunately, despite reader Stephen’s assertion that the hypothesis is “undeniable,” it’s been denied by most human evolutionary biologists, to the point where Wikipedia says this:

While the hypothesis has some popularity with the lay public, it is generally ignored or classified as pseudoscience by anthropologists.

Anthropologists do not take the hypothesis seriously: John Langdon characterized it as an “umbrella hypothesis” (a hypothesis that tries to explain many separate traits of humans as a result of a single adaptive pressure) that was not consistent with the fossil record, and he said that its claim that it was simpler and therefore more likely to be true than traditional explanations of human evolution was not true. According to anthropologist John Hawkes, the AAH is not consistent with the fossil record. Traits that the hypothesis tries to explain evolved at vastly different times, and distributions of soft tissue the hypothesis alleges are unique to humans are common among other primates.

To see why the hypothesis is now seen as pseudoscience, you can read either the 2016 Sci. Am. article linked above or John Hawks’s website post from just last year, “Why anthropologists rejected the aquatic ape theory.”

As Hawks notes, the discovery of hominin fossils in Africa and the development of molecular dating methods, both taking place after the hypothesis was first adumbrated, led to rejection of the idea that major features of our body and social behavior evolved when we were largely immersed in water between 7 and 4 million years ago. But he adds that surely populations of hominins that lived along the coast did use aquatic resources such as fish and shellfish, and we have some evidence for that in the presence of fish bones associated with human remains dating back 1.95 million years and continuing up to Neandertals. These, of course, are found only in populations that lived near water.  But the problems with the aquatic ape hypothesis are in the second paragraph below:

Still, evidence for fish or shellfish consumption in Pleistocene sites is mostly localized to coastal or riverside locations. Many populations, both modern and ancient, have lived far from coastlines and relied upon terrestrial foods. The nutritional advantages of aquatic foods may be matched in some populations by edible insects and other invertebrates. Human populations and nonhuman primates that eat only terrestrial foods do not suffer from a deficiency of essential fatty acids. Fish and shellfish are clearly valuable to many human populations and some primates, and they are strong signs of our lineage’s increasing diet breadth during the Pleistocene.

In 1997 the anthropologist John Langdon reviewed the evidence with which aquatic ape adherents had supported their ideas. He observed that the traits proposed as aquatic adaptations in humans appear in the fossil record at radically different times. Hominins were obligate bipeds more than two million years before any had a projecting nose or descended larynx. Early bipeds evolved larger jaws and teeth, the opposite expected from high-energy aquatic foods, and large brains appeared in only one branch of Homo during the last part of its history. These features were not evidence of an aquatic stage; they appeared at different times and in different contexts.

Now the fact that different adaptations appeared at different times is not strong evidence against the aquatic ape hypothesis, as adaptations to live in a novel environment could arise at different times. But the times they appeared are not the times suggested by adherents to the Aquatic Ape Hypothesis.  As for other features:

Yet the record does not preserve hard evidence of body fat, body hair, or sweat. To aquatic ape thinkers these soft-tissue traits were some of the most persuasive similarities between humans and certain water-living mammals.

Better data from other primates shows the flaws in this idea. For example, humans are extreme in our high fraction of eccrine compared to apocrine glands, but chimpanzees and gorillas also have a higher fraction of eccrine glands in comparison to other primates. Humans have sparse body hair but chimpanzees also have notably sparse body hair, and all great apes have lower body hair density than other primates. The body fat percentage of human hunting and gathering peoples is indeed higher than chimpanzees and most arboreal primates, but the human range of body fat is much closer to that seen in gorillas and orangutans. Humans are not a departure from other primates in these traits; we follow the same trends as our close relatives, some to a greater degree.

But Hawks does give the eating of fish and shellfish some credit for molding modern humans. It’s just that we have no evidence for an aquatic phase of human evolution:

But the science was not kind to the aquatic scenario sketched by Hardy and Morgan. The growing fossil and genetic data of the 1970s and 1980s showed that the aquatic idea was finely tailored around missing evidence. When this evidence started to appear, it showed that there was no long Miocene gap in the fossil record during which an aquatic ancestor might have been hidden from view. Hardy and Morgan had both adopted stereotypes of how humans differ from other apes, leading them to emphasize skin, fat, and hair patterns in ways that are not borne out by better datasets from living primates. The skeletal traits these writers suggested as adaptations to the water actually evolved at different times and in different lineages.

What remains of their ideas is the value of fish, shellfish, and aquatic plants to some ancient hominins and other primates. Tool use and extractive foraging techniques like digging enabled some hominin populations to broaden their resource use within both savanna and woodland settings. Marshes, swamps, and shorelines had some valuable foods that were open to clever hominins. Eating fish and shellfish in particular may have facilitated the habitation of coastal areas and islands by some members of our genus.

Scientific American gives more critiques:

All the suggested anatomical and physiological adaptations can be explained by other hypotheses, which fit much better with what we actually know about the ecology of ancient hominins. Hairlessness, for instance, is only a feature of fully aquatic mammals such as whales and dolphins. Semi-aquatic mammals such as otters and water voles are extremely furry. Sexual selection and adaptations to heat loss better explain our pattern of body hair. Sexual selection may also explain our body fat distribution, which differs between the sexes. Voluntary breath control is more likely to be related to speech than to diving.

The diet of many of our ancestors certainly included marine resources—where people lived on the shores of lakes or the sea. But this was a relatively late development in human evolution, and humans can also survive and thrive on food obtained entirely on land. Compared with other animals, we are not actually that good at swimming, and our skin leaks as well, letting in water so that our fingers become prune-like after a long bath.

What about walking on two legs? That’s something all apes do a bit of—while wading in water, certainly, but also while reaching for fruit, performing aggressive displays or simply moving around in trees. If we evolved from ancestors who already stood up in trees, we don’t need an extraordinary explanation for why we ended up standing on the ground rather than running around on all fours.

And this:

Since Hardy and Morgan’s hypothesis was advanced, many of the gaps in the human fossil record have been filled, with at least 13 new species found since 1987. We have also made great strides in reconstructing the environment in which our ancestors lived. And we know that species as far apart in time as Sahelanthropus tchadensis 7m years ago and Homo erectus 2m years ago all lived in forested or open woodland environments. While some of these woods included wetland, this was just part of the mosaic of habitats that our ancestors learned to survive in, and there is absolutely no trace of a hominin ancestor as aquatic as that described by Hardy and Morgan.

We also have evidence our ancestors had to survive periods of extremely dry climate with little or no aquatic resources. Coping with these highly variable, patchwork environments required behavioural flexibility and co-operation, and our large brains and ultra-social nature likely emerged as a result. This flexibility ultimately led to the invention of culture and technology.

I’m not an expert in human evolution, but the failure of an aquatic lifestyle to explain our large brains and our bipedalism, and, importantly, the lack of evidence that hominins lived in aquatic habitats during the time that important features of our body developed—all this counts against the Aquatic Ape Hypothesis.  However, simply giving alternative scenarios for the lack of body hair or the presence of subcutaneous fat based on a terrestrial existence is not in itself strong evidence, but simply a terrestrial “adaptive story”. The decisive evidence seems to me to be where we lived—almost entirely on land—as judged from fossil evidence, as well as the failure of the aquatic ape theory to explain notable features of our bodies: our bipedality and large brains. (These evolved mostly after the time when we were supposed to be living largely in water.)

Contrary to reader Stephen, then, the aquatic ape hypothesis is NOT “undeniable,” nor is there evidence that human evolutionists are in some kind of cabal to suppress the “aquatic ape hypothesis” because it goes against the terrestrial “established paradigm.”  As far as I can see, scientists did take the aquatic theory seriously, but rejected it based on the preponderance of evidence.  The accusation that scientists suppress novel and counterintuitive ideas out of a group desire to avoid major paradigm changes in their field is one sign of pseudoscience. In fact, the same accusation has been leveled at scientists to explain why they reject creationism. But, as you should know (read Why Evolution is True), we don’t reject creationism because we’re sworn to defend Darwin; we reject it because the evidence doesn’t support it! If any cabal existed to reject evidence, it consisted of the creationists who rejected Darwin’s paradigm-changing theory published in 1859. But that cabal couldn’t hold together, for it was crushed by the weight of the evidence for evolution.

One might say (and I suppose this has been said before): at present the Aquatic Ape Hypothesis is dead in the water.

Convincing evidence for human evolution

November 2, 2023 • 11:30 am

I occasionally get questions like this one: “What do you consider the most convincing evidence for evolution?”  My answer is usually “the fossil record combined with dating methods,” but I often add that “the evidence from biogeography is so convincing that I’ve never seen a creationist even try to rebut it.” (You can see some of the biogeographic evidence in chapter 4 of Why Evolution is True, and I give the fossil evidence in Chapter 2.)

And if someone asks me, “What’s the most convincing evidence for human evolution?”, I’d also give the first answer above. That’s because the temporally ordered record of human evolution shows a fairly clear progression from the morphology of an ape somewhat like a chimp (i.e., our common ancestor with the chimp and bonobo, which lived about 6.4 million years ago) to that of modern humans.  It’s not a straight-line pathway, and we don’t know all the details, for human evolution, like all evolution, is a branching bush, and some branches went extinct.

When I was on a BBC Three show, “Conspiracy Road Trip,” with each of us assigned to convince a group of British creationists of the truth of one bit of evolution (mine was to dispel Noah’s Ark and the great flood scenario), the most convincing evidence to the creationists was the presentation of an evolutionary series of hominin skulls by Tim White at Berkeley. That bit begins at 42:26 in the video below (I appear earlier).

This week I got a note from an upset parent whose child attended a religious school where the kid was told that humans could not possibly have descended from apes. I responded that humans were apes, and we descended from a common ancestor with chimps (and from other ancestors with other primates)—an ancestor that, I suspect, looked rather chimplike. (It is of course a misconception that we descended from living chimps.)

I tried to help the parent by giving him evidence for human evolution, and that included this photo from the Smithsonian, posted on Talk Origins (Doug Theobald’s site), showing (with the exception of the skull at the top left corner) various hominin skulls laid out in temporal order.

The key:

Figure 1.4.4. Fossil hominid skulls. Some of the figures have been modified for ease of comparison (only left-right mirroring or removal of a jawbone). (Images © 2000 Smithsonian Institution.)

Note that the skull at upper left is the skull of a modern chimp, so it doesn’t really belong with the others. It’s just there for comparison. But look how things change over time: the face gets pulled back, the teeth get smaller, the brow ridges shrink, and most evident, the braincase gets larger.

Creationists have big trouble with this because they don’t know where to draw the line between “apes” and “humans”. Some maintain that every fossil earlier than some arbitrary one (say, Homo habilis) is an “ape”, while everything after that is simply a human (they might even say “a malformed human”!). But that tactic is so arbitrary and capricious that it’s not convincing even to some of the British creationists above.

I like the photo simply because it’s a wonderful piece of evidence for human evolution, with the skulls laid out in temporal order. (Note that the series leaves out the “robust” hominins; including them would confuse things a bit, though it would be more accurate, for the robust hominins are still hominins. It also leaves out more recently discovered fossils such as Homo floresiensis, the tiny “hobbit” hominin that went extinct about 50,000 years ago.)

Also, we don’t know that this is the line of evolution to modern humans (and it probably isn’t), but it does show gradual change over time that’s undoubtedly genetic, and that is what evolution means.  We do not see fossils resembling modern humans 3 million years ago, but we do see them in more recent deposits. The earliest hominin skulls we see resemble the skulls of early apes, and they gradually give way to skulls that look like those of modern humans.  What better evidence of human evolution could we wish for? I’m always amazed that fossils exist at all and that, although human fossils are especially rare, there are enough of them to provide convincing evidence that our species evolved from a common ancestor with other apes.

Putting the chimp skull in the figure does cause some confusion, as described at Anthropology.net  by Kambiz Kamrani:

I have some slight problems with this image, though. The biggest problem, and a common misconception I see in regards to understanding human evolution, is the whole we descended from chimpanzees train of thought. This image compounds it. The lineage of primates that have become the chimpanzees have been evolving independently of the human lineage. And because the non-human primate fossil record is rather spotty — it is hard to see these types of trends and transitions that we see in the above image happen along in chimpanzees.

Working on that note, this composition implies that our ancestral form was a chimp and once the chimp and human lines diverged then humans went through many natural selection events while chimps just remained stagnant as chimps. That’s wrong. Chimps and humans share a common ape ancestor.

But if you point out that the modern chimp skull is simply there for comparison, and that it is in all likelihood fairly similar to the skull of our common ancestor with modern chimps, the problem disappears. Still, many people think that we evolved from modern chimps, and it takes some doing to dispel that idea by explaining the branching pattern of evolution and the idea of common ancestry. Those are a bit harder.

Scientific American is back to distorting the facts to buttress its ideology

October 24, 2023 • 11:00 am

It’s been a while since Scientific American has published misleading and distorted articles to buttress its “progressive” Left ideology, and I hoped they had shaped up. (To be honest, I haven’t followed the magazine, and got the following link from a reader.) My hope was dashed yesterday when I read this new article claiming that women constituted a high proportion of hunters in early hunter-gatherer societies.  It is full of misconceptions and distortions (some of which must be deliberate), neglects contrary data, is replete with tendentious ideological claims, and even misrepresents the claim they’re debunking.  You can read it for free by clicking on the screenshot below or by going here:

First, the idea that they’re trying to debunk is that women were “second class citizens” in early societies, forced to gather food because they were tied to childcare duties, while men did all the hunting. This is apparently an attempt to buttress the editors’ and authors’ feminism. But feminism doesn’t need buttressing with data on hunting; women’s equality is a moral proposition that doesn’t depend on observations about hunting. In other words, women have equal moral rights and should not be treated unfairly because fair treatment is the moral thing to do. If women never hunted, would we then be justified in treating them as second-class citizens? Hell, no!  Here’s their thesis:

Even if you’re not an anthropologist, you’ve probably encountered one of this field’s most influential notions, known as Man the Hunter. The theory proposes that hunting was a major driver of human evolution and that men carried this activity out to the exclusion of women. It holds that human ancestors had a division of labor, rooted in biological differences between males and females, in which males evolved to hunt and provide, and females tended to children and domestic duties. It assumes that males are physically superior to females and that pregnancy and child-rearing reduce or eliminate a female’s ability to hunt.

Man the Hunter has dominated the study of human evolution for nearly half a century and pervaded popular culture. It is represented in museum dioramas and textbook figures, Saturday morning cartoons and feature films. The thing is, it’s wrong.

The story is in fact the cover story of the November issue, so the magazine will never, ever issue a correction or clarification:

Click to read for yourself:

First, note that I’ve written at least five pieces on the “woman hunter” hypothesis: here, here, here, here, and here. The source of the hypothesis was a PLOS One paper, by Anderson et al., arguing the following:

Of the 63 different foraging societies, 50 (79%) of the groups had documentation on women hunting. Of the 50 societies that had documentation on women hunting, 41 societies had data on whether women hunting was intentional or opportunistic. Of the latter, 36 (87%) of the foraging societies described women’s hunting as intentional, as opposed to the 5 (12%) societies that described hunting as opportunistic. In societies where hunting is considered the most important subsistence activity, women actively participated in hunting 100% of the time.

According to the authors’ data, then, in 36 of the 50 societies that had documentation of women hunting (72%), the hunting was intentional.  That is the important result: in most societies, women participated in hunting.  The present paper also implies that this was not rare participation—say, a few women included in a big hunting party—but that women constituted a substantial proportion of those engaged in hunting, and that a substantial proportion of hunter-gatherer societies had women hunting.  Here’s how the new Sci Am paper ends:

Now when you think of “cave people,” we hope, you will imagine a mixed-sex group of hunters encircling an errant reindeer or knapping stone tools together rather than a heavy-browed man with a club over one shoulder and a trailing bride. Hunting may have been remade as a masculine activity in recent times, but for most of human history, it belonged to everyone.

“Hunting. . .  belonged to everyone” clearly implies, as the paper does throughout, that women’s hunting was nearly as frequent and important as men’s hunting. This is an essential part of the authors’ ideological contention, for if women hunted only rarely, or constituted only a small fraction of hunting groups, that would imply intolerable hunting inequity.

But the authors’ defense of their hypothesis is deeply flawed. Here are six reasons, and I’ll try to be brief:

1.)  Nobody maintains that, as the authors assert, “men carried this activity out to the exclusion of women”. This may have been a trope in the past, but even those rebutting Ocobock and Lacy’s (henceforth O&L’s) data these days do not claim that women never hunted. Of course they did, and no scientist would say that “no women ever hunted” because we cannot document that. The question, which the authors don’t address, is how frequently women hunted and what proportion of hunters they constituted.  (See below for more.)

2.) I don’t know anyone (I may have missed some) who argues that men evolved to hunt: that is, that natural selection acting on hunting behavior itself caused a difference between the sexes in their propensity to hunt. The alternative hypothesis—and one that is far more credible—is that sexual selection based on male-male competition and female choice led, in our ancestors, to the evolution of greater size, strength, and musculature, and a different physiology, in men than in women. Once that had evolved, men would obviously be the sex that did most of the hunting. (And yes, childcare by women is also a possible reason.) The authors’ claim that “males evolved to hunt and provide, and females tended to children and domestic duties” is thus misleading in that males probably got their generally superior athletic abilities (see below) as a result of sexual selection, and their hunting then became a byproduct of that. Similarly, women tend to their children more because that’s another result of sexual selection (women have greater reproductive investment in children), and their lower participation in hunting could also be a byproduct of that.

O&L don’t mention this alternative hypothesis in their paper.

3.) The authors neglect important data casting doubt on O&L’s conclusions. Soon after the original paper by Anderson et al. appeared, other anthropologists began to find fault with it. To see examples of how Anderson et al.’s data are dubious, see my posts here, here and here giving other people’s rebuttals.

Here are the conclusions from one critique, which does recognize women’s value in hunting small animals:

100% of the societies had a sexual division of labor in hunting. Women may have participated with men in some hunting contexts, typically capturing small game with nets, but participated much less in large game hunting with weapons or by persistence. Even within these contexts, it was usually the case that the role of women during the communal hunt was different. For example, women flushed wild game into nets while men dispatched the game.

These are my subjective ratings based on the papers I read in Anderson et al. (2023) and the supporting literature I cited. You may disagree and assign some different ratings. The point is that there is substantial variation across cultures in sex-based hunting roles. Additionally, none of the societies truly have an absence of these roles.

. . . Why did the perception of “man the hunter” arise? It’s likely because we see many sex-segregated hunting practices, particularly in hunting large game with weapons. Additionally, when you think of hunting, the first thing that comes to mind may not be chasing birds into nets. You probably think of a man with a spear — usually a man, not a woman, with a spear.

Here are tweets from another anthropologist looking at many societies, about which I wrote this:

Before I go, I’ll call your attention to a series of tweets by Vivek Venkataraman (start here on Twitter), an assistant professor in the Department of Anthropology and Archaeology of the University of Calgary. His university webpage describes his interests:

Dr. Venkataraman is an evolutionary anthropologist who is broadly interested in the evolution of the human diet and food systems, and their relation to life history and behavior. He is assistant director of the Guassa Gelada Research Project, and also the co-founder and co-PI of the Orang Asli Health and Lifeways Project (OAHeLP).

Venkataraman is somewhat dubious about some of the PLOS One paper’s results, especially the 80% frequency of women hunting among all hunter-gatherer societies. On the other hand, like me, he applauds any new data that can change our views of biology, and thinks the frequency of hunter-gatherer societies in which women hunt is somewhere between 13% and 80%; but he also thinks that women’s hunting was even more frequent in the past than it is now (see below).

Have a look at these tweets, which involve examining many more “forager” societies:

 

The O&L paper does not mention these criticisms, and therefore does not answer them.  They are relying on data that have come into serious question because of their incompleteness and possible cherry-picking. They simply cannot be unaware of these criticisms; they just ignored them.  (Note: I haven’t looked for more recent data addressing O&L’s claim.)

4.) The authors repeatedly imply that, in effect, males and females are equal in athletic performance, undercutting the idea that men hunted because they were athletically better equipped to hunt. But O&L’s claim of “athletic equity” is false. The authors note that women outcompete men in some endurance sports, citing this:

Females are more regularly dominating ultraendurance events such as the more than 260-mile Montane Spine foot race through England and Scotland, the 21-mile swim across the English Channel and the 4,300-mile Trans Am cycling race across the U.S.

I looked up the Montane Spine foot race, and the Wikipedia tables for summer and winter events give the results of 17 races, only one of which was won by a woman. (I presume the sexes compete together; if not, the women’s times are still slower.)

Likewise, in all English Channel crossings in which there are men’s and women’s records (there are two- and three-way crossings in addition to single crossings), the men have faster times.

Finally, in all the Trans Am Bike Race results given on Wikipedia (11 are shown), a woman won only once: Lael Wilcox in the 2016 eastbound race. In all other races save one, in which a woman finished third, no women ever placed in the top three.

I conclude that O&L’s claim that women “regularly dominate” in these events is at best a distortion, at worst a lie. There is no “dominance” evident if a woman only had the fastest time in a single event.

Further, while it may be the case (I didn’t look it up) that women more often win events in archery, shooting, and badminton, in every other competitive sport I know of, men do better than women. Here is a table from Duke Law’s Center for Sports Law and Policy giving men’s and women’s best performances in 11 track and field events, as well as boys’ and girls’ best performances. In every case, not only was the record held by a man, but the best boy’s performance was better than the best women’s performance.

There is no doubt that, across nearly all sports, men perform better than women. That’s expected because of men’s greater upper-body strength, bone strength, athletic-related physiology, and grip strength. I didn’t look up sports like tennis, but we all know that the best men outcompete the best women by a long shot, something Serena Williams has admitted.  And. . .

She and her sister Venus were both thrashed by Germany’s world No.203 Karsten Braasch at the Australian Open in 1998 while trying to prove they could beat any man outside the top 200.

If I erred here, please correct me!

Here’s a quote from O&L (my bolding):

The inequity between male and female athletes is a result not of inherent biological differences between the sexes but of biases in how they are treated in sports. As an example, some endurance-running events allow the use of professional runners called pacesetters to help competitors perform their best. Men are not permitted to act as pacesetters in many women’s events because of the belief that they will make the women “artificially faster,” as though women were not actually doing the running themselves.

Here the authors are wading into quicksand. In fact, the entire quote is offensive to reason, for it implies that, if women were treated the same as men in sports, they would do as well. Given the differences between the sexes in morphology and physiology, such a claim flies in the face of everything we know.  The “pacesetters” argument is purely hypothetical, and I’m betting that giving women male pacesetters (note: not female pacesetters) would not turn them into winners against men. But of course it’s worth a try if O&L are right.

5.) O&L claim that both sex and gender are a spectrum, and sex is not binary. Here’s their quote (emphasis is mine):

For the purpose of describing anatomical and physiological evidence, most of the literature uses “female” and “male,” so we use those words here when discussing the results of such studies. For ethnographic and archaeological evidence, we are attempting to reconstruct social roles, for which the terms “woman” and “man” are usually used. Unfortunately, both these word sets assume a binary, which does not exist biologically, psychologically or socially. Sex and gender both exist as a spectrum, but when citing the work of others, it is difficult to add that nuance.

No, Scientific American: I know your editor thinks that biological sex is a spectrum, but she’s wrong and so are you. The “sex is a spectrum” mantra is another ideological tactic, mistakenly used to buttress the cause of trans people or people of non-standard genders. But Mother Nature doesn’t care about ideology, and, as Luana Maroja and I showed in our paper on “The Ideological Subversion of Biology” (see point #1, about sex), sex is binary in all animals. In humans, for example, the frequency of exceptions to the binary is only 0.018%, or about 1 person in 5,600. That is about the same as the probability of flipping a nickel and having it land on its edge, but we don’t say “heads, tails, or edge?” when calling a coin toss. For all practical purposes, sex is binary, and if you want to argue about it, don’t do so here. And, as Luana and I emphasized, whether or not sex is binary has no bearing on the treatment (or nearly all rights) of trans and non-standard-gender folks.
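For those who want to see the arithmetic behind that “1 in 5,600” figure, here’s a quick back-of-the-envelope check (my own conversion, nothing more):

```python
# Convert 0.018 percent into "1 person in N"
frequency = 0.018 / 100      # 0.018% as a proportion, i.e. 0.00018
print(round(1 / frequency))  # 5556, i.e. roughly 1 in 5,600
```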

6.) Whether or how often women hunted is irrelevant to our views of men and women. Really, why does ideology push Scientific American, and in this case O&L, to distort the facts and to leave out contrary data, when the rights of women don’t depend in the least on whether they hunted or on their relative athletic performance?  Women’s rights rest on morality, not on observations of nature. Yes, there are some trivial exceptions, like those of us who don’t think that transwomen should be allowed to compete athletically against biological women, but there are many feminists who agree with that.  The real feminist program of equal rights and opportunities for women has nothing to do with whether they hunted as much as men in ancient (or in modern) hunter-gatherer societies.

In the end, we have still more evidence that Scientific American is no longer circling the drain, but is now in the drain, headed for, well, the sewers. It used to have scientists writing about their field, with no ideological bias, but now has ideologues (these authors happen to be scientist-ideologues) writing about science in a biased and misleading way.

Apparently this trend will continue, and apparently the publishers won’t do anything about it. So it goes. But those of you who want your science untainted by “progressive” ideology had best look elsewhere.

More Sophisticated Theology: a religious scholar ponders whether Neanderthals had immortal souls

August 10, 2023 • 8:30 am

Lest you think that Sophisticated Theology™ has fallen on hard times, here we have an article pondering at great and tedious length the immensely important question, “Did Christ die for Neanderthals?” That can be rephrased, according to author Simon Francis Gaine, as “Did the Neanderthals have immortal souls?” (The “OP” after his name stands for Ordinis Praedicatorum, meaning “of the Order of Preachers”: the Dominican order of Catholicism.)

And he gets paid to write stuff like this; his biography gives his bona fides, including a degree from Oggsford:

Fr Simon is currently assigned to the Angelicum, Rome, where he teaches in the Theology Faculty of the Pontifical University of St Thomas. He lectures on the Theology of Grace and Christian Anthropology, and oversees the Faculty’s Doctoral Seminar.

Fr Simon holds the Pinckaers Chair in Theological Anthropology and Ethics in the Angelicum Thomistic Institute, of which he is also the Director. He is a member of the Advisory Board of Blackfriars’s Aquinas Institute, the Pontifical Academy of St Thomas, Rome, and the Vatican’s International Theological Commission.

He studied theology at Oxford, and completed his doctorate in modern Catholic theology before joining the Dominican Order in 1995.

Click on the screenshot for a paradigmatic example of Sophisticated Theology™. The paper appeared in 2020 in New Blackfriars, a Wiley journal that’s apparently peer reviewed.

Here’s the Big Question:

 I have no expertise in any of these sciences, but have tried as best I can to understand what they have to say, in order to take account of what they have to say within a theological framework. Today I am going to look at the Neanderthals and their relationship to us from a theological perspective in the Catholic tradition, asking what a disciple of St Thomas Aquinas should make of them. Are they to be counted among the humanity God created in his image and likeness and which fell into sin, or are they to be counted instead among the other animal species of our world represented in the first chapter of Genesis? Or are they something else? While creation itself is to be renewed through Christ at the last, according to Christian faith Christ is said to die for our trespasses, for our sins. So did Christ die for Neanderthals?

This comes down to the question, says Gaine, of whether Neanderthals had immortal souls, so we have to look for evidence of that. If they did, then they could be saved by Jesus, though since the Neanderthals’ demise antedated the appearance of Jesus by roughly 40,000 years, their souls must have lingered somewhere like Purgatory (along with the souls of Aztecs and other pre-Christian believers) for millennia. Gaine does not take up the question of whether other hominins, like H. erectus or H. floresiensis, to say nothing of the Denisovans, also had souls.

Since we have no idea whether Neanderthals had immortal souls (indeed, we can’t be sure that anybody else has an immortal soul, since, like consciousness, it can’t be observed directly), we have to look for proxies for souls. The question is complicated by the fact that Neanderthals interbred with “modern” Homo sapiens, so that most of us carry a few percent of Neanderthal genes in our genomes.

To answer his question of whether Neanderthals are “theologically human” (i.e., whether they had immortal souls), Gaine turns to his hero Aquinas:

So were Neanderthals theologically human or not? I think the only way we can approach this question is to ask whether or not Neanderthals had immortal souls, as we do. But, apart from Christian teaching, how do we know that we even have such souls? We cannot just have a look at our immaterial souls, and Aquinas thought that we only know the character of our souls through what we do. Aquinas argues from the fact that we make intellectual acts of knowledge of things abstracted from their material conditions, to the immateriality of the intellectual soul. Our knowledge is not just of particulars but is universal, enabling pursuits like philosophy and science, and the potential to be elevated by God to supernatural knowledge and love of him. If human knowing were more limited to a material process, Aquinas does not think our souls would be such subsistent, immaterial souls. Finding evidence of intellectual flights throughout the history of sapiens is difficult enough, however, let alone in Neanderthals.

. . .  What we need to look for in the case of Neanderthals is evidence of some behaviour that bears the mark of an intellectual soul such as we have.

And so an “intellectual soul” becomes a proxy for the immortal soul, which is itself the proxy for whether you can be saved by Christ. Did Neanderthals have one? Gaine uses several lines of evidence to suggest that they did.

  • Neanderthals buried their dead (religion!)
  • Language. We don’t know if Neanderthals could speak, but they had a vocal apparatus similar to that of modern H. sapiens. Gaine concludes that they had language, though of course that’s pure speculation. But since when have Sophisticated Theologians™ bridled at unsupported speculation?
  • Neanderthals made cave paintings and may have adorned themselves with feathers and jewelry: signs of a “material culture” similar to H. sapiens.

And so he concludes, without saying so explicitly, that Neanderthals had immortal souls and were save-able by Christ. This supposedly allows us to use science to expand theology:

How though does any of this make a difference to theology in the tradition of Aquinas? If Neanderthals were created in God’s image and saved by Christ, this must expand our understanding of Christ’s ark of salvation and raise questions about how his saving grace was made available to them. Because the Church teaches that God offers salvation through Christ to every person in some way, theologians have often asked in recent times how this offer is made to those who have not heard the Gospel, members of other religions, and even atheists. It seems to me that, just as modern science has enlarged our sense of the physical universe, the inclusion of Neanderthals in theological humanity must somehow expand our sense of human salvation, given that it was effected in the kind of life Neanderthals lived.

. . . But even if Neanderthal inclusion does not pay immediate theological dividends, at least for apologetic reasons it seems necessary for theology to take account of their discovery. Unless theologians do, they risk the appearance of leaving faith and science in separately sealed worlds, as though our faith cannot cope with advancing human knowledge, leaving it culturally marooned and seemingly irrelevant to many. That is exactly the opposite of the attitude of Aquinas, who, confident that all truth comes from God, in his own day confirmed Christian wisdom by integrating into it what he knew of human science.

But why stop at Neanderthals when you’re “expanding your faith through science”? There are lots of other hominins that must be considered (see below). Can we rule most of them out because they might not have had language?

From the Encyclopedia Britannica

And what about other mammals? In 2015 the great Sophisticated Catholic Theologians™ Edward Feser and David Bentley Hart argued about whether dogs can go to Heaven. (Hart said “yes,” while Feser said “no,” both of them furiously quoting church authorities like Aquinas to support their positions.)

These are tough questions, and of course to answer them theologians have to confect arguments based on casuistry. What amazes me is that people get paid to corrupt science with such ridiculous theological questions. It is unsupported speculation about unevidenced empirical assertions.

h/t: David

The biology of quitting: when you should hold ’em and when you should fold ’em

April 20, 2023 • 12:30 pm

Someone called this Big Think piece to my attention because some quotes from me are in it. And they are, but that’s not the important part, which is the evolutionary biology of giving up (I guess I’m the Expert Evolutionist in this take). The piece is by Julia Keller, a prolific author and journalist who won a Pulitzer Prize for feature writing in 2004, and it’s an excerpt from her new book Quitting: A Life Strategy: The Myth of Perseverance and How the New Science of Giving Up Can Set You Free, which came out April 18.

Although I had some association with Julia when she wrote for the Chicago Tribune (I think she helped me get a free-speech op-ed published), I don’t remember even speaking to her on this topic, but it must have been quite a while back. At any rate, I certainly want to be set free from my maladaptive compulsions, which include persisting when I should give up, so I’ll be reading her book.

Click on the screenshot to read:

The science involved is largely evolutionary: it pays you to give up when you leave more offspring by quitting than by persisting. Or, to couch it more accurately, genes that enable you to assess a situation (consciously or not) and give up at the right point—right before the relative reproductive gain from persisting turns into a relative loss compared to other gene forms affecting quitting—will come to dominate over the “nevertheless she persisted” genes. Keller engages the reader by drawing a comparison at the outset between Simone Biles withdrawing from her gymnastics events at the 2021 Tokyo Olympics and a honeybee deciding whether or not to sting a potential predator of the nest.

If the bee does sting, she invariably dies (her innards are ripped out with the sting), and she can no longer protect the nest. But if that suicidal act drives away a potential predator, copies of the “sting now” gene are saved in all the nest’s other workers, who are her full or half sisters. (And of course they’re saved in her mother—the queen, the only female who can pass on her genes.) If a worker doesn’t sting, every copy of that gene might be lost should the nest be destroyed, for if the nest goes, so goes the queen, and with her every copy of the gene. On the other hand, a potential intruder might not actually prey on the nest, so why give up your life if stinging accomplishes nothing? You have to know when stinging is liable to pay off and when it isn’t.

Inexorably, natural selection will preserve genes that succeed in this reproductive calculus by promoting stinging at the right time and place—or, on the other hand, refraining from stinging when it’s liable to have no effect on colony (ergo queen) survival. And in fact, as you see below, honeybees, while they surely don’t do this calculus consciously, behave as if they do, and they do it correctly. Natural selection often favors animals making “decisions” that cannot be conscious but that have been molded by selection to look as if they were.
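To make that gene-level bookkeeping concrete, here is a minimal sketch in Python. It’s my own toy model with invented numbers (the function name and every parameter are made up for illustration, not taken from Keller’s book or from the bee research discussed below): it simply compares the expected future reproductive value of the colony, the only route by which a sterile worker’s genes persist, when she stings versus when she holds back.

```python
# Toy model (illustration only; assumed numbers, not data from any real colony study).
# A sterile worker's genes persist only through the colony, so we compare the
# expected future reproductive value of the colony under the two strategies.

def expected_colony_value(p_attack, sting_deters, colony_value, worker_value):
    """Return (expected value if she stings, expected value if she holds back).

    p_attack     -- probability the intruder would actually destroy the nest
    sting_deters -- probability that a sting drives the intruder away
    colony_value -- future reproductive value of the colony (arbitrary units)
    worker_value -- this worker's own future contribution to that value
    """
    # Stinging kills her (her future work is lost), but it may save the nest.
    p_saved_if_sting = (1 - p_attack) + p_attack * sting_deters
    value_if_sting = p_saved_if_sting * colony_value

    # Holding back keeps her alive and working, but leaves the threat unanswered.
    value_if_hold = (1 - p_attack) * (colony_value + worker_value)
    return value_if_sting, value_if_hold


# A clear threat to a fertile colony: stinging pays despite her certain death.
print(expected_colony_value(p_attack=0.8, sting_deters=0.5,
                            colony_value=100, worker_value=5))   # about (60.0, 21.0)

# A probably-harmless visitor: holding back preserves more expected value.
print(expected_colony_value(p_attack=0.05, sting_deters=0.5,
                            colony_value=100, worker_value=5))   # about (97.5, 99.75)
```

Under this admittedly crude accounting, sacrificing her own life pays only when the threat is real enough; genes that set the switch point in the right place are the ones that spread.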

As for Simone Biles, well, you can read about her. Her decision was clearly a conscious one, but the impulse behind it was also bred into us by selection—selection to avoid damaging our bodies, which of course can severely limit our chances of passing on our genes. This is why we usually flee danger when there is nothing to gain by meeting it. (She did have something to gain—gold medals—which is why she’s like the bees.)

Why do young men race their cars on the street, a dangerous practice? What do they have to gain? Well, risk-taking is particularly prevalent in postpubescent males compared to females, and I bet you can guess why.

I’ll first be a bit self-aggrandizing and show how I’m quoted on evolution, and then get to the very cool bee story. It’s a short piece, and you might think of other “quitting vs. non-quitting” behaviors of animals that could have evolved. (Hint: one involves cat domestication.)

“Perseverance, in a biological sense, doesn’t make sense unless it’s working.”

That’s Jerry Coyne, emeritus professor at the University of Chicago, one of the top evolutionary biologists of his generation. [JAC: a BIT overstated, but I appreciate it.] I’ve called Coyne to ask him about animals and quitting. I want to know why human beings tend to adhere to the Gospel of Grit—while other creatures on this magnificently diverse earth of ours follow a different strategy. Their lives are marked by purposeful halts, fortuitous side steps, canny retreats, nick‑of‑time recalculations, wily workarounds, and deliberate do‑overs, not to mention loops, pivots, and complete reversals.

Other animals, that is, quit on a regular basis. And they don’t obsess about it, either.

In the wild, Coyne points out, perseverance has no special status. Animals do what they do because it furthers their agenda: to last long enough to reproduce, ensuring the continuation of their genetic material.

We’re animals, too, of course. And despite all the complex wonders that human beings have created—from Audis to algebra, from hot-fudge sundaes to haiku, from suspension bridges to Bridgerton—at bottom our instincts are always goading us toward the same basic, no‑nonsense goal: to stick around so that we can pass along little copies of ourselves. [JAC: note how this is an individual-centric view rather than the correct gene-centric one, but it’s good enough.] It’s axiomatic: the best way to survive is to give up on whatever’s not contributing to survival. To waste as few resources as possible on the ineffective. “Human behavior has been molded to help us obtain a favorable outcome,” Coyne tells me. We go for what works. We’re biased toward results. Yet somewhere between the impulse to follow what strikes us as the most promising path—which means quitting an unpromising path—and the simple act of giving up, something often gets in the way. And that’s the mystery that intrigues me: When quitting is the right thing to do, why don’t we always do it?

Well, who ever said that every aspect of human behavior was molded by natural selection? Please don’t think that I was implying that it was, as we have a cultural veneer on top of the behaviors conditioned by our genes. In this piece Keller doesn’t get to the subject of why we don’t quit when we should. I’m sure that’s in the book.

Now the very cool bee story:

Justin O. Schmidt is a renowned entomologist and author of The Sting of the Wild, a nifty book about a nasty thing: stinging insects. Living creatures, he tells me, echoing Coyne, have two goals, and those goals are rock-bottom rudimentary: “To eat and not be eaten.” If something’s not working, an animal stops doing it—and with a notable absence of fuss or excuse-making. . . .

. . . For a honeybee, the drive to survive carries within it the commitment to make sure there will be more honeybees. And so she defends her colony with reckless abandon. When a honeybee stings a potential predator, she dies, because the sting eviscerates her. (Only the females sting.) Given those odds—a 100 percent mortality rate after stinging—what honeybee in her right mind would make the decision to sting if it didn’t bring some benefit?

That’s why, Schmidt explains to me from his lab in Tucson, sometimes she stands down. When a creature that may pose a threat approaches the colony, the honeybee might very well not sting. She chooses, in effect, to quit—to not take the next step and rush forward to defend the nest, at the cost of her life.

His experiments, the results of which he published in 2020 in Insectes Sociaux, an international scientific journal focusing on social insects such as bees, ants, and wasps, reveal that honeybees make a calculation on the fly, as it were. They decide if a predator is close enough to the colony to be a legitimate threat and, further, if the colony has enough reproductive potential at that point to warrant her ultimate sacrifice. If the moment meets those criteria—genuine peril (check), fertile colony (check)—the honeybees are fierce fighters, happy to perish for the greater good.

But if not… well, no. They don’t engage. “Bees must make life‑or‑death decisions based on risk-benefit evaluations,” Schmidt tells me. Like a gymnast facing a dizzyingly difficult maneuver that could prove to be lethal, they weigh the danger of their next move against what’s at stake, measuring the imminent peril against the chances of success and the potential reward. They calculate odds.

And if the ratio doesn’t make sense, they quit.

That’s a bit oversimplified, for the calculus is not only unconscious (I doubt bees can weigh threats this way), but the decision-making capability has also been molded by competition, over evolutionary time, among different forms of genes with different propensities to sting or give up. Further, individual worker bees are sterile, and so what’s at stake is the number of gene copies in the nest as a whole—and especially in the queen. The asymmetrical relatedness among the queen, her workers, and their useless drone brothers (produced from unfertilized eggs) makes the calculus especially complicated.
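For readers who want the numbers behind that asymmetry, here they are as a small Python reference snippet. This is my own summary of the standard haplodiploid relatedness coefficients (textbook values), not anything computed in Schmidt’s paper:

```python
# Relatedness coefficients under haplodiploidy, from a worker's point of view
# (standard textbook values). Drones develop from unfertilized eggs, so a worker
# shares genes with a brother only through their mother, the queen.
relatedness_from_worker = {
    "mother (queen)":       0.50,  # half of a worker's genes come from the queen
    "full sister (worker)": 0.75,  # their shared father passes identical genes to all daughters
    "half sister (worker)": 0.25,  # same mother, different father (queens mate multiply)
    "brother (drone)":      0.25,  # related only through the queen
}

for kin, r in relatedness_from_worker.items():
    print(f"{kin}: r = {r}")
```

Because a sterile worker’s genetic stake lies almost entirely in the queen and the rest of the colony, defending the nest can pay even at the cost of her own life.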

On the other hand, explaining the gene calculus to lay readers is hard, and it might be better to read the seminal work on how this all operates: Dawkins’s The Selfish Gene. 

Here’s Schmidt’s short paper (click to read; if it’s paywalled, ask for a copy). He died just this February.

Svante Pääbo nabs Medicine and Physiology Nobel

October 3, 2022 • 7:30 am

I had totally forgotten that it’s Nobel Prize season, and the first one, the Prize in Physiology or Medicine, was awarded today—to the human evolutionary geneticist Svante Pääbo, a Swede. The reader who sent me the news had these immediate reactions:

  • Highly unusual that there is a single winner nowadays
  • How often has the prize gone to an evolutionary scientist (of any shape or form)?
  • Probably being Swedish helped a bit!

Yes, the last “solo” prize in this field was given in 2016, to Yoshinori Ohsumi for his work on autophagy. As for evolutionary biology, I’m not aware of anybody working largely on evolution who has won a Nobel Prize. The geneticist Thomas Hunt Morgan won one, but it was his students who became evolutionary geneticists. I also remember that when I entered grad school, my Ph.D. advisor Dick Lewontin was helping prepare a joint Nobel Prize nomination for Theodosius Dobzhansky and Sewall Wright, but Dobzhansky died in 1975 before it could be submitted, and posthumous prizes aren’t given.

Of course, Pääbo has worked on the evolution of the genus Homo, and a human orientation helps with the Prize, but his substantial contributions fully qualify him for the Big Gold Medal. As for his being Swedish, I don’t know if there’s some national nepotism in awarding prizes, but again, Pääbo’s work is iconic, and whatever his nationality, he deserves one. And of course I’m chuffed that an evolutionary geneticist—one of my own tribe—won the Big One.

Click on the Nobel Committee’s press release or the NYT article below to read about Pääbo or go to his Wikipedia page.

NYT:

Pääbo is the leader of a large team and has had many collaborators, but it’s clear that, if fewer than four people were to get the prize for work on human evolution, Pääbo would stand out as the main motive force, ergo his solo award. Sequencing the Neanderthal genome and estimating its time of divergence from “modern” H. sapiens (about 800,000 years)? That was Pääbo and his team. Finding the Denisovans, a group that evolved separately from Neanderthals? Pääbo and his team. Discovering that both of these groups interbred with our own ancestors, and that we still carry an aliquot of their genes? Pääbo and his team. Learning that some of the genes introgressed from Denisovans conferred high-altitude adaptations on Tibetans? Pääbo and his team. And that some Neanderthal genes confer modern resistance to infections? Pääbo and his team.

The man can truly be seen as the father of human paleogenetics—and he’s five years younger than I? Oy!

Although born in Sweden, Pääbo works mostly in Germany. Here’s his bio from the Nobel Prize Committee:

Svante Pääbo was born 1955 in Stockholm, Sweden. He defended his PhD thesis in 1986 at Uppsala University and was a postdoctoral fellow at University of Zürich, Switzerland and later at University of California, Berkeley, USA. He became Professor at the University of Munich, Germany in 1990. In 1999 he founded the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany where he is still active. He also holds a position as adjunct Professor at Okinawa Institute of Science and Technology, Japan.

A prize for work in evolutionary genetics! Well done, Dr. Pääbo!

Svante Pääbo

And a bit of biography from the NYT article:

Dr. Pääbo has a bit of Nobel Prize history in his own family: In a 2014 memoir, “Neanderthal Man,” he wrote that he was “the secret extramarital son of Sune Bergstrom, a well-known biochemist who had shared the Nobel Prize in 1982.”

It took some three decades of research for Dr. Pääbo to describe the Neanderthal genome that won him his own prize. He first went looking for DNA in mummies and older animals, like extinct cave bears and ground sloths, before he turned his attention to ancient humans.

“I longed to bring a new rigor to the study of human history by investigating DNA sequence variation in ancient humans,” he wrote in the memoir.

It would be no easy feat. Ancient genetic material was so degraded and difficult to untangle that the science writer Elizabeth Kolbert, in her book “The Sixth Extinction,” likened the process to reassembling a “Manhattan telephone book from pages that have been put through a shredder, mixed with yesterday’s trash, and left to rot in a landfill.”