Earliest evidence for humans making fire: 400,000 years ago

December 14, 2025 • 10:50 am

Although, as the authors of this new Nature article note, there is some evidence of human fire use in Africa going back 1.6 million years, they don’t consider that evidence definitive because “the evidence for early fire use is limited and often ambiguous, typically consisting of associations between heated materials and stone tools.” They also note that there is more direct evidence of fire-making, but it’s quite recent:

. . . . direct evidence of fire-making by pre-Homo sapiens hominins has, until recently, been limited to a few dozen handaxes from several French Neanderthal sites, dating to around 50 ka, that exhibit use-wear traces consistent with experimental tools that were struck with pyrite to create sparks.

In this paper the authors investigate a site in Suffolk, dated to about 400,000 years ago, that shows several lines of evidence suggesting regular use of fire, and controlled use as well, since materials like pyrite that could be used to strike sparks were present. Note that the paper considers this the earliest evidence for making fire, not simply using fire. The authors consider their work to provide pretty definitive evidence of fire-making and fire use in hominins long before H. sapiens reached Europe. (Note that hominins are the only animals to use fire.)

Click the headline below to read the article, or you can find the pdf here.

The evidence came from a disused clay pit in the Breckland area of Suffolk, with deposits of clay and silt as well as human artifacts like hand axes. The evidence for persistent fire use at this site (the authors suggest it involved at least two groups of humans) comes from six sets of observations and experiments. I’ve put them below under letters a through f.

a.) Red clayey silt (RCS) in the layers: silt that seems to have required prolonged heating to form. Here’s what it looks like. The unexcavated section is in the top photo, and the bottom photo shows the partly excavated area, an enlargement of the box in (a). I’ve put a red arrow in (a) at the RCS layer thought to reflect heating of the sediments by the presence of “hearths”: areas where cooking or other uses of fire regularly took place. The layer is more obvious in the bottom photo:

The authors say that the red layer reflects heating of sediments containing iron:

The reddening is attributable to the formation of haematite—a mineral produced through heating of iron-rich sediments. Its distribution is homogeneous and not associated with particular microfacies or voids, indicating that it was preserved in situ.

b.) Experimental heating of the non-red sediments. The authors showed that the magnetic properties of material in the RCS differ markedly from unheated “control” samples of material taken from the lower layer (“YBCS” in the second photo above). But when the YBCS material was heated extensively, it took on some of the magnetic properties of the RCS, suggesting that the RCS was produced by the heating of clays by fire. As they say (bolding is mine):

Three samples were taken from the RCS and two from the adjacent YBCS, which served as unheated control samples. The magnetic properties of the RCS (Supplementary Information, section 5) differ markedly from those of the unheated control samples, exhibiting elevated levels of secondary fine-grained ferrimagnetic and superparamagnetic minerals of pyrogenic origin, unlike the control samples. To assess whether these characteristics could result from heating, a series of experiments of single and multiple heating events of varying durations, was conducted. The aim was to determine whether the reddening could have arisen from one or multiple heating events, as repeated, localized burning is more typical of human than natural fire events (S.H. et al. manuscript in preparation).

The closest experimental analogue in terms of the mineralogy and grain size distribution, was observed after 12 or more heating events, each lasting 4 h at temperatures of 400 °C or 600 °C. Although the archaeological samples exhibit substantially lower magnetic susceptibility values, this may result from post-depositional mixing with unheated illuviated clay. Overall, the experiments indicate that the magnetic properties of the RCS result from an indeterminate number of short-duration heating events, consistent with repeated human use (Fig. 3).

Note that prolonged heating (nearly 50 hours at 400-600 degrees C) was required to approximate the magnetic properties of the presumed fire-use layer. This also suggests that the heating did not reflect wildfires, but repeated, localized, and intentional burning.
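The arithmetic behind that “nearly 50 hours” is just the experimental protocol quoted above; here’s a trivial check (the per-event numbers are the paper’s, the code is mine):

```python
# Cumulative heating time implied by the experiments quoted above:
# 12 or more heating events, each lasting 4 hours, at 400 or 600 °C.
events = 12          # minimum number of events whose products matched the RCS
hours_per_event = 4  # duration of each experimental heating event
total_hours = events * hours_per_event
print(f"cumulative heating: {total_hours} h")  # 48 h, i.e. "nearly 50 hours"
```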

c.) Infrared spectroscopy. Heating the “control” samples changed their infrared absorption spectra, making them closer to those of the presumed hearth layer of RCS.

d.) The area contained four handaxes that showed marks of heat-shattering. Here is a picture of a handaxe with a “closeup of fractured surface caused by fire”:

Presumably this is based on experiments using recently made handaxes, with some treated by fire and then compared to unheated controls.

e.) Fragments of pyrite were found in the heated area, and pyrite can be struck against flint to produce fire (before fire-making, people presumably had to get fire from lightning-caused burns and somehow preserve it). Moreover, pyrite does not occur naturally in this locality; the nearest accessible source was about 15 km away, suggesting that people picked it up and brought it to the site to strike against flint (flint was also found in the area). As the authors note:

The occurrence of pyrite at Barnham warrants further consideration. Pyrite is a naturally occurring iron sulfide mineral that can be struck against flint to produce sparks to ignite tinder. Its use for this purpose is well documented in ethnographic accounts worldwide. Pyrite has been recovered from European archaeological sites dating from the late Middle Palaeolithic to the historic periods, occasionally bearing wear traces consistent with use for fire-making and, in some cases, found in association with flint striking tools.

Here are some fragments of pyrite; caption is from paper:

(from paper): b, Fragment of pyrite found on the surface of palaeosol in Area IV(6). c, Fragment of pyrite from palaeosol in Area VI, found in association with concentrations of heated flint.

f.) The heated sites were located in areas amenable to prolonged fire use. This is weak evidence, but I present it nevertheless. From the authors:

Notably, all three sites occupy marginal locations, away from the main river valleys and associated with small ponds or springs. In the absence of caves, these locations probably provided safer, more sheltered environments for domestic activities. Taken together, these findings present a strong case for controlled fire use across the Breckland region during MIS 11.

The upshot:  We often forget that any meat eaten by people before the advent of cooking would have to be raw, and raw meat is tough and, at least to us, somewhat unpalatable. (I do like a very rare steak, as well as steak tartare, though.) But our ancestors didn’t grind up meat, though they may have pounded it to make a kind of raw Pleistocene schnitzel. By making meat more palatable, cooking would promote eating more of it, and that itself could change the selective pressures on humans, giving them the extra nutrients they’d need if they were to evolve big brains (brains use a lot of energy!). This is one (disputed) theory for the rapid increase in human brain size that occurred between 800,000 and 200,000 years ago, though brain size was also increasing, albeit at a slower pace, before then. Cooking has also been suggested to have changed human social behavior (and perhaps social evolution), with pair bonding and mutual aid increasing as ways to gather, store, and protect food that needed to be cooked. And more complex social behavior could itself have promoted the evolution of larger brains to figure out how to regulate and get along in your small social group.

These theories, while suggestive, really should be downgraded to “hypotheses,” since there isn’t much evidence to support them—only correlation and speculation. However, they are interesting to contemplate, even if we never can get strong evidence for them.  At the end of the paper, the authors do seem to sign onto some of these, but not strongly.

The kernel of this paper is the several lines of evidence that do, to my mind, support the idea that humans were making and using fire at least 400,000 years ago. Here’s what the authors say about the advantages, evolutionary and otherwise, of controlling fire:

The advantage of fire-making lies in its predictability, which facilitated better planning of seasonal routines, the establishment of domestic sites in preferred locations and increased structuring of the landscape through enculturation. Year-round access to fire would have provided an enhanced communal focus, potentially as a catalyst for social evolution. It would have enabled routine cooking, could have expanded the consumption of roots, tubers and meat, reduced energy required for digestion and increased protein intake. These dietary improvements may have contributed to increase in brain size, enhanced cognition and the development of more complex social relationships, as articulated in the Social Brain Hypothesis. Moreover, controlled fire use was instrumental in advancing other technologies, such as the production of glues for hafting. The widespread appearance of Levallois points from Africa to Eurasia by MIS 7 (243–191 ka), often interpreted as spear-tips, provides strong evidence of effective hafting. This interpretation is supported by use-wear evidence and the identification of heat-synthesized birch bark tar as a stone tool adhesive.

The sad fate of human evolutionary biology in Australia

October 20, 2025 • 11:30 am

Although the times when Homo sapiens reached Australia are under revision, the latest data suggest that they arrived between 45,000 and 60,000 years ago—about the time that our species left Africa for parts east. And although changes in water levels made it easier to get to Australia by water then than it is now, humans still had to use boats of some kind. What kind of boats they used is a mystery.

But there are a number of other questions that remain about the colonization of Australia. How many colonizations were there? Did any of the colonizing H. sapiens carry genes from H. erectus? How much genetic material in the colonists came from Denisovans? (There are some suggestions of both of these possibilities based on aspects of skull morphology.) Did the aboriginal colonists evolve in the last 50,000 years? Regardless of how many colonizations there were, what has been the population structure of indigenous people since they arrived? And, based on artifacts, what were the cultures of the early indigenous people?

All of this can be studied not just by digging up skulls or artifacts, but now also by genetic testing: looking at “fossil DNA” from specimens. Unfortunately, what is happening in the U.S. and Canada is also happening in Australia: people who identify as “aboriginal” (and you can do this by self-identification, not necessarily by ancestry) are preventing the scientific study of skulls and artifacts by claiming that fossils or artifacts were from their ancestors, even though, as in the U.S., determination of the “ancestry” of fossil remains can be dubious. Further, some indigenous people living today want to know their history, but are blocked by the reburial policy adopted by some state governments. Finally, many people recognized as aboriginals today claim that their ancestors have been in Australia forever, and don’t want data that dispel that myth.

The article below, in Palladium Magazine, recounts the tremendous loss to science of specimens that, even without firm ancestral documentation, get reburied without study.  This is even true of material found in the Willandra Lakes region of New South Wales, which is in fact a World Heritage Site and contains important human remains:

The Willandra Lakes Region is a World Heritage Site in the Far West region of New South Wales, Australia. The Willandra Lakes Region is the traditional meeting place of the Muthi Muthi, Ngiyampaa and Paakantyi Aboriginal peoples. The 2,400-square-kilometre (930 sq mi) area was inscribed on the World Heritage List at the 5th Session of the World Heritage Committee in 1981.

The Region contains important natural and cultural features including exceptional examples of past human civilization including the world’s oldest evidence of cremation. . . .

. . . . Aboriginal people lived on the shores of the Willandra Lakes from 40,000 to 35,000 years ago. It is one of the oldest known human occupation sites in Australia. There is abundant evidence of Aboriginal occupation over the last 10,000 years.

Interesting and controversial fossils like WLH-50 have been found in Willandra, but now many fossils are being reburied, and fossils found weathering out of that region cannot be excavated or studied scientifically.

Click the headline to read the article:

I’ll give some quotes to apprise you of the situation. In toto, it seems that the study of human evolution (not, as the title implies, all of “evolutionary science”) is dying in Australia; after all, there are many other creatures besides humans to study there, including the many marsupials.

Quotes from the article are indented, while bold headings are mine:

Possible evolutionary change from the earliest inhabitants until now:

As more fossilized remains were discovered [after WWI], sometimes hidden within collections of recent bones, comparisons could be drawn between ancient Australians and the ones first encountered by Europeans. While sharing some skeletal similarities with recent populations, ancient individuals were often distinguished by “heavy-boned faces, enormous teeth and jaws, receding foreheads and flask-shaped skulls.” The mosaic of modern and archaic traits, seen to a lesser degree in contact-era skulls, emphasized the importance of these fossils to evolutionary history. The largest collections included Kow Swamp (c. 20,000 years old), Coobool Creek (c. 14,000 years old) and Willandra Lakes (c. 43,000 to c.14,000 years old).

One exceptional specimen was designated “Willandra Lakes Human 50” [WLH-50], also known as Garnpung Man. It shared traits with Javan Homo erectus and was more similar to ancient humans from Skhūl Cave in Israel than to contact-era Australian foragers. At an estimated 26,000 years old, it may have had significantly more Denisovan ancestry than the 2-4% seen in recent Melanesians and Australian foragers.

The morphological variability seen in the fossil record led some researchers to hypothesize multiple migrations into Australia, with some genes coming from Homo erectus and some from ancient Chinese Homo sapiens. Others argued for local adaptation of a single Homo sapiens founding population. This debate featured significantly in the global discourse between proponents of “multiregional evolution,” which claims that modern Homo sapiens evolved simultaneously in multiple parts of the world, versus the “Recent Out of Africa” theory, which holds that Homo sapiens first evolved in Africa and then spread into Europe and Asia, replacing older human species.

What made these collections particularly valuable was their status as a comparative series. The ability to compare a group’s average morphology across eras and regions allowed scientists to track evolutionary changes and adaptations in ways that singular remains could not.

Who counts now as “aboriginal”? Bolding in the text below is mine.

Today, three separate groups are often conflated under the single term “Aboriginal.” These are:

  1. The ancient humans who first settled the continent.
  2. The contact-era foragers encountered by British colonists.
  3. The citizens currently classified as “Aboriginal” by the government.

This third category was formed when Australia’s 1967 constitutional referendum empowered the federal government to make laws for people of the “aboriginal race.” The government subsequently changed its definition of Aboriginal from requiring over 50% forager ancestry to a new standard based on self-identification, any degree of biological descent, and community recognition. This pivotal change meant even those with minimal forager ancestry could join the Aboriginal “class.” A separate legal class, “Torres Strait Islanders,” was eventually split off, with both classes now subclasses of the “Indigenous” slash “First Nations” class.

Before 1967, “Aboriginal” was a legal class with restricted rights. To avoid stigma, many mixed-descent Australians kept their forager ancestry a secret. But as membership criteria relaxed, and additional rights and privileges granted, more people publicly claimed forager ancestry. The Indigenous population exploded and is still growing faster than birth rates can explain. This is the result of people joining the class as adults, sometimes inspired by family legends or personal conviction. Also notable is that most Indigenous-class Australians marry non-Indigenous-class partners, but 90% of children from these unions are assigned Indigenous at birth. Archaeologist Josephine Flood observes that “Many people who identify as Aboriginal have white skin, blue eyes, narrow noses and blond, brown or red hair. Others resemble Japanese, Chinese, Melanesians, Polynesians, or Afghans.” In 2015, a government official estimated that 15% of Indigenous citizens had no forager ancestry whatsoever. 

This of course means that many people can claim ancient aboriginal ancestry, and then bring lawsuits against scientists taking and studying skulls. There need be no genetic evidence of ancestry to bring such suits; indeed, such evidence often cannot be obtained, since state-specific laws prevent the excavation and study of the skulls, including DNA analysis.

Not all human remains or artifacts must be repatriated: WLH-50, for example, is still in the hands of scientists. But an AI search says this about the laws, and I’ve verified the claims by looking at several other sites:

In Australia, laws prohibit the burial of Aboriginal ancestral remains, sometimes referred to as fossils, by anyone other than the relevant Aboriginal community with traditional or familial links to them. The legal and ethical framework is centered on the principle of repatriation: the return of ancestral remains from museums, universities, and private collections back to their Traditional Owners for culturally appropriate care and reburial.

This system is governed by a combination of federal and state or territory legislation.

Reburials began en masse in the 1980s, though there were some earlier ones. Now the Australian government, perhaps infused with a view of the “sacralization of the oppressed,” seems ready to rebury fossils and artifacts on the basis of simple and poorly documented claims. And given the gulf of time separating modern aboriginals from the ancient people who lived in the same area, the claim that “ancestry” gives one rights over fossils from tens of thousands of years ago seems weak, especially because no genetic evidence is involved. But it’s strong enough to overrule the scientists:

By 1984, the massive Murray Black skeletal collection had been transferred following legal action by the Victorian Aboriginal Legal Service and, in 1985, remains of thirty-eight foragers were buried in a public ceremony in Melbourne’s Kings Domain park.

The movement now targeted fossilized remains with only tenuous connections to contact-era foragers. The ancient skulls from Eagle Hawk Neck and Mount Cameron West (c. 4260 years old) were transferred to the TAC in 1988 and cremated. In Victoria, the Coobool Creek collection was reburied in 1989, followed a year later by the Kow Swamp collection. In 1991, Alan Thorne voluntarily surrendered Mungo Lady, the first individual excavated at Willandra Lakes. During the handover, he implored the 3TTG (Three Traditional Tribal Groups) to preserve the fossils for future generations.

But as the voices of opposition grew weaker, the burials continued: in 2022, both Mungo Man and Mungo Lady were reburied, secretly. The final blow came in March 2025 when the rest of the Willandra Lakes collection, 106 fossilized individuals, was buried in an unmarked grave, despite a last-minute legal appeal from Gary Pappin, a local Mutthi Mutthi man, and efforts by archaeologist Michael Westaway, who compared it to the Taliban’s destruction of the Bamiyan Buddhas. As of this year, Australia’s human fossil record, as well as the biological history of many extinct contact-era populations, has been effectively erased.

The rationale for reburial is weak, and even involves the supernatural. Get a load of this:

In general, the activists won the war of words. They used language that bolstered ownership claims like “repatriation,” “return,” and “ancestors,” which implied already-proven connections. While scientists used rigorous but dry terminology, activists referred to bones as “our Old People” whose “spirits cannot rest,” claiming that the mere existence of museum collections caused unverifiable harms like “cultural trauma.” Opponents who accepted this linguistic frame found it hard to argue without appearing callous.

Michael Mansell soon took the campaign overseas, convincing European institutions to hand over remains they had acquired during the colonial period. By the 2000s, the removal movement had won widespread support from museums, governments, and even previously-opposed archaeologists. This shift in attitudes resulted in formal policies and funding that allowed the transfer of thousands of forager remains to Aboriginal-class organizations.

The upshot is indeed the dying of ancient human anthropology in Australia. Even new Willandra Lakes fossils, which are important ones, cannot be removed or studied:

Happily, local Aboriginal land councils have allowed a few accidental discoveries to be briefly studied and dated, such as Kiacatoo Man (c. 27,000 years old), the largest Pleistocene skeleton ever found in Australia. But no intentional excavations have taken place for decades. At the Willandra Lakes UNESCO World Heritage Site, fossilized skulls are occasionally observed eroding from the ground but study is forbidden and they soon disintegrate.

According to archaeologist Colin Pardoe in 2018, “The repatriation of skeletal collections has meant that student access to teaching collections containing Australian material has become almost impossible…. This has resulted in researchers moving into other fields or other parts of the world.” And Vesna Tenodi explains, “Replicas or even drawings cannot be displayed, or discussed, as that also is too offensive without ‘Aboriginal permission.’”

Replicas and drawings have been forbidden in the U.S. too, as Elizabeth Weiss documents. She wasn’t even allowed to photograph the boxes containing fossil bones found in the U.S.! The article continues:

Other archaeologists note that “fieldwork in Australia essentially ground to a halt as much of the modern debate over the origins of modern humans was beginning to take shape.” Just as DNA analysis, 3D imaging, and other revolutionary techniques were entering the field, the fossil record of an entire continent was wiped clean. Only a handful of specimens were ever studied by geneticists. Archaeologist Steve Webb estimated that the Pleistocene series from Willandra Lakes contained 38 individuals suitable for DNA testing. But that analysis was never done and the window of opportunity has now closed.

This kind of “anthropological activism” has been extensively documented by Elizabeth Weiss in other countries (see her books here), and the power of indigenous peoples to impede scientific study is strong. Compromises have been tried in Australia, like allowing scientists to study remains more than 7,000 years old, but these have failed. And there is virtually no possibility of compromise in the United States or Canada.

Now that these activists have acquired most of Australia’s human fossils and bones, they have expanded their removal and censorship campaigns to “include the return of cultural heritage materials, including objects, photographs, manuscripts, and audio-visual recordings.” Each concession leads to more expansive claims rather than resolution. They claim ownership over what questions can be asked about the past and the very words that can be used to ask them.

And the same script is being followed in Canada and the United States, with Indigenous-class activists reburying ancient remains and artifacts under NAGPRA legislation, censoring photographs, and even asserting ownership over dinosaur fossils based on creationist mythology. In 2017, the 9,000 year-old fossil Kennewick Man was buried after years of controversy. In Europe as well, museums and universities face shrinking collections and pressure to censor information.

Everyone agrees that the loss of ancient hominin fossils during World War II was a tragedy. Someday, hopefully, they will feel the same way about the artifacts and fossils currently being destroyed.

The future.  As the article notes, not every state government has bowed to these demands, and some museums are refusing to surrender their collections. Other people are trying to forge productive relationships between “colonists” and modern aboriginals to permit research, including DNA research. More compromises could be forged that allow at least some scientific study before remains are reburied, including extraction of DNA, which requires only a bit of ear bone. But governments have been all too timorous to stand up to the increasingly strong demands of modern aboriginals to force reburial. In the absence of demonstrated ancestor-descendant relationships between modern and ancient aboriginals, or of agreed-on cutoffs that would allow study of older remains, science should trump mythology. In the end, this seems to be more about power than anything else.

h/t: Coel, Luana

Science conundrum of the day: why do we need to urinate when we hear running water?

August 29, 2025 • 9:40 am

I use a Water-Pik after flossing (and so should you!), and I’ve noticed repeatedly that when I am squirting water between my teeth, I develop a sudden urge to urinate.  Then I remembered the old summer-camp trick of putting a sleeping boy’s hands into a bowl of water, which supposedly made him wet his bed.  I then asked a few friends if they also had an urge to micturate when they heard running water, and to a person they said “yes.” (One emphasized the need to pee in the shower.)

Well, immediately this brings up a question: “why does this happen?”  There are two ways to approach this question.

First, there’s the physiological or “proximal” approach, which asks, “What is the neuronal/physiological basis of having to pee when you hear running water?”  This question is in principle answerable, and, as you’ll see, appears to have been answered.

Second, there’s the evolutionary or “ultimate” approach. If one assumes this connection between water and urination arose directly via natural selection (and remember, it could be fortuitous: simply a byproduct of how our bodies evolved), why is it adaptive to respond to the sound of running water this way?

This question may not be answerable, as we weren’t around to see the connection evolve. (One could, I suppose, at least see if the connection exists in other primates, which would buttress the idea that it arose in a common ancestor and has persisted, but that doesn’t tell us whether the connection evolved directly by natural selection.)

These are two different ways of thinking about the question: the “how” approach versus the “why” approach.  As I said, the “how” appears to have an answer in humans, as evidenced in this article from Australia’s Swinburne University (click to read):

First of all, the article asserts that the urge to pee when you hear running water is widespread, and occurs not just in the presence of water. (I haven’t had “nervous wees” before a date, though.)

We all know that feeling when nature calls – but what’s far less understood is the psychology behind it. Why, for example, do we get the urge to pee just before getting into the shower, or when we’re swimming? What brings on those “nervous wees” right before a date?

But let’s take a readers’ poll to see how widespread it is. Remember, your answer is anonymous, so please answer:

Do you get the urge to urinate when you hear or see water, running or not?


Now, the “how” answer as given in the article above:

Research suggests our brain and bladder are in constant communication with each other via a neural network called the brain-bladder axis.

This complex web of circuitry is comprised of sensory neural activity, including the sympathetic and parasympathetic nervous systems. These neural connections allow information to be sent back and forth between the brain and bladder.

The brain-bladder axis not only facilitates the act of peeing, but is also responsible for telling us we need to go in the first place.

How do we know when we need to go?

As the bladder fills with urine and expands, this activates special receptors detecting stretch in the nerve-rich lining of the bladder wall. This information is then relayed to the “periaqueductal gray” – a part of the brain in the brainstem which constantly monitors the bladder’s filling status.

Once the bladder reaches a certain threshold (roughly 250-300ml of urine), another part of the brain called the “pontine micturition centre” is activated and signals that the bladder needs to be emptied. We, in turn, register this as that all-too-familiar feeling of fullness and pressure down below.

Beyond this, however, a range of situations can trigger or exacerbate our need to pee, by increasing the production of urine and/or stimulating reflexes in the bladder.
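To make the quoted threshold mechanism concrete, here’s a toy sketch of the filling-and-trigger logic (my illustration, not the article’s; the 250-300 ml threshold is theirs, while the fill rate and starting volume are arbitrary assumptions):

```python
# Toy model of the brain-bladder axis threshold described above.
# The 250-300 ml urge threshold comes from the article; the fill rate
# and starting volume below are invented for illustration.
URGE_THRESHOLD_ML = 275.0  # midpoint of the quoted 250-300 ml range

def minutes_until_urge(volume_ml: float, fill_rate_ml_per_min: float = 1.0) -> float:
    """Minutes until the stretch receptors report a bladder past the threshold."""
    remaining = max(URGE_THRESHOLD_ML - volume_ml, 0.0)
    return remaining / fill_rate_ml_per_min

print(minutes_until_urge(100.0))  # a 100-ml bladder at 1 ml/min: 175.0 minutes
```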

An illustration of where the brain’s “bladder control center” is located:

The periaqueductal gray is a section of gray matter located in the midbrain section of the brainstem. Image: Wikimedia/OpenStax, CC BY-SA

There’s more:

Peeing in the shower:

If you’ve ever felt the need to pee while in the shower (no judgement here) it may be due to the sight and sound of running water.

In a 2015 study, researchers demonstrated that males with urinary difficulties found it easier to initiate peeing when listening to the sound of running water being played on a smartphone.

Symptoms of overactive bladder, including urgency (a sudden need to pee), have also been linked to a range of environmental cues involving running water, including washing your hands and taking a shower.

This is likely due to both physiology and psychology. Firstly, the sound of running water may have a relaxing physiological effect, increasing activity of the parasympathetic nervous system. This would relax the bladder muscles and prepare the bladder for emptying.

At the same time, the sound of running water may also have a conditioned psychological effect. Due to the countless times in our lives where this sound has coincided with the actual act of peeing, it may trigger an instinctive reaction in us to urinate.

This would happen in the same way Pavlov’s dog learnt, through repeated pairing, to salivate when a bell was rung.

I’m not sure that a physiological effect differs from a psychological effect, except that the latter would be “learned” rather than inborn. But remember that any physiological effect like this has to come in through the senses and brain, which could be seen as “psychological”.
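The conditioning half of the explanation has a standard formal model: the Rescorla-Wagner learning rule, in which each pairing of cue and outcome moves their association toward its maximum by a fixed fraction. Here’s a minimal sketch (the model is textbook psychology; applying it to water sounds and urination is my illustration, not the article’s):

```python
# Rescorla-Wagner sketch of the "repeated pairing" account above.
alpha, lam = 0.2, 1.0  # learning rate; maximum associative strength
v = 0.0                # association between water sounds and urination
for pairing in range(1, 21):
    v += alpha * (lam - v)  # prediction-error update after each pairing
    if pairing % 5 == 0:
        print(f"after {pairing:2d} pairings: associative strength = {v:.2f}")
```

After enough pairings the associative strength saturates near its maximum, which is the formal version of the article’s claim that the sound alone comes to trigger the urge.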

More “how” answers, involving different mechanisms:

But it’s not just the sight or sound of running water that makes us want to pee. Immersion in cold water has been shown to cause a “cold shock response”, which activates the sympathetic nervous system.

This so-called “fight or flight” response drives up our blood pressure which, in turn, causes our kidneys to filter out more fluid from the bloodstream to stabilise our blood pressure, in a process called “immersion diuresis”. When this happens, our bladder fills up faster than normal, triggering the urge to pee.

Interestingly, immersion in very warm water (such as a relaxing bath) may also increase urine production. In this case, however, it’s due to activation of the parasympathetic nervous system. One study demonstrated an increase in water temperature from 40℃ to 50℃ reduced the time it took for participants to start urinating.


Conclusions:

We all pee (most of us several times a day). Yet research has shown about 75% of adults know little about how this process actually works – and even less about the brain-bladder axis and its role in urination.

Well, you know now! More:

Most Australians will experience urinary difficulties at some point in their lives, so if you ever have concerns about your urinary health, it’s extremely important to consult a healthcare professional.

And should you ever find yourself unable to pee, perhaps the sight or sound of running water, a relaxing bath or a nice swim will help with getting that stream to flow.

This article was originally published on The Conversation.

. . . and that’s pretty much the whole article. The word “evolution” does not appear in it at all, so the “why” question isn’t answered. The “fight or flight” explanation is purely mechanistic, and in that case the urge to urinate is simply a byproduct of what happens when we’re frightened or angry.  But the rest—the sudden need to pee when you hear, see, or feel running water—remains unaddressed.


The mechanistic explanation also predicts that if you’re in a restroom and you hear other people peeing, that would increase your urge to join them.

As I said, the water-urination response may not be a direct response to natural selection. That is, there may be no reproductive advantage to having to pee when you encounter water. It could simply be, as Gould and Lewontin called it, a “spandrel.”  But let’s engage in some “adaptive storytelling” here and think up ways the connection might have been adaptive.

There don’t seem to be many.  The first one that struck me was that, as noted above, a lot of people have bladder issues (only ones that occur before reproduction ceases can be considered).  If this is the case, and if retaining urine is bad for you, which it is, then anything that facilitates peeing when you have bladder issues would be adaptive. If you already have a physiological system in place for peeing when your bladder’s full, it might be easier to hijack this system in those with bladder issues by using the same stimulus: the sound of running fluid. (This presumes that the sound stimulates urination even in people without bladder issues, which it apparently does.) But somehow I’m not satisfied with this explanation.

I asked a colleague, who gave a response that sounded good at the time but now seems dubious as well. He said that if you hear running water, you have an opportunity to hydrate yourself by drinking, and running water is more likely to be clean water that is good to drink. But the connection between having to drink and having to pee is obscure to me.

A question, then, for readers:

So, if one assumes that the connection between water and urination is the result of natural selection, please tender your own theory. Even crazy theories should be given, because, after all, “evolution is cleverer than you are.”

Remember this old joke?

Readers’ wildlife photos

May 19, 2025 • 8:15 am

Today we have a lighthearted change of pace: reader Athayde Tonhasca Júnior is writing not about pollination, but about aging.  His captions are indented, and you can enlarge the photos by clicking on them.

In banana years, we are bread

I think it’s safe to assume that a good many WEIT readers, like me, have already accrued many miles on their personal odometers. Or, as Brazilians say, dobraram o Cabo da Boa Esperança (they have rounded the Cape of Good Hope): our odyssey is almost complete; the distance to the end is much shorter than to the starting point. We can fall into anguish about it, despair, deny, ignore, fight back to slow the rate of decrepitude, or be philosophical regarding the inevitable outcome. Because, as Maurice Chevalier quipped, old age isn’t so bad when you consider the alternative.

Here are some thoughts about ageing and some assorted images lifted from Private Eye magazine (hopelessly lefty but unbeatable with their cartoons), or sent by fellow old codgers.

Some quotes:

There are three deaths: the first is when the body ceases to function. The second is when the body is consigned to the grave. The third is that moment, sometime in the future, when your name is spoken for the last time. David Eagleman

Death does not make us equal. There are skulls with all their teeth. Mário Quintana

At my age, I don’t even buy green bananas. Often credited to Claude Pepper

First you forget names, then you forget faces. Next you forget to pull your zipper up and finally, you forget to pull it down. George Burns

Nothing is more responsible for the good old days than a bad memory. Franklin P. Adams

Tom Smith is dead, and here he lies, / Nobody laughs and nobody cries; / Where his soul’s gone, or how it fares, / Nobody knows, and nobody cares. Grave epitaph, Newbury, England, 1742

You start off irresistible. And, then you become resistible. And then you become transparent – not exactly invisible but as if you are seen through old plastic. Then you actually do become invisible. And then — and this is the most amazing transformation — you become repulsive. But that’s not the end of the story. After repulsive then you become cute – and that’s where I am. Leonard Cohen

I refuse to spend my life worrying about what I eat. There is no pleasure worth forgoing just for an extra three years in the geriatric ward. John Mortimer

There is still no cure for the common birthday. John Glenn

She said she was approaching forty, and I couldn’t help wondering from what direction. Bob Hope

Happiness is good health and bad memory. Ingrid Bergman

Older people shouldn’t eat health food: they need all the preservatives they can get. Robert Orben

An autobiography is an obituary in serial form with the last instalment missing. Quentin Crisp

About the only thing that comes to us without effort is old age. Gloria Pitzer

Time is a great teacher, but unfortunately it kills all its pupils. Hector Berlioz

Inside every old person is a young person wondering what happened. Terry Pratchett

In the long run, we’re all dead. John Maynard Keynes

The best time to plant a tree is twenty years ago. The second best time is now. Anon

The time you enjoy wasting is not wasted. Bertrand Russell 

The fluffy newborn chick of hope tumbles from the eggshell of life and splashes into the hot frying pan of doom. Humphrey Lyttelton

A doctor is seeing an old millionaire who had started using a revolutionary hearing aid:

– So, Mr Humphrey, are you enjoying the new device?

– Very much so.

– Did your family like it?

– I don’t know, I haven’t told anyone yet. But I’ve already changed my will three times.

And for the final image: my wife suggested that I should hang a sign like this by my desk. I declined because it is not truthful: I am not on a diet.

Evolution of a human artery in modern times?

January 20, 2025 • 11:20 am

This article, published in the Journal of Anatomy four years ago, was also highlighted in ScienceAlert this January 18, which is how Matthew Cobb found it.  And although the results aren’t new, I find them interesting from an evolutionary point of view and sure didn’t know about them before. (I’m not sure why ScienceAlert chose to highlight them this week.)

The paper (and the shorter popular summary) describes an Australian study of a variable trait: an extra artery in the forearm and hand of humans called the “median artery”.  It is present in fetuses, where it feeds the growing arm and hand, but usually regresses during development so that it’s not present in most newborns. However, in a substantial number of cases—now about 30%—it remains as a functioning artery in adults.  The paper describes a new study of the incidence of this “vestigial artery” in modern adult Australians, and compares that incidence with the incidence seen in adults going back to the late 19th century. There has been a marked increase in persistence—threefold!—over that period.  What we don’t know is why this is happening.  It could be strong natural selection, an environmental change we don’t understand, or both.

You can see the paper by clicking on the title below, or download a pdf here.

First, here’s what the artery looks like in an adult (caption from the paper). I’ve put a red oval around the artery:

Median artery and superficial palmar arch (anterior dissection of the left lower forearm, wrist and hand) – Median artery accompanied the median nerve and completed the superficial palmar arch laterally.

Now although the artery feeds the arm and hand, we don’t know whether it actually benefits those who have it.  The authors and ScienceAlert appear to favor natural selection as the reason for the increase over time, but we don’t know that. To know for sure, we’d have to do long-term studies of the reproductive output of individuals having the artery versus those lacking it, or perhaps genetic studies (see below). We don’t have that data and therefore cannot say anything about natural selection.

Further, perhaps its increased persistence into adulthood is due to some environmental effect. We have no data on that, either. All we can say, and we can’t even say that with a high degree of confidence, is that the percentage of adults having the artery seems to have increased drastically over time.

But I’m getting ahead of myself. The authors dissected 78 arms of Australians, aged from 51 to 101 years, who died between 2015 and 2016, determining how many of them had a persisting median artery.  They excluded cases that might have skewed the results: individuals with only the hands and not the arms examined, people who had carpal tunnel syndrome (possibly caused by persistence of the artery), and examinations using angiography, which has a greater ability to detect arteries.  Exactly a third of the adults (33.3%) showed the artery.

The authors then went back and scoured the literature, using data on adults from 47 published papers going back to 1897. Using data from arms of individuals who died at a known age, we have a dataset of individuals born from about 1846 to 1997—a span of roughly 150 years, or about five human generations.  That’s a remarkably short span of time from an evolutionary viewpoint.

Nevertheless, they found a significant increase over this period: the proportion of individuals having a median artery nearly tripled, from about 10% to 30%. Here’s the most relevant graph, plotting the percentage of individuals showing the artery as adults against year of birth, from 1880 to 2000. (There’s considerable scatter because sample sizes at each date are small.) The authors give a probability of less than 0.0001 that this temporal trend would be due to chance, so it’s highly statistically significant. (They don’t specify whether they’re testing the regression coefficient or the correlation coefficient, but it doesn’t really matter with p values that low.)

They also extrapolate this trend and say that one “could predict that the median artery will be present in 100% of individuals born in the year 2100 or later.”  It would then no longer be a fetal trait that occasionally persists, but one present throughout everyone’s life, and it could no longer be seen as “vestigial”, like persisting wisdom teeth in some people.
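To show mechanically how such an extrapolation works, here’s a minimal sketch using invented prevalence points that roughly match the 10%-to-30% trend described above (the paper’s real data, model, and fitted curve will differ):

```python
# Illustrative extrapolation of prevalence against birth year.
# The four data points are invented; the paper's own regression on its
# full dataset is what yields the published ~2100 prediction.
import numpy as np

birth_years = np.array([1880.0, 1920.0, 1960.0, 2000.0])
prevalence = np.array([0.10, 0.17, 0.23, 0.30])  # hypothetical trend points

slope, intercept = np.polyfit(birth_years, prevalence, 1)  # least-squares line
year_at_100_percent = (1.0 - intercept) / slope
print(f"fitted slope: {slope:.5f} per year of birth")
print(f"this toy line reaches 100% around the year {year_at_100_percent:.0f}")
```

Note that a straight line through these made-up points doesn’t reach 100% until long after 2100; the authors’ prediction comes from their own fit to the full dataset.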

The authors do suggest that environmental factors could play a role in this increase, but also that it could be due to natural selection. Selection causing so large a change in just a few generations would have to be strong! The ScienceAlert article plays up the selection part, saying this:

“This increase could have resulted from mutations of genes involved in median artery development or health problems in mothers during pregnancy, or both actually,” said Lucas.

We might imagine having a persistent median artery could give dexterous fingers or strong forearms a dependable boost of blood long after we’re born. Yet having one also puts us at a greater risk of carpal tunnel syndrome, an uncomfortable condition that makes us less able to use our hands.

Nailing down the kinds of factors that play a major role in the processes selecting for a persistent median artery will require a lot more sleuthing.

Indeed, a TON more sleuthing. Showing selection would require either or both of two things:

1.) Show that, over a long period of time, individuals with median arteries as adults leave more offspring than individuals lacking these arteries (see the toy sketch after this list). This is how the Framingham Heart Study, which began in 1948, showed that there appeared to be natural selection in women for reduced height, increased stoutness, reduced total cholesterol levels, and lower systolic blood pressure. Further, there appears to have been selection for women to produce their first child earlier and to reach menopause later. This is what I tell people who ask me, as they inevitably do when I lecture on human evolution, where our species is going. Not that exciting, is it? But of course the time spans of such studies are necessarily limited.

2.) Find the genes responsible for the persistence of the artery and show, by population-genetic analysis, that those genes leading to persistence have been undergoing positive selection. This would be even harder because we have no idea what those genes are.
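Here’s a toy version of what test 1 would look like in practice: compare mean lifetime offspring number between artery-bearers and non-bearers. All numbers are invented; a real study would need pedigree and reproductive data collected over decades:

```python
# Toy fitness comparison for test 1 above (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
kids_with = rng.poisson(2.1, 400)     # lifetime offspring, artery present
kids_without = rng.poisson(2.0, 800)  # lifetime offspring, artery absent

relative_fitness = kids_with.mean() / kids_without.mean()
print(f"relative fitness of artery-bearers: {relative_fitness:.3f}")
# A value persistently above 1 in large samples would suggest that
# carrying the artery is positively selected.
```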

Absent those two types of studies, all we can say is that we have a putative case of evolutionary change occurring over a very short period of human history.

Caveats: The authors offer these caveats, and I have one more:

Limitations of the present study include the fact that the number of whole cadavers that were available for the study was not adequate. In addition, our search of the literature may have missed some publications not listed in Google Scholar. Finally, the definitions of ‘persistent median artery’ may have differed somewhat among the various published studies included in the present study.

Finally, as far as I can determine from looking at a few of the papers they cite in the older literature, the samples of arms came not just from Australia but from other countries, like Brazil and South Africa. Given that we know that present-day populations from different places differ in the persistence of the artery, this could also introduce some bias into the data. However, I don’t think that using arms from different places could explain a time course this significant, for it would require that arms from older people tended to come from places with a lower incidence of the artery in general.

h/t: Matthew Cobb

An ideologically-based and misleading critique of how modern genetics is taught

October 10, 2024 • 9:30 am

Over at sapiens.org, an anthropology magazine, author Elaine Guevara (a lecturer in evolutionary anthropology at Duke) takes modern genetics education to task.  Making a number of assertions about what students from high school to college learn in their genetics courses, Guevara claims that this type of education imparts “zombie ideas”: outdated but perpetually revived notions that prop up biological racism.  Her main topic is race, and she does offer some insights that modern genetics has given us about differences between geographic populations (I prefer to use “populations” rather than “races”), but these insights have been known for a long time. And by failing to tell us that the errors earlier biologists made about race have since been corrected and, to a large degree, dispelled, Guevara is herself deficient in describing the state of modern genetics.

Click the screenshot to read:

Guevara makes several accusations that, I think, are misleading. I’ll group her misleading conclusions under bold headings (the wording of those is mine). Quotes from her paper, or from my paper with Luana Maroja, are indented and identified.

1.) Human populations are not as different as we think, and the concept of “race” is incorrect: classical “races” are not genetically distinguishable. Guevara first cites a famous 1972 paper by my Ph.D. advisor, Richard Lewontin, “The Apportionment of Human Diversity“. The paper looked at genetic variation of 17 proteins detected by gel electrophoresis, apportioning the worldwide variation of proteins among individuals within a population, among populations within a classical “race”, and then between seven “races”. He found that of the total genetic variation seen worldwide, 85% occurred among individuals within one geographic population, 8% among populations within a race, and only 6% was found among races.

Thus races were not as genetically different as some people assumed. Lewontin concluded this (bolding is mine):

It is clear that our perception of relatively large differences between human races and subgroups [JAC: note that Lewontin’s “subgroups” correspond to what I would call “populations”], as compared to the variation within these groups, is indeed a biased perception and that, based on randomly chosen genetic differences, human races and populations are remarkably similar to each other, with the largest part by far of human variation being accounted for by the differences between individuals.

Human racial classification is of no social value and is positively destructive of social and human relations. Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance.

The first paragraph is correct. Later studies using better methods (DNA sequencing) have shown that yes, the apportionment of human diversity puts most of it within populations and only a fraction among populations or among “races”.  The classical view that races like “Caucasian”, “Asian” or “Black” showed large and diagnostic genetic differences at single genes was wrong.

But the second paragraph is wrong, because Lewontin did not raise the possibility (one I’m sure he realized) that small differences among populations (or the groups of populations that constitute classical “races”) can, taken across many, many genes, add up to significant statistical and biological differences. The failure to recognize the power of using genetic data from many genes (we have three billion DNA nucleotides in our genome) is called “Lewontin’s fallacy.” This fallacy was pointed out in 2003 by A.W.F. Edwards and has its own Wikipedia page.
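Edwards’s point is easy to demonstrate with a toy simulation (mine, not from any of the papers discussed): give two populations only a small allele-frequency difference at each of many independent loci, and a classifier that sums evidence across loci sorts individuals almost perfectly, even though any single locus is barely better than a coin flip:

```python
# Toy illustration of "Lewontin's fallacy": tiny per-locus differences,
# aggregated over many loci, allow near-perfect classification.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_loci, n_people = 3000, 500
p_a = rng.uniform(0.45, 0.55, n_loci)  # allele frequencies in population A
p_b = p_a + 0.05                       # population B: small shift at every locus

geno_a = rng.binomial(2, p_a, (n_people, n_loci))  # diploid genotypes, pop A
geno_b = rng.binomial(2, p_b, (n_people, n_loci))  # diploid genotypes, pop B

def loglik(geno, p):
    """Log-likelihood of diploid genotypes given allele frequencies p."""
    return (geno * np.log(p) + (2 - geno) * np.log(1 - p)).sum(axis=1)

def accuracy(k):
    """Fraction of people correctly assigned using only the first k loci."""
    a = np.mean(loglik(geno_a[:, :k], p_a[:k]) > loglik(geno_a[:, :k], p_b[:k]))
    b = np.mean(loglik(geno_b[:, :k], p_b[:k]) > loglik(geno_b[:, :k], p_a[:k]))
    return (a + b) / 2

for k in (1, 10, 100, 3000):
    print(f"{k:>5} loci: {accuracy(k):.1%} correctly classified")
```

With one locus the assignment is near chance; with thousands it approaches 100%, which is why ancestry tests built on many markers work so well.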

The power of using many genes instead of just an unweighted average of data from individual genes is shown by several things, as Luana Maroja and I pointed out in our paper published in Skeptical Inquirer last year. For one thing, if there were no meaningful genetic differences between populations, you couldn’t use genetic differences to diagnose someone’s ancestry. Yet you can, and with remarkable accuracy, as anyone knows who is aware of their family history and has taken a genetic test like those offered by 23andMe.  My test showed that I have complete Eastern European ancestry, with 98% of it from Ashkenazi Jews, which comports with what I know of my family history. (I also have a small percentage of genes from Neanderthals.)

Now this tells you the area of the world—the population—from which your ancestors probably came.  It doesn’t deal with “races” as classically defined. Yet the four races that Americans themselves use in self-identification (African-American, white, East Asian, or Hispanic) can indeed be diagnosed with remarkable accuracy using multiple-gene analysis. As Luana and I said in our paper (I’ve bolded the money quote):

Even the old and outmoded view of race is not devoid of biological meaning. A group of researchers compared a broad sample of genes in over 3,600 individuals who self-identified as either African American, white, East Asian, or Hispanic. DNA analysis showed that these groups fell into genetic clusters, and there was a 99.84 percent match between which cluster someone fell into and their self-designated racial classification. This surely shows that even the old concept of race is not “without biological meaning.” But that’s not surprising because, given restricted movement in the past, human populations evolved largely in geographic isolation from one another—apart from “Hispanic,” a recently admixed population never considered a race. As any evolutionary biologist knows, geographically isolated populations become genetically differentiated over time, and this is why we can use genes to make good guesses about where populations come from.

And this:

More recent work, taking advantage of our ability to easily sequence whole genomes, confirms a high concordance between self-identified race and genetic groupings. One study of twenty-three ethnic groups found that they fell into seven broad “race/ethnicity” clusters, each associated with a different area of the world. On a finer scale, genetic analysis of Europeans show that, remarkably, a map of their genetic constitutions coincides almost perfectly with the map of Europe itself. In fact, the DNA of most Europeans can narrow down their birthplace to within roughly 500 miles. [See below for the European data.]

You can also identify the “classical” races used in self-identification using some morphological traits. As we wrote:

But you don’t even need DNA sequences to predict ethnicities quite accurately. Physical traits can sometimes do the job: AI programs can, for instance, predict self-reported race quite accurately from just X-ray scans of the chest.

Population differences summed across genes can tell us more, too:

On a broader scale, genetic analysis of worldwide populations has allowed us to not only trace the history of human expansions out of Africa (there were several), but to assign dates to when H. sapiens colonized different areas of the world. This has been made easier with recent techniques for sequencing human “fossil DNA.” On top of that, we have fossil DNA from groups such as Denisovans and Neanderthals, which, in conjunction with modern data, tells us these now-extinct groups bred in the past with the ancestors of “modern” Homo sapiens, producing at least some fertile offspring (most of us have some Neanderthal DNA in our genomes). Although archaeology and carbon dating have helped reconstruct the history of our species, these have largely been supplanted by sequencing the DNA of living and ancient humans.

Finally, there are nearly diagnostic differences between populations in genes that evolved in an adaptive way, like known genes for resistance to low oxygen, short stature or skin pigmentation. Here’s a figure from a 2015 Science paper by Sarah Tishkoff:

None of this would be possible if there were not significant genetic and biological differences between populations.  We did not maintain that there are always diagnostic differences between populations at single genes that can group them into races, but that there are statistical differences in frequencies of variable genes among populations that are biologically meaningful.  Nor did we claim that the classically-defined races are absolutely geographically distinct with little intermixing, or have nearly fixed differences in frequencies of variable genes. That’s not true, and all geneticists realize this now. (But note that even the classically defined “races” generally differ in gene frequencies and in some biological traits to an extent that they can be diagnosed.)

The reality is that we should be dealing with populations, and populations—roughly defined as geographically different groups of people that largely breed among themselves—show diagnostic genetic and morphological differences.

Yet Guevara misleads the reader by relying solely on Lewontin’s paper and neglecting all the work done since then showing that yes, there is diagnostic geographic variation among populations (note that Lewontin implied that the concept of “population” is about as meaningless as “race”). Here are excerpts from Guevara’s paper:

Lewontin published his calculations in a short paper in 1972 that ended with this definitive conclusion: “Since … racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance.” His results have been replicated time and again over the last 50 years, as datasets have ballooned from a handful of proteins to hundreds of thousands of human genomes.

But despite huge strides in genetics research—leaving no doubt about the validity of Lewontin’s conclusions—genetics curricula taught in U.S. secondary and post-secondary schools still largely reflect a pre-1970s view.

This lag in curricula is more than a worry for those in the ivory tower. Increasingly, genomics plays a leading role in health care, criminal justice, and our sense of identity and connection to others. At the same time, scientific racism is on the rise, reaching more people than ever thanks to social media. Outdated education fails to dispel this disinformation.

Leaving “no doubt about the validity of Lewontin’s conclusions”?  Nope.  The apportionment of variation is without doubt, but not his conclusion that populations or races are without biological meaning.

None of the critiques of Lewontin’s paper, including Edwards’s famous clarification, are even mentioned by Guevara. And, in fact, I don’t know of any biologists in post-secondary genetics education who still teach the view that “Race and ethnicity are social constructs, without scientific or biological meaning.” (This is a quote from JAMA reproduced in the Coyne and Maroja paper. And perhaps some people teach this erroneous view, but no biologist that I know of.) That JAMA statement is completely misleading, as I hope I’ve shown above. The delineation and definition of classical races was itself misleading and often tied to racism in the past, but, as we see, even self-identified classical races can be diagnosed through genes or morphology, and generally do fall into clusters using analysis of multiple genes.

The last paragraph of Guevara’s quote above shows the ideological motivation behind her paper: we must dismiss the existence of biological races and genetic differences between populations because accepting them emphasizes differences between humans, and thus could lead to ranking of human populations, and thence to racism.  But, as Ernst Mayr recognized, accepting differences does not mean you have to view groups as being morally or legally unequal. We quote Mayr in our Skeptical Inquirer paper:

Equality in spite of evident non-identity is a somewhat sophisticated concept and requires a moral stature of which many individuals seem to be incapable. They rather deny human variability and equate equality with identity. Or they claim that the human species is exceptional in the organic world in that only morphological characters are controlled by genes and all other traits of the mind or character are due to “conditioning” or other non-genetic factors. … An ideology based on such obviously wrong premises can only lead to disaster. Its championship of human equality is based on a claim of identity. As soon as it is proved that the latter does not exist, the support of equality is likewise lost. (Mayr 1963)

Thus, the second conclusion of Guevara is wrong:

2.) “High genetic variation exists within geographic regions, and little variation distinguishes geographic regions.”

Well, that’s sort-of true, but, as we said, that “little variation among geographic regions” can, when added up, diagnose populations sufficiently to not only tell you your geographic ancestry, but also to reconstruct the evolutionary and migratory history of human populations. Guevara dismisses these ancestry tests, though she doesn’t tell us why they are wrong:

Helping the zombie persist, direct-to-consumer genetic tests, like those offered by 23andMe and AncestryDNA, can reinforce misconceptions about human variation. These services have become many people’s primary reference point for human genetics information. To be marketable, the companies must communicate their results in simple, familiar ways that also appear meaningful and reliable. This usually entails simplifying genetic ancestry to bright, high-contrast colors, pinned definitively to geographic regions.

And yet, at the same time, Guevara admires the same kind of data—genetic differences between living populations (as well as “ancient fossil DNA”)—as being of value:

In addition to genomes from living humans, DNA extracted from ancient humans over the past two decades has revealed incredible insights. Across time, past humans frequently migrated, mated with, or displaced people they encountered in other regions—resulting in a tangled tree of human ancestry. The ancient DNA results refute any notion of deep, separate roots for humans in different geographic regions.

Well, there are deep roots for some groups (the Neanderthal lineage, for example, separated from the lineage leading to modern humans about 400,000 years ago), and this comes from both fossil and DNA evidence.  The “tangled tree” may be correct in some ways (we did hybridize with Neanderthals, and other populations exchanged genes to different degrees), but it’s not tangled enough to completely efface the evolutionary history of human populations.

All this leads to a third misleading conclusion:

3.) Races are social constructs. Any differences between races are largely caused by racism rather than genes. As Guevara says:

As laid out by a major professional association for biological anthropologists, race is a social reality that affects our biology. For the last several hundred years in the U.S. and other colonized lands, racism has influenced people’s access to nutritious food, education, economic opportunities, health care, safety, and more. As a consequence, and precisely because of the environmental influence on most traits, the social construction of race is a risk factor for many health conditions and outcomes, including maternal and infant mortality, asthma, and COVID-19 severity.

This again shows both an ideological motivation and a misleading conclusion. Even the classical biological races (and even more so worldwide populations) are NOT social constructs, but are associated with genetic, morphological, and adaptive differences.  If races are purely socially constructed, how could you tell them apart in the first place? You need some kind of genetic marker. In the case of racism in America, the differences between African-Americans and whites were “constructed” based on skin pigmentation, hair texture, and other traits—traits based on genetic differences. Those differences served to mark out which people were considered different, and then “inferior”, though, as I said, genetic differences among people say nothing about moral or legal equality. THAT is the lesson that needs to be imparted, not the falsity that there are no genetic differences among groups.

Now Guevara may be correct that the “social construct” view is the one taught, erroneously, in high school and college.  But she’s wrong in thinking that Lewontin’s paper supports that “social construct” view.  In fact, the social construct view is largely wrong, with some exceptions centered on the outmoded view of “classical races”, but it appears to dominate anthropology and the social sciences. Anybody holding that view for either populations or groups of geographically contiguous populations needs to read the Coyne and Maroja paper.

4.) Humans aren’t peas.  According to Guevara, Mendel’s work on peas, as taught in school, buttresses scientific racism, too:

I, along with others, am concerned that this focus instills and reinforces a false pre-Lewontin view that humans, like Mendel’s peas, come in discrete types. In reality, early studies of peas and other inbred, domesticated species have little relevance for human genetics.

Indeed, it is of little relevance to human genetics, but I’m not aware of any teacher who describes Mendel’s work—which served to show how genes sort themselves out during reproduction—and uses it to conclude, “See, human races are as distinct as round and wrinkled peas.”

In the end, both races and populations of humans show genetic and evolved morphological differences—less than we thought, say, a hundred years ago—but differences that are still significant in useful ways. To say that races or populations are purely social constructs is simply wrong, and to use Lewontin’s paper to reinforce that conclusion is doubly wrong.

Now reader Lou Jost has argued that Lewontin’s mathematical partitioning of genetic variation was invalid because he used the wrong method. Regardless, it’s clear that there is more genetic variation at a given locus within a population than between populations or the groups of populations once deemed “races”.  But in the end there is a tremendous amount of information of biological and evolutionary significance to be gained by adding up the small genetic differences we see between human populations.
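To make that concrete, here’s a toy calculation (my own invented allele frequencies, not Lewontin’s data) showing how a locus can have most of its variation within populations while still carrying ancestry information:

```python
# Toy apportionment of variation at one locus, Lewontin-style.
# The allele frequencies are invented purely for illustration.

def heterozygosity(p):
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

p1, p2 = 0.4, 0.6                                          # two hypothetical populations
h_within = (heterozygosity(p1) + heterozygosity(p2)) / 2   # H_S = 0.48
h_total = heterozygosity((p1 + p2) / 2)                    # H_T = 0.50

fst = (h_total - h_within) / h_total
print(f"H_S = {h_within:.2f}, H_T = {h_total:.2f}, F_ST = {fst:.2f}")
# Only ~4% of the variation at this locus lies between the populations,
# yet the 0.4-vs-0.6 frequency difference is still informative; summing
# many such small differences across loci is what lets ancestry be inferred.
```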

To end, here’s a map of genetic variation among populations in Europe, showing how the genetic variation (grouped by principal components analysis) lines up nicely with the geographic variation in populations. That’s because genetic differences evolved between semi-isolated groups of people, and that is why we can tell with considerable accuracy where our ancestors came from.

Paper: Gilbert et al. 2022

Geography (populations sampled are in black)

Genetics (grouping of individuals using two axes of a principal components analysis). Look how well the geography (identified by color above) matches the genetics!
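For the curious, here’s a minimal sketch of how such a plot is built (the genotypes below are simulated, and real studies use vastly more markers and real individuals; this just shows the principle):

```python
# Simulated sketch of "genes mirror geography": PCA on a genotype matrix.
# All data here are fake; real analyses use actual SNP genotypes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_snps, n_per_pop = 5000, 50

base = rng.uniform(0.1, 0.9, n_snps)   # shared ancestral allele frequencies
genotypes, labels = [], []
for pop in range(3):
    # each population drifts slightly away from the shared frequencies
    freqs = np.clip(base + rng.normal(0, 0.03, n_snps), 0.01, 0.99)
    for _ in range(n_per_pop):
        genotypes.append(rng.binomial(2, freqs))   # 0/1/2 allele counts
        labels.append(pop)

pcs = PCA(n_components=2).fit_transform(np.array(genotypes, dtype=float))
for pop in range(3):
    centroid = pcs[np.array(labels) == pop].mean(axis=0)
    print(f"population {pop} centroid on PC1/PC2: {centroid.round(2)}")
# The simulated populations form distinct clusters even though each locus
# differs only slightly; with real data the cluster layout tracks the
# sampling geography, as in the map above.
```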


Andrew Sullivan on the ideological erosion of science and the genetics of “race” differences

September 23, 2024 • 10:15 am

Andrew Sullivan’s latest column (click first headline to read, but I couldn’t find an archived version) is a strange one.  His main point—that “progressives” think that some scientific research should be ignored because it flouts their ideological conventions—is a good one, and one that Luana Maroja and I made before.

In this piece, Sullivan attacks three of these issues: the assumption that there are no evolved differences among races, especially in intelligence; the assumption that gender reassignment is always a good thing; and, an issue I’ve mentioned before, the recent false claim that black newborns have a higher mortality when taken care of by white rather than black physicians (this result, falsely imputed to racism, actually reflects that underweight black newborns are preferentially given to the care of white doctors).  Sullivan’s conclusion is that science should proceed untrammeled by ideology:

Let science go forward; may it test controversial ideas; may it keep an open mind; may it be allowed to flourish and tell us the empirical truth, which we can then use as a common basis for legitimate disagreements. I think that’s what most Americans want. It’s time we stood up to the bullies and ideologues and politicians who don’t.

He’s right, but he also commits what I see as a serious error.  He describes recent studies by a crack geneticist (David Reich at Harvard) and his colleagues, studies showing that there has been natural selection on several traits within Eurasian “populations” in the last 8,000 years. But then Sullivan extrapolates from those results to conclude that there must also have been natural selection causing differences among populations. Now we know that the latter conclusion is true for some traits like skin pigmentation and lactose intolerance, but we can’t conclude willy-nilly, from seeing natural selection within a population, that known differences among populations in the same trait have diverged genetically via natural selection rather than via culture (or a combination of culture and selection).

The hot potato here, of course, is IQ or “cognitive performance.”  This does differ among races in the U.S., but the cause of those differences isn’t known (research in this area is pretty much taboo). So even if there’s been natural selection on cognitive performance within Eurasians, as Reich et al. found, one isn’t entitled to conclude that differences among populations (or “races”, a word I avoid because of its historical misuse) must therefore also reflect genetic results of natural selection.

Here’s what Sully says, basing it on the bioRχiv paper by Akbari et al. (Reich is the senior author), which you can access by clicking below.

Sullivan (bolding is mine):

But how have human sub-populations changed in the last, say, 10,000 years? A new paper, using new techniques, co-authored by David Reich, among many others, shows major genetic evolution in a single human population — West Eurasians — in the last 14,000 years alone. The changes include: “increases in celiac disease, blood type B, and a decline in body fat percentage, as farming made it less necessary for people to store fat for periods without any food.” Among other traits affected: “lighter skin color, lower risk for schizophrenia and bipolar disease, slower health decline, and increased measures related to cognitive performance.” Guess which trait is the controversial one.

The study was able, for the first time, to show

a consistent trend in allele frequency change over time. By applying this to 8,433 West Eurasians who lived over the past 14,000 years and 6,510 contemporary people, we find an order of magnitude more genome-wide significant signals than previous studies: 347 independent loci with >99% probability of selection.

Not just evolutionary change in the last 14,000 years — but “an order of magnitude” more than any previous studies had been able to show. Gould was not only wrong that human natural selection ended 50,000 years ago — but grotesquely so. Humans have never stopped evolving since we left Africa and clustered in several discrete, continental, genetic sub-populations. That means that some of the differences in these sub-populations can be attributed to genetics. And among the traits affected is intelligence.

The new study is just of “West Eurasians” — just one of those sub-populations, which means it has no relevance to the debate about differences between groups. But it is dramatic proof of principle that human sub-populations — roughly in line with what humans have called “races” — can experience genetic shifts in a remarkably short amount of time. And that West Eurasians got suddenly smarter between 10,000 and 5,000 years ago and then more gradually smarter since.

If the results have no relevance to differences between groups, then why in the next sentence does he extrapolate the results to differences between sub-populations or “races”?

Well, yes, Sullivan does indeed admit that the West Eurasian study (below), showing selection within that group, can’t be extrapolated to differences between groups.  But he does so anyway, saying that “it is dramatic proof of principle that human sub-populations — roughly in line with what humans have called ‘races’ — can experience genetic shifts in a remarkably short amount of time.”

Well, no, it doesn’t really “prove” that.  It’s surely true that 1) if two or more populations show genetic variation in a trait and 2) natural selection ACTS DIFFERENTIALLY in those different populations (or “races” or “subpopulations”), then yes, selection can in principle cause genetic differences among populations.  But this is not an empirical observation, but a hypothetical scenario. It’s almost as if Sullivan wants to use within-population data to show that differences among populations (especially in “cognitive performance”) must, by some kind of logic rather than empirical analysis, also be genetically based, and instilled by natural selection. But he is talking about what is possible, not what is known.
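Here’s that distinction as a toy simulation (all parameters invented): divergence follows only if selection actually differs between the populations, and that “if” is precisely what hasn’t been shown for cognitive traits.

```python
# Toy version of the hypothetical above: two populations start at the same
# allele frequency; selection acts in one but not the other. All numbers invented.

def next_gen(p, s):
    """One generation of genic selection; the focal allele has fitness 1 + s."""
    return p * (1 + s) / (1 + s * p)

def simulate(p0, s, generations):
    p = p0
    for _ in range(generations):
        p = next_gen(p, s)
    return p

# ~300 generations is roughly 8,000 years of human history
p_selected = simulate(0.10, 0.01, 300)   # selection favors the allele here
p_neutral  = simulate(0.10, 0.00, 300)   # no selection (drift ignored)
print(f"with selection: {p_selected:.2f}; without: {p_neutral:.2f}")
# Prints ~0.69 vs 0.10: the populations diverge only because we *stipulated*
# different selection pressures. Observing selection within one population
# doesn't tell us that the stipulation holds for any real between-group difference.
```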

The relevant article below, which is somewhat above my pay grade, shows that Reich’s group used a combination of ancient and modern DNA to look for coordinated changes in the sequences of genes involved in the same trait. Using GWAS analysis (genome-wide association studies), investigators can find out which segments of the genome are associated with variation in various traits within a population.  This way, for example, you can find out which areas of the genome (I believe there are about 1,200) vary in a coordinated fashion with variation in an individual’s smarts (they use “educational attainment” as a surrogate for intelligence).

Click title to read:

Knowing this association, you can then examine those same bits of the genome in ancient DNA and estimate a) whether the bits jointly associated with variation in a trait measured today have changed in a coordinated way (e.g., have the gene variants associated with higher body fat decreased in frequency over the last 8,000 years?); and b) the likelihood that natural selection has changed those bits over time.
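In code, that two-step logic might look something like this (a minimal sketch with invented effect sizes and genotypes, not the paper’s actual pipeline):

```python
# Minimal sketch of the polygenic-score logic described above.
# Effect sizes, frequencies, and genotypes are all invented.
import numpy as np

rng = np.random.default_rng(1)
n_loci = 1200  # roughly the number of trait-associated regions mentioned above

# Step 1 (modern GWAS): per-locus effect sizes on the trait
effects = rng.normal(0, 1, n_loci)

def polygenic_score(genotype):
    """Allele counts (0/1/2) weighted by GWAS effect sizes, summed."""
    return genotype @ effects

# Step 2: simulate coordinated change, with trait-increasing alleles
# rising in frequency and trait-decreasing alleles falling
ancient_freqs = np.full(n_loci, 0.45)
modern_freqs = np.clip(ancient_freqs + 0.05 * np.sign(effects), 0.01, 0.99)

ancient = rng.binomial(2, ancient_freqs)   # one "ancient" genome
modern = rng.binomial(2, modern_freqs)     # one "modern" genome
print("scores, ancient vs modern:",
      round(polygenic_score(ancient), 1), round(polygenic_score(modern), 1))
# A systematic score increase across thousands of dated genomes, larger than
# drift alone predicts, is the signature of polygenic selection.
```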

Although we don’t, for example, know the “educational attainment” of ancient people, we can see that gene variants associated with higher attainment have increased by positive selection in the past few thousand years, implying that the Eurasian population has gotten smarter.  It’s thus fair to conclude that, within the study population,  there was selection for higher cognitive ability, known to be associated with educational attainment.  Here, for example, are two findings of selection from the paper:

CCR5-Δ32: Positive selection at an allele conferring immunity to HIV-1 infection (panel 7)

The CCR5-Δ32 allele confers complete resistance to HIV-1 infection in people who carry two copies [43–45]. An initial study dated the rise of this allele to medieval times and hypothesized it may have been selected for resistance to Black Death [46], but improved genetic maps revised its date to >5000 years ago and the signal became non-significant [47,48]. We find that the allele was probably positively selected ∼6000 to ∼2000 years ago, increasing from ∼2% to ∼8% (s = 1.1%, π = 93%). This is too early to be explained by the medieval pandemic, but ancient pathogen studies show Yersinia was endemic in West Eurasia for the last ∼5000 years [49–51], resurrecting the possibility that it was the cause, although other pathogens are possible.

Selection for light skin at 10 loci (panels 8-17).

We find nine loci with genome-wide signals of selection for light skin, one probable signal, and no loci showing selection for dark skin.
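As a sanity check on those CCR5-Δ32 numbers, here’s a back-of-the-envelope calculation under a textbook additive-selection model (my own rough arithmetic, not the paper’s method):

```python
# Can s ≈ 1.1% per generation move an allele from ~2% to ~8% in ~4,000 years?
# Rough check under a simple additive-selection model.
import math

def generations_needed(p0, p1, s):
    """Generations for a selected allele to go from frequency p0 to p1.
    Under weak genic selection, logit(p) increases by ~s per generation."""
    logit = lambda p: math.log(p / (1 - p))
    return (logit(p1) - logit(p0)) / s

gens = generations_needed(0.02, 0.08, 0.011)
print(f"~{gens:.0f} generations, or ~{gens * 28:.0f} years at 28 yr/generation")
# ~132 generations ≈ 3,700 years: comfortably within the ~6,000-to-~2,000-
# years-ago window reported in the paper.
```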

Depending on which level of stringency you want to use to identify natural selection on bits of the DNA, Reich’s group found between 300 and 5,000 “genes” (DNA bits) that have undergone positive or negative natural selection in our ancestors. But remember this: when we talk about selection on traits, we don’t KNOW the traits of our ancestors (like their “intelligence” or “propensity to smoke”). Instead, what we see is that gene variants affecting those traits in modern populations have changed over time from ancient populations, with the variants affecting a given trait changing in a coordinated way (i.e., different bits of DNA associated today with “higher intelligence” have generally increased over time).

Below is a figure from the paper showing 12 traits with coordinated changes in the genes affecting them. Click to enlarge, and note that the traits range from darker skin color (DNA bits associated with darker skin declined in frequency, implying selection for lighter skin), to waist-to-hip ratio (variants raising this ratio declined in frequency), to both “intelligence” and “years of schooling” (both showing strong increases in “smart” DNA over the last 8,000 years).  It’s a clever analysis.

From paper: Figure 4: Coordinated selection on alleles affecting same traits (polygenic adaptation). The polygenic score of Western Eurasians over 14000 years in black, with 95% confidence interval in gray. Red represents the linear mixed model regression, adjusted for population structure, with slope γ. Three tests of polygenic selection—γ, γsign, and rs—are all significant for each of these twelve traits, with the relevant statistics at the top of each panel.
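The trend test in that caption can be sketched very roughly as an ordinary regression of polygenic score on sample age (toy data below; the paper’s actual γ comes from a linear mixed model that also adjusts for population structure):

```python
# Toy version of the trend test: regress polygenic scores on sample age.
# Data are simulated with a built-in trend, purely for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 500
ages = rng.uniform(0, 14000, n)                    # years before present
scores = -0.0001 * ages + rng.normal(0, 1.0, n)    # scores rise toward the present

slope, intercept = np.polyfit(ages, scores, 1)     # least-squares fit
print(f"slope per year before present: {slope:.6f}")
# A significantly negative slope on "years before present" means polygenic
# scores increased toward the present: coordinated change in many
# trait-associated alleles, the signature of polygenic adaptation.
```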

This is a lovely study (it needs vetting, of course, as this is a preprint), but it doesn’t buttress Sullivan’s conclusion that changes within a group wrought by natural selection, such as those above, mean that differences between populations must also have been caused by natural selection. That’s simply a mistake, or a fallacy resting on confirmation bias. Sullivan insists, though, that he’s just interested in what the facts are, and that those facts must play into any societal changes we want to make. (He’s sort of right here, but not completely; I’ve discussed this issue in a WaPo book review.)

Sullivan:

Why do I care about this? It’s not because I’m some white supremacist, or Ashkenazi supremacist, or East Asian supremacist. It’s because I deeply believe that recognizing empirical reality as revealed by rigorous scientific methods is essential to liberal democracy. We need common facts to have different opinions about. Deliberately stigmatizing and demonizing scientific research because its results may not conform to your priors is profoundly illiberal. And, in this case, it runs the risk of empowering racists. As Reich wrote in his 2018 op-ed:

I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science. I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.

Scientific illiberalism is on both sides. The denial of natural selection by creationists and the denial of carbon-created climate change by some libertarians is damaging to any sane public discourse, but so too is the denial of any human evolution for 50,000 years by critical race theorists and their Neo-Marxist and liberal champions.

Okay, but I wish he’d been a bit more explicit about the limitations of Reich’s study for concluding things about selection among populations or “races”.  Note, though, that he chastises both Left and Right for committing scientific “illiberalism.”

One area in which his conclusions seem more sound, however, involves gender and trans issues:

You see this [scientific illiberalism] also in the left’s defense of “no questions asked” gender reassignment for autistic, trans, and mainly gay children on the verge of puberty. The best scientific systematic studies find no measurable health or psychological benefit for the children — and a huge cost for the thousands of gay or autistic or depressed kids who later regret destroying their natural, functioning, sexed bodies. And a new German-American study has just “found that the majority of gender dysphoria-related diagnoses, including so-called gender incongruence, recorded in a minor or young adult’s medical chart were gone within five or six years.” Yet the entire US medical establishment refuses to budge.

I should say that my own priors might also need checking. Maybe some well-screened kids would be better off with pre-pubertal transition. Right now, we just don’t know. That’s why I favor broad clinical trials to test these experiments, before they are applied universally, and why I believe kids should have comprehensive mental health evaluations before being assigned as trans. And yet, as I write, such evaluations are being made illegal in some states, and gay kids are being mutilated for life before puberty, based on debunked science — and Tim Walz and the entire transqueer movement is adamant that no more rigorous research is needed.

Agreed!  I think that Sullivan should have added that studies do show that adults accrue overall benefits from changing gender (at least that’s what I remember). If that’s the case, then he’s made another omission that, if admitted, would strengthen his credibility (always admit the caveats with your conclusions!). But I think he’s dead-on right about affirmative therapy for minors.

(h/t: Christopher)