Did humans evolve to fill a “cognitive niche”?

May 31, 2010 • 1:35 pm

Charles Darwin and Alfred Russel Wallace are known for their “simultaneous” discovery of evolution by natural selection, but they had some profound differences.  One of these involved the mechanism for sexual selection, a disagreement I discuss in WEIT.  But their most famous difference involved the origin of human mentality. Darwin saw our minds, like our bodies, as resulting from the accumulation of adaptive differences through natural selection.  Wallace, on the other hand, thought that our higher mental powers represented nonadaptive evolutionary overkill—faculties that were simply not adaptively needed to become human.  Wallace saw these as arising instead from the intercession of “a superior intelligence”, and so was the first post-Darwinian exponent of intelligent design.

But Wallace’s question remains a good one, and is posed anew by Steven Pinker in a nice paper in a recent online issue of PNAS:

. . . why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did?

Pinker proposes an answer—that these feats are byproducts of selection for early humans to inhabit a “cognitive niche.” This answer may well be right, but at the very least will make you think. (You can find more discussion of the cognitive niche idea in chapter 3 of Pinker’s How the Mind Works and chapters 5 and 9 of The Stuff of Thought.)

What Pinker sees as the “cognitive niche” (a term invented by John Tooby and Irv DeVore) is a lifestyle of using both thought and social cooperation to manipulate the environment.  This involves, for example, using tools, extracting poisons from plants, and all the stratagems of cooperative hunting: planning, communicating, making traps, and so forth.  Pinker sees several “preadaptations” that facilitated our entry into this niche (by “preadaptation,” I mean features that evolved for one purpose but could subsequently be coopted for a different one).  One is our prehensile hands, perhaps themselves a byproduct of bipedality. Another is our opportunistic diet, which included meat: as Pinker notes, meat is “not only a concentrated source of nutrients for a hungry brain but may have selected in turn for greater intelligence, because it requires more cleverness to outwit an animal than to outwit fruits or leaves.” A third is group living.

The big advantage of manipulating the environment, and passing that knowledge on to others, is that we can meet environmental challenges quickly, while other animals meet them by the much slower process of genetic evolution.  Our mentality, in other words, gives us a huge leg up in the human-environment arms race.

Given this, the cognitive niche will get filled as sociality, mentality, and dexterity all evolve and coevolve, facilitating each other’s evolution.  Language, for instance, will evolve to facilitate group living and cooperation, but that language will itself permit the evolution of more complicated behaviors involving altruism, reciprocity, and calculation of others’ motives.  Coevolving with this would have been longer periods of childhood to enable us to learn everything we need to fit into the cognitive niche, and then longer lives to take advantage of that learning.  Pinker notes:

Support for these hypotheses comes from the data of Kaplan (36), who has shown that among hunter-gatherers, prolonged childhood cannot pay off without long life spans. The men do not produce as many calories as they consume until age 18; their output then peaks at 32, plateaus through 45, then gently declines until 65. This shows that hunting is a knowledge dependent skill, invested in during a long childhood and paid out over a long life.
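
Kaplan’s point is easy to see with a back-of-the-envelope model.  Here is a toy sketch in Python: the anchor ages (18, 32, 45, 65) come from the quote above, but the calorie values and the shape of the curve are invented for illustration, not taken from Kaplan’s actual data.

    # A toy model of Kaplan's point: a costly childhood pays off only if
    # adults live long enough to run a caloric surplus.  The anchor ages
    # come from the quote above; the calorie units are made up.

    def net_production(age):
        """Net calories produced per year (produced minus consumed), toy units."""
        if age < 18:                          # childhood: runs a deficit
            return -1.0 + age * (1.0 / 18)    # from -1 at birth up to 0 at 18
        elif age < 32:                        # skills improving: surplus grows
            return (age - 18) * (1.0 / 14)    # 0 at 18 up to +1 at 32
        elif age < 45:                        # peak plateau
            return 1.0
        else:                                 # gentle decline toward 65
            return max(0.0, 1.0 - (age - 45) * 0.035)

    def lifetime_balance(age_at_death):
        """Cumulative net calories over a life of the given length."""
        return sum(net_production(a) for a in range(age_at_death))

    for death_age in (25, 35, 45, 55, 65):
        print(death_age, round(lifetime_balance(death_age), 1))

In this cartoon a forager who dies at 25 is still eight units in the red; the books balance only in the mid-thirties, and the real surplus accrues over the long adult plateau.  A long childhood, in other words, is an investment that demands a long life to pay off.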

And of course prolonged child-rearing itself selects for many other adaptations. These may have included hidden ovulation (to keep males faithful), biparental care, and so on. According to Pinker, the evolution of human mentality involved a nexus of interconnected and mutually reinforcing selection pressures.  We don’t need to see a single factor—like bipedality—as the key to the evolution of our mind. (Those single-factor theories have always seemed unrealistic to me.)

Pinker recognizes that the “cognitive niche” did not really exist before humans began evolving; it was in many ways constructed by that evolution itself.  Once we started down our evolutionary road, new possibilities arose that changed the direction of that road itself.   And Pinker pointedly disagrees with Francis Collins and other religious scientists who see the evolution of the human mind as inevitable.  Rather, it was a one-off feature, like the elephant’s trunk or the whale’s baleen basket, that arose via a fortuitous interaction between mutations and the right environmental conditions (big game, an open savanna, etc.).

But what about our ability to do math and philosophy, and all those other endeavors that involve abstract thought?  Pinker sees these as spandrels, byproducts of reasoning that evolved for other reasons.  Concepts that evolved to deal with concrete situations could naturally be extended to less concrete ones. He gives several examples. Here’s one:

So we still need an explanation of how our cognitive mechanisms are capable of embracing this abstract reasoning. The key may lie in a psycholinguistic phenomenon that may be called metaphorical abstraction (9, 59–61). Linguists such as Ray Jackendoff, George Lakoff, and Len Talmy have long noticed that constructions associated with concrete scenarios are often analogically extended to more abstract concepts. Consider these sentences:

1. a. The messenger went from Paris to Istanbul.
b. The inheritance went to Fred.
c. The light went from green to red.
d. The meeting went from 3:00 to 4:00.

The first sentence (a) uses the verb go and the prepositions from and to in their usual spatial senses, indicating the motion of an object from a source to a goal. But in 1(b), the words are used to indicate a metaphorical motion, as if wealth moved in space from owner to owner. In 1(c) the words are being used to express a change of state: a kind of motion in state-space. And in 1(d) they convey a shift in time, as if scheduling an event was placing or moving it along a time line.

. . . The value of metaphorical abstraction consists not in noticing a poetic similarity but in the fact that certain logical relationships that apply to space and force can be effectively carried over to abstract domains.

. . . [A] mind that evolved cognitive mechanisms for reasoning about space and force, an analogical memory that encourages concrete concepts to be applied to abstract ones with a similar logical structure, and mechanisms of productive combination that assemble them into complex hierarchical data structures, could engage in the mental activity required for modern science (9, 10, 67). In this conception, the brain’s ability to carry out metaphorical abstraction did not evolve to coin metaphors in language, but to multiply the opportunities for cognitive inference in domains other than those for which a cognitive model was originally adapted.

Well, this is food for thought—indeed, a banquet—but is it right?  It sounds eminently reasonable, but of course we need harder evidence than mere plausibility.  Pinker doesn’t suggest a way to test the “metaphorical abstraction” theory (indeed, I think it’s untestable); but he floats the idea of testing the “cognitive niche” idea by looking at DNA itself:

The theory can be tested more rigorously, moreover, using the family of relatively new techniques that detect “footprints of selection” in the human genome (by, for example, comparing rates of nonsynonymous and synonymous base pair substitutions or the amounts of variation in a gene within and across species) (32, 45, 46). The theory predicts that there are many genes that were selected in the lineage leading to modern humans whose effects are concentrated in intelligence, language, or sociality. Working backward, it predicts that any genes discovered in modern humans to have disproportionate effects in intelligence, language, or sociality (that is, that do not merely affect overall growth or health) will be found to have been a target of selection.
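
For readers who haven’t met these tests, the logic of the simplest one, the dN/dS (or “omega”) ratio, fits in a few lines of code.  This is a minimal sketch: the substitution counts and site totals below are invented for illustration, and real analyses use codon-aware likelihood models rather than raw counts.

    # Minimal sketch of the dN/dS ("omega") logic behind one
    # footprint-of-selection test.  All numbers are invented.

    # Substitutions between a gene and its ortholog in a relative
    # (say, human vs. chimp), split by their effect on the protein:
    nonsyn_subs = 24      # nonsynonymous: change the amino acid
    syn_subs    = 4       # synonymous: silent

    # Raw counts must be normalized by opportunity, i.e., how many sites
    # in the gene could mutate nonsynonymously vs. synonymously:
    nonsyn_sites = 900.0
    syn_sites    = 300.0

    dN = nonsyn_subs / nonsyn_sites   # protein-changing substitution rate
    dS = syn_subs / syn_sites         # silent rate: a rough neutral clock
    omega = dN / dS

    print(f"dN = {dN:.4f}, dS = {dS:.4f}, omega = {omega:.2f}")
    # omega > 1: protein changes fixed faster than neutral -> positive selection
    # omega = 1: consistent with neutrality
    # omega < 1: protein changes weeded out -> purifying selection

Here omega comes out to 2.0, exactly the kind of signal these genome scans look for.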

It’s a clever idea, but it comes with many problems. Here are a few:

  • Even if genes for cognitive traits and sociality do show features of the DNA indicating selection (for example, a high rate of substitutions that change protein sequence compared to those that don’t have that effect), it’s not clear that that selection vindicates the “cognitive niche” theory.  It could also support the alternative “one factor at a time” theory; that is, bipedality was the key factor, and that allowed the evolution of manual dexterity, and that allowed hunting, then intelligence, and so on. In other words, the test doesn’t rule out other types of selection that aren’t part of Pinker’s theory.
  • Along these lines, DNA-based evidence for selection could result not from natural selection on survival or reproduction, but from sexual selection on the ability to find mates.  Sexual selection is explicitly not part of Pinker’s theory, since he sees it as superfluous. But others have suggested that sexual selection was an important factor in the evolution of things like language and mentality.  I take Pinker’s side here, but the point is that we have to rule out alternative explanations, and DNA sequences simply don’t do that.
  • Geneticists have found a lot of problems with the tests used to show positive selection on DNA.  For one, they are insensitive to forms of selection that make only single changes in proteins, since the tests are designed to detect selection causing multiple protein-coding changes.  Selection producing only one or a few changes in proteins may be ubiquitous, but if you can’t show it, you lose potentially important support for Pinker’s theory.  Conversely, these tests can give false positives if proteins are changing not by positive selection but by relaxed selection that allows mutations to accumulate, as may occur during population bottlenecks (humans, of course, went through a population bottleneck during the out-of-Africa phase); the sketch after this list cartoons that effect.  The paper by Austin Hughes, cited below, enumerates the many problems with the way we currently test DNA for signs of selection.
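
To see how relaxed selection can mimic a footprint of positive selection, here is a cartoon continuation of the earlier sketch.  The rates and fractions are invented; only the direction of the effect matters.

    # Toy illustration of the false-positive worry above: relaxing
    # purifying selection raises omega with no positive selection at all.
    # All parameter values are invented.

    neutral_rate = 0.01   # per-site substitution rate if every mutation were neutral

    def omega(fraction_removed):
        """dN/dS when purifying selection removes the given fraction of
        nonsynonymous mutations; synonymous sites evolve neutrally."""
        dN = neutral_rate * (1.0 - fraction_removed)
        dS = neutral_rate
        return dN / dS

    print(f"strong purifying selection: omega = {omega(0.80):.2f}")  # 0.20
    print(f"relaxed selection:          omega = {omega(0.20):.2f}")  # 0.80
    # Relaxation (e.g., in a bottleneck, where drift overwhelms weak
    # selection) quadruples omega, yet nothing was positively selected.

The gene now looks like it is evolving unusually fast at the protein level, yet nothing in this toy was ever selected for.  A scan tuned to elevated protein evolution can misread relaxation as adaptation.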

But how do we find the candidate genes to test? Pinker suggests looking at those genes that, within humans, either have mutations that affect aspects of cognition or sociality, or that show more common variation affecting those traits:

The only requirement is that they contribute to the modern human version of these traits. In practice, the genes may be identified as the normal versions of genes that cause disorders of cognition (e.g., retardation, thought disorders, major learning disabilities), disorders of sociality (e.g., autism, social phobia, antisocial personality disorder), or disorders of language (e.g., language delay, language impairment, stuttering, and dyslexia insofar as it is a consequence of phonological impairment). Alternatively, they may be identified as a family of alleles whose variants cause quantitative variation in intelligence, personality, emotion, or language.

This too has problems.  Genes that can mutate to pathologies involving cognition, for instance, aren’t necessarily those selected for improved cognition during our evolution. Signs of selection in those genes might not, then, provide support for the particular form of selection posited by the “cognitive niche” theory.  A gene that evolved to change the jaws of our ancestors, for example, might mutate to something that causes microcephaly or other deformations of the skull, which of course could impair cognition.  But the effect on cognition is an incidental result, and says nothing about past selection for cognition.  In fact, one of the genes that Pinker mentions as evidence for his theory, ASPM—whose mutant form causes microcephaly—is very controversial.  Some geneticists reject the idea that ASPM changed in our lineage by positive selection, while others don’t think that its normal variation is associated with variation in cognition.

Another example is hemoglobin.  A mutation in human hemoglobin causes sickle-cell anemia, which affects kidney function and joint function, and can even produce prolonged erections.  But even if hemoglobin showed signs of selection in the human lineage (I don’t think it does, but that doesn’t matter), this would not mean that the evolution of hemoglobin in our lineage involved adaptations in our kidneys, joints, and reproductive behavior.

Concentrating on genes whose different forms are associated with less pathological variation may be a better tactic, but there are problems here too. As with ASPM, it may be difficult to show that normal variation in the genes is associated with normal variation in cognitive and social traits.  And even if it were, this does not allow a strong inference that differences in those genes between us and our relatives are what make our minds “human.”

In the end, there’s only one convincing way to show that particular DNA differences between the human lineage and those of our relatives (e.g., chimps and gorillas) make a meaningful difference in a cognitive trait.  You must move those bits of DNA between the species and see if, say, the human form of a gene will improve the cognition of a chimp, or the chimp form of a gene will impair cognition in humans. This involves either hybridization, which is impossible, or transgenic experiments, which are impossible because they are unethical.  And we’re not even talking about how to measure whether a chimp’s cognition is improved when it carries a new bit of DNA.

One example of how this can be done in other species comes from a recent study by Cretekos et al. (reference below).  The gene Prx1 is known to have mutations in mice that affect elongation of limbs.  Because bats differ from mice in (among other things) growing much longer forelimbs, the researchers sequenced Prx1 genes from fruit bats. They found that the bat DNA differed from mouse DNA in a particular part of the Prx1 gene that regulates its expression.  They then moved that bit from bats into mice by transgenic methods, and found that it increased mouse limb length a bit (about 6%)—but only in the forelimbs, just as predicted!  It was a lovely experiment.

This is precisely the kind of experiment we need to do to make a convincing case that particular genes in the human genome were responsible for making us “human”, i.e., smarter and more socially complex than our relatives. And it’s precisely the experiment that we cannot do, at least for the foreseeable future.

I think Pinker’s theory is right—at least, it makes a lot more sense than other theories of human evolution.  But, as always, there’s a big difference between thinking a theory is right and showing it’s right.  Ironically, in the case of human evolution, we are prevented by our evolved morality from using our evolved skills to test theories about our evolved cognition.

__________

Cretekos, C. J., Wang, Y., Green, E.D., NISC Comparative Sequencing Program, Martin, J.F., Rasweiler, J.J. IV, and Behringer, R.R. 2008. Regulatory divergence modifies forelimb length in mammals. Genes and Development 22:141-151.

Hughes, A. L. 2007. Looking for Darwin in all the wrong places: the misguided quest for positive selection at the nucleotide sequence level. Heredity 99:364-373.

Pinker, S.  2010.  The cognitive niche: Coevolution of intelligence, sociality, and language. Proc. Nat. Acad. Sci. USA doi: 10.1073/pnas.0914630107.

Steven Weinberg’s Lake Views

May 29, 2010 • 9:11 am

If you read popular science, especially with a dollop of atheism, you’ll have read Steven Weinberg.  Weinberg, as most of you know, won the Nobel Prize in Physics in 1979 for helping unify two of the fundamental forces of physics—the weak interaction and electromagnetism—into the “electroweak” theory. He also did groundbreaking work on the so-called “standard model” of particle physics.

Weinberg’s written a number of popular and technical books, the most famous of which is probably The First Three Minutes, recounting what physicists think happened at the moment (or, rather, nanoseconds after) the universe began.  He’s a pretty vociferous atheist, and gained infamy among accommodationists for writing one of the great mantras of new atheism:

With or without religion, good people can behave well and bad people can do evil; but for good people to do evil—that takes religion.

I’ve just polished off Weinberg’s new collection of essays, Lake Views, published by Harvard University Press.  (The name comes from Lake Austin, which his study faces.)  It gathers 25 essays written between 2001 and 2008 for magazines and journals, and has a few transcripts of his talks.

It’s worth a read, I think, but I wouldn’t buy it (I took mine out of the library). It’s rather thin gruel, and, like Churchill’s pudding, lacks a theme.  The essays are diverse, and he’s put them in chronological order, which is a bit annoying. An essay on nuclear war, for instance, will be snuggled up to one on atheism, or Zionism, or Einstein.  One gets the impression that these pieces weren’t collected for any pressing reason, but simply to tidy up Weinberg’s oeuvre since his last group of essays (Facing Up, which was much better).

My favorites in the new collection include his pieces on Einstein’s mistakes, a good description of the idea of multiverses and how it bears on the anthropic principle, and the longest essay, “What Price Glory,” on military history (one of Weinberg’s avocations).

If you’re going to dip into Weinberg’s short pieces, though, try Facing Up first.  It has some great pieces, including a nice dismantling of Thomas Kuhn’s ideas about paradigm shifts, and a lot of wonderful descriptions of what modern physics has achieved.

One of the better pieces in Lake Views is the last, “Without God,” a transcript of a talk Weinberg gave in 2008 to Phi Beta Kappa at Harvard.  He owns up to the difficulties of being an atheist (“Cicero offered comfort in De Senectute by arguing that it was silly to fear death. After more than two thousand years his words still have not the slightest power to console us.”), and thinks that “sophisticated” theology, which tries to dispense with a tangible, easily understood God, will eventually erode religion in America:

The various uses of religion may keep it going for a few centuries even after the disappearance of belief in anything supernatural, but I wonder how long religion can last without a core of belief in the supernatural, when it isn’t about anything external to human beings.  To compare great things with small, people may go to college football games mostly because they enjoy the cheerleading and marching bands, but I doubt if they would keep going to the stadium on Saturday afternoons if the only things happening there were cheerleading and marching bands, without any actual football, so that the cheerleading and the band music were no longer about anything.

If he’s right, we should be praising Terry Eagleton and Karen Armstrong to the skies, but there’s the little matter of Islam, too.

Caturday felids: Cool cats

May 29, 2010 • 5:42 am

Summer’s coming, and it’s time to put away those heavy fur coats.  Herewith a selection of shaved cats, or, as ailurophiles call them, cats with a “lion cut.”  And before you go all PETA on me, be aware that sometimes the lion cut is a good thing.

First, here’s why they call it a lion cut:

Usually a little ball of fluff is left on the tail:

Very often the photographs of lion-cut cats make them look really cheesed off.  My theory for this (which is mine) is that the photos are taken right after the trip to the groomer, so the cat is upset.  Like this one:

Or this one:

Or this one:

Often the lion cut makes a cat look like it’s wearing Uggs:

A fresh lion cut allows you to see the pigmentation of the cat’s skin.  White cats have pink skin:

Black ones have gray skin, presumably from the same melanins that darken the fur:

Finally, a celebrity shaved cat.  This is Kitty Purry, pet of the estimable Katy Perry.  Perry gave Purry a lion cut, and sent out a famous tweet about it:

Let’s not forget the videos:

Lion cuts can sometimes have pleiotropic benefits:

The new Templeton Prize winner speaks again!

May 28, 2010 • 10:48 am

I can’t believe it: twice in one day!  Over at the Guardian, you can read “Religion has nothing to do with science—and vice versa.”

I contend that both – scientists denying religion and believers rejecting science – are wrong. Science and religious beliefs need not be in contradiction. If they are properly understood, they cannot be in contradiction because science and religion concern different matters.

Here’s the definition of “properly understood” religion: religion that doesn’t contradict science.  This is all nice and neat, because it makes the assertion a tautology.

What I want to know is this: who is in charge of demarcating “properly understood” faith?  And what are we to do with those millions of misguided folks who obstinately refuse to make their faith proper?

Wait! There’s more:

Some scientists deny that there can be valid knowledge about values or about the meaning and purpose of the world and of human life. The biologist Richard Dawkins explicitly denies design, purpose and values.

In River out of Eden, he writes:

“The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil and no good, nothing but blind, pitiless indifference.”

William Provine, a historian of science, asserts that there are no absolute principles of any sort. He believes modern science directly implies that there are no inherent moral or ethical laws, no absolute guiding principles for human society.

There is a monumental contradiction in these assertions. If its commitment to naturalism does not allow science to derive values, meaning or purposes from scientific knowledge, it surely does not allow it, either, to deny their existence.

Dawkins denies purpose and values? I don’t think so.  I know both Dawkins and Provine, and while they believe that there are no moral or ethical laws set out by God, they certainly believe in morals and ethics. And I’m equally sure they believe in “absolute guiding principles for human society.”  They just don’t think that those principles come from God.

The rest is straight NOMA (had Steve Gould lived, would he have won a Templeton Prize?):

There are people of faith who see the theory of evolution and scientific cosmology as contrary to the creation narrative in Genesis. But Genesis is a book of religious revelations and of religious teachings, not a treatise on astronomy or biology.

According to Augustine, the great theologian of the early Christian church, it is a blunder to mistake the Bible for an elementary textbook of astronomy, geology, or other natural sciences. As he writes in his commentary on Genesis:

“If it happens that the authority of sacred Scripture is set in opposition to clear and certain reasoning, this must mean that the person who interprets Scripture does not understand it correctly.”

But who can say what the book of Genesis was supposed to mean?  I’ll give you ten to one that, when it was written, it was a treatise on astronomy and biology, at least as far as those things were understood by denizens of the Middle East two millennia ago.

And, frankly, I’m tired of Augustine being trotted out in these kinds of discussions, as if his interpretation of the Bible was obviously the correct one.  I could trot out other theologians who would say the opposite.  And, if we’re going to hold up Augustine as the arbiter of Biblical interpretation, there’s that little matter of predestination. . . .

Finally, this:

Successful as it is, however, a scientific view of the world is hopelessly incomplete. Matters of value and meaning are outside the scope of science.

Perhaps (although Sam Harris would disagree).  But what is outside the scope of science is not automatically inside the scope of faith.

Accommodationists and Templetonians seem to believe if they endlessly repeat the discredited argument of non-overlapping magisteria, people will accept it.  Their guiding philosophy is Snarkian: “What I tell you three times is true.”

The new Templeton Prize winner speaks

May 28, 2010 • 6:30 am

Read and see how you too can get a million pounds. It’s not that hard!

Yes, one can believe in both evolution and God. Evolution is a well-confirmed scientific theory. Christians and other people of faith need not see evolution as a threat to their beliefs.

This is like saying “Gazelles and other antelopes need not see lions as a threat to their lives.”