Charles Darwin and Alfred Russel Wallace are known for their “simultaneous” discovery of evolution by natural selection, but they had some profound differences. One of these involved the mechanism of sexual selection, a disagreement I discuss in WEIT. But their most famous difference involved the origin of human mentality. Darwin saw our minds, like our bodies, as resulting from the accumulation of adaptive differences through natural selection. Wallace, on the other hand, thought that our higher mental powers represented nonadaptive evolutionary overkill—faculties that were simply not adaptively needed to become human. Wallace saw these as arising instead from the intercession of “a superior intelligence”, and so was the first post-Darwinian exponent of intelligent design.
But Wallace’s question remains a good one, and is posed anew by Steven Pinker in a nice paper in a recent online issue of PNAS:
. . . why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did?
Pinker proposes an answer—that these feats are byproducts of selection for early humans to inhabit a “cognitive niche.” This answer may well be right, but at the very least it will make you think. (You can find more discussion of the cognitive niche idea in chapter 3 of Pinker’s How the Mind Works and chapters 5 and 9 of The Stuff of Thought.)
What Pinker sees as the “cognitive niche” (a term invented by John Tooby and Irv DeVore) is a lifestyle of using both thought and social cooperation to manipulate the environment. This involves, for example, using tools, extracting poisons from plants, and all the stratagems of cooperative hunting: planning, communicating, making traps, and so forth. Pinker sees several “preadaptations” that facilitated our entry into this niche (by “preadaptation,” I mean a feature that evolved for one purpose but could subsequently be co-opted for a different one). One is our prehensile hands, perhaps themselves a byproduct of bipedality. Another is our opportunistic diet, which included meat: as Pinker notes, meat is “not only a concentrated source of nutrients for a hungry brain but may have selected in turn for greater intelligence, because it requires more cleverness to outwit an animal than to outwit fruits or leaves.” A third is group living.
The big advantage of manipulating the environment, and passing that knowledge on to others, is that we can meet environmental challenges quickly, while other animals meet them by the much slower process of genetic evolution. Our mentality, in other words, gives us a huge leg up in the human-environment arms race.
Given this, the cognitive niche will get filled as sociality, mentality, and dexterity all evolve and coevolve, facilitating each other’s evolution. Language, for instance, will evolve to facilitate group living and cooperation, but that language will itself permit the evolution of more complicated behaviors involving altruism, reciprocity, and calculation of others’ motives. Coevolving with this would have been longer periods of childhood to enable us to learn everything we need to fit into the cognitive niche, and then longer lives to take advantage of that learning. Pinker notes:
Support for these hypotheses comes from the data of Kaplan (36), who has shown that among hunter-gatherers, prolonged childhood cannot pay off without long life spans. The men do not produce as many calories as they consume until age 18; their output then peaks at 32, plateaus through 45, then gently declines until 65. This shows that hunting is a knowledge dependent skill, invested in during a long childhood and paid out over a long life.
And of course prolonged child-rearing itself selects for many other adaptations. These may have included hidden ovulation (to keep males faithful), biparental care, and so on. According to Pinker, the evolution of human mentality involved a nexus of interconnected and mutually reinforcing selection pressures. We don’t need to see any single factor—like bipedality—as the key to the evolution of our mind. (Single-factor theories have always seemed unrealistic to me.)
Pinker recognizes that the “cognitive niche” did not really exist before humans began evolving; it was in many ways constructed by that evolution itself. Once we started down our evolutionary road, new possibilities arose that changed the direction of that road itself. And Pinker pointedly disagrees with Francis Collins and other religious scientists who see the evolution of the human mind as inevitable. Rather, it was a one-off feature, like the elephant’s trunk or the whale’s baleen basket, that arose via a fortuitous interaction between mutations and the right environmental conditions (big game, an open savanna, etc.).
But what about our ability to do math and philosophy, and all those other endeavors that involve abstract thought? Pinker sees these as spandrels, byproducts of reasoning that evolved for other reasons. Concepts that evolved to deal with concrete situations could naturally be extended to less concrete ones. He gives several examples. Here’s one:
So we still need an explanation of how our cognitive mechanisms are capable of embracing this abstract reasoning. The key may lie in a psycholinguistic phenomenon that may be called metaphorical abstraction (9, 59–61). Linguists such as Ray Jackendoff, George Lakoff, and Len Talmy have long noticed that constructions associated with concrete scenarios are often analogically extended to more abstract concepts. Consider these sentences:
1. a. The messenger went from Paris to Istanbul.
b. The inheritance went to Fred.
c. The light went from green to red.
d. The meeting went from 3:00–4:00.

The first sentence (a) uses the verb go and the prepositions from and to in their usual spatial senses, indicating the motion of an object from a source to a goal. But in 1(b), the words are used to indicate a metaphorical motion, as if wealth moved in space from owner to owner. In 1(c) the words are being used to express a change of state: a kind of motion in state-space. And in 1(d) they convey a shift in time, as if scheduling an event was placing or moving it along a time line.
. . . The value of metaphorical abstraction consists not in noticing a poetic similarity but in the fact that certain logical relationships that apply to space and force can be effectively carried over to abstract domains.
. . . [A] mind that evolved cognitive mechanisms for reasoning about space and force, an analogical memory that encourages concrete concepts to be applied to abstract ones with a similar logical structure, and mechanisms of productive combination that assemble them into complex hierarchical data structures, could engage in the mental activity required for modern science (9, 10, 67). In this conception, the brain’s ability to carry out metaphorical abstraction did not evolve to coin metaphors in language, but to multiply the opportunities for cognitive inference in domains other than those for which a cognitive model was originally adapted.
Well, this is food for thought—indeed, a banquet—but is it right? It sounds eminently reasonable, but of course we need harder evidence than mere plausibility. Pinker doesn’t suggest a way to test the “metaphorical abstraction” theory (indeed, I think it’s untestable); but he floats the idea of testing the “cognitive niche” idea by looking at DNA itself:
The theory can be tested more rigorously, moreover, using the family of relatively new techniques that detect “footprints of selection” in the human genome (by, for example, comparing rates of nonsynonymous and synonymous base pair substitutions or the amounts of variation in a gene within and across species) (32, 45, 46). The theory predicts that there are many genes that were selected in the lineage leading to modern humans whose effects are concentrated in intelligence, language, or sociality. Working backward, it predicts that any genes discovered in modern humans to have disproportionate effects in intelligence, language, or sociality (that is, that do not merely affect overall growth or health) will be found to have been a target of selection.
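To make that first kind of test concrete, here’s a minimal Python sketch of a crude, Nei-Gojobori-style dN/dS calculation: the rate of protein-changing (nonsynonymous) substitutions versus silent (synonymous) ones, each scaled by the opportunities for such changes. The sequences are invented for illustration, and real analyses (e.g., the codeml program in the PAML package) add multiple-hit corrections and likelihood models that this sketch omits.

```python
# A deliberately crude dN/dS sketch (Nei-Gojobori-style site counting;
# no multiple-hit correction; codons differing at >1 position are skipped).

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: AMINO[16 * i + 4 * j + k]          # standard genetic code
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

def syn_sites(codon):
    """Expected number of synonymous sites in a codon (0 to 3)."""
    s = 0.0
    for pos in range(3):
        for b in BASES:
            if b != codon[pos] and CODE[codon[:pos] + b + codon[pos + 1:]] == CODE[codon]:
                s += 1 / 3  # each position offers 3 possible single-base changes
    return s

def crude_dn_ds(seq1, seq2):
    """Pairwise dN/dS for two aligned, in-frame coding sequences."""
    N = S = Nd = Sd = 0.0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        diffs = [p for p in range(3) if c1[p] != c2[p]]
        if len(diffs) > 1:
            continue  # real methods average over mutational pathways; we punt
        s = (syn_sites(c1) + syn_sites(c2)) / 2
        S += s        # synonymous opportunities
        N += 3 - s    # nonsynonymous opportunities
        if diffs:
            if CODE[c1] == CODE[c2]:
                Sd += 1   # silent difference
            else:
                Nd += 1   # protein-changing difference
    dN, dS = Nd / N, Sd / S
    return dN / dS if dS > 0 else float("inf")

# Toy aligned sequences (hypothetical; not real human/chimp data):
print(f"crude dN/dS = {crude_dn_ds('ATGGCTGAAACTTGTCGA', 'ATGGCAGAGACTTGTCAA'):.2f}")
```

A ratio well above 1 is the classic signature of positive selection on the protein; on real genes the ratio is almost always well below 1, because purifying selection removes most protein-changing mutations, and that baseline is exactly what makes values above 1 look interesting.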
It’s a clever idea, but it comes with many problems. Here are a few:
- Even if genes for cognitive traits and sociality do show features of the DNA indicating selection (for example, a high rate of substitutions that change protein sequence compared to those that don’t), it’s not clear that such selection vindicates the “cognitive niche” theory. It could equally support the alternative “one factor at a time” theory: bipedality was the key factor, which allowed the evolution of manual dexterity, which allowed hunting, then intelligence, and so on. In other words, the test doesn’t rule out other types of selection that aren’t part of Pinker’s theory.
- Along these lines, DNA-based evidence for selection could result not from natural selection on survival or reproduction, but from sexual selection on the ability to find mates. Sexual selection is explicitly not part of Pinker’s theory, since he sees it as superfluous. But others have suggested that sexual selection was an important factor in the evolution of traits like language and mentality. I take Pinker’s side here, but the point is that we have to rule out alternative explanations, and DNA sequences simply don’t do that.
- Geneticists have found a lot of problems with the tests used to show positive selection on DNA. For one, they are insensitive to forms of selection that make only one or a few changes in a protein, since the tests are designed to detect selection causing multiple protein-coding changes (see the back-of-envelope sketch after this list). Selection producing only one or a few changes in proteins may be ubiquitous, but if you can’t show it, you lose potentially important support for Pinker’s theory. Conversely, these tests can give false positives when proteins are changing not by positive selection but by relaxed selection that allows mutations to accumulate, as may occur during population bottlenecks (humans, of course, went through a bottleneck during the out-of-Africa phase). The paper by Austin Hughes, cited below, enumerates the many problems with the way we currently test DNA for signs of selection.
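To see how easily real selection can hide from a gene-wide dN/dS scan, here’s the back-of-envelope sketch promised above. All of the numbers are invented for illustration, though the proportions are not unrealistic for a typical gene:

```python
# Hypothetical numbers: why a single adaptive substitution is invisible
# to a gene-wide dN/dS test.
n_sites, s_sites = 300.0, 100.0  # rough nonsynonymous / synonymous site counts
syn_subs = 10             # neutral synonymous substitutions since the species split
neutral_nonsyn_subs = 2   # most nonsynonymous mutations were purged by purifying selection
adaptive_nonsyn_subs = 1  # the one genuinely selected, trait-changing substitution

dN = (neutral_nonsyn_subs + adaptive_nonsyn_subs) / n_sites
dS = syn_subs / s_sites
print(f"dN/dS = {dN / dS:.2f}")  # -> 0.10, nowhere near the >1 threshold
```

The single adaptive change is swamped by the purifying-selection background, so the gene-wide ratio reads as pure constraint even though genuine positive selection occurred.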
But how do we find the candidate genes to test? Pinker suggests looking at those genes that, within humans, either have mutations that affect aspects of cognition or sociality, or that show more common variation affecting those traits:
The only requirement is that they contribute to the modern human version of these traits. In practice, the genes may be identified as the normal versions of genes that cause disorders of cognition (e.g., retardation, thought disorders, major learning disabilities), disorders of sociality (e.g., autism, social phobia, antisocial personality disorder), or disorders of language (e.g., language delay, language impairment, stuttering, and dyslexia insofar as it is a consequence of phonological impairment). Alternatively, they may be identified as a family of alleles whose variants cause quantitative variation in intelligence, personality, emotion, or language.
This too has problems. Genes that can mutate to pathologies involving cognition, for instance, aren’t necessarily those selected for improved cognition during our evolution. Signs of selection in those genes might not, then, provide support for the particular form of selection posited by the “cognitive niche” theory. A gene that evolved to change the jaws of our ancestors, for example, might mutate to something that causes microcephaly or other deformations of the skull, which of course could impair cognition. But the effect on cognition is an incidental result, and says nothing about past selection for cognition. In fact, one of the genes that Pinker mentions as evidence for his theory, ASPM—whose mutant form causes microcephaly—is very controversial. Some geneticists reject the idea that ASPM changed in our lineage by positive selection, while others don’t think that its normal variation is associated with variation in cognition.
Another example is hemoglobin. A mutation in human hemoglobin causes sickle-cell anemia, which affects kidney function and joint function and can even produce prolonged erections. But even if hemoglobin showed signs of selection in the human lineage (I don’t think it does, but that doesn’t matter), this would not mean that the evolution of hemoglobin in our lineage involved adaptations in our kidneys, joints, and reproductive behavior.
Concentrating on genes whose different forms are associated with less pathological variation may be a better tactic, but there are problems here too. As with ASPM, it may be difficult to show that normal variation in the genes is associated with normal variation in cognitive and social traits. And even if it were, that would not license a strong inference that differences in those genes between us and our relatives are what make our minds “human.”
In the end, there’s only one convincing way to show that particular DNA differences between the human lineage and those of our relatives (e.g., chimps and gorillas) make a meaningful difference in a cognitive trait: you must move those bits of DNA between the species and see if, say, the human form of a gene improves the cognition of a chimp, or the chimp form impairs cognition in humans. That would require either hybridization, which is impossible, or transgenic experiments, which are ruled out because they’re unethical. And that’s before we even ask how we’d measure whether a chimp’s cognition has improved when it carries a new bit of DNA.
One example of how this was done in other species was a recent study by Cretekos et al. (reference below). The gene Prx1 is known to have mutations in mice that affect elongation of limbs. Because bats differ from mice in (among other things) growing much longer forelimbs, the researchers sequenced Prx1 genes from fruit bats. They found that the bat DNA differed from mouse DNA in a particular part of the Prx1 gene that regulates its expression. They then moved that bit from bats into mice by transgenic methods, and found that it increased mouse limb length a bit (about 6%)—but only in the forelimbs, just as predicted! It was a lovely experiment.
This is precisely the kind of experiment we need to do to make a convincing case that particular genes in the human genome were responsible for making us “human”, i.e., smarter and more socially complex than our relatives. And it’s precisely the experiment that we cannot do, at least for the foreseeable future.
I think Pinker’s theory is right—at least, it makes a lot more sense than other theories of human evolution. But, as always, there’s a big difference between thinking a theory is right and showing it’s right. Ironically, in the case of human evolution, we are prevented by our evolved morality from using our evolved skills to test theories about our evolved cognition.
__________
Cretekos, C. J., Wang, Y., Green, E.D., NISC Comparative Sequencing Program, Martin, J.F., Rasweiler, J.J. IV, and Behringer, R.R. 2008. Regulatory divergence modifies forelimb length in mammals. Genes and Development 22:141-151.
Hughes, A. L. 2007. Looking for Darwin in all the wrong places: the misguided quest for positive selection at the nucleotide sequence level. Heredity 99:364-373.
Pinker, S. 2010. The cognitive niche: Coevolution of intelligence, sociality, and language. Proc. Natl. Acad. Sci. USA doi: 10.1073/pnas.0914630107.