Charles Darwin and Alfred Russel Wallace are known for their “simultaneous” discovery of evolution by natural selection, but they had some profound differences. One of these involved the mechanism for sexual selection, a disagreement I discuss in WEIT. But their most famous difference involved the origin of human mentality. Darwin saw our minds, like our bodies, as resulting from the accumulation of adaptive differences through natural selection. Wallace, on the other hand, thought that our higher mental powers represented nonadaptive evolutionary overkill—faculties that were simply not adaptively needed to become human. Wallace saw these as arising instead from the intercession of “a superior intelligence”, and so was the first post-Darwinian exponent of intelligent design.
But Wallace’s question remains a good one, and is posed anew by Steven Pinker in a nice paper in a recent online issue of PNAS:
. . . why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did?
Pinker proposes an answer—that these feats are byproducts of selection for early humans to inhabit a “cognitive niche.” This answer may well be right, but at the very least will make you think. (You can find more discussion of the cognitive niche idea in chapter 3 of Pinker’s How the Mind Works and chapters 5 and 9 of The Stuff of Thought.)
What Pinker sees as the “cognitive niche” (a term invented by John Tooby and Irv DeVore) is a lifestyle of using both thought and social cooperation to manipulate the environment. This involves, for example, using tools, extracting poisons from plants, and all the stratagems of cooperative hunting: planning, communicating, making traps, and so forth. Pinker sees several “preadaptations” that facilitated our entry into this niche (by “preadaptation,” I mean features that evolved for one purpose but could subsequently be coopted for a different one). One is our prehensile hands, perhaps themselves a byproduct of bipedality. Another is our opportunistic diet, which included meat: as Pinker notes, meat is “not only a concentrated source of nutrients for a hungry brain but may have selected in turn for greater intelligence, because it requires more cleverness to outwit an animal than to outwit fruits or leaves.” A third is group living.
The big advantage of manipulating the environment, and passing that knowledge on to others, is that we can meet environmental challenges quickly, while other animals meet them by the much slower process of genetic evolution. Our mentality, in other words, gives us a huge leg up in the human-environment arms race.
Given this, the cognitive niche will get filled as sociality, mentality, and dexterity all evolve and coevolve, facilitating each other’s evolution. Language, for instance, will evolve to facilitate group living and cooperation, but that language will itself permit the evolution of more complicated behaviors involving altruism, reciprocity, and calculation of others’ motives. Coevolving with this would have been longer periods of childhood to enable us to learn everything we need to fit into the cognitive niche, and then longer lives to take advantage of that learning. Pinker notes:
Support for these hypotheses comes from the data of Kaplan (36), who has shown that among hunter-gatherers, prolonged childhood cannot pay off without long life spans. The men do not produce as many calories as they consume until age 18; their output then peaks at 32, plateaus through 45, then gently declines until 65. This shows that hunting is a knowledge dependent skill, invested in during a long childhood and paid out over a long life.
And of course prolonged child-rearing itself selects for many other adaptations. These may have included hidden ovulation (to keep males faithful), biparental care, and so on. According to Pinker, the evolution of human mentality involved a nexus of interconnected and mutually reinforcing selection pressures. We don’t need to see a single factor—like bipedality—as the key to the evolution of our mind. (Those single-factor theories have always seemed unrealistic to me.)
Pinker recognizes that the “cognitive niche” did not really exist before humans began evolving; it was in many ways constructed by that evolution itself. Once we started down our evolutionary road, new possibilities arose that changed the direction of that road itself. And Pinker pointedly disagrees with Francis Collins and other religious scientists who see the evolution of the human mind as inevitable. Rather, it was a one-off feature, like the elephant’s trunk or the whale’s baleen basket, that arose via a fortuitous interaction between mutations and the right environmental conditions (big game, an open savanna, etc.).
But what about our ability to do math and philosophy, and all those other endeavors that involve abstract thought? Pinker sees these as spandrels, byproducts of reasoning that evolved for other reasons. Concepts that evolved to deal with concrete situations could naturally be extended to less concrete ones. He gives several examples. Here’s one:
So we still need an explanation of how our cognitive mechanisms are capable of embracing this abstract reasoning. The key may lie in a psycholinguistic phenomenon that may be called metaphorical abstraction (9, 59–61). Linguists such as Ray Jackendoff, George Lakoff, and Len Talmy have long noticed that constructions associated with concrete scenarios are often analogically extended to more abstract concepts. Consider these sentences:
1. a. The messenger went from Paris to Istanbul.
b. The inheritance went to Fred.
c. The light went from green to red.
d. The meeting went from 3:00 to 4:00.
The first sentence (a) uses the verb go and the prepositions from and to in their usual spatial senses, indicating the motion of an object from a source to a goal. But in 1(b), the words are used to indicate a metaphorical motion, as if wealth moved in space from owner to owner. In 1(c) the words are being used to express a change of state: a kind of motion in state-space. And in 1(d) they convey a shift in time, as if scheduling an event was placing or moving it along a time line.
. . . The value of metaphorical abstraction consists not in noticing a poetic similarity but in the fact that certain logical relationships that apply to space and force can be effectively carried over to abstract domains.
. . . [A] mind that evolved cognitive mechanisms for reasoning about space and force, an analogical memory that encourages concrete concepts to be applied to abstract ones with a similar logical structure, and mechanisms of productive combination that assemble them into complex hierarchical data structures, could engage in the mental activity required for modern science (9, 10, 67). In this conception, the brain’s ability to carry out metaphorical abstraction did not evolve to coin metaphors in language, but to multiply the opportunities for cognitive inference in domains other than those for which a cognitive model was originally adapted.
Well, this is food for thought—indeed, a banquet—but is it right? It sounds eminently reasonable, but of course we need harder evidence than mere plausibility. Pinker doesn’t suggest a way to test the “metaphorical abstraction” theory (indeed, I think it’s untestable); but he floats the idea of testing the “cognitive niche” idea by looking at DNA itself:
The theory can be tested more rigorously, moreover, using the family of relatively new techniques that detect “footprints of selection” in the human genome (by, for example, comparing rates of nonsynonymous and synonymous base pair substitutions or the amounts of variation in a gene within and across species) (32, 45, 46). The theory predicts that there are many genes that were selected in the lineage leading to modern humans whose effects are concentrated in intelligence, language, or sociality. Working backward, it predicts that any genes discovered in modern humans to have disproportionate effects in intelligence, language, or sociality (that is, that do not merely affect overall growth or health) will be found to have been a target of selection.
It’s a clever idea, but it comes with many problems. Here are a few:
- Even if genes for cognitive traits and sociality do show features of the DNA indicating selection (for example, a high rate of substitutions that change protein sequence compared to those that don’t have that effect), it’s not clear that that selection vindicates the “cognitive niche” theory. It could also support the alternative “one factor at a time” theory; that is, bipedality was the key factor, and that allowed the evolution of manual dexterity, and that allowed hunting, then intelligence, and so on. In other words, the test doesn’t rule out other types of selection that aren’t part of Pinker’s theory.
- Along these lines, DNA-based evidence for selection could result not from natural selection on survival or reproduction, but from sexual selection on the ability to find mates. Sexual selection is explicitly not part of Pinker’s theory, since he sees it as superfluous. But others have suggested that sexual selection was an important factor in the evolution of things like language and mentality. I take Pinker’s side here, but the point is that we have to rule out alternative explanations, and DNA sequences simply don’t do that.
- Geneticists have found a lot of problems with the tests used to show positive selection on DNA. For one, they are insensitive to forms of selection that make only single changes in proteins, since the tests are designed to detect selection causing multiple protein-coding changes. Selection producing only one or a few changes in proteins may be ubiquitous, but if you can’t show it, you lose potentially important support for Pinker’s theory. Conversely, these tests can give false positives if proteins are changing not by positive selection but by relaxed selection that allows mutations to accumulate, as may occur during population bottlenecks (humans, of course, went through a population bottleneck during the out-of-Africa phase). The paper by Austin Hughes, cited below, enumerates the many problems with the way we currently test DNA for signs of selection.
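The counting step behind these nonsynonymous-vs.-synonymous (“dN/dS”) tests can be sketched in a few lines of Python. This is only a toy illustration, not a rigorous test: the example sequences are invented, and a real analysis must also normalize by the number of possible synonymous and nonsynonymous sites, handle multiple substitutions per codon, and face exactly the interpretive problems Hughes describes.

```python
# Toy illustration of the counting step behind dN/dS-style selection tests.
# Example sequences are invented; a real test must also normalize by the
# number of possible synonymous vs. nonsynonymous sites, which this skips.

# Standard genetic code, built from the conventional TCAG codon ordering.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    b1 + b2 + b3: AA[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

def classify_differences(seq1, seq2):
    """Count synonymous vs. nonsynonymous codon differences between two
    aligned, in-frame coding sequences of equal length."""
    assert len(seq1) == len(seq2) and len(seq1) % 3 == 0
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 != c2:
            if CODON_TABLE[c1] == CODON_TABLE[c2]:
                syn += 1      # protein unchanged: synonymous
            else:
                nonsyn += 1   # amino acid changed: nonsynonymous
    return syn, nonsyn

# AAA->AAG (Lys->Lys) and CCC->CCA (Pro->Pro) are both synonymous;
# GAT->GAA (Asp->Glu) is nonsynonymous.
print(classify_differences("ATGAAACCC", "ATGAAGCCA"))  # (2, 0)
print(classify_differences("ATGGATCCC", "ATGGAACCC"))  # (0, 1)
```

An excess of nonsynonymous over synonymous changes (after per-site normalization) is the signature these tests read as positive selection; as noted above, relaxed constraint can mimic that signature.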
But how do we find the candidate genes to test? Pinker suggests looking at those genes that, within humans, either have mutations that affect aspects of cognition or sociality, or that show more common variation affecting those traits:
The only requirement is that they contribute to the modern human version of these traits. In practice, the genes may be identified as the normal versions of genes that cause disorders of cognition (e.g., retardation, thought disorders, major learning disabilities), disorders of sociality (e.g., autism, social phobia, antisocial personality disorder), or disorders of language (e.g., language delay, language impairment, stuttering, and dyslexia insofar as it is a consequence of phonological impairment). Alternatively, they may be identified as a family of alleles whose variants cause quantitative variation in intelligence, personality, emotion, or language.
This too has problems. Genes that can mutate to pathologies involving cognition, for instance, aren’t necessarily those selected for improved cognition during our evolution. Signs of selection in those genes might not, then, provide support for the particular form of selection posited by the “cognitive niche” theory. A gene that evolved to change the jaws of our ancestors, for example, might mutate to something that causes microcephaly or other deformations of the skull, which of course could impair cognition. But the effect on cognition is an incidental result, and says nothing about past selection for cognition. In fact, one of the genes that Pinker mentions as evidence for his theory, ASPM—whose mutant form causes microcephaly—is very controversial. Some geneticists reject the idea that ASPM changed in our lineage by positive selection, while others don’t think that its normal variation is associated with variation in cognition.
Another example is hemoglobin. A mutation in human hemoglobin causes sickle-cell anemia, which affects kidney function and joint function, and can even produce prolonged erections. But even if hemoglobin showed signs of selection in the human lineage (I don’t think it does, but that doesn’t matter), this would not mean that the evolution of hemoglobin in our lineage involved adaptations in our kidneys, joints, and reproductive behavior.
Concentrating on genes whose different forms are associated with less pathological variation may be a better tactic, but there are problems here too. As with ASPM, it may be difficult to show that normal variation in the genes is associated with normal variation in cognitive and social traits. And even if it were, this does not allow a strong inference that differences in those genes between us and our relatives are what make our minds “human.”
In the end, there’s only one convincing way to show that particular DNA differences between the human lineage and those of our relatives (e.g., chimps and gorillas) make a meaningful difference in a cognitive trait. You must move those bits of DNA between the species and see if, say, the human form of a gene will improve the cognition of a chimp, or the chimp form of a gene will impair cognition in humans. This involves either hybridization, which is impossible, or transgenic experiments, which are impossible because they are unethical. And we’re not even talking about how to measure whether a chimp’s cognition is improved when it carries a new bit of DNA.
One example of how this was done in other species was a recent study by Cretekos et al. (reference below). The gene Prx1 is known to have mutations in mice that affect elongation of limbs. Because bats differ from mice in (among other things) growing much longer forelimbs, the researchers sequenced Prx1 genes from fruit bats. They found that the bat DNA differed from mouse DNA in a particular part of the Prx1 gene that regulates its expression. They then moved that bit from bats into mice by transgenic methods, and found that it increased mouse limb length a bit (about 6%)—but only in the forelimbs, just as predicted! It was a lovely experiment.
This is precisely the kind of experiment we need to do to make a convincing case that particular genes in the human genome were responsible for making us “human”, i.e., smarter and more socially complex than our relatives. And it’s precisely the experiment that we cannot do, at least for the foreseeable future.
I think Pinker’s theory is right—at least, it makes a lot more sense than other theories of human evolution. But, as always, there’s a big difference between thinking a theory is right and showing it’s right. Ironically, in the case of human evolution, we are prevented by our evolved morality from using our evolved skills to test theories about our evolved cognition.
Cretekos, C. J., Wang, Y., Green, E.D., NISC Comparative Sequencing Program, Martin, J.F., Rasweiler, J.J. IV, and Behringer, R.R. 2008. Regulatory divergence modifies forelimb length in mammals. Genes and Development 22:141-151.
Hughes, A. L. 2007. Looking for Darwin in all the wrong places: the misguided quest for positive selection at the nucleotide sequence level. Heredity 99:364-373.
Pinker, S. 2010. The cognitive niche: Coevolution of intelligence, sociality, and language. Proc. Nat. Acad. Sci. USA doi: 10.1073/pnas.0914630107.
39 thoughts on “Did humans evolve to fill a “cognitive niche”?”
You want us to think on a holiday? I’ll think about it tomorrow. (Especially since I have no clue on which way to lean).
I wouldn’t use the word “impossible” for something that is unethical — clearly it is possible to do unethical things (just look at Bernie Madoff, or BP).
More substantively, I’m not sure that the claim such work would be “unethical” is self-evidently true. What exactly would be unethical about it? It can’t be putting “human genes” into animals, as we do transgenic work with non-cognitive-related genes all the time. Is it because we might make chimpanzees “smart”? How is that unethical? (Given film history it might not be wise, but that is a different story.) Is it always wrong to increase the intellectual capacity of animals (which we of course do already through selective breeding — just look at certain breeds of dogs)? Or is the problem just “genetic engineering”, in which case the concern seems suspiciously similar to those who oppose GMOs on some vague “principle”?
I’m not saying there isn’t an ethical argument to be made here, merely that I think that one has to be made, because it isn’t self-evident (at least not to me).
Indeed, let’s make the question sharper.
According to some (or many, depending on your circles), that action is right that brings about the greatest good for the greatest number. If transgenic experiments that involve human to chimp transfers of DNA are not right, then it must be demonstrable, or at least demonstrably probable, that they would not bring about the greatest good for the greatest number. If so, where is Coyne’s argument that this is so?
Alternatively, some other conception of the right may be correct. If so, what is Jerry’s conception and why does he think it correct?
I find it very odd that scientists get so up in arms about religious nutjobs making things up, but then themselves make up ethical principles as they go along. If something is wrong, there should be a reason that it’s wrong, and those that say it’s wrong should be able to produce the reason, on demand.
Who is making up ethical principles? My statement reflected the current state of affairs: such experiments are CONSIDERED unethical, and currently can’t be done. That’s what I meant.
The issues raised by Tulse and MJ are interesting. You should deal with them in or around your review of Sam Harris’ book, when The Conversation will be dealing with it. It could be an awesome series of posts, if you let it. Why is the same kind of gene transfer used in mice taboo for the great apes?
“And we’re not even talking about how to measure whether a chimp’s cognition is improved when it carries a new bit of DNA.”
If the chimp refuses to go to church after the new DNA is inserted, I’d say that’s indisputable proof of improved cognition.
BTW… anybody got any spare Prx1 genes? I’d like to buy shorter golf clubs for my next set.
Putting human DNA into chimps … hmmm … how long before Bonzo’s in the White House again? Bunga! Bunga!
This is a most interesting topic. Why can humans comprehend quantum theory while our closest relative the chimpanzee has at best a primitive concept of number?
The answer of course is language. Language allows for the linking of human minds into superminds, far more powerful than any one individual’s brain. First through oral tradition, next through writing, now through even more powerful ways of linking our minds.
Think about it—how much of what you know actually originated in your own mind? Most of what anyone knows, even the smartest among us, originated in the brains of others.
IMO, this magnification through the linking of minds by the means of language explains much of the huge intellectual leap that seems so puzzling.
I doubt it.
Here’s an argument from Jerry Fodor (I know, he’s not very welcome on this blog) from The Language of Thought.
Suppose you want to learn what the word “number” means. Presumably, you have to formulate a hypothesis as to what it might mean, test the hypothesis, and upon sufficient confirmation, accept it.
What hypothesis is best? Well, probably, that “number” means NUMBER, the concept of being a number. But in order to even conceive this hypothesis, that “number” means NUMBER, one must already have the concept NUMBER (because that concept is a mereological part of one’s hypothesis). So you can’t get the concept NUMBER through language, it must be innate.
It’s quite right that we get most of our knowledge from others. But it’s a totally different thing to say we get most of our concepts– our capacities to think of certain things– through language. How could we in the first instance *understand* what our fellows were telling us to learn, if we had no conception of what they were talking about?
Well, it is understood that certain human intellectual capacities, such as the ability to think in the abstract, have to be present for the whole process to take off. And it is not clear that this intellectual capacity is in fact separate from the evolution of our capacity for language. I personally cannot think about anything without using language—talking to myself in my head. Language is not just for communicating with others. If you think about it, it is very strange that we talk to ourselves in our heads. Who the hell are we communicating with?
Some people and probably most animals think in pictures. Temple Grandin touches on this in a talk over at TED, if interested.
Before we were formulating hypotheses, we were making associations. In language, the first associations we made (as an evolving species or as a growing child) were between things and their names – remember Helen Keller learning that the wet stuff on her hand was called what Anne Sullivan was tracing on her other hand? Then that actions had names, then that qualities had names. We learnt the names for numbers (the qualities that distinguished one thing from two things, three things, etc.) before we learnt that the name for those qualities was “number”.
This level of abstraction can be quite high. Only the other day I found it almost impossible to ask clearly what the collective name for different kinds of football game (Rugby, League, Soccer – and it makes it harder that that one insists on being called “football” – Gridiron, Australian Rules) is. (The answer is “code”.)
I agree completely with Neil.
I also agree with MJ (having trouble lining up my replies…)
Okay, now I’m not sure. Over and out.
Disclaimer: tried to access the full PNAS and only got a black screen.
Is there enough evidence of the capabilities of Neandertals at some time point at which we also have multiple DNA samples, so that we could make comparisons from those to modern man? What’s the status of the “language gene” (that was trumpeted widely some years back) in Neandertals vs. us, for instance?
Otherwise, I remember seeing some program along the lines of Ascent of Man, that detailed how even the concept of numbers, let alone mathematics, is a very recent development.
Anyway, key components have to be language and opposable* digits, driven by a massively parallel computer (brain) with enhanced memory. Is there any reason to suppose that brains of, say, gorillas are any less massively interconnected, or that their memory is any less than ours? (And also re. that, we know that nouns are stored in different physical locations in our brain vs. verbs – I think this sort of thing has been looked at in chimps but don’t know the conclusion – anyone?)
Anyway, before engaging in a massive sequence gazing expedition, there ought to be some reason to focus on one particular area. Otherwise, this whole thing is liable to collapse with the resultant conclusion – Godidit.
*goddam thimblewit spellcheck does not recognize opposable, proving once again that it is the province of a lower intelligence. Also, in the abst, co-opt appears sans hyphen or umlaut! Is that Pinker or the PNAS copy editor?
I’m surprised to see this rather flippant passage “…it requires more cleverness to outwit an animal than to outwit fruits or leaves.” It appears to dismiss the need for a comprehensive knowledge of local flora, or flora in general if on the move. People would have needed to know what was poisonous, what was in season and when–i.e. where to find what at which time of year–perhaps over many kilometers. They would need to calculate how much to store for winters and what to do about shortages. Maybe they would also have to know what flora was attractive to animals for tracking them. Perhaps in the rush to announce the easy answer of “meat,” Pinker isn’t interested in the gathering side of hunting and gathering. In any case, I’m sure selection was much more complicated and multifaceted, including the need to outwit the smart members of one’s own species.
IMO, the evolutionary trait we have is imagination. The ability to think “what if” would give us a great advantage in selection. Once you begin to imagine, the questions start: why is this? why is that? All the arts follow from imagination, and so do science, philosophy, and mathematics, and unfortunately religion.
Imagination is the trait which has allowed us to evolve.
Or you could start with Theory of Mind, which is related to imagination (because it’s related to thinking that things could be otherwise than they appear, otherwise than they are, etc) and also to empathy.
There’s a 5th level of abstraction:
e. Her reasoning went from the premises to the conclusion
– where the “movement” is purely in logic-space, or thought-space (though in all five cases, we can formulate an image of something moving – in the last case we might imagine the reasoning as being written down).
I am wondering if that development came about (perhaps only in part) because solving problems itself gives us pleasure (else why am I addicted to Sudoku?). Have we perhaps internalised the gratitude we get from solving problems for other people, or the sensory pleasure we get from finding something that tastes sweet or otherwise pleases us? And the ability to think in metaphors is certainly integral to solving problems, via comparing the unknown with the none.
What was I thinking? …with the _known_.
Or far more likely a direct product of brachiation from which bi-pedality ‘descended’, as it were.
Perhaps “dexterous” is better here than “prehensile”? All primates have prehensile hands, but who besides humans could play a piano?
(OK, besides certain kittehs)
Funny, I was just thinking about evolution and hands this morning – as one does, you know. Looking at mine, doing the finger-to-thumb thing that makes it all possible, wiggling all the digits, admiring this handy all-purpose tool kit on the ends of my arms, imagining not having them (as a species). Hands really are remarkable.
I seem to recall a sort of anatomical drawing showing the body in proportion to its innervation. If I remember correctly, the amount of wiring devoted to the hands outstripped everything else. Anyone else recall anything like that?
MKG’s comment about brachiation resonates. Successful brachiation (where the selective pressure was presumably severe) begat fine control of the fingers, which fostered tool-making abilities, and soon the species was making and playing fiddles (except in certain areas where fiddles were supposedly the devil’s instrument).
Intended to hat-tip Ray on the musical connection, too.
Ah, here’s an image of it: http://piclib.nhm.ac.uk/piclib/www/image.php?img=87494
There’s a picture of that model in the NHM book “99% Ape”. I’ll dig up the reference.
I am not a scientist, so please forgive me if my question is naive.
But it is this: If there is, indeed, a cognitive niche, presumably it has existed for billions of years.
Why did nature wait until the last million or so to fill it?
I’m not aware of any extinct type of animal, other than Homo, that was capable of science and philosophy.
Not a naive question. Presumably because the PRECURSORS for the evolution of human mentality had to be in place before there was a step into the cognitive niche—and that means the origin of intelligent and social animals like primates.
Why do you say that hybridization is “impossible”, as opposed to “impossible because unethical”?
We obviously need more unethical geneticists.
The “cognitive niche” concept has one more problem (which was addressed, though somewhat weakly): there would have to have been something else that occupied such a niche before we did. As far as we know, nothing of the sort has (unless the K-T extinction really WAS caused by intelligent dinosaurs).
The way I see it, we developed our intelligence for defense, i.e. our ancestors had to think differently out in the unforgiving grasslands. As a family, apes have few self-defenses beyond strength or agility, which (I guess) didn’t cut it out in the grasslands. We attempted to remedy this by having children year-round, but this was only a temporary fix. Thus, we found a form of self-defense in the use of our very environment, in the form of sticks, stones, bones, etc. From that, we gained an inquisitive and creative nature, so we could figure out how to use different things as weapons (which is probably to blame for our use of fire).
I don’t think there had to be “something else” in the niche before we “got there”… The cognitive niche is supposed to be created by our manipulative interaction with the environment; that is, it was not “there” before we began manipulating things and creating it at the same time (backtracking, I suppose it began to look like what we would call a niche at some moment, but not independently of our own action). Our collective manipulation of the environment led to its configuration…
Corking 🙂