A short obituary of J. D. Watson in PNAS

February 18, 2026 • 9:45 am

The Proceedings of the National Academy of Sciences finally published an obituary of J. D. Watson, who died in November of last year. (Nathaniel Comfort has written a biography of Watson that will be a good complement to Matthew’s biography of Crick; Comfort’s book will be out at the end of this year or the beginning of 2027.) The PNAS obituary, which you can access for free by clicking on the screenshot below, is a good summary of Watson’s accomplishments (and missteps) if you don’t want a book-length treatment.

Most laypeople, if they know Watson’s name, probably know just two things. First, he and Crick co-discovered the structure of DNA, one of the great findings of biology. Second, Watson was demonized, and fired as director of the Cold Spring Harbor Laboratory, for making racist comments. Both are true. Yes, Watson was a racist, as I discovered from talking to him for an hour and a half (see below), but he was also a brilliant scientist who did far more than just the DNA-structure stuff. The article describes some of his other accomplishments, and I quote:

DNA was not the only structure that Watson solved at Cambridge. Using X-ray crystallography, Watson determined that the coat protein subunits of Tobacco Mosaic virus (TMV) were arranged helically around the viral RNA, although he could not detect the RNA (5). Two years later, Rosalind Franklin, now at Birkbeck College with J. D. Bernal, published the definitive study on the structure of TMV (6).

Watson left Cambridge in 1953 to take up a fellowship with Delbrück at the California Institute of Technology. He joined forces with Alex Rich in Pauling’s laboratory to work on the structure of RNA, but RNA gave fuzzy X-ray diffraction patterns and provided no clues as to what an RNA molecule might look like. Watson was not happy in Pasadena and, with the help of Paul Doty, was appointed an assistant professor in the Department of Biology at Harvard. However, he first spent a year in Cambridge, United Kingdom, before moving to Cambridge, Massachusetts.

Watson and Crick teamed up again to study the structure of small viruses and proposed that as a general principle, the outer protein coat of these viruses was built up of identical subunits. Franklin was also studying small viruses, and she and Watson exchanged letters, and she asked Watson and Crick to review drafts of her manuscripts.

At Harvard, Watson, his colleagues, and students made many important findings on ribosomes and protein synthesis, including demonstrating, concurrently with the team of Sydney Brenner, Francois Jacob, and Matt Meselson, the existence of messenger RNA. Watson’s contributions are not reflected in many of the publications from his Harvard laboratory. He did not add his name to papers unless he had made substantial contributions to them, thus ensuring that the credit went to those who had done the work. These papers included the discovery of the bacterial transcription protein, sigma factor, by Watson’s then graduate student Richard Burgess, along with Harvard Junior Fellow Richard Losick. At Harvard, Watson also promoted the careers of women, notably providing support for Nancy Hopkins, Joan Steitz, and Susan Gerbi. He also contributed to the split in the Department of Zoology due to his contempt for those working in the Department who were antireductionists.


In his last scientific paper (7), published in 1972, Watson returned to DNA. In considering the replication of linear DNA of T7 phage, he pointed out that the very ends of a linear DNA molecule cannot be replicated, the “end replication problem” which is solved in eukaryotes by telomeres. (Watson’s work was predated by Alexey Olovnikov who had published the same observation in 1971 in a Russian journal.)

Note the contributions Watson made, along with collaborators, at Harvard, and note as well that he did not put his name on publications unless he made “substantial contributions to them.” I did that, too, and I inherited that practice from my Ph.D. advisor Dick Lewontin, who inherited it from his Ph.D. advisor Theodosius Dobzhansky, who inherited it from his research supervisor at Columbia and Caltech, the Nobel Laureate T. H. Morgan. This is a good practice, and I never suffered from keeping my name off papers, for the granting agencies care only about which and how many papers come from an investigator’s funded lab, not how many his or her name is on. I’ll digress here to say that this practice has almost died out, as people now slap their names on papers for paltry reasons, like having contributed organisms or other material. The reason is the fierce competition for funding and credit.

Watson went on to write influential textbooks and trade books (notably The Double Helix), and he headed up the Human Genome Project, from which he ultimately resigned. Finally, he ran the Cold Spring Harbor Laboratory, which he did very well until the racism scandal broke, rendering him ineffective.

Witkowski and Stillman don’t neglect the dark side of Watson:

In the late 1990s, Watson gave seminars, notably at the University of California Berkeley, where he expanded on research on the hormone POMC and related peptides and made inappropriate and incorrect observations about women. In October 2007, he made racist remarks about the intelligence of people of African descent, and, damagingly for his fellow employees at CSHL, stated that while he hoped that everyone was equal, “people who have to deal with black employees find this not true.” The CSHL Board of Trustees dissociated the institute from Watson’s comments, and he was forced to step down from his administrative position as Chancellor. The matter resurfaced in January 2019 when Watson was asked if his views on race and intelligence had changed. His answer was unequivocal: “No, not at all.” The Laboratory’s response was immediate, relieving him of all his emeritus titles. Watson and his family, however, continued to live on the CSHL campus.

They conclude this way:

Jim’s remarkable contributions to science and society will long endure—for the scientists using the human genome sequence, for students using Molecular Biology of the Gene and for readers of The Double Helix, and for reviving Cold Spring Harbor Laboratory. He was a most amazing man.

Here’s a photo of Watson and me when he visited Chicago in 2013 to introduce the Watson Lectures that he endowed for our department. Do read the cool story about how those lectures came about in my post “Encounters with J. D. Watson”.

My brief interview of Matthew Cobb about his new biography of Francis Crick

January 7, 2026 • 11:00 am

Matthew Cobb’s new biography of Francis Crick has been out for only a short time, but I’ve never seen a review less than enthusiastic (check out this NYT review). I finished it last week and was also enthusiastic, finding it one of the best biographies of a scientist I’ve ever read. It concentrates on Crick’s science, but his accomplishments were inseparable from his personality, which embraced not only science but also poetry (the book begins and ends with a poet), drugs, women, and philosophy (he was, by the way, a hardcore atheist and determinist).

But I digress. I really recommend that if you have any interest in the man and his work, which of course includes helping reveal the structure of DNA, you get this book and read it. It is a stupendous achievement, based on tons of research, sleuthing, and interviews, and only a geneticist could have written it. But it’s not dull at all: Matthew has always written lively and engaging prose. Crick is also a good complement to Matthew’s previous book, Life’s Greatest Secret, about how the genetic code was cracked.

As a complement, a biography of Jim Watson by Nathaniel Comfort is in the works, but hasn’t yet been published.

After I finished the book, I had a few questions about Crick and his work, and asked Matthew if I could pose them to him and post his answers on this site. He kindly said “yes,” and so here they are. My questions are in bold; Matthew’s answers are in plain text. Enjoy:

What one question would you ask Crick if he could return from the dead? (Perhaps something that you couldn’t find out about him from your research.)

I think I would probably ask him about his view of the state of consciousness research. His key insight, with Christof Koch, was that rather than trying to explain everything about consciousness, researchers should look for the neural correlates of consciousness – neurons that fired in a correlated manner with a visual perception – and ask what (if anything) was special about how they fired, their connections, and the genes expressed within them. Since his death, we have obtained recordings from such neurons, but far from resolving the issue, consciousness studies have lost their way, with over 200 different theories currently being advanced. What did he think went wrong? Or rather, is it time to use a more reductionist approach, studying simpler neural networks, even in animals that might not be thought to be conscious?


Why did it take ten years—until the Nobel prize was awarded—for people to appreciate the significance of DNA?

Most people imagine that when the double helix was discovered it immediately made Watson and Crick globally famous and the finding was feted. That was not the case, mainly because the actual evidence that DNA was the genetic material was restricted to Avery’s 1944 work on one species of bacterium (this was contested) and a rather crappy experiment on bacteriophage viruses (this was the famous paper by Hershey and Chase from 1952; the experiment was so messy that Hershey did not believe that genes were made solely of DNA). So although the structure of DNA was immediately obvious in terms of its function – both replication and gene specificity, as it was called, could be explained by reciprocal base pairs and the sequence of bases – there was no experimental proof of this function. Indeed, the first proof that DNA is the genetic material in eukaryotes (organisms with a nucleus, including all multicellular organisms) did not appear until the mid-1970s! Instead, people viewed the idea that DNA was the genetic material as a working hypothesis, which became stronger through the 1950s as various experiments were carried out (e.g., Meselson and Stahl’s experiment on replication) and theoretical developments were made (e.g., Crick’s ideas about the central dogma). It’s notable that the Nobel Prize committee awarded the prize in 1962, just after the first words in the genetic code were cracked and the relation between DNA, RNA and protein had been experimentally demonstrated.


A lot of the latter part of the book is on Crick’s work on neuroscience (and, later, consciousness). You claim that he made enormous contributions to the field that really pushed it forward. Could you tell us a bit about what those contributions were?

Although he did not make a great breakthrough, he helped transform the way that neuroscience was done, the ideas and approaches it used. From the outset – a 1979 article in a special issue of Scientific American devoted to the brain – he focused attention on one particular aspect of brain function (he chose visual perception), the importance of theoretical approaches rooted in neuroanatomy, the need for detailed maps of brain areas and the promise of computational approaches to neural networks. All these things shaped subsequent developments – in particular the work on neural networks, which he played a fundamental part in, and which gave rise to today’s Large Language Models (he worked with both Geoffrey Hinton and John Hopfield, who shared the 2024 Nobel Prize in Physics for their work on this in the 1980s). And, of course, he made the study of consciousness scientifically respectable, taking it out of the hands of the philosophers who had been tinkering with the problem for three thousand years and hadn’t got anywhere. Later, in a perspective article he published on the last day of the old millennium, he reviewed recent developments in molecular biology and predicted that three techniques would become useful: classifying neurons not by their morphology but by the genes that are expressed in them, using genetic markers from the human genome to study the brains of primates (the main experimental system he advocated using), and controlling the activity of neurons with light by using genetic constructs. All three of these techniques – now called RNAseq, transcriptional mapping and optogenetics – are used every day in neuroscience labs around the world. Indeed, within a few months of the article appearing, Crick received a letter from a young Austrian researcher, Gero Miesenböck, telling him that his lab was working on optogenetics and the results looked promising. During his lifetime, Crick’s decisive leadership role was well known to neuroscientists; now it has largely been forgotten, unfortunately.


Is there anything a young scientist could learn from Crick’s own methods that would be helpful, or was he a one-off whose way of working cannot be imitated?

I think the key issue is not so much Crick as the times in which he worked. As he repeatedly acknowledged, he was amazingly lucky. From 1954 to 1977 he worked for the Medical Research Council in the UK. He did no teaching, no grading, and was not involved in doctoral supervision (I’m not even clear how many PhD students he technically supervised – 4? 3? 5? – which highlights that even if he had his name on a bit of paper, he had little to do with any of them). Apart from a couple of periods, he had no administrative duties, and only one major leadership post, at the Salk, which nearly killed him. He wrote one major grant application at the Salk (the only one he ever wrote), but basically he was funded sufficiently well to simply get on with things. And what did he do? ‘I read and think,’ he said. Try getting that past a recruitment or promotions panel today! In a way, the onus for the creation of more Cricks does not lie with young researchers, but with established scientists – they need to allow young people the time to ‘read and think’, and to value failure. Most ideas will turn out to be wrong; that’s OK. Or at least, it was to Crick. Many senior researchers (and funders) don’t see things that way. However, even without such changes, young scientists can adopt some of Crick’s habits. Here’s my attempt to sum up what I think were the lessons of his life and work:

  • Read widely and avidly, even engaging with ideas that might seem eccentric or pointless, as ‘there might be something in it’ (one of his favourite phrases).
  • Talk through your ideas with your peers – try to find the weak spots in each other’s arguments.
  • At least in the initial stages of research, don’t get bogged down in the details that might counter your interpretation/theory – Crick and Brenner called this the ‘don’t worry’ approach. They figured that unconnected contrary data points might not undermine their ideas, and would eventually turn out to have specific, varied explanations.
  • Write down your ideas in the form of memos or short documents (keep them short). Writing helps you clarify your ideas and shapes your mind – do not use AI to do this! You can then share your writing with peers, where it can serve as a target for discussion and debate.
  • Master the art of clear writing. Avoid jargon, keep your ideas straightforward. Again, the only way to develop this skill is to write – badly at first. So rewrite, edit, recast your writing – it will improve your thinking.
  • Above all, make sure that the science you do is *fun*. That was a word that Crick repeatedly used, and he genuinely got great pleasure from doing science and thinking about it. Seek out an area in which you can have fun and aren’t bogged down by drudgery.

Click below to get the book on Amazon:

More by Matthew on Crick, Watson, and DNA

November 15, 2025 • 10:45 am

Matthew’s biography of Francis Crick just came out, and I’m delighted, as I’m sure he is, with the spate of glowing reviews. I haven’t seen a bad one yet, and some of them rate the book as superlative. It is certainly one of the best science biographies going, and I hope it wins the Royal Society Science book prize.

I’ll finish up my endorsements of the book (the reviews will keep coming, though) by highlighting two more: one in Science and the other in the Times of London. But first you can listen to Matthew talking about J. D. Watson, who just died, on this BBC show (Matthew’s bit, which is the only discussion of biology, goes from the beginning to 9:35). As Matthew says, “This is the most important discovery in biology since Darwin’s theory of evolution by natural selection. It transforms our understanding of heredity, of evolution–of everything to do with biology.”

The American voice you hear is from an old interview with Watson himself.

The moderator then wants to discuss the sexism and racism of Watson, and Matthew eventually gets to it. First, though, Matthew discusses the involvement of Maurice Wilkins and Rosalind Franklin in the DNA structure, and says, as he always does, that the history was complicated, that the discovery was more collaborative than people think, but also that Crick and Watson failed to ask Franklin for permission to use her data, which was a scientific boo-boo. Watson’s further accomplishments are discussed (the Human Genome Project, the upgrading of Cold Spring Harbor Laboratory). The mention of Watson’s personal arrogance, sexism, and racism starts at 6:50, and Matthew manages to decry it (calling it a “terrible legacy”) while not seeming nasty, something he’s good at.

Next, two reviews, the first in Science. It’s very positive, and I’ll give some excerpts (access should be free by clicking on the headline below).

In October 1958, Francis Crick and his wife, Odile, hosted a party at their house in Cambridge to celebrate Fred Sanger’s Nobel Prize in Chemistry. During the festivities, a rocket was launched from the roof terrace, which landed on the roof of a nearby church and necessitated the services of the local fire brigade (1). This otherwise inconsequential event is an apt metaphor for the scientific assault on mysticism and vitalism that the atheist Crick and his contemporaries helped pioneer through their pursuit of a new “chemical physics” of biology—an endeavor that would eventually help describe the nature of life itself. In his magnificent and expansive new biography, Crick: A Mind in Motion, Matthew Cobb forensically explores and electrifies this important chapter in the history of science through the exploits of one of its key protagonists.

Magnificent and expansive! You’ll be seeing those words on the cover. And some of these, too:

Another intriguing theme Cobb explores is Crick’s friendship with the psychedelic beat poet Michael McClure (6). Crick was so taken by the charismatic poet, in particular, a stanza in McClure’s “Peyote Poem”—“THIS IS THE POWERFUL KNOWLEDGE / we smile with it”—that he pinned it onto a wall in his home. For Crick, the beauty inherent in the solution of a complex scientific problem and the aesthetic euphoria and sense of revelation it created were reminiscent of the perceptual effects of consuming a hallucinogenic compound, such as peyote.

Cobb also touches on Crick’s eugenicist proclamations and details some of his other disastrous forays into the social implications of science, which ultimately led him to permanently abstain from such activities. Crick’s notable lack of engagement with the 1975 Asilomar meeting, which sought to address the potential biohazards and ethics of recombinant DNA technology, was in stark contrast to Watson and biologist Sydney Brenner. Crick never explained his silence on the topic of genetic engineering (7).

Complex, energetic, freethinking, dazzling, and bohemian, Crick was also ruthless, immature, misogynistic, arrogant, and careless. The phage biologist Seymour Benzer noted that Crick was not a “shrinking violet.” Maurice Wilkins described Watson and Crick as “a couple of old rogues,” and Lawrence Bragg more politely observed that Crick was “the sort of chap who was always doing someone else’s crossword.” Cobb, however, has arrived at a somewhat more benign and nuanced interpretation of the events surrounding the discovery of the double helix, the collaborative nature of which, he asserts, was obfuscated by the fictional narrative drama of Watson’s bestseller The Double Helix.

Crick is set to become the definitive account of this polymath’s life and work. We must now wait patiently for historian Nathaniel Comfort’s upcoming biography of James Watson to complement it.

In my view, the phrase “definitive account of this polymath’s life and work” is really the most powerful approbation the book could get.

You can see the review from the Times of London by clicking below, or find it archived here:

If the age of the lone scientific genius has passed, was Francis Crick among its last great specimens? His name will for ever be bound to that of James Watson and their discovery in 1953 of the double-helix structure of DNA. Yet it is a measure of Crick’s influence that this breakthrough, transformative as it was, is done and dusted barely 80 pages into Matthew Cobb’s absorbing new biography.

Cobb, a zoologist and historian of science, presents Crick (1916-2004) as the hub round which a mid-century scientific revolution revolved — a researcher and theorist of unstoppable curiosity, who unravelled the secret code behind heredity before helping to reinvent the study of the mind and consciousness. More than 70 years on, it is easy to forget how penetrating Crick’s insights were — how, before he came along, we did not know how life copies itself and the molecular mechanism behind evolution was a mystery.

But Cobb’s book is no hagiography. Briskly paced, it concentrates on Crick’s scientific life, but also offers glimpses, some unflattering, of the man behind the lab bench. The picture it builds is of a brilliant, garrulous and often exasperating individual.

. . . Cobb writes with clarity and a touch of affection for his subject. His Crick is radical in science and conservative in temperament; deeply irreligious yet moved by poetry; a philanderer who adored his wife. Above all he is insatiably curious — a mind in motion, indeed. And yes, he may also represent something that may now be lost: the era when a single intellect could sit at the centre of a scientific revolution. Crick might be best known for his collaboration with Watson and his notorious debt to Franklin. However, in the crowded, collaborative landscape of 21st-century research, where knowledge advances by increments, achieved by vast teams who work with ever growing volumes of data, it is hard to imagine another individual whose ideas will so completely redefine the life sciences.

I’d call that a good review as well. Kudos to Dr. Cobb. I told him he should celebrate by going off on a nice vacation, but I’m betting he won’t.

Matthew on the subject of his latest book: Francis Crick

November 1, 2025 • 11:00 am

As I’ve mentioned several times, Matthew has written what is the definitive biography of Francis Crick, one of the great polymaths of our time. It comes out in the first two weeks of November.

Today you can see an article that Matthew wrote about the book for the Observer, but he and I both urge you to buy the book itself (the publisher’s site is here, a U.K. purchasing site is here, and the U.S. Hachette site, here, gives a 20% discount with the code CRICK20).

Click the headline to read the article for free:

But what is this about poetry?  Here are a few excerpts from the article.

In 1947, aged 31 and with his career in physics derailed by the war, Francis Crick, the future co-discoverer of the DNA double helix, returned to research, focusing on two fundamental biological problems: life and the brain. Over the following half century, he made decisive contributions to both these fields, becoming one of the most significant thinkers of the 20th century. In 1994, the Times hailed Crick as the “genius of our age”, comparing him to Isaac Newton, Mozart and Shakespeare, while after his death in 2004, parallels were drawn with Charles Darwin and Gregor Mendel.

Like Darwin, Mendel and Newton, Crick changed how the rest of us see the world. He drew out the implications of DNA structure, developed new ways of understanding life and evolution, and later convinced neuroscientists to adopt computational and molecular approaches, and to study the nature of consciousness.

Crick’s aim was not just to make discoveries about two fundamental scientific riddles; he also wanted to replace the superstitious and religious ideas that marked these questions. This did not mean he was stuffy or unimaginative – he was fascinated by the flux of perception and emotion he found in poetry, particularly the work of psychedelic Beat poet Michael McClure, who became a close friend. Poetry and science co-existed in his approach to the world.

Once, when I met with Jim Watson during one of his yearly visits to Chicago, he told me that part of the motivation for his and Crick’s attempt to find the structure of DNA was to confirm materialism (aka atheism): they wanted to show, as Watson told me, that the “secret of life” was a molecule that, in the right milieu, could produce a whole organism. More excerpts:

Crick’s scientific achievements have recently tended to be reduced to those few weeks in Cambridge in February 1953, when he and James Watson discovered the structure of DNA. The widely believed story that they stole the data of King’s College London researcher Rosalind Franklin is untrue: Watson and Crick knew of Franklin’s results and those of Crick’s close friend Maurice Wilkins, but they did not provide any decisive insight into the structure of DNA. Franklin knew that the pair had access to her data and bore no grudge; she soon became friendly with both men, and was particularly close to Crick and his wife, Odile.

Watson and Crick subsequently explained that had they not found the structure, then Franklin, or her colleague Wilkins, or someone else, would have done so – it was inevitable. Crick and Watson succeeded because they were lucky, smart, somewhat unscrupulous, and determined.

And the poetry:

The imaginative aspect to Crick’s thinking extended to his vocabulary. In 1953, he told a friend that the double helix made him swoon every time he thought of it; this was because of its beauty, a term he often used rather than the word “elegance”, frequently employed by physicists and mathematicians. Biological results are often messy and complex, not elegant. They are nevertheless beautiful, because of their evolutionary roots and the contingent factors that have shaped them.

This sense of beauty, of deep relationships underlying complex phenomena, drove Crick’s scientific work and was linked to his fascination with poetry. As he explained:

“I hope nobody still thinks that scientists are dull, unimaginative people… It is almost true that science itself is poetry enough for them. But there is no effective substitute for the subtle interplay of words and from time to time one becomes wearied by the exact formulations of science and longs for a poetry which speaks to one’s bones.”

But here I disagree with Crick:

Although Crick admired the works of WB Yeats and TS Eliot, by the mid-1960s he had fallen out of love with them because of their mystical views. As he explained in a letter to his friend, the novelist CP Snow, he felt “you can’t be a major poet without a solid foundation of silly ideas (almost everybody thinks Yeats’s ideas silly but to me Eliot’s are just as bad)”.

Yes, Yeats was a mystic, which of course is antiscientific, but both he and Eliot wrote poetry that was non-mystical (think of Yeats’s gorgeous “The Lake Isle of Innisfree”, or Eliot’s “The Love Song of J. Alfred Prufrock”).

. . .That Crick’s otherwise penetrating mind never challenged his old prejudices and could not master political issues highlights that he was not a flawless hero nor – no matter what graffiti in 1960s Cambridge proclaimed – a candidate for the post of God. Instead, he was an extraordinarily clever man with limits to his interests and perception.

Crick’s withdrawal from cultural debates coincided with a series of shifts in his world. He and Odile moved from Cambridge to California, where he worked on neuroscience and consciousness at the Salk Institute in San Diego.

In his 50s, Crick used LSD and cannabis and became fascinated by Michael McClure’s materialist psychedelic poetry, which he admired for what he described as its fury and imagery and for its open embrace of biology: “When a man does not admit that he is an animal, he is less than an animal,” proclaimed McClure. Crick’s friendship with McClure ran through the second half of his life, and he did not see it as being in contradiction with his scientific views.

. . .In 2004, on the day that Crick died after a long illness, McClure completed what he described as his finest poem, dedicated to Crick. Full of the muscular sensation and vivid imagery that Crick appreciated, one stanza seems to represent McClure’s attempt to grapple with his friend’s inevitable end:

PERHAPS WE RETURN TO A POOL

– STEADY AND SOLID;

ready and already completed in fireworks

and lives and non-lives – thin and faint

as powerful odours stirring

my moment’s soul in the mind of place.

Below is a photo of Crick from Wikipedia with the caption, “Francis Crick in his office. Behind him is a model of the human brain that he inherited from Jacob Bronowski.” 

Francis_Crick.png: Photo: Marc Lieberman, per ticket 2015100910022707; derivative work: Materialscientist, CC BY 2.5, via Wikimedia Commons

Cobb on Crick: The “Central Dogma”

December 2, 2024 • 9:45 am

As I’ve mentioned several times, Matthew Cobb has written what will likely prove the definitive biography of Francis Crick (1916-2004), co-discoverer of the structure of DNA and a general polymath. While writing it, Matthew came across some Crick material showing that biologists and historians have misunderstood Crick’s “Central Dogma” of molecular biology.

Matthew has corrected the record in the piece below from the Asimov Press. Click the headline, as it’s free to read:

You may have learned this dogma as “DNA makes RNA makes protein,” along with the caveat that it’s a one-way path. But Matthew shows that this was not Crick’s contention. I’ve indented Matthew’s words below:

The Central Dogma is a linchpin for understanding how cells work, and yet it is one of the most widely misunderstood concepts in molecular biology.

Many students are taught that the Central Dogma is simply “DNA → RNA → protein.” This version was first put forward in Jim Watson’s pioneering 1965 textbook, The Molecular Biology of the Gene, as a way of summarizing how protein synthesis takes place. However, Watson’s explanation, which he adapted from his colleague, Francis Crick, is profoundly misleading.

In 1956, Crick was working on a lecture that would bring together what was then known about the “flow of information” between DNA, RNA, and protein in cells. Crick formalized his ideas in what he called the Central Dogma, and his original conception of information flow within cells was both richer and more complex than Watson’s reductive and erroneous presentation.

Crick was aware of at least four kinds of information transfers, all of which had been observed in biochemical studies by researchers at that time. These were: DNA → DNA (DNA replication), DNA → RNA (called transcription), RNA → protein (called translation) and RNA → RNA (a mechanism by which some viruses copy themselves). To summarize his thinking, Crick sketched out these information flows in a little figure that was never published.

Crick’s figure is below. Note that the dogma is simply the first sentence typed in the diagram, implying that information from either DNA or RNA, translated into a protein, cannot get back into the DNA or RNA code again. Thus changes in protein structure cannot go back and change the genetic code (see the bottom part of the diagram).

As you see, the DNA → RNA → protein “dogma” is an extreme oversimplification of Crick’s views. And he meant the word “dogma” to mean not an inviolable rule of nature, but a hypothesis. Nevertheless, Crick was widely criticized for using the word “dogma”.

But getting back to the diagram:

The direct synthesis of proteins using only DNA might be possible, Crick thought, because the sequence of bases in DNA ultimately determines the order of amino acids in a protein chain. If this were true, however, it would mean that RNA was not always involved in protein synthesis, even though every study at that time suggested it was. Crick therefore concluded that this kind of information flow was highly unlikely, though not impossible.

Crick also theorized that RNA → DNA was chemically possible, simply because it was the reverse of transcription and both types of molecules were chemically similar to each other. Still, Crick could not imagine any biological function for this so-called “reverse transcription,” so he portrayed this information flow as a dotted line in his diagram.

We now know, though, that the enzyme “reverse transcriptase” is used by some RNA viruses to make DNA to insert into their hosts’ genomes.

Here’s what Crick said he meant by the “Central Dogma,” and, in fact, this schema has not yet been violated in nature:

In other words, in Crick’s schema, information within the cell only flows from nucleic acids to proteins, and never the other way around. Crick’s “Central Dogma” could therefore be described in a single line: “Once information has got into a protein it can’t get out again.” This negative statement — that some transfers of information seem to be impossible — was the essential part of Crick’s idea.

Crick’s hypothesis also carried an unstated evolutionary implication; namely, that whatever might happen to an organism’s proteins during its lifetime, those changes cannot alter its DNA sequence. In other words, organisms cannot use proteins to transmit characteristics they have acquired during their lifetime to their offspring.

In other words, there can be no Lamarckian inheritance: an environmental change affecting an organism’s proteins cannot become ingrained in the organism’s genome and thus become permanently heritable.
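
To see how compact Crick’s scheme really is, here’s a minimal sketch in Python (my own toy illustration, not anything from Crick’s notes or Matthew’s piece) encoding the transfers described above, with the Dogma itself reduced to a single negative rule:

```python
# A toy model of Crick's 1956 information-flow scheme (my illustration, not his notation).
# "General" transfers had been observed; "special" ones Crick thought chemically
# possible but unproven; anything flowing OUT of protein is forbidden.

GENERAL = {
    ("DNA", "DNA"),      # replication
    ("DNA", "RNA"),      # transcription
    ("RNA", "protein"),  # translation
    ("RNA", "RNA"),      # how some RNA viruses copy themselves
}

SPECIAL = {
    ("RNA", "DNA"),      # the dotted line: reverse transcription, later confirmed
    ("DNA", "protein"),  # direct synthesis, which Crick thought highly unlikely
}

def transfer_allowed(source: str, target: str) -> bool:
    """Crick's Central Dogma as a negative statement: once information
    has got into a protein, it can't get out again."""
    if source == "protein":
        return False  # protein -> DNA, protein -> RNA, protein -> protein: all forbidden
    return (source, target) in GENERAL | SPECIAL

for pair in [("DNA", "RNA"), ("RNA", "DNA"), ("protein", "DNA")]:
    print(pair, transfer_allowed(*pair))  # True, True, False
```

Note that the linear “DNA → RNA → protein” pathway is just one route through the allowed transfers; the Dogma itself is only the prohibition at the top of `transfer_allowed`.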

Matthew discusses several suggested modifications of Crick’s version of the Central Dogma. Prions, misfolded proteins that cause several known diseases, were thought by some to replicate themselves by somehow changing the DNA that codes for them, but it’s now known that prions either are produced by mutations in the DNA or transmit their pathological shape by directly interacting with other proteins. Prion proteins do not change the DNA sequence.

Some readers here might also be thinking that “epigenetic inheritance”, in which DNA is modified by chemical tags affixed to its bases, might refute the central dogma, as those modifications are mediated by enzymes, which of course are proteins. But as Matthew notes, those modifications are temporary, while the DNA sequence of nucleotides (sans modifications) is forever:

In other cases, researchers have pointed to epigenetics as a possible exception to Crick’s Central Dogma, arguing that changes in gene expression are transmitted across the generations and thus provide an additional, non-nucleic source of information. But still, epigenetics does not violate Crick’s Central Dogma.

During an organism’s life, environmental conditions cause certain genes to get switched on or off. This often occurs through a process known as methylation, in which the cell adds a methyl group to a cytosine base in a DNA sequence. As a result, the cell no longer transcribes the gene.

These effects occur most frequently in somatic cells — the cells that make up the body of the organism. If epigenetic marks occur in sex cells, they are wiped clean prior to egg and sperm formation. Then, once the sperm and eggs have fully formed, methylation patterns are re-established in each type of cell, meaning that the acquired genetic regulation is reset to baseline in the offspring.

Sometimes, these regulatory effects are transmitted to the next generation through the activity of small RNA molecules, which can interact with messenger RNAs or proteins to control gene expression. This occurs frequently in plants but is much rarer in animals, which have separate lineages for their somatic and reproductive cells. A widely-studied exception to this is the nematode C. elegans, where RNAs and other molecules can alter inheritance patterns.

No matter how striking, though, none of these examples violate Crick’s Central Dogma; the genetic information remains intact and the epigenetic tags are always temporary, disappearing after at most a few generations.
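
To make that germ-line reset concrete, here’s a tiny toy sketch (mine, in Python; the gene names and the all-or-nothing wipe are simplifying assumptions for illustration, not biology from Matthew’s piece):

```python
# Toy sketch of why most epigenetic marks don't persist across generations:
# methylation acquired during life is erased in the germ line before gametes form.

def methylate(genome: dict, gene: str) -> None:
    """The environment silences a gene by methylating it."""
    genome[gene] = True  # True = methylated (switched off)

def make_gamete(genome: dict) -> dict:
    """Marks are wiped prior to egg/sperm formation and re-established
    at baseline, so the offspring inherits the default pattern."""
    return {gene: False for gene in genome}

parent = {"geneA": False, "geneB": False}   # hypothetical genes, unmethylated
methylate(parent, "geneA")                  # mark acquired during the parent's lifetime
offspring = make_gamete(parent)
print(parent)     # {'geneA': True, 'geneB': False}
print(offspring)  # {'geneA': False, 'geneB': False} -- the mark did not transmit
```

The DNA sequence itself (here, the dictionary keys) passes through untouched; only the temporary annotations change, which is why none of this violates the Dogma.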

That should squelch the brouhaha over epigenetics as a form of Lamarckian evolutionary change, as some have suggested that epigenetic (environmental) modifications of the DNA could be permanent, ergo the environment itself can cause permanent heritable change. (That is Lamarckian inheritance.) But we know of no epigenetic modifications that last more than a couple of generations, so don’t believe the hype about “permanently inherited trauma” or other such nonsense.

And there’s this, which again is not a violation of Crick’s “Dogma”:

. . . enzymes can modify proteins in the cell after they have been synthesized, so not every amino acid in a protein is specified in the genome. DNA does not contain all the information in a cell, but Crick’s original hypothesis remains true: “Once information has got into a protein it can’t get out again.”

Now Matthew does suggest a rather complicated way that the Dogma could be violated, but it’s not known to occur, though perhaps humans might use genetic engineering to effect it. But you can read about it in his piece.

It’s remarkable that Crick’s supposition that information in a protein can’t get back to the DNA or RNA code—made only three years after the structure of DNA was published—has stood up without exception for nearly seventy years. This is a testament to Crick’s smarts and prescience.

And if you remember anything about the Central Dogma, just remember this:

“Once information has got into a protein it can’t get out again.”

Kipnis on Hitchens

December 23, 2023 • 1:00 pm

Here we have one snarky writer dissing another, and I admire both of them. This one is Laura Kipnis, writer and Northwestern University professor, going after Christopher Hitchens, who needs no introduction. It’s a fairly short piece in Critical Quarterly, free to read by clicking below (pdf is here).

Those of you who admire Hitchens—and I’m one—will have to endure him taking a drubbing about his retrograde views of women and politics, his fixation on Bill Clinton, and his overstatements about his sex life. Yes, he was often wrong, but he was often right as well (viz., Mother Teresa, religion, the Elgin Marbles), and he was always witty and thoughtful.

Just a few excerpts, and I’m gonna go home and cook a nice dinner with a nice bottle.

I can be as humourless as the next leftwing feminist but for some reason Christopher’s, what to call it – lasciviousness? antiquarianism? – amused more than offended me, though his public anti-abortion stance was noxious and, one suspects, hypocritical. Colour me surprised if that particular edict was upheld in practice. In any case, I never thought of him as someone you’d go to for instruction on feminism, and increasingly not on any political question, yet it was perplexingly hard to hold his bad politics against him. Mocking him on gender could even be fun, as at least there, unlike elsewhere, the positions seemed lightly held. When he published his notorious ‘Why Women Aren’t Funny’ piece in Vanity Fair, I responded (I hope a teensy bit funnily) in Slate, where he also frequently wrote, that though it was a fascinating portrait of female nature and relations between the sexes, it was unclear to which decade it applied – it had the slightly musty air of 1960s-ish Kingsley Amis, wrapped in nostalgia ‘for the merry days when sexual conquest required an arsenal of tactics deployed by bon-vivantish cads on girdled, girlish sexual holdouts. “Oh Mr. Hitchens!” you imagine one of the potential conquests squealing at an errant hand on nylon-clad knee.’

My problem with Christopher, hardly mine alone, was (to state the obvious) simply that he was one of the more charming men on the planet and mixed with liquor, this is a dangerous combination. Like most people who knew him at all, a few of the drunkest nights of my life were spent in his company. Conversations were funny, flirtatious, frank. Yet the rightward turn and increasing political rigidity also made him seem ridiculous: eruditely shrill.

And Kipnis avers that Hitchens was obsessive about Bill Clinton, particularly about his philandering (or rape, or whatever he actually did):

Christopher, on the other hand … Something about Bill Clinton’s sex life seemed to derange him. He was off the rails on the subject, literally sputtering. I tried to put it to him that he seemed, well, overinvested. It seemed way too personal, somehow off. What was it about Bill Clinton that had this unhinging effect on him? (I was kind of drunk at that point myself.) I suppose I expected him to at least pretend to ponder the question, devote maybe a few seconds to a show of self-examination. Anyone would. Not him. He was barricaded against anything I could say, also against the ‘what is this “about” for you’ sort of conversation that drunk people are known to have, which is one of the fun things about drinking. Something obdurate and hardened switched on instead. Thinking was not what was taking place, just pre-rehearsed lines and a lot of outrage.

. . . . When I later tried (and failed) to read No One Left to Lie to, his anti-Clinton screed, it reminded me of what had seemed so deranged and shrill that evening in Chicago. Of course, there’d be much more of that to come: the bellicose over-certainty about Iraq, the increasingly militaristic posturing – there was a comic rigidity about it. I’m thinking of what philosopher Henri Bergson wrote in his 1900 book on laughter about what turns people into comedic figures: being unaware of something automatic or mechanistic in your attitudes or actions, like Lucille Ball on the chocolate factory assembly line, turning into an automaton herself as the line keeps speeding up. Inflexibility is funny, though also a tragic waste of whatever’s human in us. The human is elastic.

Kipnis is smart and is not without humor herself, so one has to seriously consider her point of view. But what we can all agree on is what she says towards the end:

There was a sentence of Christopher’s that I always remembered, from a review of something by Richard Yates. I wished I had written it. Regarding Yates: ‘It’s clear that he’s no fan of this smug housing development or the new forms of capitalism on behalf of which its male inhabitants make their daily dash to the train.’ It’s a sentence I’m sure he gave little thought to, but I loved its man-of-the-world swoop – from a writer’s oeuvre to the banalities of suburban marriage to the mode of production, crammed into an offhandedly elegant sentence. There were always things to admire in his sentences, even as his political instincts went to shit.

The man could write. And when he wrote and was right, it was great stuff, like “God Is Not Great.”

Dan Dennett: a new book and an interview in the NYT

August 27, 2023 • 12:00 pm

I recently finished Dan Dennett‘s new autobiography, I’ve Been Thinking (cover below; click to get an Amazon link), and I was deeply impressed by what a full life the man has had (he’s 81).  I thought he spent most of his time philosophizing, writing, and teaching philosophy at Tufts; but it turns out that he had a whole other life that I knew little about: owning a farm in Maine, sailing all over the place in his boat, making tons of apple cider, hanging out with his pals (many of them famous), and traveling the world to lecture or study. Truly, I’d also be happy if I had a life that full. And, as Dan says in his interview with the NYT today, he’s left out hundreds of pages of anecdotes and other stuff.

Although I’ve taken issue with Dan’s ideas at times (I disagree with him on free will and on the importance of memes, for example), you can’t help but like the guy. He’s sometimes passionate in his arguments, but he’s never mean, and of course he looks like Santa Claus. Once at a meeting in Mexico, I was accosted by Robert Wright, who was incensed that I’d given his book on the history of religion a bad review in The New Republic.  Wright plopped himself down beside me at lunch, so I was a captive audience, and proceeded to berate and harangue me throughout the meal. It was one of the worst lunch experiences I’ve ever had.

Because of Wright’s tirade, I was so upset that, after the meal was done, I went over to Dan, jumped in his lap, and hugged him (telling him why). I was greatly relieved, for it was like sitting on Santa’s lap. Now Santa, who’s getting on, has decided to sum up his career. The book is well worth reading, especially if you want to see how a philosopher has enacted a life well lived.

In today’s paper there’s a short interview with Dan by David Marchese, who has been touted as an expert interviewer. I didn’t think that Marchese’s questions were that great, but read for yourself (click below):

I’ll give a few quotes, mostly about atheism and “other ways of knowing.” First, the OWOK. Marchese’s questions are in bold; Dennett’s responses are in plain text. And there are those annoying sidenotes that the NYT has started using, which I’ve omitted.

Right now it seems as if truth is in shambles, politics has become religion and the planet is screwed. What’s the most valuable contribution philosophers could be making given the state of the world? 

Well, let’s look at epistemology, the theory of knowledge. Eric Horvitz, the chief scientist at Microsoft, has talked about a “post-epistemic” world.

How? 

By highlighting the conditions under which knowledge is possible. This will look off track for a moment, but we’ll come around: Andrew Wiles proved Fermat’s last theorem.

It was one of the great triumphs of mathematics in my lifetime. Why do we know that he did it? Don’t ask me to explain complex mathematics. It’s beyond me. What convinces me that he proved it is that the community of mathematicians of which he’s a part put it under scrutiny and said, “Yep, he’s got it.” That model of constructive and competitive interaction is the key to knowledge. I think we know that the most reliable path to truth is through communication of like-minded and disparate thinkers who devote serious time to trying to get the truth — and there’s no algorithm for that.

Note this bit: “the most reliable path to truth is through communication of like-minded and disparate thinkers who devote serious time to trying to get the truth.” This means that all knowledge, including the “other ways of knowing” of indigenous people, has to be vetted by like-minded and disparate thinkers. If it hasn’t been, it’s not another way of knowing, but only a way of claiming to know.

But wait! There’s more!

There’s a section in your book “Breaking the Spell” where you lament the postmodern idea that truth is relative. How do we decide which truths we should treat as objective and which we treat as subjective? I’m thinking of an area like personal identity, for example, where we hear phrases like, “This is my truth.” 

The idea of “my truth” is second-rate. The people who think that because this is their opinion, somehow it’s aggressive for others to criticize or reject them — that’s a self-defeating and pernicious attitude. The recommended response is: “We’d like to bring you into the conversation, but if you’re unable to consider arguments for and against your position, then we’ll consider you on the sidelines. You’re a spectator, not a participant.” You don’t get to play the faith card. That’s not how rational inquiry goes.

Marchese asks too many questions about AI and ChatGPT, topics which, while they may be important, bore me to tears. He also gets a bit too personal. He should have stopped inquiring after the first answer below.

There was something in your memoir that was conspicuous to me: You wrote about the late 1960s, when your pregnant wife had a bowel obstruction. 

Yeah, we lost the baby.

You describe it as “the saddest, loneliest, most terrifying” time of your life. 

Yes.

That occupies one paragraph of your memoir. 

Yes.

What is it indicative of about you — or your book — that a situation you described that way takes up such a small space in the recounting of your life? 

Look at the title of the book: “I’ve Been Thinking.” There are hundreds of pages of stories that I cut at various points from drafts because they were about my emotional life, my trials and so forth. This isn’t a tell-all book. I don’t talk about unrequited love, failed teenage crushes. There are mistakes I made or almost made that I don’t tell about. That’s just not what the book’s about.

Finally, the good stuff about atheism and religion. Although regarded as one of the “Four Horsemen of New Atheism” along with Hitchens, Dawkins, and Harris, Dan has been the least demonized of them, probably because he’s not a vociferous anti-theist and regards religion as a phenomenon deserving more philosophical study than opprobrium. Nevertheless, he makes no bones about his unbelief:

We have a soul, but it’s made of tiny robots. There is no God. These are ideas of yours that I think a lot of people can rationally understand, but the gap between that rational understanding and their feelings involves too much ambivalence or ambiguity for them to accept. What is it about you that you can arrive at those conclusions and not feel adrift, while other people find those ideas too destabilizing to seriously entertain? 

Some people don’t want magic tricks explained to them. I’m not that person. When I see a magic trick, I want to see how it’s done. People want free will or consciousness, life itself, to be real magic. What I want to show people is, look, the magic of life as evolved, the magic of brains as evolving in between our own ears, that’s thrilling! It’s affirming. You don’t need miracles. You just need to understand the world the way it really is, and it’s unbelievably wonderful. We’re so lucky to be alive! The anxiety that people feel about giving up the traditional magical options, I take that very seriously. I can feel that anxiety. But the more I understood about the things I didn’t understand, the more the anxiety ebbed. The more the joy, the wondrousness came back. At the end of “Darwin’s Dangerous Idea,” I have my little hymn to life and the universe.  That’s my God — more wonderful than anything I could imagine in detail, but not magical.

So how do you understand religious belief? 

No problem at all. More people believe in belief in God than believe in God. [Marchese takes issue with this in a sidenote.] We should recognize it and recognize that people who believe in belief in God are sometimes very reluctant to consider that they might be wrong. What if I’m wrong? That’s a question I ask myself a lot. These people do not want to ask that question, and I understand why. They’re afraid of what they might discover. I want to give them an example of somebody who asks the question and is not struck down by lightning. I’m often quoted as saying, “There’s no polite way of telling people they’ve devoted their life to an illusion.” Actually, what I said was, “There’s no polite way of asking people to consider whether they’ve devoted their life to an illusion, but sometimes you have to ask it.”

There are better questions that could have been asked. For example, I would have asked Dan, “What do you think has been your greatest contribution to philosophy?” and “What has been your biggest error in your work on philosophy?”  Readers might suggest other questions below, though I’m not going to convey them to Dan!

A photo of Dan en famille, with caption, from the interview. I knew him only after his beard turned white, so I wouldn’t have recognized him:

Two of my photos of Dan. The first is in Cambridge, MA, on the way to the “Moving Naturalism Forward” meeting in 2016. We drove the three hours from Boston to Stockbridge, and Richard had to fly back early because of a hurricane warning. Ergo Dan argued with me about free will for the entire three-hour return drive on the turnpike from Stockbridge to Boston (it was not covered with snow). That was something to remember, but I gave no ground:

And Dan at a symposium on religion at the University of Chicago in 2019.  It was tedious at times, and I think Dan is showing some impatience here with the annoying lucubrations of Reza Aslan.