Insofar as I have any “philosophy” about how I do my work, it’s this: keep experiments simple. I’ve always tried to do experiments sufficiently uncomplicated and easy to understand that the results—one way or the other—would be clear-cut enough to not require (or barely require) statistical analysis. I’ve taught my students this notion, too, and I think I’ve succeeded in that endeavor.
And the experiments I most admire are equally simple. The most beautiful, as I’ve mentioned before, is Meselson and Stahl’s demonstration, in 1958, that the replication of DNA was “semiconservative”: that is, when a two-stranded DNA molecule replicates, it unzips and each strand forms a template for building a new strand from nucleotide and sugar constituents. (There are many other ways DNA could have replicated.)
The results of this experiment were crystal clear. It involved first marking all the DNA strands in bacteria with a heavy isotope of nitrogen (you can do this simply by giving the bacteria Purina’s heavy-nitrogen E. coli Chow). They then put those bacteria into medium that had “light” nitrogen, extracting the bacterial DNA at various intervals, as it replicated, and centrifuging it in a cesium chloride density gradient. This lets you see which strands have the heavy nitrogen and which don’t, for strands of different weights move to different positions in the gradient; and the various theories of how DNA replicates predict different patterns for how the strands will migrate. I’d recommend simply downloading the Meselson and Stahl paper at the link below to see how clean the results were. You don’t have to be a scientist to understand how the experiment worked, and what it means. (This experiment, and the one following, are both described beautifully in Horace Freeland Judson’s The Eighth Day of Creation.)
Below is figure 4 from Meselson and Stahl’s paper, which shows absolutely, and without any need for mathematical analysis, that DNA replicates semiconservatively. See how a “heavy” (high-weight) band starts giving rise to a lighter band within one generation (a DNA molecule that’s half the original “heavy” strand and half a newly made “light” one). And then, after another generation of DNA replication (the bacterium E. coli replicates every 20 minutes), you see even lighter bands, now consisting of the newly formed light strands which have themselves become templates for yet another light strand, giving rise to “double light” DNA. Read from the top down: from the beginning of the experiment to the end, note how heavy bands produce semi-heavy bands (half old, half new DNA) and then fully light bands (all new DNA).
Lighter-weight bands are to the left:
This is what scientists call “a clean result.”
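The logic behind the band pattern is easy to simulate. Below is a minimal sketch (the strand labels “H” and “L” and the function name are my own illustration, not anything from the paper) showing why semiconservative replication predicts all-hybrid DNA after one generation and a half-hybrid, half-light mix after two:

```python
from collections import Counter

# Each duplex is a pair of strand labels: "H" (heavy, 15N) or "L" (light, 14N).
def replicate(duplexes):
    """Semiconservative replication: each duplex unzips, and each old
    strand serves as the template for a brand-new light strand."""
    out = []
    for a, b in duplexes:
        out.append((a, "L"))
        out.append((b, "L"))
    return out

pop = [("H", "H")]  # generation 0: fully heavy DNA
for gen in (1, 2, 3):
    pop = replicate(pop)
    bands = Counter("".join(sorted(d)) for d in pop)
    print(gen, dict(bands))
# 1 {'HL': 2}
# 2 {'HL': 2, 'LL': 2}
# 3 {'HL': 2, 'LL': 6}
```

Read against figure 4, the printed counts reproduce the band pattern: the hybrid (“HL”) band never disappears, but is progressively diluted by fully light DNA.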
The second most beautiful experiment, which took place fifty years and one week ago, was done by Marshall Nirenberg and J. Heinrich Matthaei, and is the subject of a really nice article by our pinch-“blogger” Matthew Cobb in yesterday’s Telegraph, “Genes and DNA: meet the first man to read the book of life.” The piece is really about Matthaei, who did the crucial experiment, in 1961, that began the decoding of DNA. By “decoding,” I mean understanding how the sequence of four nucleotide “letters” in DNA (adenine, guanine, cytosine, and thymine [which is replaced by uracil in the RNA copied from DNA]) codes for amino acids, the constituents of proteins. After all, what DNA “does,” by and large, is code for proteins.
It had been theorized by George Gamow that because there are 4 DNA bases and 20 amino acids, the code was probably a triplet code, since with 4 bases a doublet code could specify only 16 (4 × 4) amino acids. Unravelling this code was one of the major accomplishments of modern biology, and it was begun by Nirenberg and Matthaei in Nirenberg’s lab at the National Institutes of Health. As Matthew describes, the crucial experiment was actually done by Matthaei while Nirenberg was away. Setting it up was complicated, for it required constructing a system of “cell-free” protein synthesis, made by using the cell contents of bacteria. Once that was in place, the researchers could use artificially constructed RNAs (RNA being the product of DNA that itself codes for proteins) to see what proteins would be produced in the artificial system.
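Gamow’s counting argument is simple enough to verify in a couple of lines; here’s a minimal sketch in Python:

```python
from itertools import product

bases = "ACGU"  # the four RNA "letters"

# How many distinct code words of length n can be spelled with 4 letters?
doublets = len(list(product(bases, repeat=2)))
triplets = len(list(product(bases, repeat=3)))

print(doublets, triplets)  # 16 64: only triplets can cover all 20 amino acids
```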
As described on pp. 473-480 in The Eighth Day of Creation (buy that book!), the crucial experiment, called “27Q,” was begun at 3 a.m. (!) on May 27, 1961. It was over six hours later. Matthaei determined that an artificial strand of RNA composed only of the nucleotide base uracil (“poly-U”) produced proteins containing only phenylalanine. Thus “UUU” (or “UUUU” or higher polymers; they didn’t yet know the code was triplet) coded for that amino acid. They cracked codes for other amino acids as well. As Matthew describes in his piece:
The discovery was finally revealed two weeks later in Moscow, at the Fifth International Congress of Biochemistry. Nirenberg was given 15 minutes to present his findings – but only a handful of people turned up to hear a nobody claim he had solved a problem that was still defeating the world’s largest laboratories. When the news reached Francis Crick that afternoon, he immediately changed the conference programme so that the young American could give his talk again. The next day, in front of a packed lecture theatre, Nirenberg described his careful experiments and created a sensation. When he stepped on to the stage, Nirenberg also stepped into history.
By the end of the year, Crick and his colleagues had shown that the DNA code was indeed a triplet code; within five years, the code had been cracked for every amino acid.
Nirenberg stepped into history, but Matthaei only got his toe in. For in 1968 the Nobel Prize in Physiology or Medicine was given to Robert Holley, H. Gobind Khorana, and Marshall W. Nirenberg for uncovering the relationship between DNA sequences and proteins. Matthaei was left out in the cold. (Nobel Prizes in one area cannot be given to more than three people in a given year.) This is quite unjust, since Matthaei had done the crucial experiment and was also pivotal in setting up the cell-free synthesis system. As usual, the boss gets the prizes and the grunts get squat. I consider Meselson and Stahl’s lack of Nobels equally unjust.
Nirenberg died last year, but what happened to Matthaei? Surprisingly, even at age 82 he still goes to the lab, bicycling to the Max-Planck Institute every day. You can see a video of him and an interview (in German) here. What a trouper!
Meselson, M., and F. W. Stahl. 1958. The replication of DNA in Escherichia coli. Proc. Natl. Acad. Sci. USA 44:671–682. doi:10.1073/pnas.44.7.671

Nirenberg, M. W., and J. H. Matthaei. 1961. The dependence of cell-free protein synthesis in E. coli upon naturally occurring or synthetic polyribonucleotides. Proc. Natl. Acad. Sci. USA 47:1588–1602.
31 thoughts on “The second most beautiful experiment in biology”
It is great that Matthaei is still at work & such a pity that he has been relatively forgotten by history. Well done Matthew for putting his name back out there for anglophones. One of the emeritus professors I know is still doing top research & is regularly publishing at 84. I only discovered recently that another emeritus (medical) professor I know was supervised in his biochemistry degree by Francis Crick.
Get that bicycle out Professor Coyne!
theorized by George Gamow that because there are 4 DNA bases and 20 amino acids, the code was probably a triplet code

Is there any work, even hypothetical, on the evolution of DNA/RNA coding schemes? Or any possible explanations for why the RNA codon table has the specific redundancies that it does? Why a 6:1 redundancy for leucine, but no redundancies for tryptophan? Have these redundancies been shown to have fitness benefits? Is it necessarily better to have a redundant triplet code, or could some speculative version of life do better with a doublet plus amino acid
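For concreteness, the redundancies in question can be counted straight off the codon table. A minimal sketch in Python, using a hand-entered partial table of standard codon assignments (just the amino acids mentioned in this thread):

```python
from collections import Counter

# Partial standard genetic code (RNA codons -> amino acids)
codon_table = {
    # Leucine: six synonymous codons
    "UUA": "Leu", "UUG": "Leu",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    # Tryptophan: a single codon
    "UGG": "Trp",
    # Phenylalanine (the poly-U amino acid): two codons
    "UUU": "Phe", "UUC": "Phe",
}

redundancy = Counter(codon_table.values())
print(dict(redundancy))  # {'Leu': 6, 'Trp': 1, 'Phe': 2}
```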
A mere student of astrobiology here. (If I’m going to brag, I may as well throw in a forgotten basic course in molecular biology years ago.) But as you may surmise, how easily genomic machinery can evolve, and how much it can vary, is among my topics of interest.
I can’t say that I have a good overview yet, but the topics studied range from code evolution (Crick’s “frozen accident”; the discovery of code robustness) through amino acid evolution (RNA anticodon/AA affinities; AA metabolism phylogenies) to RNA-world vs. ribosome evolution (Koonin’s work, among others).
Your question reminds me most of a line of work I read recently on the deep connection between the physics of noisy channels and the evolution of the genetic code (or of molecular codes in general, such as the transcription network). [“A colorful origin for the genetic code: Information theory, statistical mechanics and the emergence of molecular codes,” Tsvi Tlusty, Physics of Life Reviews, 2010.]
I confess that the physics angle is alluring, but it looks like a really good model to me. Rate distortion theory predicts:
a) the smoothness of the code (fig 1), which translates to robustness.
(Answering your question about redundancy.)
b) a fitness function (eq 4) between error-load, diversity and cost of specificity,* that drives a smooth phase change to an emergent code vs incorporated amino-acids.
c) a connection between the topology of the code (the number of unique amino acids) and the graph-coloring problem, specifically that our effective 48-codon (wobble) triplet code under selection admits a maximum of 20 AAs (table 1) after the phase change.
(Answering your question about alternate codes, how they could look, and potential evolutionary pathways between codes.)
d) that Crick’s frozen accident or some similar evolutionary mechanism is necessary to explain the 48 wobble to 64 non-wobble discrepancy.
(Answering your question about leucine vs tryptophan.)
* At this point there is a lot of connection with chemistry (chemical distance, say regards hydrophobicity or molecular size; average binding energy) vs physics (potential energy; graph theory; phase change).
“a fitness function (eq 4) between” – better: “a fitness model (eq 4) with”, et cetera.
Huh. As I prepared the earlier comment I stumbled on part of the follow-up in the same journal. Apparently this approach may be very fruitful, potentially extending to everything from code evolution constrained by the availability of environmental metabolic free energy to protein folding.
Maybe “a really good model” is more than my wishful thinking.
Thanks! Now that is a cool paper. And it directly raises a follow-on question about evolution in such circumstances. The paper says,
“A major finding of our hypothesized model is that the topology of the code sets an upper limit to the number of first excited modes (of the graph Laplacian), and thus to the number of amino-acids. … our model suggests that the dynamics of code evolution are affected by the topology of the codon graph. Close to the coding transition, only the modes with the lowest error-load will be excited.”
The question: Since the Modern Synthesis, the well-known analytical tools used to study and explain evolution use “continuous” models for inherited traits, even if the “continuity” comes from some approximation to the central limit theorem.
But the situation with RNA coding is very, very different. All the possible codes are quite discrete. As advanced in the paper (!), code evolution may even depend on the discrete spectral properties of the code’s graph.
How well-equipped is evolutionary theory to deal with (fundamentally) discrete choices like RNA codes?
A quick glance at Jerry’s book turns up only one possibly similar example: the evolution of sex, and we know that there isn’t a full understanding of an evolutionary preference for one parent or two.
Jerry, great post, but in need of a minor correction: ¹⁵N is not radioactive; it’s a stable heavy isotope. Thanks for all your work.
Yep, you’re right. I’ve fixed it, thanks.
I’d put explanation of the triplet code as number one. Was there any discovery of greater significance in molecular biology than understanding the relationship of the DNA code to the proteome? How an information-encoding biopolymer (DNA) is read and rendered into everything we’re made of is one of the greatest discoveries of humankind. For me, better than black holes and nuclear fusion and all the rest. How a molecular replicator transitions to a living thing capable of metabolism, compartmentalization, etc. is the biggest open question in the world (not that there aren’t many steps and processes involved).
I think the triplet-code experiment is more important, and more fundamental, than Meselson and Stahl’s study, but the M&S experiment is simply more elegant, more beautiful.
As usual, a great post – I learn a lot from you, prof.
BTW Before reading the body of “The second most beautiful experiment in biology” I jumped to the conclusion that it was being described as the second most beautiful experiment because we all agree on what comes first… 🙂
I think it’s truly amazing that these experiments were done within my lifetime. And that since then, we’ve sequenced the entire human genome.
The pace of scientific advancement is truly astonishing.
I well remember the enthusiasm with which the relatively new deciphering of the genetic code was taught to me in an undergrad bio course some time around ’69–’70. It’s indeed been quite the era to live through. Thanks for triggering that reflection.
This is great stuff. Why are our molecular biology textbooks so sterile, impotent, and whitewashed? They rarely include these classic experiments in detail. I find that my students really learn a lot from delving into these experiments and looking at the raw data; very few things in molecular biology stir the imagination and spur critical thinking like these experiments.
As a genomicist (in training), I have to take exception to the lead-in to this post. You seem to be implying that experiments that require a higher level of statistical analysis – where the results are not necessarily intuitive – are somehow less good or less valid. If you have a way to do linkage mapping and haplotype phase imputation with less than moderately heavy statistics, please let me know!
What kind of philosophy is that? It would seem to limit a lot of the experiments you’re willing to do and teach. Is this how most evolutionary biologists think?
Excuse me, but this is analogous to accommodationist reasoning. =D You can have different strategies for the short and the long term.
Physicists like Tegmark and mathematicians like Tao work in isolation at times, and tend to switch between a few strategies. Tegmark has mentioned doing one piece of speculative work for every four down-to-earth ones.
Tao describes mathematicians switching between tool-driven exploration and targeted exploration: that is, having developed a mathematical tool set and using it on every problem that looks like a nail for that hammer, vs. tackling very hard problems that may not yield easily (akin to Tegmark’s speculative work: high risk, possibly high return).
Then if you have a lab, that adds another level of strategy. I can see why you would want to go for clear, demonstrative work that adds knowledge and transfers easily before going into details. (Details which, I’m sure, are far more numerous, and more hazardous for progress, in biology than in physics and math.)
I tend to look at it the other way. If it works fruitfully it is good science; no reason to worry about unobserved unknowns.
Non-scientist here. As an ignorant peasant I like clear answers to simple questions. I realize that this is not often possible in science however. I would hazard that JC is not saying that those experiments are less valid or good. Are we not talking about adding layers of complexity which can mean different analyses are possible or the weight is more on interpretation? (I am asking you, not telling you!)
I think what’s complicated is the quantity of data generated by many every-day experiments in molecular biology. Statistics are required for any hope of interpretation.
Evolutionary biology is not my background, and Coyne is one of the most famous evolutionary biologists alive, so I’m surprised, and a little skeptical, that statistics plays such a small role in that field.
Formulated like that, it is easier to agree.
One of my courses in grad school was a journal club in which we discussed classic papers in science. Of course the M&S semi-conservative DNA replication was in there. My favorite was the one by (I think) Monod and Jacob that demonstrated the RNA intermediate between DNA and protein.
So I looked it up, and the one I was thinking of was the Brenner, Jacob and Meselson paper “An unstable intermediate carrying information from genes to ribosomes for protein synthesis.” 1961. Nature 190:576–581.
Monod and Jacob had the Lac Operon paper. It’s tough to keep these things straight in my mind.
I teach these classic experiments in our intro biology class, too — it’s much better to teach how we know how DNA replicates than to just flatly teach them how DNA replicates. It’s an important distinction.
I also have to Kwok a little bit: Frank Stahl was one of my Ph.D. advisors. Cool science, but also a very good guy.
Okay, you owe me a Leica rangefinder camera for that episode of namedropping.
To give a little more historical perspective to Kevin’s #6 comment, consider this. Around 1900, Carl Correns, Hugo de Vries, and Erich von Tschermak independently did genetic experiments, and then discovered that Gregor Mendel had scooped them. I call them, respectfully, the midwives of Mendelism, because they gave full credit to Mendel and reintroduced his work to the scientific world.
Although Correns and de Vries died in the 1930s, Tschermak lived until 1962. He rediscovered Mendel, and lived to see the cracking of the genetic code. That is rather remarkable.
Who has the details of the simple experiment using ¹⁸O to show that the oxygen released by photosynthesis comes from H₂O and not from CO₂?
Who did the experiment with bacterial colonies on a petri dish and showed that antibiotic resistance is present before the bacteria are challenged with the antibiotic? I think it was a husband and wife team.
The second experiment was done by Esther and Joshua Lederberg; it’s described here.
My admiration of Joshua Lederberg knows no bounds (talk about chance favoring a prepared mind!), but weren’t Lederberg and Lederberg just using replica plating to demonstrate what Luria and Delbrück (Genetics 28:491, 1943) had already established with phage resistant mutants?
I believe it shows the same thing the other way ’round. The bacteria were cultured in media with the heavy isotope, then transferred to medium with the light isotope, followed by periodic DNA extraction. I think the figure shows this.
Yep, thanks. I started the description right (all heavy DNA) but then got balled up and got things backwards. I’ve fixed the description so it’s completely correct now (I think!). Thanks!
It’s still not right. I believe it should read:
See how a “heavy” (high-weight) band starts giving rise to a lighter band within one generation (a DNA molecule that’s half the original “heavy” one and half the new “light” one).