The Nobel Prizes for Chemistry and for Physics

October 9, 2024 • 9:00 am

Well, I missed a day, but the other two Nobel Prizes in science—Chemistry and Physics—were awarded.

The Chemistry Prize, well deserved since I know about the work, went to three people: David Baker (University of Washington), Demis Hassabis (“a British computer scientist and artificial intelligence researcher”), and John M. Jumper (“an American senior research scientist at DeepMind Technologies”) for both designing proteins and predicting their three-dimensional structure simply from the sequence of amino acids—an endeavor that had largely defied previous attempts. Now you can feed the AA sequence into a computer and, lo, get the structure. And the 3D structure is immensely important in understanding protein function and figuring out how to modify proteins (and hence DNA) to act in different ways. From the Nobel Press release:

They cracked the code for proteins’ amazing structures

The Nobel Prize in Chemistry 2024 is about proteins, life’s ingenious chemical tools. David Baker has succeeded with the almost impossible feat of building entirely new kinds of proteins. Demis Hassabis and John Jumper have developed an AI model to solve a 50-year-old problem: predicting proteins’ complex structures. These discoveries hold enormous potential.

The diversity of life testifies to proteins’ amazing capacity as chemical tools. They control and drive all the chemical reactions that together are the basis of life. Proteins also function as hormones, signal substances, antibodies and the building blocks of different tissues.

“One of the discoveries being recognised this year concerns the construction of spectacular proteins. The other is about fulfilling a 50-year-old dream: predicting protein structures from their amino acid sequences. Both of these discoveries open up vast possibilities,” says Heiner Linke, Chair of the Nobel Committee for Chemistry.

Proteins generally consist of 20 different amino acids, which can be described as life’s building blocks. In 2003, David Baker succeeded in using these blocks to design a new protein that was unlike any other protein. Since then, his research group has produced one imaginative protein creation after another, including proteins that can be used as pharmaceuticals, vaccines, nanomaterials and tiny sensors.

The second discovery concerns the prediction of protein structures. In proteins, amino acids are linked together in long strings that fold up to make a three-dimensional structure, which is decisive for the protein’s function. Since the 1970s, researchers had tried to predict protein structures from amino acid sequences, but this was notoriously difficult. However, four years ago, there was a stunning breakthrough.

In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.

Life could not exist without proteins. That we can now predict protein structures and design our own proteins confers the greatest benefit to humankind.

Reader Simon found two tweets from the AlphaFold program showing how the protein structures come out when the amino acid sequence is fed in.

And a petulant tweet by Oded Rechavi (I think it’s an unfair comparison).

And this year’s Nobel Prize in Physics went to John Hopfield (emeritus professor at Princeton) and Geoffrey Hinton (emeritus professor at Toronto) who together developed models for neural networks of the kind used in the recent set of papers on decoding the fly brain.  From the press release:

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
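
To make the press release’s description concrete, here is a toy Hopfield network in Python. This is a minimal sketch: the patterns, sizes, and update schedule are my own illustrative choices, not anything from Hopfield’s papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 25-"pixel" patterns stored as +1/-1 values (the "atomic spins").
patterns = np.array([
    [1] * 13 + [-1] * 12,
    ([1, -1] * 13)[:25],
])

# Hebbian storage: strengthen the connection between two nodes when they
# take the same value in a stored pattern, weaken it when they differ.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def energy(s):
    # Spin-system-style energy: the stored patterns sit in low-energy valleys.
    return -0.5 * s @ W @ s

def recall(s, sweeps=5):
    # Work through the nodes one at a time, aligning each with its local
    # field; each update keeps the energy the same or lowers it.
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Feed the network a corrupted copy of a stored image...
noisy = patterns[0].copy()
noisy[:5] *= -1
# ...and it walks downhill in energy to the nearest saved pattern.
restored = recall(noisy)
```

Nowhere does the network keep a list of stored images; only the connection weights remain, yet `recall` recovers the saved pattern most like the corrupted input.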

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
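
And a sketch of the training idea in the same spirit. Note the hedges: this is a "restricted" Boltzmann machine trained with Hinton’s later one-step contrastive-divergence shortcut, not the original 1980s algorithm, and the toy data and layer sizes are my own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training set: two binary patterns the machine should come to
# regard as "very likely to arise when the machine is run".
data = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]], dtype=float)

n_visible, n_hidden = 4, 2
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases
lr = 0.1

for _ in range(2000):
    for v0 in data:
        # Positive phase: hidden activity driven by a training example.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # Negative phase: one Gibbs step of what the machine generates
        # on its own.
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # Contrastive divergence: make the data more probable and the
        # machine's own samples less probable.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

def reconstruct(v):
    # One up-down pass: the trained machine pulls an input toward
    # whichever characteristic pattern it resembles.
    ph = sigmoid(v @ W + b_h)
    return sigmoid(ph @ W.T + b_v)
```

After training, `reconstruct` maps an input resembling one stored mode back toward that mode rather than the other; the same machinery, scaled up, is what classifies images or generates new examples of a learned pattern.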

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

Both prizes show the power of AI, but it wasn’t AI that decided to tackle the chemistry and physics problems; rather, AI was a tool that people used to solve important scientific questions.

And we have a (sort-of) winner. Though nobody guessed the Physics winners, reader Luke correctly guessed two of the three Chemistry winners (he gave only two names, Jumper and Hassabis, but I’ll let the absence of a third winner slide), and so wins an autographed book. I ask Luke to get in touch with me to obtain his prize.

18 thoughts on “The Nobel Prizes for Chemistry and for Physics”

  1. Nobel press release:

    “They cracked the code for proteins’ amazing structures”

    Rewrite->

    “They cracked the code for” [the predict(ion) of the structure of virtually all the 200 million proteins that researchers have identified] [that generally contain at least 20 amino acids]

    … so somehow not every experimentally determined protein structure that has been exponentially accumulating in the Protein Data Bank (PDB) since the late 70s has been predicted from only the “20” amino acids.

    More than just 20 amino acids appear in any given experimental protein structure in the PDB: there are also chaperones, nucleic acids, antibodies, post-translational modifications, prosthetic groups, cofactors, agonists/antagonists, inhibitors, and so on, all of which went into the training set the prediction program used to begin with (though less so the Nuclear Magnetic Resonance structures, which are also abundant in the PDB, along with cryo-EM). All of these structures were experimentally determined, and yet “virtually all” structures were… predicted. Has time-machine learning been discovered as well?

    “If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong.”

    -Richard P. Feynman

    The Character of Physical Law (1965)

    Chapter 7
    “Seeking New Laws”, p.150 (Modern Library edition, 1994)

    1. A short response: think of deep learning models, to a zeroth-order approximation, as fancy curve fitting. The goal is to have enough data to get a curve that models the system, without fitting so closely that you capture the noise. If the final model allows correct interpolation and extrapolation (within reason), it “predicts”.

      In this case, there are physical laws that guide the folding. We don’t know them all, but the fit curve still fills in blanks and allows extrapolation (it has done well with proteins not in the database, some of which are fairly unique in some ways).

      ChatGPT, on the other hand, can’t really extrapolate. There are NO underlying rules guiding the data; there is only the data. It may also have trouble interpolating in many areas, due to sparse, noisy, or incorrect data.

      1. How was the result cross-validated – how large was the test set?

        Why are solvent-exposed loops and side chains poorly predicted? Because they are not experimentally observed to high enough precision, except in solution NMR structures, which capture the dynamics.

        Protein dynamics can explain life – not “the structure”.

        Protein dynamics are intrinsic to structure. Consider the Gleevec—tyrosine kinase ABL1 structure – a dynamic loop was stabilized by Gleevec. Did a computer program “predict” that experimental result?

        1. Got cut off: I ask about Gleevec in particular because (a) I am interested, but (b) also to illustrate how the “prediction” would be founded on experiment. So what would be the discovery in that case, and with what, namely … AlphaFold3? That is what I’m getting at.

          What precisely is the discovery, and where is the experiment?

          Also note how the press release avoids saying “protein folding”. THAT is what underlies all the structures, including folding chaperones and all the stuff experimental protein folders work on. And there was talk before by Nobel laureates about how the protein folding problem was “solved” by AlphaFold-n, but I’m not reading that now.

        2. “Consider the Gleevec—tyrosine kinase ABL1 structure – a dynamic loop was stabilized by Gleevec. Did a computer program “predict” that experimental result?”

          Back in the day, I worked on the BCR/ABL kinase that Gleevec targets. The answer is no, or rather, not really. There were some software packages then that predicted protein folding, but they were crude and only approximate.

          1. I remember the CASP results that produced immense excitement.

            So that’s the standard here: experimentalists produced some number of structures that AlphaFold genuinely blind-predicted far better than any other program, two years in a row (AlphaFold1, then AlphaFold2), and that’s in the long-form summary. Perhaps that set is excluded from all training, but they give no indication.

            It is nice that it notes: “The progress described above would not have been possible, of course, without the efforts from structural biologists in providing all the experimentally determined structures that have gone into the Protein Data Bank.”

  2. I can well understand the prize in Chemistry, since the technology to predict protein structure is a huge breakthrough. But I don’t understand the prize in Physics. AI technology is also a Very Big Event, but I don’t see what it has to do with advancing the field of Physics. Sabine Hossenfelder agrees with me. https://www.youtube.com/watch?v=dR1ncz-Lozc

    1. Inclined to agree with you, even before seeing Dr. Hossenfelder’s video. At least the guy who invented the blue LED cracked a previously intractable quantum energy problem.

      1. I think the issue is that the categories for the Prize are out of date, and the committee believed they had to recognize AI somewhere. There needs to be a reassessment of those categories in order to keep the Prize relevant to modern times.

    2. I learned about Hopfield’s work while doing my diploma thesis in a computational-physics group in the physics department of my university. Much of the theory of what neural networks can do and how to train them was developed by physicists, using mathematical tools that had been developed for spin glasses and other disordered systems. So… it may be at the outer edges of physics, but since there’s no Nobel for computer science, it’s the best the committee can do, and it’s not inappropriate IMO.

      1. Try as I might, I can’t see any justification for them winning the prize in physics. What truths did they uncover about the physical world? What will be written in physics textbooks as a result of their work? Nothing, as far as I can make out.

        They may have used mathematical tools and models developed and ordinarily used for physics, but they didn’t use those tools to do physics. They used them to do computer science, which is why their work has been written about in computer science textbooks. It doesn’t appear in physics textbooks.

        Computer science awards already exist, and the ACM Turing Award is widely recognised as the Nobel equivalent in computing. They both deserve a Turing Award (Hinton already won it in 2019 for this very work) because their work is truly groundbreaking. But a physics prize? No, and I say this as someone who works in computing/AI.

        The tools of stochastic/statistical physics are used extensively in financial modelling. Does that mean that investors who are manipulating these tools to make money are doing fundamental physics? Of course not.

        Mark Sturtevant made a good point that the committee probably feels it has to move with the times. There’s a lot of truth in that; the members likely fear the prize will lose relevance and credibility if they ignore the bandwagon that is AI. Mark’s right, but they are wrong. This seriously damages the credibility of the physics prize, in my humble opinion.

        I’m not entirely sure that the chemistry prize is strictly chemistry, but I think it deserves a Nobel prize. If nothing else, this all just goes to show that science is changing faster than many of us are ready for (me especially!).

        1. Claiming that work on neural networks will not appear in physics books is not accurate, seeing how there was a chapter on the topic in my “Computational Physics” textbook in the late 1990s. And most of our papers appeared in Phys. Rev. E., alongside other statistical physics topics.
          Admittedly, the topic is at the intersection of statistical physics and computer science. So what? I had a colleague who discovered a strong connection between NP-completeness and phase transitions. Are you going to tell him he shouldn’t call his work physics?
            As for finance, the people who make money using this stuff shouldn’t get the Nobel. If Black, Scholes and Merton had received it for physics, it would have been borderline but justifiable IMO, but there’s a Nobel for economics, which Scholes and Merton did receive, so the question is moot.

          1. I understand where you are coming from, and of course computational physics belongs in physics textbooks. However, my point was that this work specifically will at best be given a passing mention in said books. It has not advanced physics significantly, but it has revolutionised parts of computer science. Yet it has earned the Nobel in physics. It just doesn’t compute – if you’ll pardon the pun 😀. I don’t wish to diminish their achievements and I believe they deserve a Nobel prize or equivalent. But I don’t feel this prize has been awarded for physics and that makes me uncomfortable.

  3. For years, I’ve had Folding@home on my computer. It uses unused computing power to attempt to predict the folding of a protein. Does this mean it’s obsolete and I should delete it?

  4. I think that AlphaFold2 is very strong evidence against Intelligent Design, because it wouldn’t work if there were a designer.

Comments are closed.