An extract from Matthew’s new book

In a comment on this morning’s Hili post, reader “Snake” called attention to the publication in today’s Guardian of “The Long Read”, which happens to be a nice big extract from Matthew’s new book, The Idea of the Brain: The Past and Future of Neuroscience. The book will be out March 12 in the UK and April 21 in the U.S. Here’s the U.S. version (click to go to Amazon site):

As I said before, I read the book in galleys and recommend it highly. It’s a history of ideas about how the brain works, starting from the ancient Greeks and proceeding to today. It’s more a history of science combined with science than a pure scientific discussion of the brain. It turns out that in each era, scientists derived their ideas about how the brain works from the technology of their day, ergo the title of the extract below.

And, as Matthew notes, we still know very little about how the brain works, though he’s convinced that knowledge will accrue slowly. He also has no patience for panpsychism—the idea that consciousness is somehow inherent in each particle of matter rather than a phenomenon that arises when the brain reaches a certain level of complexity.

Here’s my blurb on Amazon (it’s on the cover too):

“In this engrossing book, Matthew Cobb deftly recounts the tortuous history of research on the brain, in which researchers pursue the hard problems of memory, consciousness, and volition, always limited by forced comparisons between human brains and the machines available at the time. A work of history and deep scholarship, but written in an engaging and lively way, The Idea of the Brain is optimistic about the recursive attempts of our brains to understand themselves, yet reminds us that the three most important words in science are, ‘We don’t know.’”―Jerry Coyne, author of Why Evolution is True

Read the extract by clicking on the screenshot below. I’ll give just one brief bit:

This is in fact the last paragraph of the book. I like the last one-word sentence:

There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that. Or –


  1. ThyroidPlanet
    Posted February 27, 2020 at 9:55 am | Permalink

    Excellent- clicks all around

  2. Posted February 27, 2020 at 10:09 am | Permalink

    Thanks Jerry! That last word got edited out by an over-zealous sub in the print edition, and in the early on-line versions until I complained! – MC

    • Posted February 27, 2020 at 11:42 am | Permalink

Not reading that article as I don’t want to know how it ends! 🙂

    • Posted February 27, 2020 at 12:22 pm | Permalink

      I have pre-ordered your book Matthew and am very much looking forward to it.

      I thoroughly enjoyed Life’s Greatest Secret.

  3. TJR
    Posted February 27, 2020 at 10:19 am | Permalink

    That last word reminds me of the end of the acknowledgements section for one of Bernhard Flury’s books (Common Principal Components I think).

    After all the usual bits thanking everyone, the last sentence is:

    Needless to say

  4. Posted February 27, 2020 at 10:58 am | Permalink

    One thing I think we can be sure of: So-called “strong emergence” does not occur and, on the contrary, the workings of the brain CAN be explained, at least in principle, “by the activity of individual components.” To doubt this is to verge on woo.

  5. JohnH
    Posted February 27, 2020 at 11:04 am | Permalink

To continue with the optimistic slant of Matthew’s book, the only word I would add to Jerry’s review would be “Yet.”

  6. Posted February 27, 2020 at 11:25 am | Permalink

    Thanks … the book should be super interesting … Again, evidence suggests that the brain, especially for humans, is a super, chaotic memory device … explaining dreams, language, philosophy, culture, politics, our tragic history, etc.

  7. Posted February 27, 2020 at 1:13 pm | Permalink

    “By viewing the brain as a computer that passively responds to inputs and processes data”

    Why should a computer be passive?

I don’t regard “brain as computer” metaphorically either – I take it absolutely literally. It is also true that the brain has noncomputational components (the endocrine system, to which it is linked, for example). Of course, the modularization also suggests that the number is wrong – brain as lots of computers. (That’s what the ANN research basically shows.)

As for strong/weak emergence, Mario Bunge points out something that I think is overlooked: boundary conditions. You need *those* in order to do the prediction as much as the law (statements) in question.

    “It implies that our minds are somehow floating about in our brains”

I have heard this from Bunge – and this is where we disagree. No, it doesn’t. The view is that the state (and allowable state transitions) is what matters, not what the state is *of*. It is quite possible that the relevant state spaces of various human brains are not compatible enough to easily turn a “blank” brain into a copy (in any case that would kill the original “occupant”). What is also possible is that the relevant total state includes the noncomputational factors, in which case duplication strictly speaking is impossible. There’s no mind independent of *all* bodies (inc. brain) – and with that we agree – but it does not follow that any specific body component is essential. For example, we have prosthetic brain parts already – inner ear hearing aids, for example. What is different here?

    As for the 6507, that’s quite right. We may not *know* computationally how a lot of the brain (and nervous system) works, but that’s an epistemic thing. It says nothing about the ontological question.

    “Brains are natural, evolved phenomena, not digital devices.”

    This to me is a non sequitur. I would say they are *both*. (They may also be in a way analogue.)

    Thanks for the stimulating article – I look forward to the book!

  9. Cval
    Posted February 28, 2020 at 7:31 pm | Permalink

    “But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage.”

    Possibly, or possibly not. The current guesses at the effective total information processing capacity of the human brain put it roughly in the exaflop range, a shade above the current generation of supercomputers. Admittedly these are just guesses.

    Incidentally, Hans Moravec once came up with a preposterously low estimate of 100 teraflops–the method he used to arrive at it (extrapolating retinal performance to the rest of the brain) was plausible, but ISTM he drastically lowballed the performance of the retina by equating it with a machine vision system capable of processing a 1 megapixel field of view at 10 frames per second.

A more biologically realistic target like 20 megapixels at 1000 frames per second (reflecting the ~1 ms response time of the underlying nerves rather than the ~100 ms response time of cone photopigments) would put it up into the same range as the other estimates.
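The rescaling argument in this comment is easy to check. A minimal sketch, using only the commenter’s own figures (Moravec’s 100-teraflop estimate and the two retinal throughput assumptions — none of these are measured values):

```python
# Moravec-style extrapolation: rescale his whole-brain estimate by the
# ratio of assumed retinal throughputs (pixels/s). All figures are the
# commenter's assumptions, not measurements.

moravec_estimate_tflops = 100          # Moravec's whole-brain figure, teraflops
moravec_retina = 1e6 * 10              # 1 megapixel at 10 frames/s
realistic_retina = 20e6 * 1000         # 20 megapixels at 1000 frames/s

scale = realistic_retina / moravec_retina          # throughput ratio
revised_tflops = moravec_estimate_tflops * scale   # rescaled estimate

print(scale)                  # 2000.0
print(revised_tflops / 1e6)   # 0.2  -- i.e. ~0.2 exaflops
```

A 2000-fold correction to the retinal assumption lifts the 100-teraflop figure to roughly 0.2 exaflops, which is indeed in the same ballpark as the exaflop-range guesses mentioned above.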

    • Posted February 29, 2020 at 10:16 am | Permalink

      “But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage.”

      I think this is true but what we really don’t know is the ratio of complexity to functionality. How much of the brain’s complexity doesn’t contribute to its core functionality? After all, every neuron is a complex biochemical machine in its own right but presumably most of its mechanisms only serve to keep it alive rather than process information.

      Birds can fly and are obviously complex while planes also fly and are much less complicated. It seems likely that comparisons between a brain and some eventual capable artificial intelligence will be similar.
