Richard Gunderman at The Conversation: our ability to lie shows that the mind is physically independent of the brain (!)

December 27, 2016 • 9:00 am

UPDATE: As the first comment in the thread (by Coel) shows, I was correct in assuming there’s a religiosity to Gunderman’s argument: he’s a trustee of the Christian Theological Seminary. Further, someone who once knew him emailed me and described him as “ultra religious.”

________________________

The motto of the site The Conversation is “academic rigor, journalistic flair”, and it’s funded by an impressive roster of organizations, including the Bill and Melinda Gates Foundation (see bottom for the list). But there’s neither rigor nor flair in a recent article by Dr. Richard Gunderman, “Why you shouldn’t blame lying on the brain“. (According to his bio at the Radiological Society of North America, Gunderman is “a professor and vice chairman of the department of radiology at Indiana University, with faculty positions in pediatrics, medical education, philosophy, philanthropy, and liberal arts.” He’s also written both technical books on his field and popular books like We Make a Life by What We Give.)

Gunderman’s point, which completely baffles me, is that our ability to lie proves that we can actually override the material processes in our brain, and that “the human mind is not bound by the physical laws that scientists see at work in the brain.” In other words, lying proves that there’s a ghost in the machine—a non-physical aspect to our brains and behavior that gives us a form of dualistic free will.

Gunderman begins by noting that functional MRI (fMRI) scans of the brain have shown that lying can be detected by a decrease of activity of the amygdala, which to Gunderman suggests that “subjects may become desensitized to lying, thereby paving the way for future dishonesty.” I haven’t read the studies, so I have no idea whether that last conclusion has any support.

But Gunderman wants to dispel the notion that because lying can be seen as changes in brain activity, it must therefore be a product of the neurological/biochemical processes of brain activity. One section of his piece, called “brain not simply a machine,” argues that because the brain is complicated (with 100 billion neurons and 150 trillion synapses), and because it actually experiences the world through consciousness and emotionality, we can never find a physical basis for those things. Ergo, the brain isn’t a machine! Gunderman:

As Nobel laureate Charles Sherrington, one of the founders of modern neuroscience, famously declared, natural sciences such as physics and chemistry may bring us tantalizingly close to the threshold of thought, but it is precisely at this point that they “bid us ‘goodbye.’” The language of natural science is inadequate to account for human experience, including the experience of telling a lie.

Consider Mozart’s “A Little Serenade” or Rembrandt’s self-portraits. We can describe the former as horsehair rubbing across catgut, and we may account for the latter as nothing more than pigments applied to canvas, but in each case something vital is lost. As any reader of Shakespeare knows, a lie is something far richer than any pattern of brain activation.

This is a misguided argument, because you can’t describe the effects of Mozart as horsehair on catgut, and no scientist claims such a thing. The sounds have an emotional resonance in our brain after they enter it through our ears. That emotional resonance, of which we’re conscious, depends on our genes and environment: the factors that have built our brain. Just because we don’t understand how it all works manifestly does not mean that the brain isn’t a machine. It’s simply a machine whose workings we don’t fully understand.

Gunderman then makes a weird argument about why the brain is “not the mind” (well, the mind is really a product of the brain, just as a knee-jerk reflex is, but never mind). While admitting that we can change how the mind works by physical intervention, he still claims that there is a dualism in our thoughts not explicable by our brains:

A second dangerous misinterpretation that often arises from such reports is the notion that brain and mind are equivalent. To be sure, altering the chemistry and electrical activity of the brain can powerfully affect a person’s sensation, thought, and action – witness the occasionally remarkable effects of psychoactive drugs and electro-convulsive therapy.

But in much of human experience, the causal pathway works in the opposite direction, not from brain to mind, but mind to brain. We need look no further than the human imagination, from which all great works of art, literature and even natural science flow, to appreciate that something far more complex than altered synaptic chemistry is at work in choices about whether to be truthful.

What is the “mind” that is not part of the brain, then? Is it a soul or something not embodied in physical matter? As Mencken said of Thorstein Veblen, “What is the sweating professor trying to say?” Just because art springs from the imagination and perception (and one’s experiences) does NOT mean that all those things are not coded in the brain. In fact, you can efface many aspects of perception and imagination by destroying parts of the brain. Gunderman’s claim that works of art must reflect something more than synaptic chemistry is an unsupported assertion; in fact, the data so far show that he’s wrong. He really needs to specify exactly how he thinks the mind is independent of the brain; I’m puzzled that a doctor (unless he’s religious) can even make such a claim. If the mind is disconnected from the brain, how does imagination get from the mind to the brain? Is there a soul above it all?

But the most bizarre part of Gunderman’s article is that he sees lying as proof that the mind is independent of the brain:

In fact, our capacity to lie is one of the most powerful demonstrations of the fact that the human mind is not bound by the physical laws that scientists see at work in the brain. As Jonathan Swift puts it in “Gulliver’s Travels,” to lie is “to say the thing which is not,” perhaps as profound a testimony as we could wish for free will and the ability of the human mind to transcend physical laws.

In the Genesis creation story, it is after woman and man have tasted the fruit of the tree of the knowledge of good and evil and hidden their nakedness that God declares that “they have become like us.” To be able to lie is in a sense divine, implying a capacity to imagine reality as it is not yet. If used appropriately, this capacity can make the world a better place.

What is going on here, I think, is that Gunderman (whose quotes from scripture imply he’s religious, which may explain his dualism) is espousing a form of “free won’t.” That is, while our positive, truthful statements may reflect the activity of our brain, the fact that we can tell untruths somehow means that the Ghost in the Machine is overriding what we’d normally do. (Benjamin Libet, who did the first experiments showing that decisions can be predicted in the brain before they come to consciousness, didn’t believe in free will per se but did in “free won’t”: the idea that we can decide to change our minds by overriding a conscious decision.)

But this shows nothing of the sort. The decision about whether to be truthful is simply built into our neurons, often as an adaptive mechanism. (By “adaptive,” I mean something that we think will be good for us, not necessarily something that’s evolved—though I think Robert Trivers is right that a certain amount of deceit and self-deception are evolutionary adaptations.) When someone says, “Do I look too fat in these clothes?”, it’s to your benefit to say “no”. That can be a lie, but why on Earth does it show that the decision about how to answer is independent of the physics of our brain? If you say “yes,” is that just the product of our brain-machine?

Gunderman steps further into this argument—one that any sensible person can see through—at the end of his piece, when he simply makes his flat assertion again:

In reality, of course, lying is not the fault of the brain but the person to whom the brain belongs. When someone tells a lie, he or she is not merely incorrect but deceptive. People who lie are deliberately distorting the truth and misleading someone in hopes of gain, placing their purposes above the understanding and trust of the person to whom they lie.

Of course many of our truthful statements are made in hopes of gain, placing our own purposes above those of others. So how does the fact that we sometimes use deception prove dualism? Gunderman goes on (my emphasis):

Even in the era of functional neuro-imaging, there is no lie detector that can tell with certainty whether subjects are telling the truth. There is no truth serum that can force them to do so. At the core of every utterance is an act of moral discernment that we cannot entirely account for except to say that it reflects the character of the person who does it.

Lying is not a matter of physical law, but of moral injunction. It is less about chemistry than character. It reflects not merely what we regard as expedient in the moment but who we are at our core. Ironically, while it is less momentous to act well than to be good, we are in the end little more than the sum of all the moral compromises we have made or refused to make.

This is why we abhor the deceptive conduct of narcissists, crooks and politicians, and why we esteem so highly the characters of people who manage to tell the truth even when it is especially inconvenient to do so. Such acts are morally blameworthy or exemplary precisely because we recognize them as the products of human choice, not physical necessity.

Why does telling a lie show a nonphysicality of the mind in a way different from telling the truth? This is the main question, and Gunderman doesn’t answer it.

I think he’s straying into religious territory here. For every aspect of our character comes from our brain, whether we’re lying or not. And all data show that that character depends on the brain, for character can be profoundly altered by brain injuries, surgery, experience of the world, or drugs. Dragging in the claim “we are in the end little more than the sum of all the moral compromises we have made or refused to make” suggests a religious theme, one based on moral choice, which to many religionists means dualistic free will. If we don’t choose how we behave, but our brain chooses for us, what does “moral choice” even mean? Just because most people are dualists, and think that at any time we do have a choice about how we behave (we don’t), doesn’t mean we should accept conventional wisdom as reality. “Right” and “wrong” acts are to be praised and condemned for the good of society, but we shouldn’t accept the common notion that we could, at a given time, choose to behave either well or ill.

Gunderman’s argument is so tortured, so unsupported by evidence, that I suspect it’s motivated by religion. That’s just a guess, but anyone who drags in scripture and morality to prove that the mind is disconnected from the brain has to be working on premises that aren’t scientific.

[Photo: Richard Gunderman]

Funding partners for The Conversation, which funded Gunderman’s misguided essay:

[Screenshot: The Conversation’s list of funding partners]

h/t: jj

Patricia Churchland on the effects of neurobiology on criminal law

August 25, 2016 • 11:30 am

Scientific American has a new article, “20 big questions about the future of humanity“, in which twenty well-known scientists prognosticate about our collective fate. It’s not clear whether the questions were generated by the scientists themselves or by the magazine, but most of them, and the answers, don’t inspire me much. It’s not that I think the answers are bad; I just think that predictions of this sort—will sex become obsolete? will humans survive the next 500 years? when and where will we find extraterrestrial life?—are shots in the dark, and the answers not that enlightening. After all, the extraterrestrial question is simply a big fat unknown.

But one question and answer, called to my attention by reader John O., intrigued me for obvious reasons. The respondent is the well known philosopher Patricia Churchland. Here’s the question and her answer, and the bold bit in the answer is my own emphasis.

Will brain science change criminal law?

In all likelihood, the brain is a causal machine, in the sense that it goes from state to state as a function of antecedent conditions. The implications of this for criminal law are absolutely nil. For one thing, all mammals and birds have circuitry for self-control, which is modified through reinforcement learning (being rewarded for making good choices), especially in a social context. Criminal law is also about public safety and welfare. Even if we could identify circuitry unique to serial child rapists, for example, they could not just be allowed to go free, because they would be apt to repeat. Were we to conclude, regarding, say, Boston priest John Geoghan, who molested some 130 children, ‘It’s not his fault he has that brain, so let him go home,’ the result would undoubtedly be vigilante justice. And when rough justice takes the place of a criminal justice system rooted in years of making fair-minded law, things get very ugly very quickly.
     —Patricia Churchland, professor of philosophy and neuroscience at the University of California, San Diego

This seems to me both wrongheaded and very superficial, especially when you consider that punishment is part of criminal law. But at least she’s a determinist and a naturalist.  We can argue (not this time!) about what this means for conceptions of free will, but I think it’s almost a given that a philosophy involving determinism (either hard determinism or compatibilism) will have implications for criminal law different from those coming from a philosophy of dualism.  

That’s certainly the case in practice, for the concept of whether someone could have done otherwise, versus whether he was “compelled” by uncontrollable circumstances in a criminal situation, has played a big role in our judicial system. If you’re considered mentally incompetent, for example, or have a brain tumor that makes you aggressive, or don’t “know right from wrong”, your punishment can vary drastically. If you’re considered mentally ill, you may be hospitalized; if you do know “right from wrong” (even if your circumstances allow you to know it but not act on that knowledge) you will be put in a pretty bad prison situation; and if there are extenuating circumstances that may have influenced your behavior (like an abused woman killing her abuser), your sentence may be light—or you may even be set free.

Under determinism, nobody has a choice of how to act; in other words, there are always “extenuating circumstances” in the form of environmental and genetic factors that caused you to transgress. The way the justice system deals with these factors will, of course, differ from person to person; but it’s vitally important to realize that no criminal had a free choice about what he did. (I’m using “he” here since most criminals are male.) And we can’t deny that lots of punishments are based not on deterrence, rehabilitation, or public safety, but on pure retribution: a vile sentiment that presupposes that someone could have done otherwise.

Even Sean Carroll, a compatibilist, realizes the implications of neuroscience for our justice system. As I quoted him the other day from his new book The Big Picture:

To the extent that neuroscience becomes better and better at predicting what we will do without reference to our personal volition, it will be less and less appropriate to treat people as freely acting agents. Predestination will become part of our real world.

Now I’m not sure I agree with Sean that predicting behavior has anything to do with treating people as “freely acting agents,” for we already know that they’re not freely acting agents. Prediction has to do with your strategy for “punishing” the offender (it affects recidivism and public safety); perhaps that’s what Sean means, but it’s not clear.

Further, Churchland goes badly wrong when she thinks that determinism is solely about understanding why someone does something, and then exculpating them when we do. That’s ludicrous. We need to prevent an offender from reoffending if they’re freed, which means rehabilitation; we need to protect the public even if we do understand why someone commits a crime (what if their neurons make them psychopathic?); and we need to deter others by example from committing crimes. (Deterrence is certainly compatible with determinism: seeing someone get punished affects your brain, often making you less likely to transgress.) I have no idea how Churchland draws a connection between understanding the correlates of behavior and letting people go free, and then—vigilante justice! We already know that “criminal law is about public safety and welfare,” and no determinist thinks otherwise. Determinists are not a group of people hell-bent on freeing criminals!

At any rate, the more we learn about brain function, the more we’ll be able to understand those factors that compel people to behave in a certain way when faced with the appearance of choice. And when we know that, we’ll be better able to treat them. But as we learn more about the brain, my hope is that we will become less and less willing to punish people on the assumption that they made the “wrong choice”, that we will avoid retribution, and that we will begin to design a system of punishment that not only protects society and deters others, but, above all, fixes the problems, both social and neurological, that lead people to break the law.

 

Has the evolution of consciousness been explained?

July 27, 2016 • 11:00 am

Michael Graziano is a neuroscientist, a professor of psychology at Princeton University, and, on the side, writes novels for both children and adults. His speciality is the neurological and evolutionary basis of consciousness, about which he’s written several pieces at The Atlantic.

His June 6 piece, “A new theory explains how consciousness evolved“, attempts to trace how consciousness (which I take to be the phenomenon of self-awareness and agency) could arise through evolution. This is a very good question, although resolving it will ultimately require understanding the “hard problem” of consciousness—the very fact that we are self-aware and see ourselves as autonomous beings. We’re a long way from understanding that, though Graziano is working on the neuroscience as well as the evolution.

In the meantime, he’s proposed what he calls the “Attention Schema Theory,” or AST, which is a step-by-step tracing of how consciousness might have arisen via evolutionary changes in neuronal wiring. To do this, as Darwin did when trying to understand the stepwise evolution of the eye, you need to posit an adaptive advantage to each step that leads from primitive neuronal stimuli (like the “knee reflex”) to full-fledged consciousness of the human sort.

That, of course, is difficult. And we’re not even sure if the neuronal configurations that produced consciousness were really adaptive for that reason—that is, whether the phenomenon of consciousness was something that gave its early possessors a reproductive advantage over less conscious individuals. It’s possible that consciousness is simply an epiphenomenon—something that emerges when one’s brain has evolved to a certain level of complexity. If that were the case, we wouldn’t really need to explain the adaptive significance of consciousness itself, but only of the neural network that produced it as a byproduct.

Now I haven’t read Graziano’s scholarly publications about the AST; all I know is how he describes it in the Atlantic piece. But, as I’ve already said, if you’re describing some complex science in a popular article, at least the outline of that science should be comprehensible and make sense. And that’s what I find missing in the Atlantic article. Graziano lucidly describes the steps by which a lineage could become more complex in its sensory system, with each step possibly enhancing reproduction. But when he gets to the issue of consciousness itself—the phenomenon of self-awareness—he jumps the shark, or, rather, dodges the problem.

Here are the steps he sees in the AST, and when each step might have occurred in evolution.

1.) Simple acquisition of information through neurons or other sensory organs. This could have happened very early; after all, bacteria are able to detect gradients of light and chemicals, and they were around 3.5 billion years ago.

2.) “Selective signal enhancement,” the neuronal ability to pay attention to some environmental information at the expense of other information. If your neuronal pathways can compete, with the “winning signals” boosting your survival and reproduction, this kind of enhancement will be favored by selection. This will confer on animals the ability to adjudicate conflicting or competing signals, paying attention to the most important ones. Since arthropods but not simpler invertebrates can do this, Graziano suggests that this ability arose between 600 and 700 million years ago.

3.) A “centralized controller for attention” that could direct one’s “overt attention” among inputs from several different sensory systems (for example, you might want to go after the smell of food rather than toward the darkness, as in that moment it’s better to get food than to hide). This, says Graziano, is controlled by the part of the brain called the tectum, which evolved about 520 million years ago.

The tectum, Graziano adds, works by forming an “internal model” of all the different sensory inputs. As he says,

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.

I’m still not sure what this “internal model” is: the very term flirts with anthropomorphism. If it’s simply a neuronal system that prioritizes signals and feeds environmental information to the brain in an adaptive way, can we call that a “model” of anything? The use of that word, “model,” already implies that some kind of rudimentary consciousness is evolving, though of course such a “model” is perfectly capable of being programmed into a computer that lacks any consciousness.
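
To see what the engineering sense of “internal model” amounts to, here is a minimal sketch in Python. It is purely illustrative: the function names and numbers are my own invention, not anything from Graziano’s papers. The controller predicts the sensory consequence of a motor command and checks the prediction against what actually arrives.

```python
# Toy forward ("internal") model of gaze control; purely illustrative.
# The controller predicts how the visual scene should shift when the eyes move,
# then compares that prediction with the (simulated) actual input.

def predicted_shift(eye_movement_deg: float) -> float:
    """If the eyes rotate right by x degrees, the image should shift left by x degrees."""
    return -eye_movement_deg

def actual_shift(eye_movement_deg: float, disturbance: float = 0.0) -> float:
    """Simulated measurement; 'disturbance' stands in for noise or something else moving."""
    return -eye_movement_deg + disturbance

def movement_as_planned(eye_movement_deg: float, disturbance: float = 0.0, tol: float = 0.5) -> bool:
    """True if the prediction and the measurement agree to within a tolerance."""
    return abs(predicted_shift(eye_movement_deg) - actual_shift(eye_movement_deg, disturbance)) <= tol

if __name__ == "__main__":
    print(movement_as_planned(10.0))                   # True: the world shifted as predicted
    print(movement_as_planned(10.0, disturbance=3.0))  # False: mismatch; something unexpected happened
```

Nothing in that loop is aware of anything; it is bookkeeping of the kind any computer can do, which is just the point made above.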

4.) A mechanism for paying “covert” as well as “overt” attention. Covert attention is what we attend to in our brains without overtly directing our senses toward it. An example is focusing your hearing on a specific conversation nearby and ignoring extraneous sounds. Of course the very concept of “paying selective attention” sort of implies that we have some kind of consciousness, for who is doing the “paying”?

The part of the brain that controls covert attention, says Graziano, is the cortex. That evolved with the reptiles, about 300 million years ago.

And here’s where the problem with the article lies, for Graziano subtly, almost undetectably, says that with this innovation we’ve finally achieved consciousness. His argument is a bit tortuous, though. First he gives a thought experiment that implies cortex = consciousness, then undercuts that thought experiment by saying that it doesn’t really explain consciousness. He then reverses direction again, bringing consciousness back to center stage. It’s all very confusing, at least to me.

Here’s the part where consciousness comes into his piece. Graziano starts with crocodiles, which have a selectively attentive cortex, and describes a Gedankenexperiment that explicitly suggests consciousness:

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

But then Graziano takes it back, for he realizes that selective attention itself could be a property of neuronal networks, and doesn’t imply anything about the self-awareness and sense of “I” and “agency” that we call consciousness. (Note that the statement “I’ve got something intangible inside me” is an explicitly conscious thought.) But in denying the intangibility of consciousness, he simultaneously affirms its presence. Here’s where the rabbit comes out of the hat:

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.

But an “attention schema” isn’t consciousness, not in the way that we think of it. Nevertheless, Graziano blithely assumes that he’s given an adaptive scenario for the evolution of consciousness, an evolution that’s only enhanced because you also have to model the consciousness of others—what Dan Dennett calls “the intentional stance.” Graziano:

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

So here he’s finessed the difficulty of self-awareness by simply asserting that once you have mechanisms for providing both covert and overt attention, you have consciousness. I don’t agree (though of course I’ve read only this article). Why couldn’t a computer do exactly the same things, but without consciousness? In fact, computers already do those things, as in self-driving cars.

Graziano goes on to say that figuring out what other members of your species do, based on the notion that they have consciousness, is itself a sign of consciousness. And again I don’t agree. A computer can have an “intentional stance,” using a program and behavioral cues to direct its own behavior, without consciousness. The “hard problem”—that of self-awareness—has been circumvented, assumed without a good reason.

Graziano finishes by talking about semantic language, something that’s unique to humans and surely does require consciousness (I think! Maybe I’m wrong!). But that’s irrelevant, for the evolution of consciousness has already been assumed.

I admire Graziano for realizing that if consciousness, which is closely connected with our sense of agency and libertarian “free will”, evolved, there may be an adaptive explanation for it. He doesn’t consider that consciousness may be an epiphenomenon of neural complexity, which is possible.

I myself think consciousness and agency are indeed evolved traits, traits whose neuronal and evolutionary bases may elude our understanding for centuries. I take a purely evolutionary view rather than a neuroscientific view, for I’m not a neuroscientist. And using just evolution, one can think of several reasons why consciousness and agency might have been favored by selection. I won’t reiterate these here as I discuss them at the end of my “free will” lectures that you can find on the Internet.  And I always say that the problem of agency is unsolved. It still is, as it is for consciousness.

Graziano is making progress with the neuroscience, but the AST is still a long way from being a good theory of how consciousness evolved.

Does evolution lead us to perceive reality, or is it all an illusion?

July 21, 2016 • 11:15 am

Donald D. Hoffman is a highly respected Professor of Cognitive Science at the University of California at Irvine. He’s developed a “formal theory of conscious agents” that he describes in a new Atlantic article—or rather in an interview with Amanda Gefter called “The case against reality“. And for the life of me I can’t figure out what the man is trying to say. I haven’t read his more formal academic work, but if his theory is being presented in a public venue like the Atlantic, what he’s saying should be clear. Yet what I read is either unclear, or, when it’s clear, seems wrong. You should read the short interview yourself, but here are the points I take from it. It’s all a mess, and seems a bit like a gemisch of quantum woo, evolutionary misunderstandings, and postmodernism. If there’s a substantive and important point in the piece, I’ve been too dense to see it.

I’ve indented bits of the interview below. I address three claims.

1). There is no external reality. Quantum mechanics has proved that. Hoffman seems to think that because quantum mechanics has disproved local realism for some particles (that is, has disproved the claim that a photon or electron, for instance, has a certain nature and is in a certain place, regardless of whether we know it), it has disproved local realism for everything, including macro objects. What we see isn’t reality, or even an approximation of reality: it is all illusion molded by natural selection.

Experiment after experiment has shown—defying common sense—that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”

. . . Gefter: If snakes aren’t snakes and trains aren’t trains, what are they?

Hoffman: Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.

We’ll get to the “fitness consequences”—the selective pressures—in a minute, but I want to document Hoffman’s view of reality, which Gefter seems to accept:

Hoffman: . . . I have a space of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.

Gefter: But if there’s a W, are you saying there is an external world?

Hoffman: Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.

Gefter: The world is just other conscious agents?

Hoffman: I call it conscious realism: Objective reality is just conscious agents, just points of view. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.

And one more bit, showing that brains aren’t real, either:

Hoffman: The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true of apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.

. . . Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic. It’s that there’s no brain! Quantum mechanics says that classical objects—including brains—don’t exist.

I don’t know where to begin with this. First of all, just because a subatomic particle doesn’t have an intrinsic property until we measure it, and that property is dependent on how we measure it, doesn’t mean that macro objects don’t have properties or, as Hoffman implies, don’t exist. You can, after all, measure the momentum of a car, or of a stationary chair, with great accuracy. It seems to me that Hoffman is using a form of Chopra-ist woo here: claiming not only that certain claims about quantum mechanics extend all the way up to macro objects (yes, they do, but classical mechanics is adequate for macro phenomena, and that includes existence claims), but also that those objects don’t exist outside of consciousness. In fact, Hoffman claims, like Chopra, that the only real thing that exists is consciousness:

As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.

I find this problematic in several ways. If the brain is an illusion and doesn’t really exist, where does consciousness come from? After all, if we fiddle with the Object Formerly Known as the Brain, and ablate certain parts of it, or give it chemicals, then consciousness goes away. Further, all observers agree on that. Isn’t it strange, if reality is only an illusion constructed by our consciousness, that giving ketamine to a brain removes its consciousness? Why do we all still perceive the same objects then, and agree that the chemical has the same effect? Or, if the Moon is a figment of our consciousness (Chopra has maintained exactly that!), why, when some scientists observe a Rover landing on the Moon, do other scientists perceive exactly the same thing? After all, that concurrence couldn’t reflect anything molded by natural selection—our fitness doesn’t depend on Moon landings. Surely that must say something about an external reality.

But on to evolution:

2). We don’t perceive reality accurately because evolution provides us not with an accurate take on reality, but with a series of illusions that enhance our fitness [reproductive output]. Again, maybe I’m missing something, but if external reality is solely a result of our consciousness (which comes from a nonexistent brain), then why are we even subject to natural selection? That already seems contradictory, but perhaps I’m not understanding Hoffman. But what I do understand is his argument, a flawed one, for why evolution gives us a take on the world that doesn’t even come close to reality:

Gefter: People often use Darwinian evolution as an argument that our perceptions accurately reflect reality. They say, “Obviously we must be latching onto reality in some way because otherwise we would have been wiped out a long time ago. If I think I’m seeing a palm tree but it’s really a tiger, I’m in trouble.”

Hoffman: Right. The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible. But I think it is utterly false. It misunderstands the fundamental fact about evolution, which is that it’s about fitness functions—mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction. The mathematical physicist Chetan Prakash proved a theorem that I devised that says: According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness. Never.

Now I’ll admit right off the bat that natural selection occasionally confers traits that, in some circumstances, distort reality. We may, for example, have been selected to think that we’re brighter or better than we are, because having that illusion gives us a confidence and power that might enhance our fitness. As Steve Pinker has written:

. . . beliefs have a social as well as an inferential function: they reflect commitments of loyalty and solidarity to one’s coalition. People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true. Religious and ideological beliefs are obvious examples.

. . . publicly expressed beliefs advertise the intellectual virtuosity of the belief-holder, creating an incentive to craft clever and extravagant beliefs rather than just true ones. This explains much of what goes on in academia.

. . . the best liar is the one who believes his own lies. This favors a measure of self-deception about beliefs that concern the self…

And our everyday experience with objects can sometimes mislead us when evolved traits create optical illusions, like the famous “checker shadow illusion.” Further, we know that our sensory system is imperfect and limited by our biological constitution, so that we miss things that other creatures can see, like the ultraviolet patterns perceived by birds and butterflies. Finally, natural selection will foster our consciousness of those animals, those environmental factors, and those traits that have the highest potential effect on our fitness. But that’s not the same thing as saying that our consciousness actually molds the appearance of those organisms and traits.

But I maintain that, in general, natural selection will favor a fairly accurate take on reality (assuming there is a reality), because the more accurately we perceive nature, the higher fitness we will have. Hoffman gives one example where he says we’re selected to have illusions, but I don’t find it particularly convincing:

Gefter: You’ve done computer simulations to show this. Can you give an example?

Hoffman: Suppose in reality there’s a resource, like water, and you can quantify how much of it there is in an objective order—very little water, medium amount of water, a lot of water. Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness—in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality. Generically, in the real world, that will never be the case. Something much more natural is a bell curve—say, too little water you die of thirst, but too much water you drown, and only somewhere in between is good for survival. Now the fitness function doesn’t match the structure in the real world. And that’s enough to send truth to extinction.

I find that baffling. Why wouldn’t the fitness function be one like this: “we want enough water to drink, and to sate our band of hominins, but we don’t want to go jumping in huge ponds of water if we can’t swim.” I think Hoffman has gotten it all backwards: our consciousness doesn’t shape external reality to increase our fitness; rather, our fitness depends on accurately perceiving external reality. There are some exceptions. I’m convinced, for example, that natural selection molds our tastes, which are qualia, to conform to what’s good for us. That’s why we like fats and sweets so much. As I’ve always said, a rotting carcass probably tastes like heaven to a vulture. But in most cases we want to see things as they are, for our fitness depends on that. If we could mold reality through consciousness to match our fitness, we would be able to see all dangerous insects as highly visible: snakes and spiders, for example, would be perceived as bright red or orange. We should be able to evolve our color-vision system to enhance our fitness. But that’s a shade on the teleological side, and we just can’t do that.
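
For what it’s worth, the simulation Hoffman gestures at is easy to sketch. Here is a toy version in Python; it is my own hypothetical reconstruction, not his code, and the patch numbers and bell-shaped payoff are invented. One agent perceives the true amount of water in each patch, the other perceives only the payoff, and both are scored by the real (bell-shaped) fitness function.

```python
import math
import random

def fitness_bell(water: float, optimum: float = 50.0, width: float = 20.0) -> float:
    """Bell-shaped payoff: too little water and you die of thirst, too much and you drown."""
    return math.exp(-((water - optimum) / width) ** 2)

def choose_patch(patches, perceive):
    """Pick the patch whose perceived value is highest."""
    return max(patches, key=perceive)

random.seed(1)
trials = 10_000
truth_total = tuned_total = 0.0
for _ in range(trials):
    patches = [random.uniform(0, 100) for _ in range(3)]       # true water levels in three patches
    truth_pick = choose_patch(patches, perceive=lambda w: w)   # agent that "sees" the water itself
    tuned_pick = choose_patch(patches, perceive=fitness_bell)  # agent that "sees" only the payoff
    truth_total += fitness_bell(truth_pick)   # both are scored by the real, bell-shaped payoff
    tuned_total += fitness_bell(tuned_pick)

print(f"truth-seeing agent:  {truth_total / trials:.3f}")
print(f"fitness-tuned agent: {tuned_total / trials:.3f}")
```

Under the bell-shaped payoff the fitness-tuned perceiver comes out ahead; make the payoff rise steadily with water and the two agents choose identically. In other words, the result hangs entirely on the shape of the fitness function, which is the point at issue.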

Yes, many dangerous animals are cryptic. That proves that we can’t change our perception of reality willy-nilly—that there is an external reality out there (animals often enhance their own fitness by being cryptic, so they win the perception battle!).  We can’t often mold the way we see reality to match our fitness functions.

I won’t go on, as this is already too long, but I wanted to add one more claim by Hoffman.

3). Neuroscience hasn’t progressed because neuroscientists haven’t taken into account the quantum nature of brain function and neural activity. 

Gefter: It doesn’t seem like many people in neuroscience or philosophy of mind are thinking about fundamental physics. Do you think that’s been a stumbling block for those trying to understand consciousness?

Hoffman: I think it has been. Not only are they ignoring the progress in fundamental physics, they are often explicit about it. They’ll say openly that quantum physics is not relevant to the aspects of brain function that are causally involved in consciousness. They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers—spiking rates, connection strengths at synapses, perhaps dynamical properties as well. These are all very classical notions under Newtonian physics, where time is absolute and objects exist absolutely. And then [neuroscientists] are mystified as to why they don’t make progress. They don’t avail themselves of the incredible insights and breakthroughs that physics has made. Those insights are out there for us to use, and yet my field says, “We’ll stick with Newton, thank you. We’ll stay 300 years behind in our physics.”

There is by no means universal agreement that quantum-mechanical phenomena, as opposed to classical mechanics, are important on the level of brains and perception.  There are good arguments, in fact, that we can use simple classical mechanics—assuming an external reality—when doing neuroscience. At any rate, I would take issue with the claim that our failure to grasp quantum mechanics has impeded the study of consciousness, or has slowed progress in neuroscience.

If you can figure out a really important point in Hoffman’s article, do point it out below.

h/t: Peter

A scan of my brain (it’s pretty normal!)

June 26, 2016 • 1:30 pm

When I was in Los Angeles a week ago, I found myself hanging around some neuroscientists and neuropsychologists, and they persuaded me to have my brain scanned and analyzed: a “QEEG”. I had no idea what it involved, but it was completely painless. I simply donned this funny-looking hat that had 19 recording electrodes. The electrodes picked up electrical impulses from different parts of the brain, and those impulses can be combined and crunched to triangulate the activity of deeper parts of the brain.

I had no idea what I was getting into, but the 60-minute procedure, combined with a computer program that analyzed my brain waves, produced a lot of information. I should add that this procedure is often done by therapists as well as physicians, and can cost from $500 to several thousand dollars depending on the type of QEEG done. My procedure would have cost $1,000, so I was pleased to get a freebie. But I was also scared that I would find out my brain was abnormal!

The spike is not part of the apparatus, nor does it connote that I’m a pointy-headed intellectual:

My brain scan

Dr. Orli Peter, who is both a clinical psychologist and a neuropsychologist with a practice in Beverly Hills, explained the analysis to me:

There are several types of QEEG analyses, and we use SKIL – an advanced analysis program developed by UCLA professor Barry Sterman, a pioneer in research on clinical applications of neurofeedback, and his then-graduate student, David Kaiser.

She and David Kaiser are colleagues in her practice, and he did my actual brain scan and analyzed my brain activity using the program he helped develop.

I did the scan four ways. Two traditional ways: with my eyes closed and then with my eyes open, looking at a fixed image (a chair). And then two new ways, called the “Peter Test,” to pick up any unresolved alteration in brain functioning due to exposure to psychological trauma. “Trauma neuromarkers” have been identified via various neuroimaging techniques.

The analysis of David and Orli, summarized by the latter; I’ve put the take-home message in bold:

Just so you know, Brodmann area theta unity is analyzed in SKIL brainmapping. It is a measure of corticolimbic connectivity, an indirect measure of myelination and distribution of sub-cortically driven theta associated with cerebral maturation.

Nearly all regions in your brain show mature integration of limbic and cortical functioning. Your sensory sampling speed is at the slightly faster end of the speed shared with the majority of people, and consistent across regions, which is an indicator of healthy sensorimotor development. However, your frontal lobe shows excessive theta similarity, an indicator of primal (unmodulated) functioning in bilateral BA9 and BA47, and there is less theta similarity of the ACC and Broca’s areas, an indicator of inefficiency in functions served by these areas.

Here is a list of the types of functioning these regions are involved with.

BA 9 —hyperlimbic connectivity may impact cognitive flexibility and planning, being able to infer the intention of others, and empathy. Children who show poor attachment have poorer activation here. Recent studies have also shown this region is involved in social fairness, and excessive limbic functioning will result in a different sense of social justice than the dominant group.

BA 47 –more primal functioning in this region may reduce decision making and (again) being able to infer the intention of others, and to properly understand emotion (this hub has been shown to specifically relate to understanding emotion when communicated through prosody.)

Anterior Cingulate Cortex (ACC) — the ACC is a major hub that has connections to both the “emotional” limbic system and the “cognitive” prefrontal cortex. Poorer integration of the ACC is associated with poorer decision making because of increased difficulty in holding two conflicting ideas simultaneously and because of poorer error detection. Poorer connectivity is also associated with poorer emotional awareness and recognition of emotional cues.

Broca’s area is associated with sequencing and hierarchical categorization, a subset that influences language.

In sum, the overall view is that most regions of your brain are functioning very well, better than most, but your ability to make decisions, infer intention of others, understand emotion and share in perceptions of social justice is driven by more limbic processes, making behaviors that rely on these abilities more challenging or unique.

I take this to mean that I have the moral sense and the empathy of an early mammal!

The types of corticolimbic integration are converted into colors, and, I was told, the more green your brain areas are, the more “normal.” I was largely green, which greatly relieved me:

[Brain map: Brodmann-area overview, mostly green]

Re The Peter Test: I did not show any alteration in the functioning of my default-mode-network due to psychological trauma. In other words, there is no sign that I’ve been traumatized (this could either mean “never traumatized” or “traumatized and recovered from it”) which jibes pretty well with my own self-assessment.  

And here’s my list of sampling rates from the 19 electrodes. The explanation, from Orli, is below. Of course most of it is beyond me, but I’m sure some readers will understand:

[Table: dominant frequencies from the 19 electrodes]

Re the chart above:
Sampling rates are shown two ways: as a dominant-frequency table at 1/8-Hz sensitivity and as spectral entropy plots at 1-Hz sensitivity. The “overall” column is the peak from 1 to 45 Hz and can be ignored; this range will show artifact, delta, and pink-noise peaks. The sensory-information gating peak is typically between 7 and 14 Hz; that is the second column, and the one to pay attention to. This information is also represented in the spectral entropy plots. Here we can see the organization of frequency activity for each brain region (see first figure above).

The peak frequency around 10.75 Hz in most of your regions is calculated by tallying up frequency bins across the recording. In the “eyes closed” condition we typically see sinusoidal activity, and this is the primary speed of those sinusoidal waveforms. The waveforms are generated by the thalamocortical loop and reflect the rate of inhibition by the reticular thalamic nucleus, which sheaths most of the thalamus; this inhibition acts mainly on the thalamic relay of sensory information to the cortex. When there is little or no sensory stimulation, the thalamus goes into an idling speed, and this is the relaxed rate of sensory volleying to the cortex; i.e., our relaxed or default sensory sampling rate of the environment. This is not our maximum rate, just our default; we can sample and gate information to the cortex faster or slower than this, depending on the situation.
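
For the curious, “tallying up frequency bins across the recording” is the sort of thing a few lines of code can illustrate. Here is a minimal sketch in Python; it is a hypothetical illustration, not the SKIL software, and the synthetic trace and function names are mine. It estimates the dominant frequency in the 7–14 Hz gating band from one electrode’s signal.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, fs: float, band=(7.0, 14.0)) -> float:
    """Estimate the peak frequency within a band (default: the 7-14 Hz sensory-gating range).

    signal: 1-D EEG trace from one electrode
    fs:     sampling rate in Hz
    """
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[in_band][np.argmax(power[in_band])])

# Demo: a synthetic 60-second "eyes closed" trace with a 10.75 Hz rhythm plus noise.
fs = 256.0
t = np.arange(0, 60, 1.0 / fs)
trace = np.sin(2 * np.pi * 10.75 * t) + 0.5 * np.random.randn(t.size)
print(f"dominant frequency: {dominant_frequency(trace, fs):.2f} Hz")  # ~10.75
```

An actual QEEG analysis involves far more than this (artifact rejection, comparisons across regions and against normative data), but the basic peak-picking is of this kind.
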
I was grateful to get this analysis for free, and relieved that I’m not some kind of brain freak! If you’re in LA or traveling there, you can contact this email to get an appointment for your own SKIL EEG. Dr. Peter gives discounts to those who can verify financial need; and insurance can cover some of the cost as well.

A gynandromorph moth comes to the light – and tells a story about science

September 2, 2015 • 11:00 am

by Matthew Cobb

This tw**t popped up in my feed the other night, from “wildlife illustrator and invertebrate enthusiast” Richard Lewington [Richard has a website showing his art here]. Richard was running a moth trap in the night when he found this beauty:

If you look carefully, you can just see the male’s feathery antenna on the left; the female side presumably had a straighter antenna (these different shapes relate to the different functions – males have to detect female pheromones from far away; females primarily need to be able to detect food plants on which to lay their eggs). You can see this clearly in another example Richard tw**ted:

Gynandromorphs are mixtures of male and female, often occurring because of a developmental problem – we highlighted the potentially gynandromorph cardinal bird here three years ago. There is a link between birds and moths, in that both groups have an unusual form of sex determination. In mammals, females have identical sex chromosomes (XX) while males have one X and one Y chromosome – they can produce two kinds of gametes (X and Y sperm) and so are called the heterogametic sex. For reasons that are unclear, in birds and lepidoptera (moths and butterflies),  females are the heterogametic sex (to avoid confusion, their sex chromosomes are called Z and W; males in both groups are ZZ).

It seems probable that these moths are gynandromorphs because, at a very early stage of development – probably when a fertilised female ZW egg divided into two cells – one of the daughter cells ‘lost’ the W chromosome because of some glitch. The tissues that were produced by that cell were therefore ‘ZO’ – you need the W chromosome to be female, so the tissues became male. The sharp dividing line down the middle of the moths, and the ‘mirroring’ of sexually dimorphic external structures on either side, reinforce this interpretation.

There are many examples of gynandromorph lepidoptera on Google, which is probably a combination of people’s interest in these insects and the striking sexual dimorphism that exists in many species, making it easier to spot:

Image taken from here.

Here’s a photo of a gynandromorph gypsy moth, clearly showing the different shaped antennae (the male side is on the right):

Image taken from Jerry’s colleague Greg Dwyer.

As Jerry pointed out in his original cardinal post, those of us who work on the fly Drosophila (which, like us, has XX females and XY males) would occasionally see gynandromorphs in our stocks, although unless you are doing some funky genetics with sex-linked eye- or body-colour, male and female flies are not as different as the examples of the moths seen above. However, I do recall finding an apparently female fly with a male foreleg (male forelegs have ‘sex combs’ that are involved in sexual behaviour). Jerry’s explanation bears repeating:

In flies the sex is determined by the ratio of X chromosome to autosomes.  Flies, like all diploid species, have two copies of every autosome. If you also have two X chromosomes, you’re a female because the ratio of autosomes to Xs is 1:1. If you have one X chromosome and one Y chromosome, your ratio is 2:1 and you’re male.  The Y doesn’t matter here: if you lose a Y chromosome, and hence are XO, you still look like a male, although you’re sterile (the Y carries genes for making sperm).

So to get gynandromorphs in flies, all that has to happen is that one X chromosome gets lost in one cell when the initial cell in a female (XX) zygote divides in two. One half of the fly then becomes XX, the other XO, and the fly is split neatly down the middle, looking like the one below. But gynandromorphs don’t have to be “half and halfs”. X chromosomes can get lost at almost any stage of development, so flies can be a quarter male, have irregular patches of maleness, have just a few male cells, or even a male patch as small as a single bristle.

Way back in the day (i.e., the 1970s), making mosaic flies in which different patches of tissue are either male or female was the only tool we had for identifying which tissues were involved in controlling various behaviours. This was painstaking work pioneered by one of the greats of post-war science, the physicist-turned-molecular-geneticist-turned-behaviour-geneticist Seymour Benzer. [JAC: see my mini-post at bottom in which I used these methods for another purpose.]

Along with Yoshiki Hotta, Benzer was able not only to show tissue-level genetic control of behaviour, but also to show where in the embryo those tissues were determined, thereby constructing what he called a fate map of the action of a particular mutation. They adapted this technique from one of the founders of genetics, Alfred Sturtevant, who originally proposed it in 1929.

Here are some figures from Hotta and Benzer’s 1972 paper in Nature: ‘Mapping of behavior in Drosophila mosaics’. The first shows the range of mosaics that they produced – they were much more varied than the naturally occurring gynandromorphs because of the way they manipulated a special kind of X-chromosome in these flies, called a ring-X chromosome (known as X-R). This X-R chromosome could be lost at varying times in development, changing tissues from female (XX-R) to male (XO). The later the chromosome was lost, the more specific the tissues that would be male. By using a body-colour mutation on the X-chromosome, Hotta and Benzer could track from the outside of the fly which tissues were male and female, because they had different colours.

The top left fly in the figure apparently lost its X-R chromosome at the earliest stage of development, hence the straight line. As you can see, the effect doesn’t need to be symmetrical – if the chromosome is lost at a later stage, then a very specific part of the fly could be affected, such as the right wing in the top right fly (the left wing is still female).
[Figure from Hotta and Benzer (1972): the range of mosaic flies produced]

The second figure shows how they interpreted which parts of the fly embryo were involved in determining the behaviour of a mutation called hyperkinetic in which the fly shakes its legs when anaesthetised (this rather odd behaviour turned out to be of major importance, as it is produced by changes to the activity of ion channels in the fly’s neurons). Unsurprisingly, it appears that the hyperkinetic gene was exerting its influence in three separate regions (one for each of the fly’s pairs of legs), all of which are involved in producing the part of the fly’s nervous system that controls movement.
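As I understand the method, the fate-map “distances” in such figures rest on a simple piece of arithmetic: score many mosaic flies for which landmarks are male or female, and measure how often any two landmarks disagree. Here is a rough sketch with hypothetical scoring data (the landmark names and numbers are invented for illustration; this is not Hotta and Benzer’s actual dataset):

# Each dict scores one mosaic fly: True = male (XO) tissue, False = female (XX) tissue.
mosaics = [
    {"left_leg": True,  "right_leg": True,  "behaviour_focus": True},
    {"left_leg": False, "right_leg": True,  "behaviour_focus": True},
    {"left_leg": False, "right_leg": False, "behaviour_focus": False},
    {"left_leg": True,  "right_leg": False, "behaviour_focus": True},
]

def fate_map_distance(flies, a, b):
    """Fraction of mosaics in which landmarks a and b have different genotypes."""
    different = sum(1 for fly in flies if fly[a] != fly[b])
    return different / len(flies)

print(fate_map_distance(mosaics, "left_leg", "right_leg"))        # 0.5 with these toy data
print(fate_map_distance(mosaics, "left_leg", "behaviour_focus"))  # 0.25 with these toy data

The smaller that fraction, the closer two landmarks presumably sat in the early embryo, which is how a behavioural “focus” could be placed on the map relative to external structures.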

The arduous nature of the technique – it was not possible to predict which tissues would lose their X-R chromosome, and often no detectable change occurred – and the difficulty of identifying which tissues underneath the cuticle had changed sex, meant that it was not widely adopted. By the late 1980s the method had been overtaken by direct manipulation of genes and the tissues they are expressed in, but for many years it was cutting-edge science, available in only a few leading laboratories.

_______

Jerry’s addendum: I used gynandromorphs, and Benzer and Hotta’s ring-X stock, to determine where in the fly the female’s sex pheromone (a waxy substance on her cuticle that incites the males to court her and mate with her) resided.  As Matthew noted, that stock of flies, which still exists, is prone to losing X chromosomes when they’re contributed by a male parent. XX (female) zygotes fathered by those males often lose that X at different stages of development, producing patches of tissue that are XO and therefore male. You can tell which patches are male because the female’s X carries a recessive gene causing yellow body color, so male bits (XO) are yellow and female bits (XX, with one gene for normal coloration) are normally pigmented.

XX females have very different sex pheromones from XY and XO males, so by correlating which bits of a gynandromorph fly were male vs. female, and then extracting each fly’s sex pheromones with hexane and identifying the chemicals on a gas chromatograph, Ryan Oyama (an undergraduate student) and I were able to determine where in the fly’s body the sex pheromones were produced and/or sequestered. It turned out that this was in the cuticle of the abdomen only: flies with female heads, legs, or thoraxes but male abdomens produced only male pheromones. The amount of female pheromone was proportional to the amount of female tissue in the abdomen, at least as seen in the visible cuticle.
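The underlying analysis is essentially a correlation, and a minimal sketch of it looks like this (the numbers below are invented for illustration; they are not our published measurements):

import numpy as np

# Hypothetical data: female share of the abdominal cuticle vs. female pheromone recovered.
female_fraction  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
pheromone_amount = np.array([2.0, 110.0, 240.0, 350.0, 480.0])  # e.g. ng per fly (made up)

# Ordinary least-squares fit: amount ~ slope * fraction + intercept
slope, intercept = np.polyfit(female_fraction, pheromone_amount, 1)
r = np.corrcoef(female_fraction, pheromone_amount)[0, 1]

print(f"slope = {slope:.1f} per unit of female tissue, r = {r:.3f}")

A slope close to proportionality, with a high correlation, is the pattern consistent with the pheromone residing in (or just under) the abdominal cuticle itself.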

This correlated with behavioral observations, too, for when gynandromorphs were tested with normal males (always horny), those males courted gynandromorphs most vigorously when their abdomens were female.  (This could, of course, have been associated with behavior or morphology of those gynandromorphs rather than pheromones, so we needed to do the pheromone tests as well.) Later workers actually localized the pheromone-producing cells to a layer right below the abdominal cuticle, confirming our results.

We published our results in the Proceedings of the National Academy of Sciences (reference and free download below), and I thought it was a very clever way to use old genetic technology to study behavior and biochemistry. Sadly, the paper didn’t get much notice!

__________________

Coyne, J. A. and R. Oyama. 1995. Localization of pheromonal sexual dimorphism in Drosophila melanogaster and its effect on sexual isolation. Proc. Natl. Acad. Sci. USA 92:9505-9509.

The things rats dream about

June 30, 2015 • 10:15 am

by Grania Spingies

We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.

The Tempest (4.1.168-170)

I should preface this with my regular caveat: I-am-not-a-scientist, nor do I play one on TV. My level of expertise allows me to say only the rough equivalent of “Oh hey, this looks interesting.”

As a child I often used to watch my dogs dreaming. Clearly they were running, sometimes barking and huffing, sometimes panting. It used to fascinate me, and I wondered where in their heads they were running. Was it a field they knew? Were they alone or with companions? Were they chasing prey? Running for the fun of it? What does prey even look like to Canis lupus familiaris, who may never have met anything particularly prey-like in their modern suburban existence?

Once one of them barked so loudly in her dream that she startled herself and woke up with a jump. I’d never seen a Labrador look more sheepish than when her eyes met mine. Unfortunately there was no way to ask her what she had been seeing in her dreams.

But it seems that, remarkably, a team of scientists has had a glimpse of what rats dream about.

[Photo of a sleeping rat]
Not an actual lab rat

Kiona Smith-Strickland over at Discover Magazine writes about a new study in which a team recorded from rats’ brains and concluded that the animals dream about going to places they have seen but not yet explored. She explains the process:

First, researchers let rats explore a T-shaped track. The rats could run along the center of the T, but the arms were blocked by clear barriers. While the rats watched, researchers put food at the end of one arm. The rats could see the food and the route to it, but they couldn’t get there.

Then, when the rats were curled up in their cages afterwards, scientists measured their neuron firing. Their brain activity seemed to show them imagining a route through a place they hadn’t explored before. To confirm this, researchers then put the rats back into the maze, but this time without the barriers. As they explored the arm where they had previously seen the food, the rats’ place cells fired in the same pattern as they had during sleep.
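The key comparison is between the order in which place cells fired during sleep and the order in which their place fields were later crossed on the real run. A minimal sketch of that kind of check, with invented data and a simple rank correlation (this is my illustration, not the authors’ actual analysis), might look like this:

import numpy as np
from scipy.stats import spearmanr

# Hypothetical ranks for five place cells: their firing order in the sleep sequence
# ("preplay") and the order of their place-field peaks on the later run down the arm.
sleep_order = [1, 2, 3, 4, 5]
run_order   = [1, 3, 2, 4, 5]

rho, p = spearmanr(sleep_order, run_order)
print(f"rank correlation between preplay and run sequence: rho = {rho:.2f} (p = {p:.3f})")

A high rank correlation across many such sequences, compared against shuffled controls, is the sort of evidence that the sleeping brain was rehearsing the route in advance.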

Neuroscientist Hugo Spiers, who co-authored the study, notes:

People have talked in the past about these kind of replay and pre-play events as possibly being the substrates of dreams, but you can’t ask rats what they’re thinking or dreaming. There is that really interesting sense that we’re getting at the stuff of dreams, the stuff that goes on when you’re sleeping.

You can read the paper here:

Hippocampal place cells construct reward related sequences through unexplored space by H Freyja Ólafsdóttir, Caswell Barry, Aman B Saleem, Demis Hassabis, Hugo J Spiers

The Centrifuge Brain Project

March 10, 2015 • 3:40 pm

by Matthew Cobb

Have you ever wondered why children love going round and making themselves dizzy, and what might be the effect of all that centrifugal force on their brains? If you haven’t, never fear, because Dr Nick Laslowicz has been doing that for you, as outlined in this excellent brief film from 2011 called The Centrifuge Brain Project.

I think Dr Laslowicz is a close colleague of Dr Denzil Dexter, who has a rather similar research outlook:

 

h/t Simon Ings

First Nobel Prize of the year goes to three neuroscientists

October 6, 2014 • 5:24 am

Well, another year went by, and with sadness I must put my bottle of champagne back in the fridge (it’s well past its prime by now). According to CNN, the Karolinska Institutet announced this morning that the Nobel Prize for Physiology or Medicine went to three people: two Norwegians (a couple who work together) and an American working in England. Here they are:

[CNN photo of the three laureates]

It’s a sad state of affairs that I had heard neither of them nor of their discoveries, described by CNN as follows:

John O’Keefe, along with May-Britt Moser and Edvard Moser, discovered cells that form a positioning system in the brain — our hard-wired GPS.

Those cells mark our position, navigate where we’re going and help us remember it all, so that we can repeat our trips, the Nobel Assembly said in a statement.

Their research could also prove useful in Alzheimer’s research, because of the parts of the brain those cells lie in — the hippocampus and the entorhinal cortex.

Humans and other mammals have two hippocampi, which lie in the inner core of the bottom of the brain and are responsible for memory and orientation. The entorhinal cortices share these functions and connect the hippocampi with the huge neocortex, the bulk of our gray matter.

In Alzheimer’s patients, those two brain components break down early on, causing sufferers to get lost more easily. Understanding how the brain’s GPS works may help scientists in the future understand how this disorientation occurs.

The research is also important, because it pinpoints “a cellular basis for higher cognitive function,” the Nobel Assembly said.

The scientists conducted their research on rats, but other research on humans indicates that we have these same cells.

I’m not sure how overblown the Alzheimer’s implications are; perhaps a reader could tell us, or further describe the research. Remember that this prize, the only one explicitly designated for biology, is supposed to go for insights that improve human welfare. In practice that’s not always the case, as prizes have been given for fundamental breakthroughs in non-health-related work (viz. T. H. Morgan’s prize for genetic work in Drosophila or Axel and Buck’s 2004 prize for work on olfactory receptors), but one can always argue that such work has potential implications for humans, as Morgan’s indeed did.

The physics prize will be announced tomorrow, the chemistry prize Wednesday, the peace prize on Friday, and the economics “prize” (not really a Nobel Prize) will go to a University of Chicago Professor, as always, a week from today (Oct. 13). The prize for literature will be announced at an unspecified date.

Would anybody care to guess the recipients? If you get two out of the five, you’ll get an autographed copy of WEIT with a Nobel-winning cat (sporting the medal) drawn in it. You can guess all five if you wish. The deadline is today at 5 p.m., and there’s one guess per customer. First correct answer wins.  (p.s.: our panel of expert judges is looking at the “cat vs. dog breed” answers.)

E. O. Wilson on free will

August 20, 2014 • 10:24 am

Ed Wilson has finally decided to wade into the murky hinterlands of Consciousness and Free Will, as seen in a new article in Harper’s called “On free will: and how the brain is like a colony of ants.” (Sadly, you can’t read more than a paragraph without paying.)  I’ll quote from the pdf I have, but, in general, the article adds little to the debate about free will, which to me seems largely semantic. The real issue—the one that could substantially affect society—is that of determinism, which most philosophers and scientists agree on (i.e., we can’t make choices outside of those already determined by the laws of physics).

There are two problems with Wilson’s piece: it doesn’t say anything new, its main point being that consciousness and choice are physical phenomena determined by events in the brain, and it doesn’t define the subject of the piece, “free will.” How can you discuss that when you don’t tell people what it means? After all, for religious people (and most others, I suspect) it means one thing (libertarian free will), while for compatibilists like Dennett it means another (no libertarian free will, but something else we can call free will).

So here’s Wilson’s tacit admission of determinism, or at least of the physical basis of consciousness and “free will”:

If consciousness has a material basis, can the same be true for free will? Put another way: What, if anything, in the manifold activities of the brain could possibly pull away from the brain’s machinery to create scenarios and make decisions of its own? The answer is, of course, the self. And what would that be? Where is it? The self does not exist as a paranormal being living on its own within the brain. It is, instead, the central dramatic character of the confabulated scenarios. In these stories, it is always on center stage—if not as participant, then as observer and commentator— because that is where all of the sensory information arrives and is integrated. The stories that compose the conscious mind cannot be taken away from the mind’s physical neurobiological system, which serves as script writer, director, and cast combined. The self, despite the illusion of its independence created in the scenarios, is part of the anatomy and physiology of the body.

And here’s what I take to be Wilson’s tacit admission, though he’s never explicit about it, that “free will” is a mental illusion, since it reflects not conscious choice but unconscious brain processes. There’s a lot more to be said here, but Wilson doesn’t say anything beyond this one sentence:

 A choice is made in the unconscious centers of the brain, recent studies tell us, several seconds before the decision arrives in the conscious part.

But one novel part of his piece is reflected in the subtitle: an analogy between mental activity and colonies of social insects. Each insect is basically a little computer programmed to do a job, with its task sometimes changing with the environment (bee larvae destined to be workers, for instance, can become queens with some special feeding). But if you look at the whole colony, it appears as a well-oiled “superorganism” that works together to keep the colony functioning like a “designed” unit.  Wilson sees the brain in the same way: each “module” or neuron is entrained to behave in a certain way, but the disparate parts come together in a whole that is the “I,” the person who feels she’s the object and (as G.W. Bush might put it) “the decider.” But this analogy isn’t terribly enlightening, and doesn’t point the way forward to a scientific understanding of consciousness. That understanding will come through reductionist analysis, I think, but we already knew that.

Wilson is a physicalist, and says that progress in understanding consciousness and volition (I won’t call it “free will”) will come not from philosophers but from neuroscientists. In the main I agree, though I do think philosophers have a role to play, if only that of holding scientists to some kind of consistency and conceptual rigor. By and large, however, I see compatibilist philosophers as not only having contributed little to the issue, but having sometimes been obfuscatory by sweeping determinism (the truly important issue) under the rug in favor of displaying their own version of compatibilism.

At one point, though, Wilson appears to abandon determinism, but makes the mistake of conflating “chance,” which is simply determined phenomena that we can’t predict, with true unpredictability: that which we see in the realm of quantum physics. Perhaps in the statement below he’s saying that human volition isn’t repeatable or predictable because of such quantum phenomena, which could make decisions differ even if one replayed the tape of one’s life with every molecule starting in the same position. But Wilson could have been much clearer about this.

. . . Then there is the element of chance. The body and brain are made up of legions of communicating cells, which shift in discordant patterns that cannot even be imagined by the conscious minds they compose. The cells are bombarded every instant by outside stimuli unpredictable by human intelligence. Any one of these events can entrain a cascade of changes in local neural patterns, and scenarios of individual minds changed by them are all but infinite in detail. The content is dynamic, changing instant to instant in accordance with the unique history and physiology of the individual.

Well, does that give us “free will” or not? Does it give us truly unpredictable behavior, even in principle? Wilson doesn’t say.

In the end, Wilson bails, floating the common but unsatisfying conclusion that we have free will because we think we have free will, and that the illusion of (libertarian) free will is adaptive.

. . . Because the individual mind cannot be fully described by itself or by any separate researcher, the self—celebrated star player in the scenarios of consciousness—can go on passionately believing in its independence and free will. And that is a very fortunate Darwinian circumstance. Confidence in free will is biologically adaptive. Without it, the conscious mind, at best a fragile, dark window on the real world, would be cursed by fatalism. Like a prisoner serving a life sentence in solitary confinement, deprived of any freedom to explore and starving for surprise, it would deteriorate.

So, does free will exist? Yes, if not in ultimate reality, then at least in the operational sense necessary for sanity and thereby for the perpetuation of the human species.

Wilson is right in saying that we all act as if we have free will; nobody disputes that. And I’d like to think that he’s right in claiming that our illusion of libertarian free will is adaptive, though I know of no way to test that proposition. (We can, as always, concoct adaptive stories about this. One writer, whose name I can’t remember, argued that knowing whether a “choice” came from your brain versus someone else’s is an adaptive bit of information: it makes a difference if your arm is pumping up and down because you’re doing it yourself or if somebody else has hold of it and is doing it to you.)  I would have liked this conclusion better had Wilson been a bit more tentative in his adaptive storytelling.

But in the main, the piece adds little to the debates about consciousness and free will. In fact, I find that it muddles the debate. In my view, the best popular exposition of the problem of consciousness remains Steve Pinker’s article in Time Magazine in 2007. The reason the Harper’s piece got published was not because Wilson had something particularly new to say, but because the person who wanted to hold forth was E. O. Wilson. As for free will, I still like Anthony Cashmore’s piece in The Proceedings of the National Academy of Sciences.