Computer chip replaces cerebellum in a rat

September 30, 2011 • 10:20 am

The amazing results reported in this piece from New Scientist, “Rat cyborg gets digital cerebellum,” haven’t yet been published in a scientific journal, but were reported at a meeting in the UK.  The details are sketchy, but scientists apparently built a computer chip using information about the inputs a rat’s brainstem sends to its cerebellum, as well as the outputs the cerebellum generates in response.  (The cerebellum, a lumpy part of our brain located underneath and at the rear, is, among other things, responsible for motor control of the body based on input from the brainstem.)  How they got this information onto a chip is also unclear to me, but I trust some readers will enlighten us.

Once they made the artificial cerebellum-chip, they used it to see if it could substitute for the real one in an elementary brain-processing task.  As the magazine describes:

To test the chip, they anaesthetised a rat and disabled its cerebellum before hooking up their synthetic version. They then tried to teach the anaesthetised animal a conditioned motor reflex – a blink – by combining an auditory tone with a puff of air on the eye, until the animal blinked on hearing the tone alone. They first tried this without the chip connected, and found the rat was unable to learn the motor reflex. But once the artificial cerebellum was connected, the rat behaved as a normal animal would, learning to connect the sound with the need to blink.

The magazine also reports that another group used electronics to replace lost memory in rats.

While there are substantial differences between how brains and computers process information, there isn’t any reason why computer chips couldn’t replace many of the functions of the brain.  It’s intriguing to contemplate, for example, the possibility that a computer chip might one day help blind people to see, or improve memory in those with diminished capacity.

39 thoughts on “Computer chip replaces cerebellum in a rat”

  1. It’s probably inevitable to speculate on the possibility of downloading memories and “consciousness” onto a chip. And I put consciousness in quotes because we’d first have to define and quantify it before we “did” anything with it. Still, an intriguing possibility.

    1. Runs into the ol’ Star Trek transporter problem though. That is, you end up dying and creating a copy that believes it is you. I’d probably avoid it until I was dying otherwise, then give it a go. Immortal computer program that THINKS it is me is better than no me at all.

      1. I am not sure there is anything more to being “you” or “me” than thinking (having the conscious belief) that I am a “me,” except for some minimal categorical and naming constructions, such as simply claiming the real “me” is not just my continued consciousness but my continued consciousness as it is attached to the present body. But I would think that if your brain/mental structures suddenly found themselves embedded in a machine existence, they would quickly change their mind to believe that the present (machined) “me” is every bit as real and worthy of dignity and respect as the (human) “me.” The idea that there is a significant difference between the two, or that one is a worse condition than the other (either “your” consciousness attached to a human body or to a machine), is probably not a useful or pragmatic distinction, or one that would hold up after such a transferred experience. Unless, of course, your brain/mental structures are so attached to the idea of a human body that they would self-implode upon finding out that they were no longer attached to one.

        1. I have no problem with consciousness-on-computer. I guess it’d come down to the process by which the “uploading” occurred and how well it’d fit with my desperately clutching at a naive conception of “self”. Luckily it’s not really a pressing issue at this point. A bridge I’ll cross when I come to it.

      2. “I’d probably avoid it until I was dying otherwise, then give it a go.”

        Rule #1 of computer maintenance: establish a regular backup schedule before disaster strikes.

        1. Yeah, this brings up the issue of just how special our conscious self is and how far we should go to save it. I think we are going to want to be humble about our consciousness, and recognize that though IT is what we are, it is not a transcendental or extravagant entity that should be saved at all costs or preserved for eternity. Many more consciousnesses, infinitely more, will arise, and ones more sophisticated than ours.

          Or perhaps I am just doing my best to talk myself out of the need to freeze my brain in the hope that it can later be uploaded. That seems like a world we should want to avoid, one where everyone is running around making sure their brain is kept in the best condition in the hope of being re-awakened. But then there is always the conflicting problem of death/life: our drive and desire and belief that life is at least something to be maintained, which we seem to validate on a daily basis by avoiding death, also seems to lend support to the idea that we should freeze ourselves in the hope of continued future existence.

          1. Seems to me the logic goes the other way: if our consciousness is transcendental, then attempting to back it up into other media would be futile. But if it’s merely a valuable (but non-magical) bundle of information, then there’s no reason not to treat it like any other valuable information, and keep multiple (inactive) copies of it to guard against accidental loss. People keep digital backups of old Superbowl games, for Pete’s sake; why should we consider our very selves less worthy of preservation than that?

          2. “did you hear Bob died? He hadn’t saved since last year!”

            “Oh man, didn’t he put all that work into learning Chinese this year? He’s going to be so pissed when we tell him.”

          3. Funny, Waffle.

            Gregory,

            That makes sense to me. Does that mean you are in a race against time to make sure you upload or preserve your brain before the big moment when your consciousness is scrambled indefinitely?

          4. Lyndon:

            I’m not a cryonics fanatic, if that’s what you’re asking. I understand the logic of it, and if/when it becomes as cheap and painless as backing up my computer, I’ll probably do it. But if it requires mortgaging my house and putting my traumatized family through a difficult and controversial legal/medical process in the hours following my death, with no guarantee of success, then for now I’m going to deem it not worthwhile.

            Ask me again in 20 or 30 years (if I’m still around).

      3. Actually, the Science Fiction story that I think applies more is Pohl’s Heechee saga, where the main character gets digitized. Great storytelling, and plenty of food for thought about all these questions and more.

        Cheers,

        b&

      4. I don’t agree with this. To repeat something I once posted on Pharyngula:

        Regarding the classic objection to uploading:

        “The mind is the brain. If you destroy your brain, then you will destroy your mind. The simulated mind may be conscious, but it will be someone else entirely. You’ll be dead.”

        This misses a fundamental point. You are not the brain. You are the information encoded in the brain. Your consciousness is not the result of the brain processing that information, but of something processing that information. The continuity in your consciousness is not continuity in the matter composing your brain, but continuity in the information. That information constantly changes, but then you are not the same person you were ten seconds ago either. The subjective experience of continuity arises from the fact that the information comprising the you of now (together with sensory input) will determine the you of ten seconds from now. That fundamental causality will remain, even if your neurons are replaced with computer simulations of the same network.

        This leads to some obvious absurdities, but just because something seems obviously absurd doesn’t mean it is false. For example, if you archived your brain (while remaining alive) and then started up the archived version in one year’s time, the “absurd” consequence is that the you of one year ago will simultaneously experience leaving the archiving machine and jumping one year into the future. However, why not? The fundamental continuity between influencing future states and being influenced by past states is still there.

        It’s weird and counter-intuitive, but consciousness is weird and counter-intuitive.

        1. Yes and no.

          It depends on the continuity of conscious experience.

          If you copy your mind to a chip you now have two yous; if the brain is destroyed, that you dies, even though the you on the chip still thinks of itself as you. See Greg Egan, op. cit.

          If you replaced your brain gradually, moving (not copying) functions to the chip over time, so that the same conscious ensemble that is you spans meat and silicon until eventually no meat-based functions remain, the original you would still exist; there was never a distinct you that has now ceased to be.

          I don’t see how that absurdity arises. If you archive your mind, the original mind in your brain will experience leaving the archiving machine, but it has no awareness of the mind in the archive (there’s no causal connection); starting up the archived version would mean the copy of you “wakes” after a year-long “coma” and the last thing that that you would remember is entering the archive machine, but that you has no awareness of the mind in your brain (the original you) so it would have no “memory” of leaving the archive machine or anything subsequent. As above, there are now two yous, whose histories and memories have forked at the moment of archiving. There’s no simultaneous experience.

          /@

          1. I think you’re actually violently agreeing. hyperdeath is saying that the archived version has the sensation of waking up and then finding out that a year has passed since the last archived memory, which I think is exactly what you’re saying.

            Well expressed by both of you, though. I tend to agree with this analysis of the uploading/teleporter problem (i.e. that there really isn’t any problem).

  2. But the cerebellum is a relatively primitive brain structure. Doing this with human neocortex would be orders of magnitude more difficult. I know it’ll happen one day, but in my lifetime (I’ve got 60-ish years left)? I doubt it.

  3. The article isn’t very specific, but I’m guessing the chip implements a neural network that they then trained using the inputs and outputs of the actual cerebellum.

  4. McWaffle: “Runs into the ol’ Star Trek transporter problem though”
    …a problem of what to do with the original meat ‘self’! Which has priority? Is it even ethical to pose the scenario?

    Perhaps one way out would be if parts of the brain were replaced little bit by little bit? Your ‘self’ would just graduate from the old medium to the new one seamlessly, with no ‘duplicate’ you ever existing – it would be you all the way through the process!

    1. Or at least it’d appear to be. The “Star Trek” problem, as I’ve heard it, is that each time they go through the teleporter, they are disintegrated and rebuilt atom-by-atom. So, the question goes, did Kirk die? If you asked Kirk after he teleported, he’d say, “no, of course not, I’m here, hello!” But, is it really the “same Kirk” or an entirely indistinguishable “new Kirk”? Gets at the idea of self and whatnot.

      The brain replaced bit-by-bit was actually my thought too. Definitely a safer option conceptually, to me anyway.

  5. It’s on the way to Arthur C Clarke’s Braincap. I see this ratchip development as a good thing, but ultimately what happens to privacy when our children are mentally connected to all knowledge & each other ? It’s a SciFi staple so I will not bore with the details, but it will lead to very difficult questions about identity, ‘human’ rights & personal freedom. Will we have to consider the rights of the biologically dead who are living digitally ? And so on..

    1. It seems obvious to me that any entity capable of passing a Turing test (a real one, I mean; not the airhead chatbot gabfests being touted as Turing tests these days) should be afforded all the moral rights of a real person, regardless of the physical substrate on which they’re implemented.

      However we may find that moral distinctions between, say, murder and simple assault tend to vanish if the victim can be quickly restored from backup.

      1. What I’d say is tricky though, is that we still give human rights to humans who couldn’t pass a Turing test (people with severe handicaps). It seems electronic entities would be at a disadvantage, since if they were to become disabled and no longer could pass the test, they would lose their rights.

        1. Depends on what you mean by “severe handicaps”. Stephen Hawking can pass a Turing test. People on life support with irreversible brain damage can’t, nor do we grant them the same rights as conscious, thinking people.

          Obviously there are gray areas, but I think the point stands that there’s no moral justification for denying rights to people simply because they’re instantiated electronically rather than biochemically. If an electronic person is unable to communicate due to hardware malfunction, we should do our best to restore proper function before denying them any rights, same as we would for a biological person.

          1. Gregory,

            I agree with your analysis there. I would say that we are going to have (or should have) quite a shift in our understanding of rights and morality, and a shift in the institutions and practices that arise from it. Many of the old conceptions of morality and rights will be naturalized and seen as the useful tools that they are for producing a flourishing society and “healthy” people. One of the passages that tugs at me is from Peter Singer, where he describes hospitals that, having decided to let infants with severe brain or health problems perish, make them starve to death because it would be “wrong” to (actively) kill them; similar questions arise at the end of life. If we cannot come to grips with what is needed in such obvious situations (and maybe we have), you know it is going to be a tough transition.

            I think certain shifts should already be taking place. Accepting what it is that makes up the most interesting, enjoyable, and useful consciousnesses (and thus behaviors and people) should help us understand what is necessary for such qualities to be achieved (solid socialization, education, certain minimal material necessities for all beings and possible beings, etc.).

          2. How would one determine the difference between a malfunctioning AI and a non-sentient program? With people, it’s pretty easy to tell, since we have bodies. But I think it’d be hard to tell the difference between a decidedly non-sentient toaster AI and a designed-to-be-sentient AI that was damaged or corrupted and now believed itself to be a toaster.

            Another part of the complexity is due to the fact that electronically instantiated entities would be far more easily tinkered with. If my car’s AI were sub-person but could, through some hardware/software tweaks, be upgraded into a Turing-test-passing Knight Rider-style car, would we have the same moral obligation to do so as we would to perform neurosurgery to improve mental function in somebody with a congenital brain defect? Flowers-for-Algernon-style?

          3. How would aliens from Alpha Centauri tell the difference between a sleeping human and his dog? By looking at the relative complexity of their brains. (Now if it were a cat, then it would be obvious who’s in charge.) If you find enough software and processing power in your toaster to support human-level AI, then there’s evidently a lot more going on there than just making toast, and you’d be wise to withhold judgment until you’ve figured it out.

            Do you have an ethical obligation to upgrade your ordinary toaster to human-level AI? I don’t see why you would, any more than we have an obligation to genetically engineer intelligent chimps. Just because we can doesn’t mean we must. Upgrading a properly functioning sub-sentient system to full sentience is not in the same ethical category as repairing a malfunctioning sentience.

    2. Funny you should mention Clarke. The comment above reminds me of Rama Revealed (written with Gentry Lee). Rama is revealed to be the bus that takes people to heaven (the Creator’s local headquarters, at least), and there they can be technologically maintained indefinitely. You get your parts replaced one bit at a time.

      Rama is shaped like a bus, after all, and it goes round and round on a regular route.

      It is at least a lot more plausible than Scientology or Mormon theology.

  6. Re: getting information onto the chip: Presumably they put in electrodes and monitored the bundles transmitting information into and out of the cerebellum. Since neurons either fire or not, monitoring the states is not so difficult a concept. Once you have the recordings of what patterns on the input produce what patterns on the output, there are a number of techniques which can be used to program a generic programmable chip to respond to the inputs and mimic the outputs. Another great thing about the nervous system is that the nerves are quite analogous to wires (until you get to the ganglia) so there is a good chance that you can mimic the cerebellum at least in a rough manner. The thing to do is see where the mimic fails and improve on it – that may at least provide a lot of information on how the machine (cerebellum) reacts even if you have no idea what goes on inside.
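
    To make that “record the patterns, then mimic them” idea concrete, here is a deliberately crude sketch (binary fire/no-fire patterns that are entirely made up, and a simple lookup table standing in for the programmable chip; real decoding techniques are far more sophisticated, and nothing here is claimed about the actual experiment):

    ```python
    # Illustrative only: tabulate which output firing pattern usually follows
    # each observed input firing pattern, then replay that mapping.
    from collections import Counter, defaultdict
    import random

    random.seed(1)

    def random_pattern(n):
        """A binary 'fired / didn't fire' pattern across n fibres."""
        return tuple(random.randint(0, 1) for _ in range(n))

    # Fake recordings: 6 input fibres, 2 output fibres, 1000 time bins,
    # generated by some hidden rule we pretend not to know.
    hidden_rule = {random_pattern(6): random_pattern(2) for _ in range(64)}
    recording = [(inp, hidden_rule[inp])
                 for inp in random.choices(list(hidden_rule), k=1000)]

    # "Program the chip": count which output pattern follows each input pattern.
    table = defaultdict(Counter)
    for inp, out in recording:
        table[inp][out] += 1
    chip = {inp: counts.most_common(1)[0][0] for inp, counts in table.items()}

    # Mimicry check on a pattern that was seen during recording.
    probe = recording[0][0]
    print("input:", probe, "-> chip output:", chip[probe])
    ```

    The interesting part, as noted above, is where such a naive mimic fails; that failure is what tells you the cerebellum is doing more than a simple input-to-output lookup.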

  7. See Greg Egan’s short stories “Learning to Be Me” & “Closer”, which “involve a … neural implant called a “jewel”—a small computer inserted into the brain at birth that monitors its activity in order to learn how to mimic its behavior. By the time one reaches adulthood, the jewel’s simulation is a near-perfect predictor of the brain’s activity, and the jewel is given control of the person’s body while the redundant brain is discarded. In this way, people with the jewel can eliminate the cognitive decline associated with aging by implementing their minds on a machine. Also, by transplanting the jewels into cloned bodies genetically altered to develop without brains, they can live youthfully forever.” [Wikipedia]

    See also here.

    /@

  8. When not reading superb science blogs such as this one (and yes, it is one), I study the neurophysiology of hearing and balance and their prosthetic replacements. While not an expert in any way on the cerebellum, I do read a bit on that brain region as it relates to the oculomotor and vestibular systems. I have two main comments on this study, as preliminarily presented in Prof Coyne’s link:

    1) It’s hard not to find a sensory or motor region that doesn’t have a neural representation in the cerebellum. A quick Pubmed search or look at a brain atlas will show you just how complex it is. (e.g. The number of synapses a single Purkinje cell makes is staggering, even in mice.) That said, the researchers of this study picked a comparatively easy system to control with their chip, as the required motoneuron targets are limited.

    A neat side experiment would have been to close the control loop with a completely different muscle group than the eyelid. Variations of that have been done by other lab groups.

    2) It’s not clear from the article, but unless the state of brain-computer interfacing has changed in the last few months (I’m currently on sabbatical, so maybe it did), the recording and subsequent electrical stimulation of the brainstem handled by the chip was not at the single-neuron level. For a blink reflex, though, fairly coarse stimulation of the nerve tracts or brainstem nuclei would be sufficient.

    A small step, surely, but a promising one – for decoding the complexities of the brain if not for future medical applications. By the way, for those interested, I have it on good authority that a pioneer of the neuron-chip VLSI technology is recruiting for a large scale mapping project of the visual cortex. One can only imagine…

  9. Given that the religious claim that consciousness has a non-material component, it’s starting to look like it does not reside in the cerebellum.

    It’s god of the gaps once again, this time the playground is the mind.

    1. Well, as the Sophisticated Theologians would say, you’re simply missing the obvious: god is out of your mind!
