The Oxford English Dictionary gives three relevant definitions of the adjective “sentient”:
a.) That feels or is capable of feeling; having the power or function of sensation or of perception by the senses.
b.) Conscious or percipient of something.
c.) Physiology. Of organs or tissues: Responsive to sensory stimuli.
(“Sentience” itself is defined only as “The condition or quality of being sentient, consciousness, susceptibility to sensation.”)
The question that the Scientific American article below asks (and for once it’s written by a scientist in this field) is whether insects fit the first two definitions: do they have feelings and sensations—experiencing qualia like pain, joy, pleasure, or the sensation of “redness”? Or are insects merely chitinous robots programmed by evolution to act (to us) as if they have feelings—programmed reactions that we anthropomorphize as similar to our own sensations? After all, you can be “responsive to sensory stimuli” (the third sense above) without actually feeling the sensory stimuli the way humans do.
Answering the question of whether a bee or a fly is sentient in the first two senses, or has consciousness (the ability to be sentient and perceive stimuli), is difficult. Some would say it’s impossible. After all, we all know that we ourselves have consciousness and feel pain and joy, because we experience those things personally. But can I prove that, say, another person is conscious? Not directly, because we can’t get inside their brains. We infer that they’re conscious because they tell us they are; they are physically constructed with the same neurons that give us consciousness; and they act as if they experience qualia. It’s inference, but of a Bayesian sort, and the question has high priors.
But can we extend this to other species? Chittka uses the example of dogs:
Although there is still no universally accepted, single experimental proof for pain experiences in any animal, common sense dictates that as we accumulate ever more pieces of evidence that insects can feel, the probability that they are indeed sentient increases. For example, if a dog with an injured paw whimpers, licks the wound, limps, lowers pressure on the paw while walking, learns to avoid the place where the injury happened and seeks out analgesics when offered, we have reasonable grounds to assume that the dog is indeed experiencing something unpleasant.
This is a Bayesian approach to the question, and it’s really the only way to go. Yes, I think it’s highly probable that dogs, and most mammals, feel pain. But what about insects, reptiles and amphibians? They certainly avoid unpleasant stimuli and gravitate towards pleasant ones, which you could interpret as feeling joy, pleasure, or pain, but do they feel these sensations? If you say that the behavior denotes sentience, well remember that protozoans do these things, too (see below).
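The Bayesian framing can be made concrete with a toy calculation. The prior and the likelihoods below are invented purely for illustration (they are not numbers from Chittka’s article); the point is only to show how several independent lines of evidence, each individually weak, can jointly shift a skeptic’s probability:

```python
# Toy Bayesian update: how accumulating pieces of evidence shift the
# probability of insect sentience. All numbers are illustrative assumptions.

def update(prior, p_e_given_sentient, p_e_given_not):
    """Return P(sentient | evidence) via Bayes' theorem."""
    numerator = p_e_given_sentient * prior
    denominator = numerator + p_e_given_not * (1 - prior)
    return numerator / denominator

prior = 0.10  # a skeptic's starting probability that insects are sentient

# Each behavior is assumed somewhat more likely if insects are sentient
# than if they are automata: (P(evidence | sentient), P(evidence | not)).
evidence = [
    (0.8, 0.5),  # flexible learning
    (0.7, 0.5),  # seeking mind-altering substances
    (0.7, 0.4),  # play-like ball rolling
    (0.8, 0.4),  # weighing pain against reward
]

for p_s, p_not in evidence:
    prior = update(prior, p_s, p_not)
    print(f"posterior = {prior:.3f}")
```

With these made-up likelihoods the posterior climbs from 0.10 to somewhere under 0.5: evidence accumulates, but no single observation is decisive, which matches the article’s own caution.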
I’m fully aware that philosophers of mind have probably discussed this issue at length, and I haven’t followed that literature, so my musings here may seem childish to these philosophers. But this Sci. Am. article (click below to read, or find it archived here) is not written for philosophers of mind but for people like me: folks interested in science and wanting to see what’s happening in other fields. I found the article quite interesting, and for me it slightly raised the probability that insects can feel pain. But the answer remains far from settled—or even from having a high probability. And the author admits that. But he cites a number of cool studies.
Here are the lines of evidence that, to Chittka, raise the Bayesian probability that insects have sentience: experiencing pain, pleasure, and even joy.
a.) They learn and can do really smart things. (All quotes from Chittka are indented):
The conventional wisdom about insects has been that they are automatons—unthinking, unfeeling creatures whose behavior is entirely hardwired. But in the 1990s researchers began making startling discoveries about insect minds. It’s not just the bees. Some species of wasps recognize their nest mates’ faces and acquire impressive social skills. For example, they can infer the fighting strengths of other wasps relative to their own just by watching other wasps fight among themselves. Ants rescue nest mates buried under rubble, digging away only over trapped (and thus invisible) body parts, inferring the body dimension from those parts that are visible above the surface. Flies immersed in virtual reality display attention and awareness of the passing of time. Locusts can visually estimate rung distances when walking on a ladder and then plan their step width accordingly (even when the target is hidden from sight after the movement is initiated).
All of these responses, of course, could come from computers programmed to learn from experience, which is exactly what we and other animals are. Natural selection has endowed us with a neuronal network that will make us behave in ways to further our reproduction (or, sometimes, that of our group—like an ant colony). We can program computers to do this, too: robots that avoid aversive stimuli and gravitate towards good ones. And clearly we behave in such a way that furthers our reproduction, of which survival is one component. But do insects experience the world, with its pleasures and pains, by having qualia similar to ours?
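The "programmed robot" alternative is easy to make concrete. Here is a minimal sketch (entirely my own invention, not anyone’s model of insect neurology) of an agent that learns to approach rewarding stimuli and avoid aversive ones with no inner experience anywhere in the loop:

```python
# A trivially "programmed" agent that learns to seek rewarding stimuli and
# avoid aversive ones -- behavior that mimics preference without any claim
# of felt experience. Purely illustrative.

values = {}  # learned value of each stimulus

def experience(stimulus, outcome, rate=0.5):
    """Nudge the stored value of a stimulus toward its observed outcome."""
    old = values.get(stimulus, 0.0)
    values[stimulus] = old + rate * (outcome - old)

def approach(stimulus):
    """'Approach' anything whose learned value is positive."""
    return values.get(stimulus, 0.0) > 0

for _ in range(5):
    experience("nectar", +1.0)  # rewarding outcome
    experience("heat", -1.0)    # aversive outcome

print(approach("nectar"), approach("heat"))  # True False
```

An external observer watching only this agent’s behavior would see exactly the approach/avoid pattern we take as evidence of feeling, which is the crux of the inferential problem.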
A related question is this: is consciousness like we have (feeling pain and joy) something that’s merely an epiphenomenon of having evolved a sufficiently complex nervous system, or is consciousness itself a product of natural selection to further our reproduction? We don’t know the answer, but it’s pretty clear that some of our conscious experiences, like pain, have evolved by selection. People who can’t feel pain as a result of neurological conditions or disease (like Hansen’s disease) quickly start getting infections, hurting their bodies without being aware, losing fingers, and the like. If you didn’t experience pain when putting your hand in boiling water, you’d damage your body. But if consciousness is just an epiphenomenon of a complex evolved nervous system, then we can’t automatically say that bees that act as if they’re conscious really are conscious.
I’m prepared to believe, based on what I said above, that mammals feel pain. Maybe even reptiles or amphibians, though there are suggestions that fish don’t feel pain, at least in the way we do. All these creatures gravitate towards adaptive things and avoid nonadaptive ones, but again, they could be programmed to do so without the ancillary conscious experience that we have.
More evidence from Chittka:
b.) Insects act as if they can alter their consciousness:
Many plants contain bitter substances such as nicotine and caffeine to deter herbivores, but these substances are also found in low concentrations in some floral nectars. Researchers wondered whether pollinators might be deterred by such nectars, but they discovered the opposite. Bees actively seek out drugs such as nicotine and caffeine when given the choice and even self-medicate with nicotine when sick. Male fruit flies stressed by being deprived of mating opportunities prefer food containing alcohol (naturally present in fermenting fruit), and bees even show withdrawal symptoms when weaned off an alcohol-rich diet.
Again, seeking out things that are good for you, like curing you of illness or infection, could be programmed, either directly or as part of programs involved in “learning what gets rid of harmful conditions”. Now if bees are partial to coffee and cigarettes because it gets them high, then yes, it seems to show that they want to alter their consciousness, which implies that they have consciousness. But these facts aren’t that convincing to me, because nicotine and caffeine may have other beneficial physiological effects.
c.) Bees appear to be “optimistic”. Here’s the experiment Chittka adduces to support that:
We trained one group of bees to associate the color blue with a sugary reward and green with no reward, and another group of bees to make the opposite association. We then presented the bees with a turquoise color, a shade intermediate between blue and green. A lucky subset of bees received a surprise sugar treat right before seeing the turquoise color; the other bees did not. The bees’ response to the ambiguous stimulus depended on whether they received a treat before the test: those that got the pretest sugar approached the intermediate color faster than those that didn’t.
The results indicate that when the bees were surprised with a reward, they experienced an optimistic state of mind. This state, which was found to be related to the neurotransmitter dopamine, made the bees more upbeat, if you will, about ambiguous stimuli—they approached it as they would the blue or green colors they were trained to associate with a reward.
This is not a meaningless experiment, but to me it shows only that bees conditioned to approach a color after a sugar reward are more likely to approach something like that color than those who weren’t conditioned. To call this “optimism” seems to me hyperbolically anthropomorphic.
d.) Bees appear to experience “joy”. This experiment is more suggestive to me:
Other work suggests that bees can experience not only optimism but also joy. Some years ago we trained bumblebees to roll tiny balls to a goal area to obtain a nectar reward—a form of object manipulation equivalent to human usage of a coin in a vending machine. In the course of these experiments, we noticed that some bees rolled the balls around even when no sugar reward was being offered. We suspected that this might be a form of play behavior.
Recently we confirmed this hunch experimentally. We connected a bumblebee colony to an arena equipped with mobile balls on one side, immobile balls on the other, and an unobstructed path through the middle that led to a feeding station containing freely available sugar solution and pollen. Bees went out of their way to return again and again to a “play area” where they rolled the mobile balls in all directions and often for extended periods without a sugar reward, even though plenty of food was provided nearby. There seemed to be something inherently enjoyable in the activity itself. In line with what other researchers have observed in vertebrate creatures at play, young bees engaged more often with the balls than older ones. And males played more than females (male bumblebees don’t work for the colony and therefore have a lot more time on their hands). These experiments are not merely cute—they provide further evidence of positive emotionlike states in bees.
It’s hard to understand these results without thinking that bees, like panda cubs, are playful, messing around with balls that give them pleasure. And since bees don’t experience balls in their natural state, they could be enjoying the novelty. On the other hand, they could simply be encountering something they haven’t experienced, and are following neuronal instructions to manipulate it to see how it operates, which could be useful knowledge in the future. This second interpretation means that no “pleasure” need be involved. Remember, play behavior in animals is often there to prepare them for what happens when they become adults, and isn’t just there for pleasure.
Again, it’s hard to judge from such studies whether bees are feeling pleasure in the way we do. But to me this makes it marginally more likely.
Finally,
e.) Bees appear to weigh pain against pleasure, and change their behaviors when the balance is altered. Here’s another experiment:
We decided to do an experiment with only moderately unpleasant stimuli, not injurious ones—and one in which bees could freely choose whether to experience these stimuli.
We gave bees a choice between two types of artificial flowers. Some were heated to 55 degrees Celsius (lower than your cup of coffee but still hot), and others were not. We varied the rewards given for visiting the flowers. Bees clearly avoided the heat when rewards for both flower types were equal. On its own, such a reaction could be interpreted as resulting from a simple reflex, without an “ouch-like” experience. But a hallmark of pain in humans is that it is not just an automatic, reflexlike response. Instead one may opt to grit one’s teeth and bear the discomfort—for example, if a reward is at stake. It turns out that bees have just this kind of flexibility. When the rewards at the heated flowers were high, the bees chose to land on them. Apparently it was worth their while to endure the discomfort. They did not have to rely on concurrent stimuli to make this trade-off. Even when heat and reward were removed from the flowers, bees judged the advantages and disadvantages of each flower type from memory and were thus able to make comparisons of the options in their minds.
To me, this really shows nothing more than that animals are attracted to adaptive stimuli and repelled by harmful ones, with the addition of being able to balance harms versus advantages. (This is like the “flight distance” of animals, with some individuals able to give more weight to attractive stimuli. That’s probably how cats got domesticated!) But it doesn’t tell us whether animals are feeling the pain or attraction the way we do.
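The point that trade-off behavior alone doesn’t establish felt pain can be sketched in a few lines. This hypothetical “bee” simply compares net values from memory (the reward and heat-cost numbers are invented for illustration), reproducing the experiment’s choice pattern with no ouch anywhere:

```python
# A non-sentient "bee" that weighs remembered reward against remembered
# heat cost. All values are invented for illustration.

def choose_flower(options, heat_cost=1.0):
    """Pick the option with the highest net value: reward minus heat penalty."""
    def net_value(option):
        reward, heated = option
        return reward - (heat_cost if heated else 0.0)
    return max(options, key=net_value)

# (reward, is_heated) pairs remembered from past visits
equal_rewards = [(1.0, True), (1.0, False)]
rich_heated = [(3.0, True), (1.0, False)]

print(choose_flower(equal_rewards))  # avoids the heated flower
print(choose_flower(rich_heated))    # endures the heat for the bigger reward
```

Flexible, memory-based cost–benefit weighing is thus compatible with a purely mechanical account, which is why the experiment raises the probability of sentience only modestly.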
And we should remember that even protozoans show avoidance of some external stimuli and can be induced by electrical shocks to avoid light. So these animals can be trained. Do they feel pain or pleasure? I doubt it—not protozoa! They may not show “play” behavior, but perhaps they can be trained to weigh aversive versus adaptive stimuli, as in section “d” above. I doubt anybody would conclude with confidence that protozoa feel pain the way we do (they don’t have a nervous system) or are even conscious.
Against the doubts that I’ve raised, Chittka offers a counterargument:
Critics could argue that each of the behaviors described earlier could also be programmed into a nonconscious robot. But nature cannot afford to generate beings that just pretend to be sentient. Although there is still no universally accepted, single experimental proof for pain experiences in any animal, common sense dictates that as we accumulate ever more pieces of evidence that insects can feel, the probability that they are indeed sentient increases.
The first sentence is what I have said already. And I’m willing to go along with the third sentence, too: as we learn more, the Bayesian probability that other species experience pain or pleasure can increase or decrease.
But I’m not willing to go along with the idea that “nature cannot afford to generate beings that just pretend to be sentient.” What does he mean by “afford”? My interpretation is this: he’s saying that natural selection cannot produce organisms that act as if they’re sentient unless they really are sentient. And I cannot see any support for that, for we already know that protozoans act as if they experience qualia, but almost certainly don’t. And saying “pretend to be sentient” is pretty anthropomorphic! It implies, for example, that programmed robots that do what bees do are “pretending to be sentient” when in fact we know they are NOT sentient.
Finally, that leads to the Big AI Question: if we generate robots sufficiently complex that they respond exactly as humans do in complex situations requiring consciousness, does that mean that they have become conscious? I say “no”, but others disagree. After all, there are those panpsychists who say that even electrons and rocks have a rudimentary form of consciousness.
I’m writing this on the fly, so forgive me if my thoughts are half-baked. I do think that Chittka’s experiments are clever, and, over time, may give us a sense of sentience in other species. But I’m not yet ready to throw in with him on the claim that insects are conscious. It’s enough for me now to realize that they do experience some aspects of the environment as things to be avoided. And that is why I have always anesthetized my fruit flies before killing them. (When I was an undergrad I used to take them to the biology department roof and let them go, but my advisor Bruce Grant nixed that on the grounds that I was polluting the natural gene pool of Drosophila.)
The last bit of Chittka’s paper is a thoughtful analysis of how these kinds of studies should condition our behavior towards insects. But even if they don’t feel pain, aversion or attraction itself should help us confect a philosophy of “insect ethics.”
h/t: Howard, who brought this paper to my attention and wanted my take on it. I’m sending him this link as my take.
This is encouraging – keep it up, Sci. Am.
Interesting and thoughtful. I’ve come to believe, though, that progress on these sorts of questions will be limited to borderline philosophizing without paying much more attention to how brains (human, insect, any) actually work. Therein lies actual knowledge of how emotions are produced, an understanding of qualia, and maybe even insight into these cross-species questions. Terrific progress has been made on the actual science of human consciousness, and it barely gets mentioned in the endless debates about consciousness, free will, and the like. Nearly all the biological and cognitive neuroscientists in the field conclude that there is no “hard problem,” or mystery to self-awareness. I highly recommend Dehaene’s Consciousness and the Brain on what is actually known, the science behind it, and up-to-date understanding of how consciousness actually works. IMO, this is the avenue down which progress will be made and will rescue us from the philosophical rabbit holes. It’s tremendously exciting, further along than you think, and accelerating.
In regards to “nature cannot afford to generate beings that just pretend to be sentient,” I attempted to consider as broad and intricate a spectrum of creatures as I (not a biologist) quickly could. From amoebae to insects to reptiles to small mammals to humans (with Donald tRump at the pinnacle, of course), the levels, or performance, of animals’ and (in numerous examples) plants’ “sentience” are attributes relevant to that individual’s survival and reproduction. People tend to give pets, pests, livestock and wild game more credit than necessary to explain certain behaviours. On the other hand, it is intriguing to note how single-celled animals appear to have innate preferences (sugar v. ammonia) depending on quantities provided. Perhaps sentience is dependent on the sentience of the beholder.
Why does Vlad the Impaler spring to mind?
He’s the most brilliant, sentience-minded Stable Genius the likes of which the world may never see.
I found the article interesting but not conclusive. Insects may indeed avoid damaging stimuli and may indeed seek rewards, but this doesn’t mean that they feel pain or pleasure (or fun) in the way that we would describe these. That insects are programmed automata is *not* the only alternative. Insects may learn (as in learn the faces of hive-mates), and their behaviors may be malleable. But neither does this prove that they are sentient beyond their being responsive to their environment. They seem to show some behaviors that humans regard as humanlike, but we may be reading too much into their behaviors.
Regarding dogs and other mammals, we have reason to infer that these animals are sentient. The reason is parsimony. The animals behave like we do, they are closely related to us, and so the most parsimonious explanation is that the behaviors are homologous—that they are essentially the same behaviors in an evolutionary and genetic sense. The same may be true for insects, but the argument that the behaviors are homologous is much weaker.
I’m not a philosopher either, quite obviously. 🙂
“Insects may indeed avoid damaging stimuli and may indeed seek rewards, but this doesn’t mean that they feel pain or pleasure (or fun) in the way that we would describe these.”
That might be one of the biggest obstacles. We may not be able to recognize these behaviors or sensations, as we’re looking at insects in the only way we know: by comparison to our own experiences. Until we learn far more about how their brains are structured, what their behavior signifies, and so on, it seems to me that we can’t ever answer these questions with anything even approaching confidence. It’s quite a pickle. Our study of other animals, and especially non-mammalians, is limited by our own brains!
I agree with your emphasis on homology of qualia or experiences. I think the best we will be able to do is extend our understanding of human qualia to other mammals (and maybe some other tetrapods). It seems very unlikely that the common ancestor of insects and humans had a brain that generated anything like human qualia, so if insects have experiences with some similar qualities to humans they can’t be homologous with human experiences and we probably shouldn’t give them the same names like “pain” or “joy”.
OTOH there is evidence that insect and vertebrate limbs develop using some of the same patterning genes & molecules, even though their common ancestor almost certainly lacked legs. The evo-devo folks call this a kind of deep homology of the tool kit but not of the structure built from the tools. IDK if it’s possible for qualia to have a similar kind of deep homology.
Good points about homology.
I too don’t see anything terribly convincing in these experiments. But bees might dream (well, they were recently recorded as doing a lot of twitching as they sleep), and they are claimed to be able to solve puzzles that require cooperation. But that too could be interpreted as more about solving a very simple puzzle by random accident, and happening to do it side by side with a nest-mate.
I think this approach should go to a different level. We have a good idea that other mammals experience a range of emotions like we do because we see they have similar neurochemical releases and similar brain EEG activity to ours when we experience those things. So what is going on in the head of a bee when it “plays”? Bees are sufficiently different from mammals that their neurochemical and brain EEG activity won’t compare to ours, which makes this a bit questionable, but if a bee is “playing,” is there something going on that is different from a bee that is merely foraging?
It really is very difficult to design experiments that could truly determine sentience in animals as small and structurally dissimilar as insects. For now, we can only do Bayesian inference, as Jerry suggests.
Are there ways to hook bees up to an EEG? What would such a thing even tell us? Do we know enough about the mechanisms of bee brains and behavior to interpret any data we would get? We don’t even know much about our own brains!
But yes, I remain skeptical like you. Although, as I note in my comment below, I find the play study to be the most “convincing.”
I haven’t seen the play experiment. But bees have a strong sense of smell and taste, and they use chemical signals to communicate. Since they lap up nectar and collect pollen, I bet they also leave a trace of those things behind as they investigate something like a ball.
So a ball that has been merely investigated by a bee could have traces of food, or other chemical signals that mean a lot to a bee. So another bee might explore the ball again, just bc it smelled like a bee had been there before. That will leave more chemicals behind. Then a third bee might really explore the heck out of that ball bc there are all these chemicals left behind by other bees.
Yeah, when I said “the most convincing,” I didn’t mean that it convinces me of anything. I’m entirely open to the idea that insects are sentient, but the play study is very far from proving anything. I just found it to be the most convincing of the bunch. Alternative explanations abound.
I think you just asked exactly the right question, and saved me the trouble of making the same point. The most important aspect of sentience, from my point of view, is affect. If a bee can see red, blue, and ultraviolet subjective colors but doesn’t care about them, then I don’t care about them either. And the clearest proof of affect is emotion.
As one animal psychologist (wish I could remember their name) pointed out, when a dog is running away from a threat, or running toward its favorite human, there are obvious differences in the way it carries itself. If you took videos where the threat or the human were out of the frame, anyone could tell which video was which. But for comparable videos of cockroaches running, it’s not so easy. That doesn’t prove the cockroach has no affect, but we would need another reason, perhaps based on analyses of brain processes, to think it does.
Jerry wonders if his musings may seem childish to philosophers of mind. But there is a large subset of philosophers (the smartest ones, IMO) who think that scientists have to take the lead here – and in many other areas.
We don’t think that intelligence is a binary property so why do we think that of sentience and consciousness?
I’m not sure what you mean by a “binary property.”
I find study “d” to be the most convincing, though I’m obviously not convinced of sentience (epistemic humility and whatnot). I’ve watched my cats play for my entire life. As any good cat servant knows, cats don’t just play when they’re young. One of my current cats is six years old now, and a few times every evening (they’re crepuscular, after all), she comes to me while I’m sitting at my computer, sits next to me, and whines for minutes until I get up just to play with her. She isn’t playing out of instinct; she actively begs me to come play with her multiple times a day. And it gets better! Once I get up, she walks in front of me, constantly looking back to make sure I’m following her to her chosen destination of play for that session. Once we arrive in the room, she chooses which toy she wants to play with by going over to it and sitting next to it. If I pick up and try to use any other toy but the one she chooses for that session, she’ll continue insisting on the toy she wants until I pick up and use that one. I should note that she is, by far, the most advanced cat I’ve ever known in terms of communication, in that she found at an extremely young age many novel ways of communicating exactly what she wants, and there have been times where she spent weeks “training” me to understand what she was trying to communicate (the toys took me two or three weeks to figure out), including complex games with multiple stages. A couple of the games she likes to play took me months to figure out, but when I finally did, her excitement was palpable.
So, play, at least in mammals like house cats, does not seem to always or perhaps even often be rooted in preparing them for adult life. It seems to be something from which they derive joy, which they wish to do for those reasons, and which they use for various emotional reasons. Seeing adult bees “play” instead of eat is certainly suggestive of sentience, though I must remain skeptical.
Question for our host: what is the EQ for bee brains? What kind of brain structures do they have? I ask the second question because we know that having a “good” brain-body ratio doesn’t necessarily mean anything in many species (for example, koalas don’t have the tiniest brains, but they’re dumb as fuck because they’re smooth-brained morons. Though they are adorable! And there are plenty of birds with small brains that can do remarkable things).
Also, my cat taking months to communicate certain games with multiple steps also appears to demonstrate significant forethought and planning, as well as dogged insistence in her own mind that she would eventually succeed. She even changed up her attempts at communication with a couple of games as time passed and I continued to misunderstand what she wanted.
Sorry, I can’t resist. “Dogged” insistence? Really?!
But the whole account is a fascinating story. Thanks so much.
I hoped someone would pick up on that incongruity 🙂 Of course cats are much smarter than dogs, although they’re often less determined when it comes to trifles like playing.
My cat has sometimes used symbols to communicate with me. She has a ribbon toy and drags it with her mouth into whatever room I’m in at random times in the evening. I couldn’t figure out what she was trying to tell me for at least six months, as she didn’t want to play with the ribbon when I picked it up and flicked it around. One day, it finally clicked for me: she was telling me that she wanted to play, and the ribbon toy was just a symbol to represent it. She only does this when I ignore her “I want to play now” whining. In hindsight, I feel pretty stupid for not understanding a lot quicker.
When she isn’t using her toys she places them in the same spots, with the mice next to the mice, the balls with the balls, and so on. I wasn’t even aware that cats could have an innate desire to organize inanimate objects in this manner without any reward incentive. It’s just…what she does.
Often evolution is indirect compared to the way we humans tend to think of purposes and how problems are solved. I suspect that the trait that was under selection pressure was not play itself but rather the release of happy chemicals in the brain in response to play.
Seems to me that evolution is a drug dealer and we are all junkies.
If behavior cannot be determined or affected by mental states, but only by physiological states (i.e., if the mind cannot move molecules, as I assume), there can never be a scientific test for the existence of sentience because sentience could never move the (physical) needle.
This philosophising is all very well and good, but we know that in practice, humans are a long way from being humane to other humans – even ones with whom they share a language. So the point of assessing whether non-human animals can experience pleasure/ pain, is rather moot. Humans will successfully overcome the better angels of their nature as soon as they see a profit. We’ll have soldiers, slaughterhouses and carnivorous humans for the foreseeable future.
Jerry, did you misread item c?
“shows only that bees conditioned to approach a color after a sugar reward are more likely to approach something like that color than those who weren’t conditioned.”
The lucky subset of bees that got the pre-reward were not “conditioned” to expect that. It was a one-time surprise to them, and their subsequent behavior was apparently from remembering that surprise. Or did I get that wrong?
I’m fairly sure that the answer to the question is not “yes” or “no” but “to some degree”. That is, consciousness/sentience is a continuum, and it is thus possible to have a low degree of it (and thus, likely insects do).
One argument for this: it is not sensible to suppose that, in the evolution leading to humans, at some point a parent animal that was never conscious gave birth to an offspring that became conscious. The only sensible position is that consciousness developed gradually, in degrees, along the lineage.
Good points, Coel. As we move along the continuum/spectrum of visible light there comes a point where yellow has changed unmistakably to green, so it seems there are points on the continuum of more and more complex nervous systems where sentience becomes consciousness and consciousness becomes reflective as in humans. (Reflective means that we know that we know. Do dolphins and elephants have reflective consciousness?)
At any rate, allow me to riff off Jerry’s closing implications: I want to save any living thing from unnecessary harm, including plants and protozoans.😇
I really like that last paragraph. I bestow upon your forehead this invisible “Good Job” sticker.
Just wanted to say I really enjoyed this science post, I know sometimes you feel people aren’t reading them as much, so I made sure to click through to your page instead of just reading my email. This was a fascinating read!
Well, thank you! I wish everyone would click through to the page! The only way I know whether anybody’s reading is to look at comments (getting fewer) or visits to the site as a whole.
Me too. I use the email as a cue to get onto WEIT. I read every one of our host’s posts. I try to comment only if I have something to say. Mostly I don’t; but I keep learning both from Jerry and from the many contributors (much more erudite than me) who respond with such enthusiasm.
Thanks everyone.
PCC(E): “I’m writing this on the fly”
While impressed with your tiny calligraphy skills, I think, given the possible implications of this study, you should ask the fly’s permission first.
The results of some of these experiments were surprising. Perhaps not totally convincing, but nothing to be discounted. I would look forward to other experiments by this clever scientist. It does suggest that intelligence and sentience are linked, and that cooperative insects, like bees or ants (and this correlates to cooperative birds and mammals) do seem to be comparatively “smarter” than other insect species and thus more capable of exhibiting sentience. As Coel mentions above, sentience does seem to be a continuum. Just like gender! (sorry, I kid).
If “sentience is a continuum” means that there is no sharp boundary between experiential mental/neural states and nonexperiential ones, then I fail to see how that could be true. For our concept of subjective experience (aka phenomenal consciousness) seems to be inherently binary, in the sense that we don’t have any coherently intelligible concept of an intermediate mental/neural state which neither determinately has nor determinately lacks experiential content. How could a subject “half-experience” anything? (Of course, a subject can be only dimly aware of an experience, but a weakly cognized or only peripherally noticed experience is still an experience rather than a “half-experience” occurring in an ontological twilight zone between being and nonbeing.)
This is verging on word salad for me, sorry. But “half-experience” is sort of what we’re talking about here. Why do you insist on a binary? Maybe it’s a millionth of a moment of an experience by a human brain. Perhaps human sentience is a different sort of sentience than in bees, and now semantics get involved. Perhaps a bee has beentience. A whole different realm of sentience (still in the natural world by definition, of course) but nothing like a human brain can ever experience or understand. It’s perhaps moving way, way too fast and it hears/sees/feels etc. in a realm millions of years from our grasp. Perhaps it’s not a very good question in the first place.
I’m with Daniel Dennett when he says that Illusionism should be “the Obvious Default Theory of Consciousness”. If Illusionism is true, it means consciousness is very different from what we think it is.
At least it would mean that we make judgements about consciousness we don’t understand and don’t know about. Without any hard facts about consciousness, judgements in these matters are more a matter of taste and have little real value (except maybe entertainment value).
It could also mean all the qualitative differences we see between different species are illusions foisted on us by Mother Nature. There are certainly quantitative differences between individuals and species; these are real and can be talked about without placing things in a moral hierarchy.
We already know that we are some kind of biological robot, just like all the other living things (or even viruses). The fact is that no biological robot is morally more important than any other biological robot, but this is not how we humans perceive the world. It is not surprising that we are biased against things that harm us (like viruses or mosquitos).
All the behaviors we humans show can be programmed (in principle), and the question of whether it is really conscious is a pseudo-question if we don’t know what “real consciousness” is. “Real consciousness” may not exist; there may be only “illusionary consciousness” that is real.
Concerning Dennett’s illusionism: There may be aspects of what we call consciousness that are illusory, but it does not make sense to say that the very presence of one’s own conscious awareness is an illusion. If one is consciously aware of one’s own consciousness (as most of us are), then conscious awareness is genuinely present, by definition, and not merely an illusion.
And by “genuinely present,” I mean to say that conscious awareness is a presence over and above the somatic apparatus that attends it.
Indeed, if you’re alive and have no awareness of your own consciousness, you’re either tripping balls or comatose 😀
Really?? If “conscious awareness” in general is an illusion then surely conscious awareness of our own conscious awareness is equally an illusion.
And perceptual illusions can be just as “genuinely present” as perceptions of the physical world.
(I was what Dennett calls a Mysterian for a long time, until I read Dennett and converted to Illusionism.)
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Do insects feel pain? I’d say yes, there is no way an individual will survive long if there is no mechanism to avoid damage. So yes, definitely insects will have something corresponding to pain.
Joy is more difficult, but I think rewards will give positive feedback; we may call that joy, but that does not mean it is the kind of joy mammals experience. It may even be greater than mammalian joy; we can’t know with our present knowledge. But natural selection dictates that pain and pleasure will be ubiquitous.
I’ll still swat that mosquito though, never mind its feelings.
“I’d say yes, there is no way an individual will survive long if there is no mechanism to avoid damage. So yes, definitely insects will have something corresponding to pain”.
A mechanism to avoid damage doesn’t necessarily have to be ‘pain’ as we would normally understand it. It is possible to conceive of a robot with appropriate sensors being programmed to withdraw from areas of damaging high temperature, say, but even though its behaviour might resemble you or me withdrawing our hand after accidentally touching a hot plate, we would surely not say that the robot experiences pain. An invertebrate with a relatively simple nervous system may be more akin to the robot in terms of its perception of a harmful stimulus than it is to us. That of course doesn’t mean they don’t feel pain as we do, simply that it’s not inevitable.
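The robot analogy above can be made concrete with a minimal sketch (purely illustrative; the threshold value and function name are my own assumptions, not anyone’s actual robot): a damage-avoidance reflex can be nothing more than a sensor reading compared against a threshold, with no internal state that could plausibly count as “feeling” anything.

```python
# Illustrative sketch of a stimulus-response "reflex" with no experience:
# the robot withdraws from damaging heat purely by threshold comparison.

DAMAGE_THRESHOLD_C = 60.0  # assumed temperature above which damage occurs


def withdraw_if_hot(sensor_reading_c: float) -> str:
    """Return the action taken for a given temperature reading."""
    if sensor_reading_c > DAMAGE_THRESHOLD_C:
        return "withdraw"  # reflexive retreat: no feeling required
    return "continue"


# Behavior that superficially resembles pain avoidance:
actions = [withdraw_if_hot(t) for t in (25.0, 45.0, 80.0)]
print(actions)  # ['continue', 'continue', 'withdraw']
```

Nothing in this loop is a candidate for sentience, yet an outside observer watching only the behavior might be tempted to describe the withdrawal as “avoiding pain” — which is exactly the inferential gap the comment is pointing at.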
Eyes have independently evolved several dozen times. Yet the gene Pax6 (a master control gene for eye development) is highly conserved between mice and flies. It must be very old.
Moreover, that kind of stimulus (avoiding damage) must be very strong to be effective, something like acute pain.
I never said it is inevitable, just that it is highly probable.
I would agree with the virtual inevitability of evolving mechanisms to detect potentially harmful stimuli and to take evasive action of some kind. I am not convinced that this necessarily has to involve ‘pain’ as we understand the term. Plants respond strongly to relevant stimuli e.g. bending towards the light or releasing chemicals when attacked by herbivores but do they ‘feel’ anything?
A mechanism to avoid something and feeling something are different things and that’s where your reasoning here falls short. There are drugs used in medical procedures such that a person can respond to discomfort that exceeds the amount of pain killers provided, yet the person has no experience of it. Similarly, when a person becomes concussed, they can continue to respond to a wide variety of stimuli yet they have no experience of it. We could even artificially produce a robotic system that avoids damage, and we can be reasonably sure it doesn’t experience it as well.
Bees seem to be capable of learning and exhibit complex behaviour. I like watching them in the summer. My bird baths provide popular drinking stations when we’ve had dry weather. I assume they find “natural” water closer to home when there has been recent rain. I only see them when it gets dry. They seem very deliberate in their actions. Landing on my carefully placed stones then edging down to the water surface for a drink.
I’m giving them the benefit of the doubt.
[I think biologist Colin Wright used to study wasps. I asked him on Twitter: “Do you think they are conscious?” He answered “probably”.]
It seems to me that the terminology needs to be better defined. What do we mean by consciousness and sentience? There’s a difference between being aware of the world around us and being self-aware. All animals have the first, but how many the second?
And without self-awareness can one suffer from pain?
I would find it hard to believe that insects are self-aware to any degree, or even that fish are.
The term “self-awareness” (“self-consciousness”) is ambiguous between the internal perception of physical/physiological states of or events in one’s body (called interoception or proprioception) and the internal perception of psychological/phenomenological states of or events in one’s (conscious) mind (called introspection or reflection).
Insects arguably have self-awareness in the former sense, i.e. bodily/corporeal/somatic self-awareness; but they arguably lack self-awareness in the latter sense, i.e. introspective/reflective self-awareness. And if higher-order theories of phenomenal consciousness are right in asserting that it depends on introspective/reflective consciousness of it, then it becomes extremely improbable that insects are phenomenally conscious.
Yes, I agree, I doubt that insects have introspective self-awareness. The question would then be in which species does that begin to appear, and is it a spectrum.
All you’re doing is waving your hands over what “aware” actually means, and we get the same murky reasoning and arguments made as we do with consciousness. I would also posit most animals are self-aware, but humans are ill-equipped to probe that question. We come up with things like the mirror test, and then proclaim dogs lack self awareness because they don’t recognize their image as their self, yet I’m sure some doggy scientists are running experiments on humans in some alternate reality, and concluding that since they cannot smell their own urine and identify it as their own, they must not be self aware.
I prefer to think of this subject in terms of experience. We can exhibit complex behaviors and we can have an experience of it. In some cases such as anesthetic drugs and concussions, humans can exhibit complex behaviors and not experience any of it. That, to me, would be the dividing line in what constitutes ethical behavior, in so far as harm and infliction of harm from the perspective of suffering of the object (not counting other ethical considerations such as ecological ones and so on).
I would guess that most mammals and a wide variety of other animals do experience, and I would guess that plants do not, even though they can do things like respond to stimuli and communicate danger to one another through root / chemical signaling. The question here is then “do insects experience?” or maybe lobsters, since they’re tasty, and where the line is drawn between complex robotic-like behavior and the experience of it.
It is clear that…
“The solution to the problem of others’ consciousness relies on an inference to the best explanation. This is at its strongest when dealing with other adult humans. Such people belong to the same species as me, with the same sense organs and with similar brains that are organized (to the best of my belief) in the same way that mine is. Moreover, they behave and respond to the world in ways similar to myself, crying out when injured, navigating around obstacles in the light but not in the dark; and so on. Similar sorts of evidence can be available for other animals, of course, but in a graded manner. The brains, sense organs, and behavior of chimpanzees are more similar to mine than are the brains, sense organs, and behavior of mice; which are in turn more similar to mine than are the brains, sense organs, and behavior of chickens; and so on. A natural first thought, then, is that an inference to the best explanation when attributing phenomenal consciousness to other creatures besides oneself can be made with lesser and lesser confidence as one moves from other humans, through other great apes, to monkeys, to mice, to birds, to reptiles, and then to invertebrates like bees and spiders.” (p. 25)
“Only a subset of the evidence described above is available in the case of nonhuman animals (as well as human infants), of course. They can’t talk to us, and so cannot provide the same sort of direct evidence that one can get of a first-person perspective on their experiences that one can obtain for other adult humans. But they can be more or less similar to us in biological descent, in brain structure and organization, and in nonverbal behavior. In advance of further inquiry, these similarities provide some reason to think that the creatures in question are phenomenally conscious, with a degree of confidence graded by the degree of similarity between us.” (p. 26)
(Carruthers, Peter. /Human and Animal Minds: The Consciousness Questions Laid to Rest./ New York: Oxford University Press, 2019.)
I agree with Chittka: the bees are probably exhibiting sentience. Coyne confuses things by switching back and forth between sentience, which only requires feeling, and consciousness, which is ill-defined but which he’s sure protozoa don’t have. I think human consciousness is dependent on language and images, which I agree protozoa don’t have enough data processing to possess. But sentience…yeah, I think protozoa have a little.
A recommendable book:
* Tye, Michael. /Tense Bees and Shell-Shocked Crabs: Are Animals Conscious?/ New York: Oxford University Press, 2017: https://global.oup.com/academic/product/tense-bees-and-shell-shocked-crabs-9780190278014?cc=de&lang=en&
I am conflicted regarding whether pain/pleasure “qualia” (in, say, mammals) are real, and if so, whether they are merely epiphenomenal. The only tentative conclusion I have come to is that they can’t both be “yes”. If, say, pain/pleasure qualia were real but epiphenomenal, then it is easy to imagine that they could have been reversed: painful qualia associated with things we seek (food, sex) and pleasurable qualia associated with things we avoid (e.g. freezing temperatures). Yet such a creature would be neurologically and behaviorally indistinguishable from us. I guess this is somewhat related to the zombie problem in philosophy, but a creature with inverted qualia is even weirder for me to contemplate than a zombie lacking qualia.
Is it an aspect of our human mindset that we are seeking to prove sentience in other creatures rather than seeking to prove that they are not?
I would be interested to know about similar experiments with spiders. (They aren’t technically insects, but close enough.) I hate spiders, partly because they SEEM so sentient. Every time I am confronted with the disgusting duty to kill one, I feel like the little demon is looking back at me thinking “I know what you’re up to. Don’t try it. I know where you sleep.” They are crafty little buggers! Or seem so.
I don’t think you should kill them; why not capture them and put them outside? They won’t hurt you, and they eat other insects.
Even if you could prove that insects are not just machines that are responding to stimuli, but are, in fact, able to consciously be aware that they are experiencing pain, that would not enable us to say whether it is right or wrong to inflict pain upon them. This would be to make a claim about what is, and then to draw an ought from that. There is no moral law in the Darwinian materialistic world; just moral laws made up by humans.
I didn’t say whether it was right or wrong: to say that from the fact is the naturalistic fallacy, which I’ve discussed many times before.
Unfortunately, until we figure out where “experience” happens in the brain, we cannot answer this question. No amount of observed complex behavior can answer it, as there is a simple counterpoint in humans we can use: certain drugs produce a state of conscious responses without the person experiencing it. These drugs are used specifically for the purpose of eliciting a response to discomfort, such as when giving a colonoscopy, while also preventing the person from experiencing the discomfort.
Anyone who has ever woken from mild anesthesia, such as from getting wisdom teeth removed, may have found themselves in conversation with a nurse or mobile and in another room, where they suddenly snapped back into being able to experience the events. Similarly, people who become concussed may have experienced this, and can continue to exhibit complex behaviors while not experiencing any of it only to suddenly snap back into an experiential nature.
From this, we can deduce that the experience of the event is different from the processing and execution of the event (and there are other closely related scientific findings, such as that the moment an action/idea is processed and the moment the conscious experience/thought happens are different, with the conscious experience occurring after the processing). The counterpoint is obviously related to learning, but this is another hard question that nobody has an answer to.
But it appears there are specific brain mechanisms separate from the mechanisms required to exhibit complex behaviors (even as complex as human speech) that bring about our experiential nature. From my understanding, it’s most likely related to our short term memory buffer, and under this sort of understanding, it has nothing to do with whether any given animal responds to stimuli or remembers locations or events or times or anything else, but rather whether their short term memory buffer acts as ours does or doesn’t, but this isn’t something we can remotely answer. This doesn’t seem to stop people from giving sensational claims such as insects being able to experience things. Unfortunately, too many people have an aversion to the most common statement we can make about most things: “we don’t really know.” People would rather cling to specious data that does not actually corroborate their propositions.
Ethically speaking, issues of harm arise from the infliction of a negative experience. We can reasonably extend our personal experience to other humans, and I would consider it reasonable to extend it to at least other mammals as well; but when it comes to insects or lobsters or other simpler organisms with markedly different brain/neuronal construction than ours, we can’t reasonably extend it, and then it becomes a matter of heuristics and individual preference as to how we proceed. Even plants exhibit reactions to harmful stimuli and go further and communicate them through root signaling, and yet I doubt anyone would consider them capable of experience, nor do I think anyone would use this to conclude that inducing such responses in plants is unethical.