Andrew Brown: 9/11 was good for religion (?)

September 14, 2011 • 5:01 am

Andrew Brown hasn’t been on the radar screen lately, which is all well and good, but he’s now popped up with a Guardian piece, “Why 9/11 was good for religion,” which is dreadful even by his abysmal standards.

Why was 9/11 good for religion? It’s almost completely unclear from his piece.  His subtitle suggests that the good part about 9/11 was that it revived debate about religion, “challeng[ing] the notion that theism is doomed,” and he’s big on the interfaith dialogue inspired by 9/11.  Brown also says these things in the body of his piece:

  • At the same time, the heretical understanding of jihad as the sixth pillar of Islam, which originated in Egyptian circles in the 1980s, spread across south-east Asia. Children in the disputed areas of Pakistan are taught by the Taliban that jihad can compensate for other flaws in a Muslim’s life.
  • The mass killer [in Norway] was clearly influenced by a post-9/11 theology that sees Christian Europe under attack from Muslim immigration. Variants of this idea animate political parties in many European countries: the Netherlands, Belgium, France, Denmark, Sweden, Norway and Italy. For them, Europe’s Christian identity has become a sacred value.
  • The same polarised reactions can be seen in secular ideologies. The new atheist movement was started by a group of writers who perceived Islam as an existential threat. “We are at war with Islam,” argued one of its leaders, Sam Harris, who also called for the waterboarding of al-Qaida members. Meanwhile The God Delusion author, Richard Dawkins, refers to Islam as the most evil religion in the world. The publication of anti-Muhammad cartoons in the Danish newspaper Jyllands-Posten in 2005, and the furore surrounding it, demonstrated the deliberate use of blasphemy as a weapon in cultural wars.

That’s all bad stuff.  Counterbalancing it is this questionable assertion, coupled with a garbled sentence by a Notre Dame professor:

At the same time, secular governments across Europe have made increasing efforts to understand and accommodate religious sensibilities. As welfare states come to seem increasingly expensive, many have turned more and more towards religion to deliver social services. Whatever happens, it appears the idea that religion is doomed and disappearing was buried in the rubble of the twin towers.

“9/11 was good for business” says Scott Appleby, professor of history at Notre Dame University. “For many people, we told people that religion is really important and that the secularisation theory, which had been very fashionable, was wrong.”

And this is supposed to be good?

Many serious students of politics question what role theology could play in the modern world. After all, neither “religion” nor “faith” appears in the index of Henry Kissinger’s acclaimed memoirs. Yet the wars of the past 10 years cannot be understood without their theological component. People are not just fighting for freedom or for oil. Some are fighting to bring about the kingdom of God on Earth, or caliphate of perfect justice, or the fulfilment of Biblical prophecy, and these aims cannot be satisfied by money or power.

Fundamentalism, says Appleby, is not a primitive phenomenon, it is a modern one, and every bit as much a reaction to modernity as liberalism or secular optimism. These ideologies provide ways of dealing with the abundance of choices that modern societies offer and traditional societies cannot imagine.

Maybe Brown is just bad at expressing himself (no surprise there), but his discussion of 9/11, which surely helped bring about New Atheism, doesn’t show at all that the attacks on the World Trade Center were good for faith. His confusion is evident in his last paragraph:

Religion in traditional societies is part of its fabric. In the modern world, it is a conscious choice. Some people discard it; others make it more deliberate and sharp-edged. The same has been true of the secular and enlightenment values that, before 9/11, seemed self-evident to much of the western world.

What the bloody hell does that mean?  Doesn’t Brown have an editor?

HPV vaccine: Republicans prove themselves morons once again

September 14, 2011 • 4:14 am

This is a prime example of how religion, and its willful ignorance of facts in favor of faith, can be deadly. In Monday’s debate between Republican presidential candidates, Michele Bachmann laid into Texas governor Rick Perry’s order that female students in his state be vaccinated against human papillomavirus (HPV), which causes cervical cancer. The New York Times blog The Caucus reports:

In Monday night’s debate, Mrs. Bachmann seized on an executive order that Mr. Perry issued requiring sixth-grade girls in Texas to be vaccinated against the human papillomavirus, or HPV, criticizing him for an overreach of state power in a decision properly left to parents.

On Tuesday she expanded her criticism, suggesting that Mr. Perry had potentially put young girls at risk by forcing “an injection of what could potentially be a very dangerous drug.”

Mrs. Bachmann said on NBC’s “Today” show on Tuesday that after Monday night’s debate in Tampa, Fla., a tearful mother approached her and said her daughter had suffered “mental retardation” after being vaccinated against HPV. “It can have very dangerous side effects,” Mrs. Bachmann said.

Bachmann is wrong.  The HPV vaccine is one of the safest vaccines around, and it saves lives—many of them.  Its safety is discussed by the U.S. Centers for Disease Control and Prevention (CDC):

HPV vaccines were studied in thousands of people in many countries around the world, including the United States. These studies found that both HPV vaccines were safe and cause no serious side effects. The most common adverse event was injection site pain, redness and swelling. This reaction was common but mild. More than 35 million doses of HPV vaccine have been distributed in the United States as of June, 2011.  Almost all doses distributed have been Gardasil.

Wikipedia adds a bit more:

As of 1 September 2009, there have been 44 U.S. reports of death among females who have received the vaccine. None of the 27 confirmed deaths of women and girls who had taken the vaccine were linked to the vaccine.

Its efficacy (from the CDC):

The main efficacy studies of the quadrivalent vaccine were conducted in young women and men (16 through 26 years of age).  Among persons not previously exposed to a targeted HPV type, the trials demonstrated nearly 100% vaccine efficacy in preventing cervical precancers, vulvar and vaginal precancers, and genital warts in women caused by the four vaccine types, as well as 90% vaccine efficacy in preventing genital warts and 75% vaccine efficacy in preventing anal precancers in men.

Although vaccination (recommended for girls aged 11 and 12, and given in three doses) is mandatory only in the District of Columbia and Virginia, given its safety and efficacy, Gardasil or a similar vaccine should be required everywhere.  And what about the “mandatory” part?  Bachmann said this:

To have innocent little 12-year-old girls be forced to have a government injection through an executive order is just flat out wrong. That should never be done. That’s a violation of a liberty interest.

“Well, I’m offended for all the little girls and the parents that didn’t have a choice,” replied Bachmann. “That’s what I’m offended for.”

But, of course, we all know that vaccinations are already required for school entry in every state.  The CDC notes:

Each state has immunization requirements, sometimes called “school laws,” that must be met before a child may enter school. These may include vaccination against diphtheria, pertussis (whooping cough), tetanus (lockjaw), Haemophilus influenzae type b, measles, mumps, rubella, polio, and hepatitis B. Some states have added varicella (chicken pox) vaccination to the list of required vaccines. Smallpox vaccination was once required, but the disease has been so successfully eradicated that this vaccination is no longer needed.

These days, sexual activity among young people is something one can reasonably expect.  One can make a good case that the HPV vaccine should also be required—at the very least, offered—in a similar fashion.

So what’s the problem?  Religion, I suspect, and its attendant disapproval of sex among unmarried young people.  It’s been said before, but it’s worth repeating: people like Bachmann spread their lies for one reason alone.  It’s not that they oppose government injections: those already exist, and Bachmann hasn’t said a word about diphtheria.  No, it’s the fact that HPV is transmitted through sexual contact, and people like Bachmann would rather have women die of cancer than have them practice safe sex.

_______

Update: See postings on the vaccine issue by Orac and erv

Free will: the neuroscientists versus the philosophers

September 13, 2011 • 6:57 am

In a news item in the September 1 issue of Nature, “Taking aim at free will” (free online), Kerri Smith recounts the latest findings of neuroscience about how and when we make “decisions,” and how that bears on philosophical issues of free will.  The two-page piece is worth reading for its exposition of the latest research (some not yet published), and how philosophers are reacting to it.

The research, as we’ve discussed before, largely involves experiments that force participants to make decisions while their brains are scanned; the scans can a) “predict” the decision (albeit not with perfect accuracy) and b) reveal when the brain actually takes action.  Those studies, pioneered by Benjamin Libet and continued in more sophisticated form by John-Dylan Haynes, involve scanning the brains of subjects who are forced to make choices, and comparing when the brain registers a choice with when the subject becomes conscious of having made that choice.  All the studies find that brain scans can predict, sometimes with high accuracy, which decision will be made, and that the relevant brain activity occurs up to several seconds before the subject reports having made a decision.

Here’s an example of Haynes’s recent findings:

Haynes. . . has replicated and refined his results in two studies. One uses more accurate scanning techniques to confirm the roles of the brain regions implicated in his previous work. In the other, which is yet to be published, Haynes and his team asked subjects to add or subtract two numbers from a series being presented on a screen. Deciding whether to add or subtract reflects a more complex intention than that of whether to push a button, and Haynes argues that it is a more realistic model for everyday decisions. Even in this more abstract task, the researchers detected activity up to four seconds before the subjects were conscious of deciding, Haynes says.

Another study comes from Itzhak Fried, a scientist and neurosurgeon:

He studied individuals with electrodes implanted in their brains as part of a surgical procedure to treat epilepsy. Recording from single neurons in this way gives scientists a much more precise picture of brain activity than fMRI or EEG. Fried’s experiments showed that there was activity in individual neurons of particular brain areas about a second and a half before the subject made a conscious decision to press a button. With about 700 milliseconds to go, the researchers could predict the timing of that decision with more than 80% accuracy. “At some point, things that are predetermined are admitted into consciousness,” says Fried. The conscious will might be added on to a decision at a later stage, he suggests.

More than 80% accuracy!

The experiments show, then, not only that decisions are made before we’re conscious of having made them, but that brain imaging can predict what decision will be made with substantial accuracy.  This has obvious implications for the notion of “free will,” at least as most people conceive of that concept.  We like to think that our conscious selves make decisions, but in fact the choices appear to have been made by our brains before we’re aware of them.  The implication, of course, is that deterministic forces beyond our conscious control are involved in our “decisions”, i.e. that free will isn’t really “free”. Physical and biological determinism rules, and we can’t override those forces simply by some ghost called “will.”  We really don’t make choices—they are made long before we’re conscious of having chosen strawberry versus pistachio ice cream at the store.

We’ve discussed this issue before, and have seen how some philosophers like Daniel Dennett, and many of the commenters here, aren’t bothered by this: they simply redefine “free will” as something more sophisticated than what I see as the common idea (i.e., were we to relive a moment of decision, we could have decided the other way).   Nevertheless, the neuroscience clearly perturbs the philosophers.  Here’s how Walter Glannon, a philosopher at the University of Calgary, reacts:

And if unconscious brain activity could be found to predict decisions perfectly, the work really could rattle the notion of free will. “It’s possible that what are now correlations [he’s referring to the correlations between specific areas of brain activity and the decision that’s made after that activity occurs] could at some point become causal connections between brain mechanisms and behaviours,” says Glannon. “If that were the case, then it would threaten free will, on any definition by any philosopher.” . . . If neuroscientists find unconscious neural activity that drives decision-making, the troublesome concept of mind as separate from body disappears, as does free will. This ‘dualist’ conception of free will is an easy target for neuroscientists to knock down, says Glannon. “Neatly dividing mind and brain makes it easier for neuroscientists to drive a wedge between them,” he adds.

In other words, Glannon recognizes the problem that pre-conscious “decisions” pose for free will.  But although he says the threat is to “any definition by any philosopher,” not all philosophers agree.  The “compatibilist” school, for instance, manages to reconcile complete physical determinism of decisions with some notion of “free will.”

This shows, as Kerri Smith points out, that philosophers are revising the definition of “free will” in light of these neuroscientific findings.  This reminds me of how theologians redefine the meaning of Adam and Eve in light of genetic findings that we didn’t all descend from two ancestors, although I have a lot more respect for philosophers than for theologians.

There are conceptual issues — and then there is semantics. “What would really help is if scientists and philosophers could come to an agreement on what free will means,” says Glannon. Even within philosophy, definitions of free will don’t always match up. Some philosophers define it as the ability to make rational decisions in the absence of coercion. Some definitions place it in cosmic context: at the moment of decision, given everything that’s happened in the past, it is possible to reach a different decision. Others stick to the idea that a non-physical ‘soul’ is directing decisions.

This sounds to me very much like post hoc rationalization.  What does it mean to “make rational decisions in the absence of coercion” if that decision has already been made?  All it means is that our brains can cough up “rational” outputs in the face of diverse inputs, which of course is what they were evolved to do. And the second definition (which, by the way, is also my own) doesn’t solve the problem: if you’re a determinist (and what else is there besides molecules, genes, environments and physical forces?), there’s no possibility of deciding “otherwise” if all else is equal.  Even the compatibilist commenters on this site don’t believe that, at any moment, with all conditions identical, we could make two different decisions.  The third re-definition, of course, is bogus, since there’s simply no evidence for a non-physical “soul” that can guide our actions.

In the end, though, I think philosophers are bothered by the science.  Al Mele, a philosopher who’s participating in a Templeton-funded study that involves scientists, philosophers and perhaps theologians (I hope not!), weighs in:

Imagine a situation (philosophers like to do this) in which researchers could always predict what someone would decide from their brain activity, before the subject became aware of their decision. “If that turned out to be true, that would be a threat to free will,” says Mele.

Well, this is only my feeling, but I think this is precisely where neuroscience is going.  We can already predict some decisions with 80% accuracy.  This will only improve as the techniques become more sophisticated.

Beyond redefinition, there’s another way critics attack experiments like Haynes’s and Fried’s: they go after the methodology:

Philosophers who know about the science, she [Adina Roskies, a philosopher and neuroscientist from Dartmouth] adds, don’t think this sort of study is good evidence for the absence of free will, because the experiments are caricatures of decision-making. Even the seemingly simple decision of whether to have tea or coffee is more complex than deciding whether to push a button with one hand or the other.

I find that criticism unconvincing.  How, exactly, is deciding between coffee and tea more “complex” than deciding which button to press?  And suppose you did the same experiment, but instead of using a button, just opened a window in front of the subject, behind which there was a cup of coffee and a cup of tea.  If we could associate brain activity with the coffee-versus-tea preference, I’d bet you’d still get Fried-ian results: the brain would show a decision well before the subject was conscious of having made one.

Why is all this important, and not just a debate about philosophy? The answer is obvious: whether our actions are predetermined has real consequences for how and why we hold people responsible for their actions.  As I’ve said several times before, the law already takes “responsibility” into account by treating criminals differently depending on whether their actions may have been caused by extenuating circumstances like mental illness.  Nobody, I think, would refuse to consider the possibility that an act of aggression may have been caused by a tumor in the criminal’s brain.

The more I read about philosophers’ attempts to redefine and save the notion of “free will” in the face of the neurological facts, the more I think that they’re muddying the waters.  I believe that the vast majority of nonphilosophers and laypeople hold a consistent definition of free will: that we really do make decisions that are independent of our physical make-up at the moment of deciding.  If this isn’t the case, we need to know it.  Yes, it may be depressing—Haynes admits that he finds it hard to “maintain an image of a world without free will”—but we can still act as if we had free will.  We don’t have much choice in that matter, probably because we’re evolved to think of ourselves as choosing agents.  But rather than define free will so we can save the notion in some sense (this is like substituting the word “spirituality” for “religion”), why don’t we just rename the concept we’re trying to save?  Otherwise we’re just giving false ideas to people, as well as providing succor for religion, where the idea of real free will—the Holy Ghost in the machine—is alive and crucially important.

h/t: John Brockman

A virus gene creates zombie caterpillars

September 13, 2011 • 5:41 am

There are lots of studies showing how parasites of insects alter their hosts’ behavior in ways that facilitate the parasite’s transmission. Some parasites, for example, affect the behavior of ants, causing them to climb trees or blades of grass, making them easy prey for the next host (ruminants or birds), which pass the parasite in their droppings, where it is eventually picked up again by ants.  In some cases a parasite even turns the ant’s abdomen bright red, like a berry, making the parasite-filled ant a tempting target for hungry birds.

The latest issue of Science describes an even more amazing and nefarious parasite—more amazing because it is a virus, and viruses have very few genes with which to manipulate their hosts. In a paper by Kelli Hoover et al., the authors found that a baculovirus called LdMNPV (baculoviruses are large, rod-shaped DNA viruses that are insect pathogens) manipulates its host, the gypsy moth (Lymantria dispar).  Infected caterpillars of the moth climb to the tops of host trees to die, where they liquefy and “release millions of infective virus particles, with dispersal facilitated by rainfall.”  Here’s a photo from the paper of a liquefying, dead caterpillar on a tree:

For reasons that aren’t explained in the short paper, the authors hypothesized that one of the virus’s genes, egt, which encodes ecdysteroid uridine 5′-diphosphate (UDP) glucosyltransferase, caused the caterpillar’s climbing behavior by inactivating its molting hormone, 20-hydroxyecdysone (20E).

They tested that idea by injecting caterpillars with genetically modified viruses in which the egt gene was disrupted, as well as with a virus in which the disruptive element, after being added, was removed, restoring the normal egt gene. Finally, as controls, they injected caterpillars with no virus at all.

In all cases, disruption of the egt gene abolished the climbing behavior (the disrupted-gene caterpillars still died and liquefied, but they did so at the bottom of their containers).  When the disrupted gene was restored to normal, the caterpillars climbed up their containers before death.

The exact mechanism isn’t known, but the authors suggest that inactivating the 20E hormone enables infected caterpillars to remain viable longer, so that they can keep climbing and feeding while infected.  What is pretty clear, though, is that this is a genetic adaptation on the part of the virus that creates its “extended phenotype”—the behavior of the caterpillar that facilitates the virus’s spread.

It always amazes me that what we consider “simple” organisms nevertheless have the genetic repertoire to affect the behavior of their hosts.  egt truly is a “selfish gene,” turning caterpillars into zombies to facilitate its own transmission. (Malarial parasites in humans are sometimes thought to do the same thing, making us sick enough to lie prostrate, a tempting—and non-swatting—target for the mosquito whose bites carry the protozoan.)

Without a doubt, there are many as-yet-unknown cases of behavioral modification by parasites that are equally intriguing, and equally nefarious.  Faye Flam at the Philadelphia Inquirer, who has also described this paper, recounts some other chilling stories of parasite manipulation of behavior.

____

Hoover, K., M. Grove, M. Gardner, D. P. Hughes, J. McNeil, and J. Slavicek. 2011.  A gene for an extended phenotype.  Science 333:1401.

The New York Times reviews the new Hitchens book

September 12, 2011 • 12:11 pm

I hadn’t realized that Christopher Hitchens had a new book, even though it was reviewed in the same issue of the Sunday NYT that contained my own review of D. S. Wilson’s book.  At any rate, Hitchens’ latest is a collection of essays called Arguably, and you can get it for $18 from Amazon, where it’s already at #26.  I’ll be reading it for sure, as I’ve read all four of his other essay collections.

Bill Keller, a writer for the paper, gives it a thumbs up, though he faults some of the book for smugness and haughtiness, particularly in Hitchens’ infamous (and unfortunate) Vanity Fair piece, “Why women aren’t funny,” which explains a supposed gender disparity in humor as a result of sexual selection.

Let’s begin with the obvious. He is unfathomably prolific. “Arguably” is a great ingot of a book, more than 780 pages containing 107 essays. Some of them entailed extensive travel in inconvenient places like Afghanistan and Uganda and Iran; those that are more in the way of armchair punditry come from an armchair within reach of a very well-used library. They appeared in various publications during a period in which he also published his best-selling exegesis against religion, “God Is Not Great”; a short and well-reviewed biography of Thomas Jefferson; a memoir, “Hitch-22”; as well as various debates, reading guides, letters and rebuttals — all done while consuming daily quantities of alcoholic drink that would cripple most people. As Ian Parker noted in his definitive 2006 New Yorker profile of Hitchens, the man writes as fast as some people read.

The second notable thing about Hitchens is his erudition. He doesn’t always wear it lightly — more than once he remarks, upon pulling out a classic for reconsideration, that he first read the work in question when he was 12 — but it is not just a parlor trick. In the book reviews that make up much of this collection, the most ambitious of them written for The Atlantic, he takes the assigned volume — a new literary biography of Stephen Spender or Graham Greene or Somerset Maugham, or a new collection of letters by Philip Larkin or Jessica Mitford — and uses it as pretext to review, with opinionated insights, the entire life and work of the writer in question, often supplementing his prodigious memory by rereading several books. He is a master of the essay that not only spares you the trouble of reading the book under review, but leaves you feeling you have just completed an invigorating graduate seminar.

This is not one of the better reviews I’ve read in the NYT; it leaps from topic to topic and, like Winston Churchill’s famous pudding, “lacks a theme.” But of course it’s always difficult to review books of essays.  And, although Keller admits frankly that Hitchens is dying (the first time I’ve seen this statement nakedly in print, and the accompanying photo, below, supports the claim), at  least he pays homage to Hitch’s godlessness:

If there is a God, and he lacks a sense of irony, he will send Hitchens to the hottest precinct of hell. If God does have a sense of irony, Hitchens will spend eternity in a town that serves no liquor and has no library. Either way, heaven will be a less interesting place.

Photo by Brooks Kraft/Corbis

A symposium in which I debate a theologian

September 12, 2011 • 10:12 am

On October 10-12, the Gaines Center for the Humanities at the University of Kentucky is presenting a symposium, “On religion in the 21st century,” which features two speakers per night “debating” on a faith-related topic.  (“Debate” is not really accurate; we each give a half-hour presentation followed by audience questions.)  This year I’ll be sharing the podium with John Haught, a theologian at Georgetown University whom I’ve discussed here several times (e.g. here, here, here, and here).  The topic is, of course, “Science and religion: are they compatible?”

Note that Bart Ehrman (author of some nice books debunking Jesus and the Bible) and David Hunter are debating “Are faith and history compatible?” on October 10.

Here’s the very nice poster for this symposium:


The topics of some previous symposia are here; the 2009 edition was on evolution.

I’ll also be signing books after the talk.  If you’re in the area, do drop by.  You might want to combine this with a visit to the Creation Museum, which isn’t far away . . . .

Wonderful prose on an ugly topic

September 12, 2011 • 8:22 am

The coda of The Mismeasure of Man, Steve Gould’s book on the history of racism and “scientifically based” eugenics, is one of my favorite pieces of science writing.  It’s a scant two pages, but is beautifully written and so emotionally powerful that it almost moves me to tears.  In it, Gould recounts Oliver Wendell Holmes’s 1927 U.S. Supreme Court decision upholding the forced sterilization of Carrie Buck, a young woman who, by all accounts, was of normal intelligence.  Embarrassed that Carrie had become pregnant after being raped, her foster parents committed her to the Virginia Colony for Epileptics and Feeble-Minded, where she was forcibly sterilized under a Virginia eugenics law.

Upholding that law, Holmes made the infamous pro-eugenics statement, “Three generations of imbeciles are enough.” (Buck’s mother and daughter were also allegedly “feeble-minded.”)

Buck later married and was desolate that she and her husband couldn’t have children.  She died in 1983, but was still alive when Gould wrote his book in 1981.  Sadly, her sister Doris suffered the same fate: forcibly sterilized, married, and desperate but unable to have children.

The last paragraph of Gould’s book is, to me, an example of his writing at its finest:

One might invoke an unfeeling calculus and say that Doris Buck’s disappointment ranks as nothing compared with millions dead in wars to support the designs of madmen or the conceits of rulers.  But can one measure the pain of a single dream unfulfilled, the hope of a defenseless woman snatched by public power in the name of an ideology advanced to purify a race? May Doris Buck’s simple and eloquent testimony stand for millions of deaths and disappointments and help us to remember that the Sabbath was made for man, not man for the Sabbath: “I broke down and cried. My husband and me wanted children desperately.  We were crazy about them. I never knew what they’d done to me.”

Carrie Buck, who, along with 6,682 others (4,042 of them women), was forcibly sterilized under Virginia’s Eugenical Sterilization Act of 1924.