Analytical thinking erodes religious belief

April 30, 2012 • 7:43 am

UPDATE (2/27/2017): In view of later work, the results of this paper should be considered inconclusive. A new paper in PLoS ONE by Sanchez et al. failed to replicate one of the four experiments of Gervais and Norenzayan (the statue experiment), getting results not even close to significance. Further, an analysis of psychology papers published in Science showed that the Gervais and Norenzayan paper was one of 15 examined (out of 18) with a success rate much higher than expected (this could be due to a number of factors). The Sanchez et al. paper, however, does report two other studies that replicated some of the results of Gervais and Norenzayan. In view of these conflicting results, it’s best to suspend judgement on the paper described below.

Atheists, but not religious people, will love this paper from the latest Science, “Analytical thinking promotes religious disbelief,” (reference below) by Will Gervais and Ara Norenzayan, two psychologists from the University of British Columbia (see also this popular summary in the Los Angeles Times). It shows that there’s an antagonism between analytical thinking and religious belief, such that after engaging in even a short and simple task requiring analytical thinking, one’s faith in God weakens.

As the paper is written clearly, I’ll quote a bit from it rather than paraphrase it. Here’s the authors’ rationale:

If religious belief emerges through a converging set of intuitive processes, and analytic processing can inhibit or override intuitive processing, then analytic thinking may undermine intuitive support for religious belief. Thus, a dual-process account predicts that analytic thinking may be one source of religious disbelief. Recent evidence is consistent with this hypothesis finding that individual differences in reliance on intuitive thinking predict greater belief in God, even after controlling for relevant socio-demographic variables. However, evidence for causality remains rare. Here we report five studies that present empirical tests of this hypothesis.

We adopted three complementary strategies to test for robustness and generality. First, study 1 tested whether individual differences in the tendency to engage analytic thinking are associated with reduced religious belief. Second, studies 2 to 5 established causation by testing whether various experimental manipulations of analytic processing, induced subtly and implicitly, encourage religious disbelief. . . Third, across studies, we assessed religious belief using diverse measures that focused primarily on belief in and commitment to religiously endorsed supernatural agents. Samples consisted of participants from diverse cultural and religious backgrounds.

Here are the studies.

Study 1.  The authors gave 179 undergraduates an “analytical” thinking test consisting of three questions, each of which had an incorrect “intuitive” answer and a correct “analytical” answer. Here’s one of the questions:

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? ____cents.

The intuitive answer is 10 cents, but of course the analytical answer, obtained by solving [1.00 + x] + x = 1.10, is 5 cents.
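The arithmetic is easy to verify in a couple of lines (a trivial check, not from the paper):

```python
# If the ball costs x, the bat costs x + 1.00, and together they cost 1.10:
#   (1.00 + x) + x = 1.10  =>  2x = 0.10  =>  x = 0.05
total = 1.10
bat_premium = 1.00
ball = (total - bat_premium) / 2
bat = ball + bat_premium
print(f"ball = {ball:.2f}, bat = {bat:.2f}")  # ball = 0.05, bat = 1.05
```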

The participants then completed an assessment of their religiosity, including three different tests. The result?

In study 1, as hypothesized, analytic thinking was significantly negatively associated with all three measures of religious belief, r[Religiosity] = –0.22, P = 0.003; r[Intuitive] = –0.15, P = 0.04; and r[Agents] = –0.18, P = 0.02. This result demonstrated that, at the level of individual differences, the tendency to analytically override intuitions in reasoning was associated with religious disbelief, supporting previous findings.

Study 2.  A group of Canadian undergraduates were exposed to one of four randomly chosen images, two depicting contemplative activity and two showing control artwork matched for posture and surface texture.  Here are two pictures, one of each type:

After viewing their random image, each student rated his/her belief in God on a scale from 1-100.

The results?

In the present study, as hypothesized, viewing The Thinker significantly promoted religious disbelief [t(55) = 2.24, P = 0.03, Cohen’s d = 0.60; Table 2]. In sum, a novel visual prime that triggers analytic thinking also encouraged disbelief in God.

Well, that’s a marginally significant probability value, and they talk only about the Rodin image, though presumably there was at least one other contemplative image.

Study 3.  93 Canadian undergraduates scored their degree of religious belief after completing “a modified verbal fluency task priming procedure previously used to activate analytic thinking without explicit awareness.” The participants got a set of five words, and were instructed to drop one word and arrange the others into a meaningful phrase.  Some sets included “analytical” words (e.g., “reason,” “think”), and others non-analytical words (e.g., “hammer,” “jump”). There were 50 individuals in the analytical test and 43 in the controls.

The results?

As hypothesized, implicitly primed analytic thinking concepts significantly increased religious disbelief [t(91) = 2.11, P = 0.04, Cohen’s d = 0.44; Table 2].

Again, this is barely significant in a statistical sense. There was one control, showing that these results were not correlated with the degree of a student’s religious belief measured several weeks before the test.

Study 4.  This was the same as study 3,  but conducted on 148 American adults selected for a wide range of backgrounds (71 analytical, 77 control).  The results?

Implicitly primed analytic thinking concepts again increased religious disbelief [t(143) = 2.20, P = 0.03, Cohen’s d = 0.36; Table 2].

Now the authors consider these tests suggestive but not conclusive, saying this:

Nonetheless, experimental manipulations in studies 2 to 4 elicited analytic thinking by having participants perform one task or another (looking at pictures or unscrambling sentences) before rating their religious beliefs. Although unlikely, it is conceivable that the act of performing any task—not just tasks known to elicit analytic cognitive tendencies—may decrease religious belief.

Perhaps, but they had a non-analytic control, which does count as a “task,” and the analytical task significantly decreased religious beliefs. I’m not sure exactly why, then, they raise this point.

To try to obviate the need to assign a “task” to promote analytical thinking, the researchers then conducted

Study 5. Previous work had shown that even making people read something in a hard-to-read font improved their performance on analytical-thinking tests relative to the same material presented in an easy-to-read font.  So they asked students simply to rate their religiosity, but using questions presented in either “hard” or “easy” fonts.  Here are some samples they give:

The result?

As hypothesized, analytic thinking activated via disfluency significantly increased religious disbelief [t(177) = 2.06, P = 0.04, Cohen’s d = 0.31; Table 2]. As in study 4, individual differences in pre-experiment religious belief did not moderate the effect of analytic thinking on religious belief (F < 0.05, P = 0.96). Additional alternative explanations focusing on experimental artifacts introduced by the disfluent font did not receive empirical support (20).

Note again that the probability value (0.04) is very close to the cut-off value that defines statistical significance in biology (0.05).

And the overall conclusion:

. . . the hypothesis that analytic processing—which empirically underlies all experimental manipulations—promotes religious disbelief explains all of these findings in a single framework that is well supported by existing theory regarding the cognitive foundations of religious belief and disbelief.

In other words, the more analytically you think, the less religious you become, at least temporarily.

The authors offer three hypotheses about the precise way that analytical thinking erodes religious thinking. One, for example, is that analytical thinking makes people reflect on their “intuitive” religious beliefs and reject them.  But let’s leave these aside for the nonce. In general, I find the study interesting and coincident with my intuition, but not terribly statistically significant.  Although each study gave a significant result, the probabilities are mostly marginal. Still, the paper is a useful starting point for further studies of the antagonism between analytical thinking and faith.

It’s notable that the authors bend over backward at the end of their paper to avoid criticizing religion: either the reviewers made them do this or they’re aware of how unpalatable these conclusions might be to the religious American public:

Finally, we caution that the present studies are silent on long-standing debates about the intrinsic value or rationality of religious beliefs, or about the relative merits of analytic and intuitive thinking in promoting optimal decision making. Instead, these results illuminate, through empirical research, one cognitive stage on which such debates are played.

Their nervousness is also evident in the last part of the paper’s abstract:

Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.

I’m guessing that what we’re seeing here is two extremely nervous psychologists worried about a backlash from both the faithful and the believers in belief.  Of course these data speak to the rationality and truth of religious belief, for they show that if one thinks analytically about something—anything—religious belief tends to dwindle. That suggests that religious belief has a component of irrationality, and also that one’s confidence in religion tends to weaken when one is being analytical; i.e., there’s less truth value, if truth is gauged using analytical skills.

About the “value” of religious beliefs the study of course says nothing.  But we all know that most people wouldn’t be religious, and hence would derive no value from faith, if they were convinced that the tenets of religion were false.


Gervais, W. M. and A. Norenzayan.  2012.  Analytic thinking promotes religious disbelief. Science 336:493-496.

125 thoughts on “Analytical thinking erodes religious belief”

  1. I thought the whole point of faith is that it is irrational, so why should the religious get offended?

    …cue the discussion about cognitive dissonance….


    1. True. In my experience, if a godbot points out that God is not subject to rational analysis, they just seem to be proud of it, and use it as a defense all the time. Yet if I do the pointing out that God is a logical impossibility, they get all bent out of shape. I guess it depends on who is doing the pointing out.

      1. They really get upset when you point out that an entity that is “beyond” logic is, therefore, by very definition, illogical.

        I really, truly think that, were basic set theory to be part of the standard grade school curriculum, apologetics would get laughed out of the debate.


        1. Sure. I’ll get right on that. I’m positive my local school board that thinks evolution is only a theory will be thrilled to add set theory. Oh, wait! That’s only a theory, too…

    2. That matches my experiences in discussions with the religious. When they run out of arguments they just say “You’re using your head too much. You have to just follow your heart.”

      Or as our last president used to say, follow your gut. And we all saw how well that worked.

        1. Your snarky comment is, imho, more insightful than perhaps you realize. I’ve long felt that the right way to attack religious belief is by sowing small seeds of doubt, by burrowing from within, instead of coming on strong with frontal attacks that do nothing but evoke a defensive response.

          Hypothetically effective technique to sow doubt:

          “So, you, as a good Xtian, don’t think that analytical thinking is a good thing?”

          “Right! Reason is the devil’s whore!” (paraphrasing Luther from comments further down)

          “Well, tell me then, if you are buying a used car, do you take a skeptical attitude, have any proposed purchase checked out by your own mechanic, and take everything the salesman says with a grain of salt?”

          “Why, yes, buying a car is important. You’d be a fool to just buy a used one off the lot on the salesman’s say-so.”

          “Is religion more important than your car?”

          “You bet!”

          “Then why don’t you take at least a moderately skeptical attitude toward its salesmen’s spiels instead of swallowing everything they say?”

          Would that work? I’d like to believe so, but I have a strong suspicion that even that little pretend psychodrama is too obvious to fool the believers.

          1. While I applaud people who are thinking about alternate strategies to make believers think about their faith, I despair that it is ultimately useless. I find it difficult to imagine how to reason someone out of a position they’ve not reasoned themselves into. In particular when that group has an inbuilt defense mechanism like one finds in Christianity: that ‘voice’ trying to dissuade you is the devil lying to you. It’s so strange to think that one is dismissed as evil the moment one starts making too much sense (a concept itself that is unknown beyond the ambit of religion).

  2. I’m not sure I get the emphasis on the significance levels in your summary, Jerry. In psychology, where the phenomena are far messier than in biology, a p-value of less than .05 is considered standard, and by no means suspect. And looking at this study overall, they find the same basic result using vastly different experimental manipulations, which increases the confidence in the overall finding.

    1. Well, we get nervous (or at least I do) when p values hover around 0.05 (remember that one in 20 of these will be due to chance alone), though you’re right that we have independent tests here, and I suppose that combining them using Fisher’s test for combining probabilities (if that’s kosher here) would yield a much smaller joint p value.
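Fisher’s test for combining probabilities is simple enough to sketch with the standard library. The per-study p-values below are the ones quoted in the post (for study 1 I take the overall religiosity correlation, p = 0.003); treating the five studies as independent is exactly the assumption that needs to be kosher here:

```python
import math

# Fisher's method: X = -2 * sum(ln p_i) follows a chi-square distribution
# with 2k degrees of freedom under the global null hypothesis.
p_values = [0.003, 0.03, 0.04, 0.03, 0.04]  # studies 1-5, as quoted in the post

k = len(p_values)
statistic = -2 * sum(math.log(p) for p in p_values)

# For even degrees of freedom (df = 2k) the chi-square survival function
# has a closed form, so no external stats library is needed:
#   P(X > x) = exp(-x/2) * sum_{i=0..k-1} (x/2)^i / i!
half = statistic / 2
combined_p = math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

print(f"chi-square({2 * k} df) = {statistic:.2f}, combined p = {combined_p:.1e}")
```

The combined p comes out around 3 × 10^-5, so if the studies really are independent, the joint result is far stronger than any single marginal p-value suggests.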

      1. we get nervous (or at least I do) when p values hover around 0.05

        I think this is a matter of the culture of the respective disciplines — such significance may be suspect in biology, but in psychology a p of less than .05, or a 95% confidence level, is quite acceptable. By contrast, the physics convention is “5 sigma” for particle discoveries, or a 99.99997% confidence level. These relative differences in acceptable confidence reflect, I think, the relative inherent “messiness” of the data they can collect.

        In other words, I think it is appropriate to judge these results against the context of the rest of the field, rather than the conventions of a different domain of science. (Otherwise, physicists would continually scoff at the reliability of the findings of biology.)

          1. Certainly there is an attempt to get the best ROI out of experiments, if that is what you mean by “messiness” affecting them.

            But in particle physics there is a specific reason mentioned for 5 sigma: because they are looking for signals, more or less Gaussian peaks, over an energy range, 3-sigma spurious peaks “come and go”.

          The analogous situation would be “effect fishing” in medicine and, why not, psychology. If they can’t get to higher confidence levels in one experiment, one can understand why repetition becomes vital.

      2. Well, the probability of getting at least one significant (p<0.05) test out of 5, given that the null (no effect) is true in each case, is 23%. But getting all 5 tests significant has a probability of only 0.0000003.

        So I'm pretty confident that a significant real effect has been demonstrated.
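Both numbers check out (a quick verification, assuming five independent tests at alpha = 0.05):

```python
alpha = 0.05  # significance threshold
n = 5         # number of independent experiments

# Chance of at least one false positive when every null is true:
at_least_one = 1 - (1 - alpha) ** n
# Chance that all five come out significant when every null is true:
all_five = alpha ** n

print(f"P(at least 1 of {n} significant) = {at_least_one:.2f}")  # 0.23
print(f"P(all {n} significant) = {all_five:.7f}")                # 0.0000003
```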

      3. It’s always seemed to me that you can’t have it both ways. If you insist on application of an arbitrary cut-off of ‘statistical significance,’ then you have to apply it! P=0.04 is “statistically significant.” Period. It’s not “marginally significant” or “hovering around” your criterion. It’s lower. P=0.048 or 0.053 would be “marginally significant.”
        Of course, there is no longer any need to use any arbitrary convention of “significance.” We are no longer looking up t and F values in tables of critical values; the software spits out an actual P value, often to several decimal places. Just report them and let readers draw their own conclusions. The formal concept of “significance” ought to be wholly abandoned, imo.
        Since these are presented as separate experiments with separate protocols and subjects, I don’t think combining probabilities or Bonferroni adjustments are called for.

        Not that I disagree with your conclusions: the null hypothesis is rejected only weakly in each case. But again, significance is significance, if you’re using that concept.

        1. I agree about not having it both ways. There is a cutoff for a reason, and the cutoff is a rather strict one. Scientists ask for at least a 95% chance that an effect is real – I’d say that’s pretty good.

          Also, let’s not forget that the researchers looked at effect sizes, which is a great thing to do that many researchers will skip (probably because they know their effect sizes are small).

          1. Scientists ask for at least a 95% chance that an effect is real

            That convention differs greatly across disciplines — as I mentioned above, a physicist requires a signal of 99.99997% to declare a particle discovery.

            Ultimately, confidence levels are just a matter of convention — how much certainty one demands can vary depending on the area and the nature of the data available.

        2. The Bonferroni correction is a very rough one, which I think should not be used. But chascpeterson, I am curious why you think that no correction (of any kind) is necessary. If you do 50 experiments, at least one will probably have a p < 0.05. A correction is called for whenever there are multiple experiments.

          1. Firstly, a Bonferroni correction is only used when the same data are repeatedly tested. For instance, you measure height, weight and IQ in each subject, then test the effect of height and weight on IQ in separate tests. This inflates the chance that you’ll find a false effect (i.e. it becomes greater than 1 in 20). If you measure height and IQ in one group, then weight and IQ in a second group, the tests are statistically independent and no correction is required (i.e. no matter how many tests you do this way, the false discovery rate remains 1 in 20 for each test). The only way you can correct for a 1 in 20 false discovery rate is to set your alpha lower.

            Secondly, 1 in 20 is the widely accepted point at which to set the false discovery rate. Sure, every now and again you’re going to get a false positive. But that is one reason why replication is important. If you did do 50 tests as you suggest, think about what the probability of false discovery would be if all 50 tests had a p below 0.05.

            1. You claim that if you do two different, independent tests, no correction is needed. Maybe I am misunderstanding what you are really trying to say, but in fact corrections are required for independent tests (and the derivations of these corrections depend on the tests being independent). You surely see that if you do multiple independent tests of a null hypothesis, there is more than a 1 in 20 chance that one of them will be significant at the 0.05 level even when the null hypothesis is true.

              I don’t see the relevance of your second point either. Of course if all 50 tests were significant (or even close to it), this would be strong evidence against the null hypothesis. The formula for combining p-values would give an astronomically low p-value to that case. But corrections are important in intermediate cases; in 50 tests, several will be significant even if the null hypothesis is true.

      4. You’re right to be suspicious. P-values between .05 and .01 often provide little real evidence against the null hypothesis. For example, for Study 4 (p=.03), the applet at Jeff Rouder’s web site gives a default Bayes Factor of 1.05, which means (despite what the p-value naively implies) that the data provide essentially equal support for the null and alternative hypotheses (in fact, slightly greater support for the null than the alternative).

        Furthermore, when a researcher reports 5 studies, each of which is only significant by a small margin, one might wonder how the researchers were so good at designing studies so efficiently, somehow knowing each time what sample size would be needed to just attain statistical significance. One possibility is that the researchers were conducting interim significance tests and stopped sampling when they attained significance, a procedure that biases p-values in favor of the alternative hypothesis. Another possibility is that some outliers may have been removed. Either of these possibilities would explain the curious imbalance between test and control sample sizes in several of the reported studies.
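The optional-stopping worry is worth making concrete. Here is a toy simulation (my own construction, not the authors’ procedure): data are drawn with no true effect, a z-test is run at several interim sample sizes, and sampling stops as soon as p < 0.05. Any single look has a 5% false-positive rate, but taking the best of several looks inflates it well beyond that:

```python
import math
import random

random.seed(1)

def two_sided_p(sample):
    # z-test of "mean = 0" for data with known unit variance.
    z = abs(sum(sample)) / math.sqrt(len(sample))
    return math.erfc(z / math.sqrt(2))  # = 2 * (1 - Phi(z))

looks = [20, 40, 60, 80, 100]  # interim analyses ("peeking")
n_sims = 2000
false_positives = 0

for _ in range(n_sims):
    data = [random.gauss(0.0, 1.0) for _ in range(max(looks))]
    # Count a "discovery" if any interim test reaches p < 0.05.
    if any(two_sided_p(data[:n]) < 0.05 for n in looks):
        false_positives += 1

rate = false_positives / n_sims
print(f"false-positive rate with peeking: {rate:.3f}")  # well above 0.05
```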

        Then, there’s the possibility that the reported studies were a subset of studies with mixed results, and the researchers only reported the studies having statistically significant results.

        Recent controversies in experimental psychology have strongly suggested that such deeply flawed statistical procedures are commonly employed in the field (e.g., see here and here for interesting criticisms).

        1. Those are really sharp observations. Good reasons to stop using this silly null-hypothesis-testing model and instead use the parameter-estimation model (with confidence intervals, which also are subject to some of these problems but not quite so badly).

          1. Testing is simpler and more general, I assume, since parameter estimation would be model-dependent or dimensionally reductive (multivariate analysis).

            There is nothing inherently problematic with p values. Maybe people would make fewer mistakes with other methods, but then again maybe not.

            1. There is nothing wrong with null-hypothesis-testing, if the mere presence of an effect is interesting. If we care about the size of the effect (and we usually do) then p-values are red herrings with no real value.

              Parameter estimation does not require a model. All it requires is an interpretable measure of the magnitude of the effect. Sometimes it is hard to find such a measure, but usually it is not that hard, if people would only bother to try.

              Null-hypothesis-testing is appropriate if we want to know whether neutrinos travel faster than light. Parameter estimation is appropriate if we want to know how much faster than light they travel.

              In ecology and genetics, and even more in psychology, many workers tend to think of measures as mere tools for generating p-values. This has resulted in the widespread acceptance of measures whose magnitudes have no stand-alone interpretation.

              This is what McCloskey and Ziliak aptly called “sizeless science” in their book, “The cult of statistical significance”. And it plagues biology.

              1. “There is nothing wrong with null-hypothesis-testing, if the mere presence of an effect is interesting. If we care about the size of the effect (and we usually do) then p-values are red herrings with no real value.”

                I think you’re trying to throw the baby out with the bath water here. As you acknowledge, p-values give us some information (the presence of an effect). You seem to be suggesting that null-hypothesis testing is next to useless because p-values don’t also give us other bits of information (e.g. effect size). But there isn’t a parametric statistical test, that I am aware of, that you can’t get effect sizes out of. Note that, in addition to p-values, the authors of the study report Cohen’s d, which is a measure of effect size. It’s not a problem inherent to null-hypothesis testing that effect sizes are rarely reported. That’s a problem with the culture of those reporting the results of null-hypothesis tests.

              2. Yes, it does give us some information. However, for most scientific questions, it is not the information that we want to know.

          2. Null-hypothesis testing is a tool. It’s not the fault of the tool that people use it badly. And people using it badly is not a good reason to get rid of it.

            1. It is a perfectly valid tool to answer the question “Is there evidence of an effect?”. That is almost never an interesting question in biology. We want to know how big the effect is.

              1. As I’ve pointed out, you can get effect sizes out of all the tests that produce p-values. You also get a rough idea of the effect size by looking at the degrees of freedom. And it is rarely practical to collect sample sizes so large that the p-value is badly compromised. I’m not disagreeing with you in substance. I just think you’re exaggerating the problem.

              2. Maybe it depends on the field. In my experience, in population genetics, ecology, and psychology, p-values are often misused, and actual interpretable magnitudes of the effect are often not given, or if they are given, they are not analyzed.

            2. But it’s a flawed tool. As I stated in my first reply, the p-value for Study Number 4 (for instance) was .03, which, by conventional standards (p<.05), means that we reject the null hypothesis and accept the alternative hypothesis. However, as I mentioned in the same reply, a Bayesian analysis shows that the alternative hypothesis and the null hypothesis are about equally likely given the data. Therefore, the p-value in this case is misleading.

              Note that this problem is not rectified by publishing the effect estimate, because the same Bayesian reasoning implies that the observed effect size is no more likely to be correct than an effect size of 0.

        2. The paper in fact shows objective evidence of an excess number of significant findings, due to selective reporting or other biases.

          I have analyzed the results of the paper using the Ioannidis and Trikalinos test for an excess of significant findings, and the test was positive.

          My analysis can be found here


      5. While the rest of you argue statistics, I’ll get the ice bags and ibuprofen ready to deal with the headaches that ensue.

        For those in the direst of straits post argumentum, there’ll be bottles of whiskey at hand.

    2. A pet peeve of mine: in a media report of a study (and in the study paper itself) “statistical significance” morphs into the more common definition of “big” or “large” or “meaningful” difference by dropping the “statistical” qualifier when they get to the discussion of the results.

      It may very well be the case that eating wheat bran was associated with a statistically significant 0.1% reduction in the incidence of acne in a large study, but even if causation is determined, that is not a meaningfully significant effect.

      1. in a media report […] “statistical significance” morphs into the more common definition of “big” or “large” or “meaningful” difference

        Sadly, this is also true for undergrads…

      2. So right!!! The authors should have avoided p-values altogether. These are not questions that are appropriately addressed by a test of a null hypothesis of “no effect”. These are better treated as parameter-estimation problems, where the parameter to be estimated is an interpretable measure of religious belief, and where uncertainty intervals convey the statistical uncertainty of the parameter estimate.

        We always know, without getting our hands dirty in actual studies, that the difference between two groups of humans, on any interesting dimension, is not zero. So before doing the study, we already know that the null hypothesis we are testing is false, if it is a two-tailed hypothesis. The difference may be tiny, but since it is nonzero, a large enough sample can demonstrate this difference at whatever significance level you like. Not only psychologists but biologists frequently fall into the trap of null-hypothesis-testing.
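The point about sample size can be made concrete with a known-variance z-test. Assuming a true standardized difference of 0.02 between two groups (a difference of no practical interest), and evaluating the p-value at that expected observed difference, significance arrives automatically once n is large enough:

```python
import math

def z_test_p(d, n):
    # Two-sided p-value for an observed standardized difference d between
    # two groups of n subjects each, with known unit variance.
    z = d * math.sqrt(n / 2)
    return math.erfc(z / math.sqrt(2))

d = 0.02  # tiny, practically meaningless difference
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,} per group: p = {z_test_p(d, n):.3g}")
```

At n = 1,000 the difference is nowhere near significant; by n = 1,000,000 per group it is significant at essentially any level, even though the effect itself never changed.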

        Physicists, in contrast, use null hypotheses correctly. A null hypothesis is appropriate when the mere existence of the effect is itself interesting. For example, the mere existence of the Higgs boson is interesting, and so evidence strong enough to confidently reject the null hypothesis is grounds for a Nobel Prize.

        A reader above complimented the authors on the use of effect size. This is indeed a step up from the usual drivel. But effect sizes are usually measured relative to the amount of variation within groups. It is not really the actual magnitude of the effect. Effect size could be very large if within-group variation is small, but the actual practical difference between the groups could still be small.

        Jerry mentioned Fisher’s test for combining p-values. I derived a simple analytical expression for the global p-value of a set of experiments with individual p-values (see my website), which turned out to equal Fisher’s messy test. The formula shows that any set of five barely significant or nearly significant tests is highly significant when taken as a whole (as another commenter predicted). But as someone else mentioned, publication bias in favor of statistically significant studies makes it impossible to do rigorous meta-analyses of other people’s work. In this case, where all studies appear to be the authors’, it is legitimate but, like most p-values in psych or biology, not very interesting.

        1. I’m sure you have studied this a lot more than me, but this is so confusing that I can’t get any significance out of it:

          “We always know, without getting our hands dirty in actual studies, that the difference between two groups of humans, on any interesting dimension, is not zero.”

          But if you take sub-groups out of each group or a blend, there wouldn’t be any difference. Hence it seems you are mistaking biological populations for statistical populations.

          And then what is the claim? Particle physicists would also know, pre-study, that “the difference between two” populations “on any interesting dimension, is not zero.”

          And if hypotheses testing is good for them…

          There is no inherent problem with p-values. That is why physicists standardize on sample sizes. (Say, when deciding whether an exoplanet is confirmed or not.)

          1. “We always know, without getting our hands dirty in actual studies, that the difference between two groups of humans, on any interesting dimension, is not zero.”

            Many biological studies test the null hypothesis that two (or more) groups exposed to different treatments have the same value of some parameter (say, diversity). Yet we know that diversity will never be exactly equal, to several decimal places, between two groups. Even the smallest difference in diversity can be detected at whatever significance level we want, if sample size is large enough.

            Detecting an exoplanet is a legitimate use of null-hypothesis testing. We want to know whether or not it exists. This is a sizeless, binary question. It would be very peculiar to measure the temperature difference between two exoplanets by testing the null hypothesis that the temp difference is zero. We know the temperatures are different, if only in the third or fourth decimal place, and if our measuring devices are accurate enough and we are patient enough, we can always reject the null hypothesis that the temperature difference is zero, at whatever significance level we choose. Is this interesting? Only if no more meaningful or informative measure is available or possible. It would be more meaningful to try to measure the temp directly, even if such measurements were noisy, and establish confidence intervals around these parameter estimates to express the statistical uncertainty.
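The estimation alternative sketched in this thread looks something like the following (toy numbers invented for illustration; a normal-approximation 95% interval on the difference between two group means):

```python
import math
import statistics

# Made-up measurements for two groups (illustration only).
group_a = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
group_b = [3.9, 3.6, 4.1, 3.8, 3.7, 4.0, 3.9, 3.5]

# Report the estimated difference and its uncertainty, rather than
# testing the (always-false) null hypothesis that the difference is zero.
diff = statistics.mean(group_a) - statistics.mean(group_b)
se = math.sqrt(statistics.variance(group_a) / len(group_a)
               + statistics.variance(group_b) / len(group_b))
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"estimated difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval conveys both the size of the effect and its precision; whether it happens to exclude zero is incidental.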

        2. Again you seem to be criticising null-hypothesis testing not because of any actual problem with it, but because of problems with the way people use or report such tests. It’s like you’re trying to convince us that hammers are the best tools because screwdrivers aren’t very good at banging in nails. Sure p-values are limited in what they can say, but interpreting the results of null-hypothesis tests isn’t limited to looking at p-values.

          1. In many studies, the p-value IS the stopping point. You recognize that this is bad science. I am not criticizing null-hypothesis-testing per se. I am pointing out that it is virtually never the appropriate tool for the job. If you want to bang nails, a hammer is the right choice. Having a screwdriver in your hand at the same time may or may not mess up your hammering, depending on your skills. Since the screwdriver adds nothing, and often causes mistakes, why not stick to the right tool?

  3. Please don’t laugh, but I am not aware of how the intuitive answer to the first question is false. Are they taking into consideration taxes? Ah, please don’t laugh at me.

    the total is 1.10…okay, but the bat cost 1.00 and the ball is 10 cents.

    So how is the answer 5 cents?

    ”The intuitive answer is 10 cents, but of course the analytical answer, obtained by solving [1.00 + x] + x = 1.10, is 5 cents.”

    does the answer lie in how the question was posed?

    it says that the ball is

    The bat costs $1.00 more than the ball. How much does the ball cost? ____cents.

    costs 1.00 MORE…does the answer lie in the MORE? meaning…if the 1.00 more than 10 cents is 1.10…no NEVER MIND. This doesn’t work. Forget the MORE.

    I swear I’m not as dumb as this makes me out to be…

    Jerry says that the answer lies in solving this:

    [1.00 + x] + x = 1.10

    I just see nothing. Pure nothing. Not nothing like dark matter nothing, or nothing like the nothingness that created the something of a universe, but just nothing in terms of help.

    1. The bat costs £1.05 and the ball £0.05. Together they cost £1.10. If the ball costs £0.10 then the bat would have to cost £1.10 to be £1 more expensive.

      1. But Rudi, that doesn’t work, because the original question was in dollars!

        (Although it isn’t clear if those were actually US dollars, or Canadian dollars, or Australian dollars…)

    2. We don’t know how much the ball cost: x.
      The bat cost a dollar more: x+1.
      Together they cost $1.10.

      So x + (x+1) = 1.10

      The ball costs 5 cents; the bat a dollar more $1.05; together $1.10.

      Maybe the joke is on me.

      1. I had to work through the algebra, too, before I believed it.

        But, by way of confirmation, if you go with the naïve solution of the ball costing $0.10, then, yes, the total is $1.10, but the bat only costs $0.90 more than the ball. We’re looking for the answer where the total is still $1.10 but the bat is a full dollar more than the ball, and that’s $1.05 / $0.05.

        1. Don’t tell anyone, but I always have to work the math through. I do not trust ‘intuition’ because even a very simple problem like this causes it problems. And the fact that it’s quite easy to exploit one’s intuitions to get them to the wrong conclusion is precisely the reason I answer questions in complete sentences.

          1. I have to abandon my intuitions as well to get anywhere, and writing it down certainly helps. I’m working on the spoken answering stuff, because I too have noticed the problem with off the cuff “short cuts”.

            As a corollary to the above study, another recent paper (IIRC) claims that using a foreign language makes one more analytic. I’m more atheist in English. Go figure… (and I mean it =D).

          2. I got it “intuitively right.” That is, I knew at once that the ball costs 5 cents. That’s because I’ve done lots of questions like this before.

            Here’s the thing: I think of intuition as that which my brain has learned to do without guidance from ‘me.’

        2. I meant maybe the other person was joking and I was wasting my time explaining; it’s such a simple (obvious solution) problem. I got it even by intuition.

          1. Yeah, me too. I did that in my head in two seconds flat. (Probably I’ve seen it before somewhere). Umm, I think it probably goes something like this: The difference is $1, so take away the difference and you’ve got 10c evenly split between the two items – so, 5c for one and 1.05 for the other.
            Works for any of those ‘difference’ puzzles.

      2. Thank you for explaining that. If the ball cost $0.10 and the bat cost $1.00 more, then the bat would be $1.10, making a total cost of $1.20.

        Obviously, I need to refresh my algebra, which I took X+10 billion years ago.

    3. I had this -exact- question in a test to qualify me to join a “prediction” study (predicting future political events, e.g. “Assad will still be in power on June 30 2012…) at UC Berkeley. One’s first, intuitive answer (i.e. the bat costs ten cents) sends up an immediate caution flag (how can this be a test??? too obvious..) which causes one to restudy the question to find the key information that creates a non-obvious answer.

        1. Read the video information below the video: “Now this is dedication and complete irony- Chelsea is turning down an appearance on Jay Leno Tonight Show later this month because she has a college math final that day.”

        2. No, I think she is just floridly, spectacularly innumerate, even though she may be very good at (pure) mathematics. She doesn’t seem to have considered, or even to be able to consider, the non-trick answer. All her speculations are epically irrelevant, since the original question made no mention of running, or trucks, or stick shifts, or tyre sizes. Somehow she has managed to reach college-level pure mathematics without this neurological deficit, worthy of study by Oliver Sacks, having been detected.

          (But I’m concerned about HIS wisdom to film while driving.)

          1. That is truly spectacular. How could she possibly not see the answer?

            Now the sort of guesstimation she was trying to do is, in fact, entirely respectable and a very useful knack, I sometimes do it myself in cases where the answer is not arithmetically simple. (Not saying she was doing it anything like right, though).

    4. Shut up and calculate [sorry if it sounds rude, it is a joke on what is called instrumentalism in physics]: y = x + 1; x + y = 1.10 with x ball, y bat -> x + (x + 1) = 2x + 1 = 1.10 -> 2x = 0.10 -> x = 0.05.

      Linear equations: 2 unknowns, 2 equations -> unique solution. So y = x + 1; x + y = 1.10 et cetera.

      Mental calculation: y = x + 1; x + y = 1.10 -> try: x = 0.05, so y = 1.05 and x + y = 0.05 + 1.05 = 1.10. Ok, x = 0.05.

      Heuristics: x + x + 1 = 2x + 1 = 0.10 + 1 = 2*0.05 + 1 -> x = 0.05.
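The algebra in the comment above can also be checked by brute force; this trivial sketch (mine, not the commenter's) searches every whole-cent price for the ball:

```python
# Brute-force check of the bat-and-ball puzzle, working in whole cents:
# find every ball price where bat = ball + 100 and bat + ball = 110.
solutions = [ball for ball in range(111) if (ball + 100) + ball == 110]
print(solutions)  # → [5]: the ball costs 5 cents, the bat $1.05
```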

  4. “That suggests that religious belief has a component of irrationality”

    I would say that these studies show that religious belief has a component of intuitiveness, and not necessarily irrationality.

    1. Religion has to be 100% intuitive, because now, in the face of science, there is no evidence for the supernatural.

      I say “now” because it once made reasonable sense that phenomena such as rain falling from the sky, some animals having wings and being able to fly while others could not, some creatures living underwater while people could not, lightning, thunder, and so many other things could only be reasonably explained as ‘supernatural’ phenomena, beyond the understanding of mere humans. Now we know why these things occur, and all of them are explained by natural phenomena. Evidence for the supernatural is reduced to vague phrases (“it’s blindingly obvious!!”) and no evidence at all, and the only thing buttressing religion is a feel-good feeling, “so there must be something to it…”

    2. It would be easy to say that intuition = irrational, but ’tain’t so. Daniel Kahneman’s “Thinking Fast and Slow”, a book I would recommend to anyone interested in “how we think”, is a detailed study of when and where and how intuition (Kahneman’s “thinking fast”) works and when/where/how it doesn’t work.

      The question about the ball and the bat demonstrates precisely Kahneman’s distinction, and in that case exemplifies thinking fast being wrong. (N.B., wrong, not irrational).

  5. Faith (i.e., belief without evidence) is by definition non-rational, so it’s surely high time people stopped tiptoeing around this fact.

    When religious people get up in arms about people who state this fact, then they merely prove the point. It’s one thing to call someone stupid or crazy for being religious, then of course people can legitimately complain about this. But anyone who complains about the simple, uncontroversial fact that religious faith is non-rational really needs to be told where to go.

  6. Are people here familiar with Bruce Hood’s book “Supersense, From Superstition to Religion – The Brain Science of Belief”?

    I think it offers the best or most scientific explanation for a theory of religion.

    Basically, Hood’s thesis is that our natural intuition, which is designed for survival, can quickly go to irrational lengths, leading to superstition and then to religion. It’s rather like a confirmation bias gone mad into delusion.

    I think the study is interesting, but I know that people can use their intuition for analytic purposes, such as in Chess. Bruce Hood’s examples are far more convincing.

    1. Just look into James Frazer’s famous “Golden Bough” for endless examples of religious superstition, plus an analysis of its causes.

  7. Per Ben’s invitation, this

    . . . the hypothesis that analytic processing—which empirically underlies all experimental manipulations—promotes religious disbelief explains all of these findings. . .

    certainly shines some light on how the human mind so easily compartmentalizes. Such as in practicing scientists who maintain a belief in gods.

    Also makes you wonder, just how aware are we? The more science unveils, the more it looks like that on a moment to moment basis we have no clue what is going on in our own minds.

  8. fascinating – & I’m glad someone questioned the analytical correction in study 1 ’cause maths & I were never friends, so I spent quite a few minutes trying to suss it. Of course the answer was in a correct reading. I got consumed by the numericals instead of what it was actually about. Now isn’t that a nice little statement on the whole affair 🙂

    1. One of the hardest things students face (so I’ve read) is often how to apply the theory to a real-life example, in whatever field. And applying maths to a real-world problem is the classic instance of that.

  9. JC: But we all know that most people wouldn’t be religious, and hence would derive no value from faith, if they were convinced that the tenets of religion were false.

    Actually, no we don’t all know that. People derive plenty of value from their religious communities, which require faith as the price of admission. (Yes, you could fake it and hoover up all the social goodies, but I assert that most folks are more morally honest than that.)

    Narrowly construed, of course what you say is true, the beliefs themselves hold little comfort if they’re known to be false. But what draws people in is not the beliefs, it’s the ancillary benefits, which predisposes them to work the little magic “close eyes, wish real hard, now it’s true” trick that we all have built into us.

    Yes, all of us. Otherwise we wouldn’t need a scientific method.

    1. “But what draws people in is not the beliefs, it’s the ancillary benefits,. . .”

      I think this line of reasoning is very interesting. Can you point me to any studies that support this claim? It sounds reasonable, but is there any reliable data that shows that the ancillary benefits are the driver, or even just an important one of several drivers?

  10. I tried this on my 6th-grade daughter. I showed her the bat and ball question, and of course her first answer was the “intuitive” one. Then I made her write out an equation like Jerry’s and got her to solve it, and she then got the correct answer. Finally I asked her if she believes in God, and she said, “No.” Case closed! (n=1, p=0.5, but who’s counting?)

    1. I like this, even though it reminds me of an uncle (when I was about nine) who tried to link a bad experience that I reported, as a demonstration that god was punishing me because, etc etc etc. I was not a disbeliever in deities, but I did not think I, a mere “little kid”, merited judgment and punishment.

      Your sixth-grade daughter’s non-belief is obviously way ahead of most at that age. And, we had no equations in the sixth grade way back then.

      1. Oh, they’ve been doing simultaneous equations in 2 or 3 variables for a while now. This is Palo Alto, and she’s in a “test out” program where a subset of the kids work separately from the mainstream if they can show they already know the stuff, but I think it’s essentially what the CA curriculum calls for these days. Certainly she’s doing more advanced stuff than I was doing at my British Grammar school at that age (39 years ago…)

        As for the God stuff, of course she knows well my views, and I’ve done things like bought her The Greatest Show On Earth (I’ll lend her my WEIT in a year or two!), but I’m trying not to indoctrinate her especially. I think just the lack of indoctrination she’s experienced in the religious direction is sufficient to keep her grounded.

        I had an interesting email exchange with the mom of one of her classmates this weekend. They’re a Christian Korean family, and the boy keeps on saying things like “Darwin is f***ing stupid,” to my daughter. I asked his mom to (a) ask him to curb the language a bit, and (b) try to encourage him not to be quite so closed-minded at such a young age. She was very receptive and cooperative about the whole thing and they had a chat with their son. We’ll see if it makes any difference!

        (Sorry for the mostly OT post…)

      1. Yes, I even believe she said, “There is no need for God in this hypothesis.” 😀

        Interestingly after I got home from work I gave her another simple sum/difference problem, and she just went straight for a pencil and paper, no attempt to intuit it. But maybe that’s because I said it to her verbally instead of showing her a written question.

  11. There is probably someone here who knows more about this than I do, but wasn’t Martin Luther suspicious of human reason on the grounds that it was a danger to faith?

    1. He was, indeed. Here’s one from a quick google:

      “Reason is a whore, the greatest enemy that faith has.”

      and another:

      “Faith must trample under foot all reason, sense, and understanding.”

    2. “But since the devil’s bride, Reason, that pretty whore, comes in and thinks she’s wise, and what she says, what she thinks, is from the Holy Spirit, who can help us, then? Not judges, not doctors, no king or emperor, because [reason] is the Devil’s greatest whore.”

      “Reason is the greatest enemy that faith has: it never comes to the aid of spiritual things, but–more frequently than not –struggles against the divine Word, treating with contempt all that emanates from God.”

      “Reason must be deluded, blinded, and destroyed. Faith must trample underfoot all reason, sense, and understanding, and whatever it sees must be put out of sight and … know nothing but the word of God.”

      “There is on earth among all dangers no more dangerous thing than a richly endowed and adroit reason… Reason must be deluded, blinded, and destroyed.”

  12. Let me raise another hypothesis. Analytical thinking promotes skepticism. So if you were to ask them about another topic, I would be interested to see if confidence in something we are quite sure about, say evolution, would decrease. Not that I think this would happen, but it would be good to double-check.

    1. I think that’s an interesting idea, but I don’t see that evolution is a great alternative topic to religion, because thinking about evolution analytically should only increase one’s confidence in its soundness (at least compared to the alternatives).

      I’d propose homeopathy or acupuncture as alternatives to religion as the test topic, because with either of those the briefest of skeptical/analytical thought about them would reveal that they’re both nonsense on stilts, to borrow a phrase.

      1. I think you missed the point of Kevin’s post. He was wondering if engaging in analytical thinking would lead the thinker to feel less confident about evolution, *even though* “thinking about evolution analytically should only increase one’s confidence in its soundness”.

        1. Yes, I see that now, thanks. In that case, I agree with Kevin, in that I don’t think it would happen! I’d hope that increased “skepticism” wouldn’t just mean “more likely to arbitrarily reject the accepted wisdom” (though of course plenty of creationists/AGW deniers would claim that they reject the science because of their critical thinking/skepticism. They’re just wrong, is all).

          1. truthspeaker clarified correctly. I was thinking along the lines of the Dunning-Kruger effect: those who know more are less confident about how much they know, because they know how much they don’t know. I was wondering if analytic thinking might trigger something of this sort. Those who are, or are primed to be, analytical might assign lower probabilities to beliefs because they know, or reflect on, how fallible we tend to be (i.e., skepticism).

  13. I’m musing about ‘Voting by Doing’…help me out here, guys. I’m trying to develop a philosophical argument that the use of any technology involving comparative measurement endorses science over religion. Comparative measurement seems (in my opinion, subject to refinement) to be the key divider between science and religion. Armchair musing (so far) seems to indicate that nothing in the Bible or the Qur’an =ever= involves the scientific method or comparative measurement. No indication of scientific episodes is ever presented within the so-called holy books; please indicate any if you know of one. Planting crops, making clothes, building a house, tending livestock, smiting foes: all are done by methods based upon “that’s how it was always done” (ironically, these could be described as evolved methods!). Even today, one could easily live within the style of the third and fourth centuries of the current era and never use the products of ‘scientism’ or comparative testing and measurement.

    I posit that the -use- of things like gasoline, guns, vehicles beyond the two-wheeled oxcart, and electricity, which are only available because of the scientific method, is a continuous, unambiguous endorsement of science and scientific analysis over all religious texts. If one truly endorses religion, one should eschew all use of the products of science. Otherwise, you are ‘voting by doing’: the use of products derived from comparative measurement means you dismiss the Qur’an, you dismiss the Bible. In my (nascent) thinking about this, surely one cannot have it both ways. Either you are actively endorsing =against= holy texts by using gasoline, airplanes, and electricity, all products of comparative measurement, or you should eschew their use and rely only upon methods of living, within the bounds of the holy books, that do not originate in comparative measurements of natural phenomena.

    Jesus never mentioned comparative measurement of physical phenomena. Maybe he compared human(s) to human(s), but he never compared physical phenomena in order to derive knowledge.

    Even the use of bicycles is against Islam. When the first bicycles appeared in (the so-called) Middle East, they were referred to by mullahs as “Devil Carts”, as their ability to balance upright could not be explained within the teachings of the Qur’an. Thus their operation and use was an endorsement of the Devil.

    Why are the opinions of these 19th century mullahs any less valid today? Same holy book from which the opinion derived.

    any comments are helpful.

    1. Buddhism makes the point that anyone can verify its assertions if they want. Not really what you seek, but in the right ballpark.

      Example: Buddhists (at least the smart ones) don’t really believe in an objective reality and consider the evidence of the senses flawed. Method of verification: visualization exercises that lead to visualizations indistinguishable from reality (see Alexandra David-Neel’s “The Secret Oral Teachings of the Tibetan Buddhists”.) The logical mistake (from the p.o.v. of someone who believes in an objective reality and the truthfulness of sensory impressions) lies in proceeding from “See? You can conjure up sensations indistinguishable from the real thing, hence all sensations are of the same kind and none reflect an objective reality.”

      Apologies if I’ve garbled this, as I have only a shallow understanding of the issues. Corrections welcome.

  14. If I’m right (which I am), getting a decent education is the major cause of religious decay.

    Which would make the US education system zomgwtfcrappy. At least in comparison to the rest of the developed world.

    1. And the fundies know this (even if they know it intuitively), hence the American right wing’s war on education. (At present, the equation “fundy” = “right wing” holds in the USA.)

  15. This doesn’t strike me as terribly surprising. Or to put it another way, it makes intuitive sense 😉 I can think of a lot of things that diminish when we go analytical: falling in love, enjoying a joke, mourning, having a beer with your buddies, losing your temper, dancing. Can you enjoy dancing while you count the steps or worry about what you look like? Ever tried thinking about oral hygiene during a kiss? Could you judge a gymnastics display and enjoy it at the same time? (Before you answer, be careful to distinguish enjoying the gym from enjoying judging.) Basically, you can’t immerse yourself in something and analyse it at the same time. The two are mutually exclusive but not enemies.

    Analysis itself can be a real pleasure – just not at the same time as engaging with the thing you’re analysing. I love unpacking stuff, often at the expense of just letting go and immersing myself in the experience. That probably makes it sound like you have to let your brains leak out when you open yourself up to something but it can be quite the contrary. Think of how one’s enjoyment of a good work of art, a good wine or a Shakespearian sonnet is deepened after some good analysis.

    Clearly, analysis can undermine or support the thing you immerse yourself in, but it depends on the specifics of the analysis, so a bunch of stats isn’t going to mean an awful lot to me, especially if the study fails to take into account that natural separation between getting into something and looking at it from the outside. Do they deal with that?

    1. I think you are discussing action vs reflection here. The study was action (priming) and then reflection (census), so orthogonal to that.

      1. OK, thanks, I think I understand what you’re saying and I can accept it, but I still have a hard time understanding how analysis of one thing can affect one’s belief in another totally unrelated thing. My abandoning the idea of life after death was just about 100% due to thinking it through, along with the notion of reincarnation. And when my kids came home one day with the idea of fairies, I helped them out by getting them to analyse the idea. Why has no-one ever found a fairy poo? Where do they get their clothes? What sort of bone structure would they need to be able to support wings and fly? Etc. It seems perverse to me that, as this article suggests, I could instead have given them other, unrelated analytical tasks to manipulate in them an erosion of their belief in fairies.

  16. Get a Christian to commit to the rule that every thing must have a designer; therefore God must have a designer, otherwise he’s not a thing, aka nothing.

    1. No, no, no, silly — every created thing must have a designer, but God isn’t created, of course! (I don’t know why you can’t see this very obvious fact…)

    2. Richard Dawkins (to name but one) makes that point all the time, but of course religious folk have lots of workarounds (God is eternal, outside of time and space, he just is, and doesn’t need a creator, etc. etc.) It’s what puts the “super” in the “supernatural” 😀 Of course, most of those arguments can also be applied to the uni/multiverse itself…

      1. Yes, that is true. That’s why I think it’s less to do with one’s intellectual prowess or analytic ability and more to do with character: how hard a person will work to maintain the belief that they are going to be saved, and to hell with everyone else.

  17. A common rebuttal to atheists is the “well you haven’t read all the theologians” argument. Neither have most believers, of course; they are just happy to know that the theologians are there.

    I suspect that most theologians–a very small minority among believers–are perfectly capable of using analytical thinking processes. Indeed, they are probably smarter than the average atheist, in the same way that the best lawyers are those who can get acquittals for their obviously guilty clients.

    1. I agree. Especially the ones making money off their followers. It’s quite possible they are closet atheists and have just found a nifty way of making enough money to pay for a luxury survival bunker in Kansas

    2. Among believers, those who identify more strongly with their religion and/or are more religiously observant tend to be more intelligent.

      I’d also note that there are some similarities between constructing an argument and writing a computer program. Brian Kernighan observed, “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” However, most religious believers rely not merely on the cleverest arguments they can come up with, but on the cleverest arguments the cleverest believers can come up with… rendering their chances of spotting the mistakes even slimmer.

  18. The article in Science which started all this was the subject of a short article on Scientific American online on April 26th. The brief article is OK as far as it goes, but by far the more interesting part of it is the discussion following it online. I’ve shortened the longish link:

  19. “I’m guessing that what we’re seeing here is two extremely nervous psychologists worried about a backlash from both the faithful and the believers in belief.”

    I’m guessing otherwise, given that they published the paper.

    The only response I would anticipate would be a questioning of the extent to which the researchers looked for possible lurking variables such as the quality of the religious education received by the participants. For example, had they been taught traditional Catholic apologetics, doctrine and externals of the Church? Or had they been brought up in a parish whose priest dressed up as Barney the Dinosaur?

    1. For the record, I’m not sure the distinction you draw between Catholics and the parish priest dressed up as Barney the Dinosaur is one with a difference.

    1. Fascinating!

      That’s consistent with Haidt’s five pillars of morality. That is, the moral foundations of conservatives have been shown to include ingroup loyalty, respect for authority, and purity, and these all obviously link in with religious belief; whereas the left, who spend more time exploring foreign viewpoints, have moral compasses shown to be driven more exclusively by compassion and equality (rejecting the other pillars as the foundations of racism). This would predict an inverse correlation between religiosity and being internally driven by compassion. Perhaps the study would have seemed to get the opposite result if it had investigated the tendency to respond to appeals by pastors/bosses for volunteers for local, in-group-directed charity work?

  20. I hope someone would make another kind of experiment, where the priming done is for fanciful thinking. Perhaps stories of fairies, ghosts, etc.

    And then try to see if there would be a significant effect for/against religious belief.

  21. A friend who survived a Catholic education says the nuns would say, “Don’t think, we have the priests to do that for us.”

  22. I’d note that this seems to strongly coincide to the data from Altemeyer and Hunsberger’s “Amazing Conversions” study, where they noted that those brought up highly irreligious who ended up highly religious tended to have conversions catalyzed by emotional stresses, while those brought up highly religious who ended up highly irreligious tended to have (de)conversions resulting from sustained periods of rational reflective consideration.

    (Oversimplified. “Highly” means above the 75th percentile for their sample, not necessarily relative to some arbitrary external level.)

    1. Interesting. My own experience has been the opposite: stress and anger. And in fact, coming across the writings of people like Sam Harris has had exactly the opposite effect. I am overwhelmed by the sense that he doesn’t know what he’s talking about and doesn’t seem to know what historicism is, and it actually occasionally has me reading the Bible again, which is reinforcing my view, though certainly not sending me back to church again.

      1. Stress appears to be a contributing factor; however, it seems mostly to prompt people to look at alternatives. For those starting irreligious, they encounter the benefits of religion for dealing with emotional stresses, and thus may take it up. For those who find religion failing to help as much as expected with the stresses, that cognitive dissonance can contribute to starting a course of rational reflection about religion, particularly among those who’ve developed the habit of sustained rational reflection.

        And, of course, there are outliers from the tendencies.

        Personally, I prefer Hume to Harris, though the now slightly archaic language of the former makes it nigh-impossible for his work to be light reading. Still, it’s held up well… though YMMV.

  23. Analytic thinking may erode Christian belief, but surely not Buddhism, because Buddhism requires analytic thinking in order to look at the nature of the world and the self.

Leave a Reply