Statisticians 51, Pundits 0

November 26, 2012 • 12:47 am

by Greg Mayer

As both an undergraduate and graduate student, I was fortunate to be taught statistics by some of the best statistical minds in biology: Robert Sokal and Jim Rohlf at Stony Brook, and Dick Lewontin at Harvard. All three have influenced biostatistics enormously, not just through their many students, but also through writing textbooks, the former two coauthoring the still essential Biometry (4th edition, 2012), the latter joining the great G.G. Simpson and Anne Roe in revising the seminal Quantitative Zoology (2nd edition, 1960). In my first year of graduate school, while on a two-month field course in Costa Rica, other students, knowing I’d already “done” Sokal and Rohlf, would consult with me on statistical questions. Towards the end of the interview that got me the position I currently hold, I was casually asked if I could teach a course in “quantitative biology”, to which I replied “yes” (the position had been advertised for evolution and vertebrate zoology). The course, now entitled biostatistics, has wound up being the only class I have taught every academic year.

[Cartoon from xkcd.]

I mention these things to establish my cred as, if not a maven, at least an aficionado of statistics. It was thus with some professional interest that I (along with others) noted that towards the end of the recent presidential election campaign, pollsters and poll analysts came in for a lot of flak. Polling is very much a statistical activity, the chief aim being, in the technical jargon, to estimate a proportion (i.e., what percent of people or voters support some candidate or hold some opinion), and to also estimate the uncertainty of the estimate (i.e., how good or bad the estimate is, in the sense of being close to the “truth” now [which is defined as the proportion you would get if you exhaustively surveyed the entire population], and also as prediction of a future proportion). The uncertainties of these estimates can be reduced by increasing the sample size, and thus poll aggregators, such as those at Pollster and Real Clear Politics, will usually have the best estimates.
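The arithmetic behind that last point is simple: a proportion’s standard error shrinks with the square root of the sample size, so pooling many polls tightens the margin of error. A minimal sketch, with hypothetical poll sizes:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A single poll of 800 likely voters showing 51% support:
single = margin_of_error(0.51, 800)    # about +/- 3.5 points
# Pooling ten comparable polls (n = 8000) shrinks it by roughly 1/sqrt(10):
pooled = margin_of_error(0.51, 8000)   # about +/- 1.1 points
print(round(single, 3), round(pooled, 3))
```

(In practice aggregators must also worry about correlated errors across pollsters, which simple pooling ignores.)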

In the last weeks before the election, a large swath of the punditry declared that the polls, and especially the aggregators, were all wrong. Many prominent Republicans predicted a landslide or near-landslide win for Mitt Romney. The polls, it was claimed, had a pro-Obama bias that skewed their results, and a website called UnSkewed Polls was even created to ‘correct’ the skew. Nate Silver, the sabermetrician turned polling aggregator of 538 (at the New York Times), was the subject of particular opprobrium. Joe Scarborough of MSNBC had this to say:

Nate Silver says this is a 73.6 percent chance that the president is going to win? Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning. And you talk to the Romney people, it’s the same thing. Both sides understand that it is close, and it could go either way. And anybody that thinks that this race is anything but a tossup right now is such an ideologue, they should be kept away from typewriters, computers, laptops and microphones for the next 10 days, because they’re jokes.

Dylan Byers of Politico mused that Silver might be a “one-term celebrity”, alluding to Silver’s accuracy in 2008, but apparently not noticing his accuracy in 2010 as well. The nadir of these attacks, offered up by Dean Chambers, was not just innumerate, but vile; Silver, he wrote, is

a man of very small stature, a thin and effeminate man with a soft-sounding voice that sounds almost exactly like the ‘Mr. New Castrati’ voice used by Rush Limbaugh on his program.

[Chambers has removed this passage from his piece, but many, including Jennifer Ouellette at Cocktail Party Physics and Andrew Sullivan, captured it before it was taken out.] I’ve seen Silver on TV many times, but he’s usually sitting, so I have no clear idea of his size, and I have no idea what Chambers finds effeminate about him (unless this is code to reveal that Silver is gay, something that a follower of Silver’s analyses would never know; I didn’t). But even if Chambers’s physical description were true, what could it possibly have to do with the veracity of Silver’s statistical analyses?

Averages of large numbers of polls have rarely if ever been as far off as these pundits would have had us believe, but in polling, as in science, the proof is in the pudding.  As the results came in, Fox News analyst Karl Rove, one of those who had foreseen a Romney victory, seemed to enter a dissociative state, as his inability to assimilate the election results was painfully displayed before the viewing audience. Anchor Megyn Kelly eventually asked him, “Is this just math you do as a Republican to make yourself feel better?” So just as paleontologists can boast “we have the fossils, we win”, poll aggregators can now boast, “we have the election results, we win”.

To me, it seems that there is a class of related, and unfounded, positions taken up primarily by conservatives that have a common source: the determination that when the facts are inconvenient, they can be wished away. As scientists, we’ve seen it mostly in scientific issues: embryology, evolution, the big bang, global warming, the age of the Earth. Some conservatives don’t like the facts, so they create a parallel world of invented facts, or dream up conspiracy theories, and choose to dwell in an alternate reality that, unfortunately for them, isn’t real. In a curious convergence with postmodernism, the very notion of “fact” is disdained. Paul Krugman has noted that the problems are at root epistemological, constituting a “war on objectivity”, and that these conservative pundits have this problem not just with science, but with political reality as well. Andrew Sullivan is also mystified by the divorce from reality.

The poll-based statisticians, all of whom predicted an Obama victory, were broadly correct. Several analyses of which analyst did best have already appeared. Nate Silver has compared the pollsters, although Pollster notes it may be too early to tell. The LA Times has self-assessments by several pundits and poll aggregators.

Being, as I said, a statistics aficionado, and a few weeks having passed since the election, I thought I’d compare the prognostications myself. I chose to compare the three poll aggregators that I followed during the run-up to the election.

All three did a state-by-state (+ District of Columbia) analysis, which, under the electoral college system, makes the most sense. Electoral-vote.com, run by Andrew Tanenbaum, the Votemaster, has the simplest aggregating algorithm: non-partisan polls from every state are arithmetically averaged over a one-week period starting with the most recent poll. Each candidate’s electoral votes are whatever the states he’s leading in add up to.

The Princeton Election Consortium, run by Sam Wang, takes the median of recent polls, assumes an error distribution to give a win probability, then calculates the probability of all 2^51 possible outcomes, creating a probability distribution over the possible electoral vote outcomes. Wang prefers to look at the median, but this distribution also has a mode.
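That distribution can be computed exactly without enumerating all 2^51 combinations: convolve the states in one at a time. A minimal sketch, with three made-up states, invented win probabilities, and an independence assumption (not Wang’s actual code):

```python
def ev_distribution(states):
    """states: list of (electoral_votes, p_dem_win) pairs.
    Returns dist, where dist[k] is the probability the Democrat wins
    exactly k electoral votes, assuming states are independent."""
    total = sum(ev for ev, _ in states)
    dist = [0.0] * (total + 1)
    dist[0] = 1.0
    for ev, p in states:
        new = [0.0] * (total + 1)
        for k, prob in enumerate(dist):
            if prob:
                new[k] += prob * (1 - p)   # Democrat loses this state
                new[k + ev] += prob * p    # Democrat wins this state
        dist = new
    return dist

# Toy example with three hypothetical "states":
dist = ev_distribution([(29, 0.5), (18, 0.9), (11, 0.7)])
mode = max(range(len(dist)), key=dist.__getitem__)
print(mode)  # with these toy numbers, the modal outcome is 29 electoral votes
```

The same trick, applied to 51 state probabilities, yields the full electoral-vote histogram in a fraction of a second.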

Finally, Nate Silver at 538 takes a weighted average of state polls, where the weights discount known “house effects” of particular pollsters and the recency of the poll, and then throws in corrections for various other non-polling data (e.g. economics), national polling data, and things that affect polling (e.g. convention bounces). This all leads to a win probability, which again leads to a probability distribution of electoral vote outcomes. Silver emphasizes the mean, but this distribution also has a mode. When polling data is dense, and especially when the election date is near, all three should have about the same result. When polling data is sparse, Silver’s method, because it uses other sources of data for predictive inference, might be better.
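The first step of such a scheme, correcting each poll for its house effect and then weighting, can be sketched as follows; the numbers and the simple linear form are illustrative stand-ins, not Silver’s actual model:

```python
def weighted_poll_average(polls):
    """polls: list of (margin, weight, house_effect) tuples, where margin is
    the leading candidate's lead in points, weight encodes recency and sample
    size, and house_effect is the pollster's known lean in points."""
    adjusted = [(margin - house) * w for margin, w, house in polls]
    return sum(adjusted) / sum(w for _, w, _ in polls)

# Hypothetical polls: a stale poll from a house leaning +0.5 to the leader,
# and a fresher poll from a house leaning -1.0:
polls = [(3.0, 1.0, 0.5),
         (1.0, 2.0, -1.0)]
print(weighted_poll_average(polls))
```

The design point is that both adjustments pull in the same direction: a biased pollster’s margin is corrected before it is averaged, and a stale poll simply counts for less.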

So, how’d they do? We can look at how they did on state calls, electoral vote, and popular vote.

State (including District of Columbia) Calls. Nate Silver got all 51 right. Sam Wang got 50 right, missing on Florida, which he called for Romney, but noted it was on a knife edge. The Votemaster got 49 right, called North Carolina a tie, and called Florida for Romney. It’s of course easy to call Texas, New York and California correctly, so the test is how they did in tossup and leaning states. They all did well, but advantage Nate.

Electoral Vote. Obama got 332 electoral votes. Nate Silver’s model prediction was 313, Sam Wang’s prediction was 305, and the Votemaster gave Obama 303.

In addition to their predictions, we can also add up the electoral votes Obama would get based on the state calls—I term this the “add-up” prediction. For this prediction, Nate gave Obama 332 (exactly correct), and Sam gave him 303 (because he got Florida wrong). The Votemaster’s prediction is the add-up prediction of his state calls, so it’s again 303. We could perhaps split the tied North Carolina electoral votes for the Votemaster, giving 310.5 for Obama, but this brings him closer to the final result only by counting for Obama votes from a state he lost.

For the two aggregators that showed full distributions of outcomes, we can also look at the mode of the distribution (remember, Nate prefers the mean of this distribution, Sam prefers the median). Nate’s mode is 332, again exactly right, while Sam’s mode is 303, although 332 was only slightly less likely an outcome in his final distribution. All of them did pretty well, each slightly underestimating Obama, but once again a slight advantage goes to Nate.

Popular Vote. The Votemaster does not make a prediction of national popular vote, so he can’t be evaluated on this criterion. Sam Wang doesn’t track the popular vote either (stressing, correctly, the individual state effects on the electoral college), but he did give every day what he calls the popular vote meta-margin, which is his estimate of the size of the shift in the national vote necessary to engender an electoral tie. Also, in his final prediction, he did make a popular vote prediction, a Bayesian estimate based on state and national polls.

Even more problematic than the predictions is knowing what the election results are. As Ezra Klein (see video 3) and Nate Silver have both noted in the last few days, there are still many votes to be counted, and most of them will be for Obama.

The compilations of major news sources, such as the New York Times or CBS News, are derived from the Associated Press, and have been stuck at one of two counts (Obama 62,211,250 vs. Romney 59,134,475 or Obama 62,615,406 vs. Romney 59,142,004) for some days now, with the latest results not added.

Wikipedia is in general an unreliable source (perhaps more on this later). However, David Wasserman of the Cook Political Report, despite his article being for subscribers only, has been posting his invaluable collection of state results in Google Docs. The latest results are Obama 64,497,241, Romney 60,296,061, others 2,163,462, or 50.80%, 47.49%, 1.70%. (Rounding to the nearest whole number, this gives Romney 47% of the vote, a delicious irony noted by Ezra Klein in the video linked to above.) We could also calculate the two-party percentages, which are Obama 51.68%, Romney 48.32%.
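Those percentages follow directly from Wasserman’s totals, and the same totals give the two-party shares:

```python
obama, romney, others = 64_497_241, 60_296_061, 2_163_462
total = obama + romney + others

# All-candidate shares:
all_candidate = (100 * obama / total, 100 * romney / total)
# Two-party shares (third-party votes excluded from the denominator):
two_party = (100 * obama / (obama + romney), 100 * romney / (obama + romney))

print(all_candidate)  # roughly (50.80, 47.49)
print(two_party)      # roughly (51.68, 48.32)
```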

Nate Silver predicted an all-candidate popular vote distribution of Obama 50.8%, Romney 48.3%, others 0.9%. This is spot on for Obama, and a tad high for Romney. We can, however, convert Nate’s numbers to two-party percentages, and get 51.3 vs. 48.7; this slightly underestimates Obama. Sam Wang gave only a two-party vote prediction, 51.1 vs. 48.9; this is a slightly greater underestimation of Obama’s percentage. Sam’s final popular vote meta-margin was 2.76%, and this is closer to the actual margin (3.31% [all] or 3.36% [two-party]). So one last time, advantage Nate.

(I should note that Nate Silver’s and the Votemaster’s calls are not personal decisions, but entirely algorithmic, with Nate’s algorithm being complex, and the Votemaster’s very simple. Sam Wang’s calls are algorithmic up until election eve, at which point he makes predictions based on additional factors; for example, this year he expanded his usual one week polling window in making his final predictions. In fact, his last algorithmic prediction of the electoral vote, 312, was slightly better than his final prediction.)

More refined analyses of the predictions can be made (political science professors and graduate students are feverishly engaged in these analyses as you read this). We could also do individual state popular votes, and extend the results to Senate races, too. (Quick analysis of the 33 Senate races: Silver 31 right, 2 wrong; the Votemaster 30 right, 0 wrong, 3 ties; Wang a bit harder to say, because he paid less attention to the Senate, but I believe he got all 33 right.) But overall, we can say that the pollsters (on whose work the predictions were based) and the aggregators did quite well. Of the three I followed closely for the presidential election, Nate Silver gets a slight nod.

The critics of statistics and the statisticians got several things wrong.

First, they did not understand that a 51-49 poll division, if based on large samples, doesn’t mean that the second candidate has a 49% chance of winning; rather, that chance is far smaller.

Second, they thought the polls were biased in Obama’s favor, but, if anything, they slightly underestimated his support and slightly overstated Romney’s (Obama’s margin will increase a bit further as the last votes are counted).

And finally, they thought that the predictions were manipulated by the biases of the aggregators. But the opinions of the aggregators enter only in setting up the initial algorithms (very simple for the Votemaster, most complex for Nate Silver), and in most cases seem to have been well chosen.

Rather, it is the pundits who engaged in what Sam Wang has rightly mocked as “motivated reasoning”; Andrew Sullivan has also noted the bizarre ability of pundits to precisely reverse the evidential meaning of the polls. It was the pundits who were guilty of picking through the polling data to find something that supported their preconceived notions (scientific creationism, anyone?); it was not the aggregators, who, especially in Silver’s case, were generally paragons of proper statistical humility.
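The first of these errors is worth a back-of-the-envelope illustration. Under a simple normal approximation (with a hypothetical pooled sample size; real aggregators model this more carefully), a 51-49 split from a large pooled sample implies a win probability far above 51%:

```python
import math

def win_probability(p_hat, n):
    """Probability that the true vote share exceeds 50%, given an estimated
    share p_hat from an effective sample of n respondents, using a normal
    approximation to the sampling distribution of a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    z = (p_hat - 0.5) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# A 51-49 division sounds like a tossup, but with a pooled sample of 5000
# respondents the leader's win probability is already over 90%:
print(round(win_probability(0.51, 5000), 3))
```

This nonlinearity between vote share and win probability is exactly what Scarborough’s “50.1 percent chance” remark missed.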

52 thoughts on “Statisticians 51, Pundits 0”

  1. If I am not mistaken, what all these poll analysts are doing is using modern meta-analysis methods. The methods differ slightly, but they all weight other people’s polls, using weights based on their errors, and they all sample from the possible results. I haven’t noticed anyone naming the approach, though meta-analysis is now quite common in many scientific fields.

  2. One other aggregator poll of sorts is the InTrade online wagering data. The InTrade electoral count was dead-on for 24 straight days and then wavered after Benghazi and the first debate in Denver. It settled down thereafter and only missed Florida, which Silver called fairly late. InTrade was also quite accurate in 2008, so as they say, follow the money. 🙂

    1. There’s a question, though, of where bettors are getting their information from. It does not seem unreasonable to suggest that they were doing nothing more than a less formal aggregation of the exact same polling data as Silver, the Votemaster, and Wang. Indeed, the bettors were likely getting their own information primarily from one or more of those three.

      At least Silver and the Votemaster had articles during the race about the betting markets, and both noted significant opportunities for arbitrage, indicating that the markets were far from optimal. Either they’re not big enough to be efficient, or somebody was placing bets designed to move the markets, not make money. Intrade, as I recall, showed some signs of such manipulation.



      1. I had not heard of any attempts to move the Intrade market, although I do believe they shut down the DC contracts because of the huge concentration of bets on President Obama. I perceive the Intrade market as thousands of folks doing their own analysis of polling data and then making their wagers – a meta, meta analysis so to speak 🙂

        1. As I recall, Romney always had better chances on Intrade than on the Iowa Electronic Markets and the British betting houses, and that there were some big swings towards Romney (indicative of a large bet being placed) not associated with anything in the news.

          The idea is that Intrade is chump change to plenty of influential people and organizations, and one of them may have decided to place bets in a way that made Romney look like he had a better chance of winning than the overall market as a whole thought. That would then be used as evidence that Romney actually was doing better in campaign publicity.

          Don’t forget: significant numbers of people pick the candidate they think is going to win, not the candidate they want to win. Significant manipulation of a prediction market could actually turn into a self-fulfilling prophecy.



  3. Nice wrap up.

    Scarborough needs to read about the Dunning-Kruger effect or just look in the mirror.

    Silver tweeted a haiku at that time:

    Pundits retweeting
    Polls, when news cycle is void
    Just take the average

    Oh, Ouellette rocks, and of course she curates on G+ 🙂

  4. As a dumb humanities major, even I could gather that when all the results are within a margin of error yet all “err” in the same direction, Obama was likely to win.

    Also, if there’s any bias, it would be toward people who have landlines and answer the phone. Those without landlines would tend to be younger citizens who would vote Democrat.

  5. Just for the lulz, Nate a little while ago retweeted the following: “And with that, the moment many have been waiting for has arrived. @MittRomney drops to 47.49% of the popular vote [link deleted]”

    Here’s the url to the tweet:

  6. There is a typo at the top of the survey form on the UnSkewed Polls web site:

    “Please complete a few Democraphic Questions:”

    Sort of comes out as short for “Demonstrably Crap Fiction”.

  7. I was actually surprised at Karl Rove’s detachment from reality this time. He’s a highly intelligent man, he didn’t get good at what he does by failing to understand reality, and his 2008 predictions were pretty accurate and quite realistic about Obama’s ensuing victory.

    1. Have you seen the proposed conspiracy theories, that Rove became unhinged precisely because he KNEW the results he expected, but did not expect “ANONYMOUS” to intervene and stop his electronic vote redirection ploy?

      If this all turns out to be conspiracy-theory dreck, I apologize now. But for now, could there be a better reason than “wishful thinking” for Rove’s meltdown?

  8. “Some conservatives don’t like the facts, so they create a parallel world of invented facts, or dream up conspiracy theories, and choose to dwell in an alternate reality that, unfortunately for them, isn’t real.”
    Unfortunately for them? Unfortunately for the whole world.

  9. “To me, it seems that there is a class of related, and unfounded, positions taken up primarily by conservatives that have a common source: the determination that when the facts are inconvenient . . . .”

    Below is an op-ed from the NY Times “Sunday Review” from yesterday, a critique of science journalism’s coverage of neuroscience. It specifically critiques Chris Mooney’s writings about Republicans (conservatives, eh?) and science.

    Just to keep themselves honest, do liberals possibly also warrant examination of what, if any, disregard of inconvenient facts they may exhibit, especially as regards so-called “post-modernism”? Is it correct that “post-modernism” is associated (much?) more with the liberal than with the conservative mindset? Does neuroscience have anything to say about the “liberal” or “libertarian” or any other mindset?

    As regards polls, I wonder how many citizens enjoy taking calls from pollsters? Is there some sort of patriotic duty to participate in a poll? A lot of people don’t answer their land lines but check voice mails (from those who will trouble themselves to leave one). Do people always say what they mean? Do they sometimes tell a pollster most anything just to get the call over with? How are such things statistically accounted for?

    I was once sent a $1 bill for participating in a poll. Is that a widespread motivating tactic here in The Land of The Fee?

    1. Regarding your last points about polls, those represent measurable biases. It’s not a question of whether they produce inaccurate results, but whether people are consistent in the way they provide bad data. In other words, are those types of polling errors random or consistent?

      Most biases like that are consistent and hence are measurable and correctable from election to election.

      If, on the other hand, they are random, then more polling reduces the error.

      Either way, you can account for both repeatable bias shifts of polls and for random polling errors.

  10. Nate Silver says this is a 73.6 percent chance that the president is going to win? Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning.

    Both of those were true at that time. The two numbers are measuring quite different things.

    Joe Scarborough needs to learn enough about statistics to understand why they were not contradictory.

    1. I’m not sure I’d say both were true; he couldn’t have both a 73.6% chance of winning and a 50.1% chance of winning. Rather, I’d say he had a 73.6% chance of winning with 50.1% of voter popularity. The latter isn’t a measure of chance of winning since aggregate national popular vote doesn’t select the winner. (They are correlated, of course.)

      To me, it’s that difference that pundits like Joe need to focus on.

      1. Taken as context free statements, they contradict. But in the context of where those numbers came from, they were both true.

        The 73.6 percent was something like a confidence level for the prediction of a win, while the 50.1% was the anticipated vote total.

        Whether Joe Scarborough was actually confused, or was just constructing a polemic, I cannot tell. However, many people are confused by statistics.

      2. They were both true, and present no contradiction at all. It was Scarborough’s error not to realize this. If in a particular state a candidate has 50.1% of the vote, then the probability of that candidate winning the state is higher than 50.1%, and can be much higher, depending on the size of the sample that produced the 50.1% estimate. The exact win probability depends on assumptions about the error distribution, but given quite standard and conservative assumptions, the win probability of the leading candidate will, correctly, be much higher than the estimated share of the vote. Both Silver and Wang did such calculations.


        1. Just to clarify a bit further, Obama was polling 60+% in California through most of the lead up to the election, but this did not mean he had a 60% chance of winning. Rather, because the 60% was based on large samples from multiple polls, the probability that his true percentage was below 50% was vanishingly small, and thus Nate and Sam correctly had Obama’s win probability for California at over 95%. The relationship between vote share % and winning probability is strongly nonlinear; the win probability goes up much more rapidly as the vote share goes up from 50% (and conversely goes down more rapidly as vote share drops below 50%). Chad English knows all this, and I explicate for readers less familiar with the statistics of polling. His post on Silver as a fox, which he links to below, is well worth reading.


        2. What I find most aggravating about pundits who pretend to miss this point is that we know politicians on both sides clearly get it, because they gerrymander quite effectively. The whole process of gerrymandering consists of doing exactly this sort of calculation: what is the smallest vote margin I can use to guarantee a very high (~90%) overall chance of victory?

          The fact is, republican political analysts do that sort of calculation just as well as democrat ones. They demonstrate that they can do it extremely well every ten years (or less, in Texas). So when these conservative pundits suddenly seem to get Alzheimer’s about exactly the sort of political calculus they themselves use to gerrymander districts, I don’t buy it. I can’t buy it. It’s all theater; they understand exactly how and why Silver was accurate because they use the exact same math themselves, for their own benefit, and they’re just bulls****ing their followers when they say otherwise.

  11. This is a great post. I suggest it is very complementary to my article on the same topic (even the same quotes), where I focused on the personality reasons why “hedgehog” pundits can’t fathom how “fox” mathematicians can do it better (based on Philip Tetlock’s 18-year analysis of pundit predictions):

    “Why Nate Silver is a Fox”:

    As a bonus, it’s interesting that the same prediction accuracy is unachievable in Canada due to the difference in how votes are aggregated by ridings:

    “Canada gets the bronze: Why Nate Silver’s forecast accuracy is unachievable in Canada”:

  12. Actually, Nate Silver predicted a Romney victory in Florida, albeit only 50.3% to 49.7%, which is probably in the noise level. So his score was 50 out of 51 with Florida really up for grabs (i.e. not much of a miss on Florida). And, in fact, Florida was close as it turned out, 49.9% to 49.0%, although not nearly as close as in 2000.

    1. Silver’s final popular vote estimate for Florida was Obama 49.8%, Romney 49.8%, but Obama’s estimate was ever so slightly higher (by less than .1), giving him a win probability of 50.3%. This is very close, and he labeled Florida verbally a “tossup”, but his numerical calculation (and color coding on his map) called it for Obama. The call for Obama did come in his final estimate based on the last pre-election released polls. He had estimated a Romney win prior to these very last polls. Either way it was close, and we shouldn’t give Silver too much credit, or Wang or the Votemaster too much blame, for their Florida calls, but Silver did edge them to go 51 for 51.


      1. Hi Greg.

        One thing I think you might want to consider when grading the methods against each other is how well they estimated their uncertainty. For example, if a method were to estimate Texas as only 70% or even 90% for Romney, that would be an extreme overestimate of the uncertainty involved. And as you mention, since both Nate and Sam called Florida a tossup (and they were right), getting the call right is worth very little in grading their performance. Indeed, Sam’s penultimate estimate of the electoral college called Florida for Obama (not surprising that a “knife edge” race would flip back and forth in the final days).

        In particular, I think it is possible that Nate overestimated his uncertainty on the whole election. Of course, we won’t be able to determine if this is true or not until we have a lot more replicates (i.e. decades of additional elections). One way we can kinda get at this is by calculating Brier scores, as Sam suggested:

        This basically puts Sam and Nate on equal footing. If Sam had put confidence intervals on the scores (I have no idea on how to do that off the top of my head) I would be surprised if Nate and Sam didn’t overlap extensively.

        Anyway, I’d say it was a wash on calling the electoral college, since the biggest differences between them was on a state that both of them called correctly a tossup.

        On the other hand, Sam seems to have done much better on the Senate and Nate may have had an edge on the popular vote.
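        For readers unfamiliar with it, a Brier score is just the mean squared error of probabilistic forecasts against 0/1 outcomes, so lower is better. A quick sketch with invented forecasts for three hypothetical states:

```python
def brier_score(forecasts):
    """Mean squared error of probabilistic forecasts against binary outcomes.
    forecasts: list of (predicted_probability, outcome), outcome 0 or 1.
    A perfectly confident, always-correct forecaster scores 0."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A forecaster who commits harder to the right calls is rewarded:
confident = brier_score([(0.9, 1), (0.8, 1), (0.6, 0)])
hedged = brier_score([(0.7, 1), (0.6, 1), (0.5, 0)])
print(confident, hedged)
```

        Note that the score penalizes both overconfidence on missed calls and excessive hedging on correct ones, which is why it is a reasonable single-number grade for uncertainty estimation.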

        1. A fuller analysis would look quantitatively at each state’s results vs. predictions, using some sort of average error or variance to evaluate the estimates of the various aggregators. I was going to do that, but realized i) it would be a much longer post, and thus more work; ii) there were various technical issues that I had no strong background on (e.g., should I use all-candidate or two-party share for the analysis); and iii) some poli sci grad student, who had the requisite background, was surely already hot on the case. You’re absolutely right that calling Florida correctly was more of a coin flip than anything, and not much credit (or blame) attaches to a call on an essentially 50-50 state. It’s like a statistical test with p=.049 vs. .051 for the same experimental setup: the evidential import of the two experiments is essentially identical, but we call one significant, and the other not. For evaluating how good they were at estimating the uncertainty of their estimates, decades of results would help as you suggest. I’d thought of doing something like comparing distributions of results within a single election, and looking at how many times the result fell outside some kind of confidence interval. For example, if, in say a hundred races, the results never fell outside a 95% interval, then your estimated uncertainty was probably too great, i.e. your confidence was actually higher than 95%. But I didn’t think much about how to actually operationalize the comparisons, and, as I said, I’m sure someone is working on it already. As I was finishing the post, Sam’s site was down, and I had to find cached versions of his various predictions, and so couldn’t easily check all his posts. I find Sam’s site a little hard to navigate when it’s up, so it’s much harder trying to work from cached pages alone.


          1. I do hope somebody dissects the performance of the reputable aggregators in a detailed, rigorous way. I’d love to read the results.

    2. His Nov 6 forecast is still up on the 538 site, and it has Obama winning Florida with 50.3% of the vote. He did earlier have Romney winning, perhaps at the time Joe made the comment (with 73.9% probability vs 90.1% in his final prediction).

      In the end, for me it’s not even the binary call that matters, but the accuracy of the vote distribution predictions. If he had said Florida would be a toss-up and could go either way, I would have given him full points. Guessing correctly which side a coin lands on isn’t really predictive power. Recognizing it’s pretty much 50% odds in Florida is the valuable information.

  13. Thanks for a such a clear and incisive post-mortem.

    In view of the Gallup Poll’s signal failure this time, I was instantly reminded of Alistair Cooke’s Letter from America commemorating George Gallup (July 14, 2000):

    Gallup’s biographer, Richard Reeves, has best summed up Gallup’s achievement:
    “He made clear the mortality of instinct, first in commerce then in government and politics”. In a word, he “killed the hunch.”

    Those of you who remember the early years of Gallup’s invasion of our hunches and guesses will remember he was not received with cheers and the blare of trumpets. He was most resented by politicians, who, until Gallup, had been able to claim a monopoly on what the public thought and wanted.
    But then, we all at first inclined to belittle Gallup and were eager to find and quote examples of the rare occasions he was off the truth.

    The main effect it seems to me has been to abolish the politician who talks and acts wholly on principle. He first has to find out if his asserted principle is acceptable to most people.
    So what we have seen is the disappearance of the statesman. Today in most democratic countries the leader is led by what a majority of the public thinks it wants.
    The leader is led by our prejudices – not what Dr Gallup had in mind.

    Yet still today the old Adam asserts itself. Most people deeply resent the results of scientific method because it tends too often to prove their own opinions dead wrong. So they say: “Polls can be famously unreliable” or “Well, that was not the experience of my brother-in-law”.
    After the statistical revolution most of us I’m afraid still cling to what Mr Justice Holmes called “Every man’s view of truth. His belief, in the teeth of the facts, in what he can’t help thinking must be true.”

    Emphasis mine. The whole transcript is available online.

  14. Your mention of Andrew Sullivan reminds me that we all have blind spots. Sullivan, whom I read daily for his political analysis and many insights, has a huge blind spot in relation to his personal Catholicism, applying a thought process that seems to be: “I believe this because I really, really, really want to”. This aspect of his writing, while it can be interesting, and for an outsider entertaining, is an area where he applies a much lower standard of objectivity than in the rest of his work. Given the high proportion of atheist/agnostic readers of his blog, the Daily Dish, it seems that none of us are put off by this.

    On this note I wonder if Karl Rove, for example, might be objective in those areas of his life that are divorced from politics.

    I also wonder about my own blind spots – areas that study sections love to point out to me!

  15. “towards the end of the recent presidential election campaign, pollsters and poll analysts came in for a lot of flak”

    Insofar as polling information was being used by non-pollsters as a substitute for discussion of policy, I can see why people could have felt antagonized.

  16. Joe Scarborough’s non-apology is classic. He is someone who is quite obviously, and hilariously, so wrapped up in his own masculine self-image that he can barely bring himself to admit error.

    It starts off with mandatory hippie bashing: liberals were so busy reading 538 they didn’t bathe! Liberals smell bad, just like hippies! Get it?!!

    Moving on, Scarborough strikes the Serious Moderate pose, thus proving he is the only adult in the room and superior to both squabbling sides without having to spend any time understanding any of the issues.

    Then, finally, the half-apology: “But I do need to tell Nate I’m sorry for leaning in too hard and lumping him with pollsters whose methodology is as rigorous as the Simpsons’ strip mall physician, Dr. Nick.”

    Translation, “Sorry, kid, but sometimes I get so wrapped up in my bristling manliness that I accidentally wind up ‘leaning in too hard’ against pencil-necked dweebs like you. Oops.”

    The real lesson Scarborough needs to learn is that from now on, he ought to stick to whatever it is he has professional knowledge about and leave the polling analysis to people who can do math. At least he is willing to admit error, even if this is one of the most grudging apologies I have read in some time.

  17. There’s a story out there that the hacker group Anonymous foiled a plan to skew the Ohio results, and that this is the real basis of Rove’s apoplexy on Fox that night.

    While at this point this is all in the conspiracy-theory realm, the story itself concludes: “If this story is true (and at this point there is no way to verify it)…”

    But suppose it is true, and it hadn’t been foiled – Silver would have looked like a genius but for a colossal failure in Ohio, which we might hope would have been a smoking gun leading to the unravelling of the whole thing.

    1. Yeah, I’m afraid that story just smells like made-up BS. It’s like the conservative claims that all the polls were biased and Nate Silver was biased and there was massive voter fraud, rather than the polls and Silver being accurate and the only “voter fraud” being Republican attempts to stop Democrats voting – the essence of conspiracy theory is treating a negligible probability as non-negligible.

      1. And I find that the story has a different aroma – the bouquet of possible authenticity.

        For one, we know some things about the character and capabilities of Anonymous – and the story comports with these. No one from the ‘real’ Anonymous has made any public accusations that the video was a fake, AFAIK.

        For two, we know the track record of black box voting and its vulnerability to vote tampering. Quite a few examples have been documented, always favoring Republicans, it seems.

        Third, we can verify the track record of the Ohio election authority as being one of incredible bias against Democratic voters.

        Fourth, we should be able to verify or disprove whether the Ohio electronic voting system actually did go off line at nearly the same time in the evening during various elections. I have not seen mention of any dispute of these claims.

        Fifth, the Anonymous accusations make much more sense of the bizarre actions of Karl Rove and other prominent Republicans pundits, than does a hypothesis of collective dementia.

        Sixth, there is the case of Mike Connell, a Republican computer programmer who died mysteriously just days before he was going to testify in D.C. about his first hand knowledge of Karl Rove’s efforts to steal votes electronically.

        1. If so, UFOs have a different aroma – the bouquet of possible authenticity.

          For one, we know some things about the character and capabilities of people who throw pie plates into the air – and the story comports with these. And no one from the ‘real’ pie plate throwers has made any public accusations that UFO videos are fake.

          Nope, a negligible probability is just that.

          1. So the probability of alien life (which has never been seen) visiting – of all places – the Earth – in a space ship (for which there is zero evidence) is just as unlikely as voting fraud. Gotcha.

        2. “And I find that the story has a different aroma – the bouquet of possible authenticity.”

          That smell is plausibility, which is a phenomenon found in stories. But each detail that adds to plausibility subtracts from probability, given the explanation that doesn’t involve multiplying together a series of improbabilities.
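          The arithmetic behind this point is simple conjunction: every extra detail must itself be true for the whole story to hold, so the joint probability can only shrink as details pile up. A minimal sketch, with entirely made-up numbers for illustration:

```python
# Hypothetical probabilities that each independent detail of a story is true.
# Each detail makes the story feel more plausible, yet the probability that
# ALL of them are true is their product, which only shrinks.
details = [0.5, 0.4, 0.3, 0.2]

p = 1.0
for d in details:
    p *= d

print(p)  # → about 0.012, far smaller than any single detail's probability
```

          This is the conjunction effect: a richly detailed story reads as more plausible while being, as a whole, less probable.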

  18. Great analysis! If someone else does this we can start a meta-analysis of analyses.

    Another reflection is that probability contexts seems to have an unusual frequency of “p”. So: Perhaps “paragons of probabilistic propriety” produces a preferable pronunciation of prognosticator pandering!?

    Paul Krugman has noted that the problems are at root epistemological.

    I think the statistical poll on reality is in: empirical facts 100, everything else 0.

    Rather, the problem is likely psychological.

  19. Long…everything long! Plenty of comments to chew on.

    FYI, Greg Mayer, the proper quote is not “The proof is in the pudding” but rather

    “The proof of the pudding is in the tasting”

  20. Well, have we missed anyone? I mean, is there a single leftist blogger out there who hasn’t repeated this schtick about being “mystified” about how conservative pundits deliberately ignored the truth as represented by the polls? It’s getting a bit rich, isn’t it? When you look in the mirror and see a good, noble, humble respecter of the truth, but, glancing over at those poor, misguided souls who disagree with you, see only benighted, knuckle-dragging morons who not only deliberately embrace comfortable falsehoods but are morally suspect as well, isn’t it just remotely possible that you’re embracing a narrative instead of the truth yourself?

    Evidently you and the countless others who’ve posed similar word salads think not, because none of you, or at least none of you that I’ve read, have bothered to take a stroll outside the echo chamber and actually listen to what the conservatives have been saying. None of you, or at least none that I’ve read, ever seem to have actually listened to talk radio or read a conservative blog. If so, you certainly haven’t left many clues to that effect in your own posts. I might listen to the likes of Limbaugh and Beck two or three times in a given week, usually for not more than half an hour, and yet I frequently heard them talking about precisely why they thought the polls were wrong. I’ve never seen any of those reasons seriously discussed on this or any similar post. The conservative pundits were certainly wrong, but the notion that they were wrong because of a cavalier disregard for the truth and a stubborn unwillingness to face reality is nonsense. In general, the polls were reasonably accurate in the last three elections. They were not quite so accurate before that, though, and, at least since Carter/Ford in 1976, the errors have almost invariably been biased to the left. People have a longer memory than three elections. The conservatives were wrong, but their reasons for being wrong were not nearly as wildly implausible as you, Sullivan, Josh Marshall, and the rest have been claiming as you sadly shake your heads over the unfortunate fact that not everyone can be as wise as you.

    Isn’t it just possible that the stalwarts of the left occasionally have a tendency to embrace pleasant fictions as well? Take your dear old teacher, Dick Lewontin, for example. The last time I looked, he was still a Marxist. That doesn’t seem very plausible to me, either. Milovan Djilas had firsthand experience of the reality of Marxism in practice, and exposed it as a hoax in a work of nonfiction: “The New Class.” George Orwell also experienced the reality of Marxism firsthand in Catalonia and exposed it as a hoax in works of fiction: “1984” and “Animal Farm”. There were countless other eyewitness accounts, not to mention tens of millions of corpses, all of which Lewontin has managed to ignore. Even Trotsky realized that Marxism had ended in a dystopia just before Stalin had him murdered, for Christ’s sake. And then there was “Not in Our Genes,” in which Blank Slater Lewontin maintained that innate human nature has little or no effect on human behavior. Apparently he was wrong about that, too, if the flood of books coming off the presses recently maintaining the exact opposite are any indication. He should have listened to Robert Ardrey, who was right about innate behavior. In a word, Lewontin was dead wrong about two things that really matter, not just to biologists, but to all of us. No matter, he is still revered as a great expert, apparently because, as you say, he wrote a good book about statistics. As for Ardrey, he is an unperson. Rightly so, of course. He was a mere playwright, and had no business being right when all the “men of science” were wrong. He was impertinent to try and rise above his station that way, don’t you think?

    1. It’s getting a bit rich, isn’t it?


      When you look in the mirror and see a good, noble, humble respecter of the truth, but, glancing over at those poor, misguided souls who disagree with you, see only benighted, knuckle-dragging morons who not only deliberately embrace comfortable falsehoods but are morally suspect as well,

      You are being far too complimentary to the right wing extremists and christofascists.

      They aren’t poor, misguided souls and they aren’t morally suspect. They are evil.

      It’s telling that the reaction of the christofascists was to immediately want to secede. I noticed it years ago. They really, deeply hate the USA and the US government, and will destroy both if they can. They say so often.

    2. Isn’t it just possible that the stalwarts of the left occasionally have a tendency to embrace pleasant fictions as well?

      Helian is a reality denier, of course.

      The US left more or less doesn’t exist.

      And BTW, commie-ism has been dead for decades. It is 2012. You need to update your demonology.

  21. I’m just going to give a shout-out to Mike the Mad Biologist, who has for years been bastardizing the Dobzhansky quote: “Nothing in biology makes sense except in the light of evolution.”

    He correctly points out that “Nothing in movement conservatism makes sense except in the light of creationism.”

  22. Dean Chambers offered an apology on his website for his comments on Nate Silver’s appearance.
    I almost had a glimmer of admiration for his coming clean – but then he had to spoil it with his sad excuses (face-palm).
    He got a sound thrashing in the comments section though. Recommended Schadenfreude reading 🙂

  23. It’s interesting that you should try to link poor statistics to AGW scepticism. In fact the main thing which got the AGW sceptic movement started all those years ago was the discovery by competent mathematicians and analysts that the statistical analyses on which the whole ‘global warming’ movement is founded were full of holes and completely unreliable.

    It’s a little like believers becoming atheists when they actually start reading the Bible, instead of taking other people’s word for what’s in it.

  24. But… Jonathan Haidt says that conservatives are more (essentially) reality-based than liberals, so I guess Rove was “right”…
    except that empirical reality differs from Haidt’s ‘please give me a job on FOX’ conclusions…

  25. The problem is that all those atomistic macho contractors have their wives working for the government to get them all benefits, not to mention the real reason they work for themselves is so no one can see them break the law or go out of business. This is why Galbraith advocated that we prefer big business, big labor and big government.

Leave a Reply