I could go on and on about the errors and misconceptions of the paper from Nautilus below, whose aims are threefold. First, to convince us that several of the founders of modern statistics, including Francis Galton, Karl Pearson, and Ronald Fisher, were racists. Second, to argue that the statistical tests they made famous, which are widely used in research (including biomedical research), were developed as tools to promote racism and eugenics. Third, that we should stop using statistical analyses like chi-squared tests, Fisher exact tests, analyses of variance, t-tests, or even fitting data to normal distributions, because these exercises are tainted by racism. I and others have argued that the first claim is overblown, and I’ll argue here that the second is wrong and the third is insane—it doesn’t even follow from the first two claims were they true.
Click on the screenshot to read the Nautilus paper. The author, Aubrey Clayton, is identified in the piece as “a mathematician living in Boston and the author of the forthcoming book Bernoulli’s Fallacy.”
The first thing to realize is that yes, people like Pearson, Fisher, and Galton made racist and classist statements that would be deemed unacceptable today. The second is that they conceived of “eugenics” not as a form of racial slaughter, as Hitler did, but as a program of encouraging the white “upper classes” (who they assumed had “better genes”) to have more children while discouraging breeding among the white “lower classes.” But none of their writing on eugenics (which was not the dominant interest of any of the three) had any influence on eugenic practice, since Britain never practiced eugenics. Clayton desperately tries to forge a connection between the Brits and Hitler via an American (the racist Madison Grant) who, he says, was influenced by the Brits and who himself influenced Hitler, but the connection is tenuous. Nevertheless, this photo appears in the article. (Isn’t there some law about dragging Hitler into every discussion as a way to make your strongest point?)
My friend Luana suggested that I use this children’s book to illustrate Clayton’s point:
As the email and paper I cite below show, Clayton is also wrong in arguing that the statistical methods devised by Pearson, Galton, and especially Fisher were created to further their eugenic aspirations. In fact, Clayton admits this for several tests (bolding is mine).
One of the first theoretical problems Pearson attempted to solve concerned the bimodal distributions that Quetelet and Galton had worried about, leading to the original examples of significance testing. Toward the end of the 19th century, as scientists began collecting more data to better understand the process of evolution, such distributions began to crop up more often. Some particularly unusual measurements of crab shells collected by Weldon inspired Pearson to wonder, exactly how could one decide whether observations were normally distributed?
Before Pearson, the best anyone could do was to assemble the results in a histogram and see whether it looked approximately like a bell curve. Pearson’s analysis led him to his now-famous chi-squared test, using a measure called χ² to represent a “distance” between the empirical results and the theoretical distribution. High values, meaning a lot of deviation, were unlikely to occur by chance if the theory were correct, with probabilities Pearson computed. This formed the basic three-part template of a significance test as we now understand it. . .
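For readers unfamiliar with the test Clayton describes, here is a minimal sketch of Pearson’s goodness-of-fit logic. The counts below are invented for illustration (they are not Weldon’s crab data): 100 measurements are sorted into four equal-probability bins of a hypothesized normal distribution, and the χ² statistic measures how far the observed bin counts stray from the expected ones.

```python
# Sketch of Pearson's chi-squared goodness-of-fit test.
# All numbers are invented for illustration.

def chi_squared_statistic(observed, expected):
    """Pearson's X^2 = sum over bins of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 100 measurements sorted into four equal-probability bins of a
# hypothesized normal distribution, so each bin expects 25 counts.
observed = [18, 30, 29, 23]
expected = [25, 25, 25, 25]

stat = chi_squared_statistic(observed, expected)
# Critical value of the chi-squared distribution with
# df = 4 - 1 = 3 bins' worth of freedom, at the 5% level:
critical = 7.815

print(f"X^2 = {stat:.2f}")
print("reject normality" if stat > critical else "consistent with normality")
```

Here χ² = 3.76, well below the 5% critical value of 7.815, so these made-up data are consistent with normality. Note that nothing in the machinery cares whether the measurements come from crab shells, skulls, or crop yields.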
If the chi-squared test was developed to foster eugenics, it was the eugenics of crabs! But Clayton manages to connect the crab study to eugenics:
Applying his tests led Pearson to conclude that several datasets like Weldon’s crab measurements were not truly normal. Racial differences, however, were his main interest from the beginning. Pearson’s statistical work was inseparable from his advocacy for eugenics. One of his first example calculations concerned a set of skull measurements taken from graves of the Reihengräber culture of Southern Germany in the fifth to seventh centuries. Pearson argued that an asymmetry in the distribution of the skulls signified the presence of two races of people. That skull measurements could indicate differences between races, and by extension differences in intelligence or character, was axiomatic to eugenicist thinking. Establishing the differences in a way that appeared scientific was a powerful step toward arguing for racial superiority.
How many dubious inferential leaps does that paragraph make? I count at least four. But I must pass on to other assertions.
Ronald Fisher gets the brunt of Clayton’s ire because, says Clayton, Fisher developed his many famous statistical tests (including analysis of variance, the Fisher exact test, and so on) to answer eugenic questions. This is not true. Fisher espoused the British classist view of eugenics, but he developed his statistical tests for other reasons, even if he sometimes applied them to eugenic questions. In fact, the Society for the Study of Evolution (SSE), in deciding to rename its Fisher Prize for graduate-student accomplishment, says that the order “eugenics → statistical tests” is reversed:
Alongside his work integrating principles of Mendelian inheritance with processes of evolutionary change in populations and applying these advances in agriculture, Fisher established key aspects of theory and practice of statistics.
Fisher, along with other geneticists of the time, extended these ideas to human populations and strongly promoted eugenic policies—selectively favoring reproduction of people of accomplishment and societal stature, with the objective of genetically “improving” human societies.
In this temporal ordering, which happens to be correct (see below), the statistics are not tainted by eugenics and thus don’t have to be thrown overboard. As I reported in a post last year, several of us wrote a letter to the SSE trying to correct its misconceptions (see here for the letter, which also corrects misconceptions about Fisher’s racism), but the SSE politely rejected it.
Towards the end of his article, Clayton calls for eliminating the use of these “racist” statistics, even though these methods have saved many lives through their use in medical trials and have been instrumental in helping scientists in many other areas understand the universe. Clayton manages to dig up a few extremists who also call for eliminating the use of statistics and “significance levels” (the latter issue could, in truth, be debated), but there is nothing that can replace the statistics developed by Galton, Pearson, and Fisher. I’ll give two quotes showing that, in the end, Clayton is a social-justice crank who thinks that objectivity is overrated. Bolding is mine:
Nathaniel Joselson is a data scientist in healthcare technology, whose experiences studying statistics in Cape Town, South Africa, during protests over a statue of colonial figure Cecil John Rhodes led him to build the website “Meditations on Inclusive Statistics.” He argues that statistics is overdue for a “decolonization,” to address the eugenicist legacy of Galton, Pearson, and Fisher that he says is still causing damage, most conspicuously in criminal justice and education. “Objectivity is extremely overrated,” he told me. “What the future of science needs is a democratization of the analysis process and generation of analysis,” and that what scientists need to do most is “hear what people that know about this stuff have been saying for a long time. Just because you haven’t measured something doesn’t mean that it’s not there. Often, you can see it with your eyes, and that’s good enough.”
Statistics, my dear Joselson, was developed precisely because what “we see with our eyes” may be deceptive, for what we often see with our eyes is what we want to see with our eyes. It’s called “ascertainment bias.” How do Joselson and Clayton propose to judge the likelihood that a drug really does cure a disease? Through “lived experience”?
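To make the point concrete, here is a minimal sketch of how such a judgment is actually made: a two-sample t-test of the kind Gosset and Fisher formalized. The trial data below are invented for illustration, and the pooled-variance version shown is the simplest textbook form, not any specific trial’s analysis.

```python
# Sketch of Student's two-sample t-test on invented drug-trial data.
import math

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se

treated = [5, 6, 7, 6, 8, 7]   # symptom improvement, drug group (invented)
control = [3, 4, 5, 4, 3, 5]   # symptom improvement, placebo group (invented)

t = pooled_t_statistic(treated, control)
critical = 2.228               # two-sided 5% critical value, df = 10
verdict = "evidence of a drug effect" if abs(t) > critical else "no clear effect"
print(f"t = {t:.2f}: {verdict}")
```

With these made-up numbers, t ≈ 4.44, well beyond the critical value, so we’d conclude the drug likely works. The whole point of the procedure is that it quantifies how surprised we should be by the data, rather than trusting what our eyes “see.”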
It goes on. Read and weep (or laugh):
To get rid of the stain of eugenics, in addition to repairing the logic of its methods, statistics needs to free itself from the ideal of being perfectly objective. It can start with issues like dismantling its eugenicist monuments and addressing its own diversity problems. Surveys have consistently shown that among U.S. resident students at every level, Black/African-American and Hispanic/Latinx people are severely underrepresented in statistics.
. . . Addressing the legacy of eugenics in statistics will require asking many such difficult questions. Pretending to answer them under a veil of objectivity serves to dehumanize our colleagues, in the same way the dehumanizing rhetoric of eugenics facilitated discriminatory practices like forced sterilization and marriage prohibitions. Both rely on distancing oneself from the people affected and thinking of them as “other,” to rob them of agency and silence their protests.
How an academic community views itself is a useful test case for how it will view the world. Statistics, steeped as it is in esoteric mathematical terminology, may sometimes appear purely theoretical. But the truth is that statistics is closer to the humanities than it would like to admit. The struggles in the humanities over whose voices are heard and the power dynamics inherent in academic discourse have often been destructive, and progress hard-won. Now that fight may have been brought to the doorstep of statistics.
In the 1972 book Social Sciences as Sorcery, Stanislav Andreski argued that, in their search for objectivity, researchers had settled for a cheap version of it, hiding behind statistical methods as “quantitative camouflage.” Instead, we should strive for the moral objectivity we need to simultaneously live in the world and study it. “The ideal of objectivity,” Andreski wrote, “requires much more than an adherence to the technical rules of verification, or recourse to recondite unemotive terminology: namely, a moral commitment to justice—the will to be fair to people and institutions, to avoid the temptations of wishful and venomous thinking, and the courage to resist threats and enticements.”
The last paragraph is really telling, for it says one cannot be “objective” without adhering to the same “moral commitment to justice” as does the author. That is nonsense. Objectivity is the refusal to take an a priori viewpoint based on your political, moral, or ideological commitments, not an explicit adherence to those commitments.
But enough; I could go on forever, and my patience, and yours, is limited. I will quote two other scientists.
The first is A. W. F. Edwards, a well known British geneticist, statistician, and evolutionary biologist. He was also a student of Fisher’s, and has defended him against calumny like Clayton’s. But read the following article for yourself (it isn’t published, for it was written for his College at Cambridge, which was itself contemplating removing memorials to Fisher). I’ll be glad to send the pdf to any reader who wants it:
Here’s the abstract, but do read the paper, available on request:
In June 2020 Gonville and Caius College in Cambridge issued a press announcement that its College Council had decided to ‘take down’ the stained-glass window which had been placed in its Hall in 1989 ready for the centenary of Sir Ronald Fisher the following year. The window depicted the colourful Latin-Square pattern from the jacket of Fisher’s 1935 book The Design of Experiments. The window was one of a matching pair, the other commemorating John Venn with the famous three-set ‘Venn diagram’, each window requiring seven colours which were the same in both (Edwards, 2002; 2014a). One of the arguments advanced for this action was Fisher’s interest in eugenics which ‘stimulated his interest in both statistics and genetics’*.
In this paper I challenge the claim by examining the actual sequence of events beginning with 1909, the year in which Fisher entered Gonville and Caius College. I show that the historians of science who promoted the claim paid inadequate attention to Fisher’s actual studies in statistics as part of his mathematical education which were quite sufficient to launch him on his path-breaking statistical career; they showed a limited understanding of the magnitude of Fisher’s early achievements in theoretical statistics and experimental design, which themselves had no connection with eugenics. Secondly, I show that Fisher’s knowledge of natural selection and Mendelism antedated his involvement in eugenics; and finally I stress that the portmanteau word ‘eugenics’ originally included early human genetics and was the subject from which modern human and medical genetics grew.
Finally, I sent the article to another colleague with statistical and historical expertise, and he/she wrote the following, quoted with permission:
There is an authoritative history of statistics by Stephen Stigler of the University of Chicago. There’s also an excellent biography of Galton by Michael Bulmer. Daniel Kevles’s book is still the best account of the history of eugenics, and he gives a very good account of how it developed into human genetics, largely due to Weinberg, Fisher and Haldane. Genetic counselling is in fact a form of eugenics, and only religious bigots are against it. Eugenics has become a dirty word, associated with Nazism and other forms of racism.
According to Stigler, many early developments, like the normal distribution and least squares estimation, were developed by astronomers and physicists such as Gauss and Laplace in order to deal with measurement error. Galton invented the term ‘regression’ when investigating the relations between parent and offspring, but did not use the commonly used least squares method of estimation, although this had been introduced much earlier by Legendre. Galton consistently advocated research into heredity rather than applied eugenics, undoubtedly because he felt a firm scientific base was needed as a foundation for eugenics.
Like Fisher, Galton and Pearson were interested in ‘improving the stock’, which had nothing to do with racial differences; even Marxists like Muller and Haldane were advocates of positive eugenics of this kind. I think there are many arguments against positive eugenics, but it is misguided to make out that it is inherently evil in the same way as Nazism and white supremacism.
No doubt Galton and Pearson held racist views, but these were widespread at the time, and had nothing to do with the eugenics movement in the UK; in fact, the Eugenics Society published a denunciation of Nazi eugenics laws in 1933 and explicitly dissociated eugenics from racism (see http://www.senns.uk/The_Eug_Soc_and_the_Nazis.pdf). People are confused about this, because the word ‘race’ was then widely used in a very loose sense to refer to what we would now refer to as a population (Churchill used to refer to the ‘English race’: he was himself half American).
Fisher’s work in statistics was very broadly based and not primarily motivated by genetics; he discovered the distribution of t as a result of correspondence with the statistician W. S. Gosset at Guinness’s brewery in Dublin, and his major contributions to experimental design and ANOVA were made in connection with agricultural research at the Rothamsted experimental station (who have renamed their ‘Fisher Court’ as ‘ANOVA Court’). Maybe everyone should give up drinking Guinness and eating cereal products, since they are allegedly contaminated in this way.