Principles of ethical investing

December 29, 2020 • 9:15 am

by Greg Mayer

Investment policies—that is, where and for what reasons individuals and organizations place their money—have at times been invoked as a way either to encourage good behavior or to sanction bad behavior. The most prominent example I know of is the campaign for divestment of assets from companies that did business in South Africa, which took place during the late apartheid era. Apartheid of course did end; I don’t know what role the divestment campaign played in this outcome. I imagine it is something historians have studied, but I don’t know the conclusions reached. The most prominent such campaign now is directed against Israel.

For individuals, picking stocks, market timing, and day trading are generally bad investment strategies. (“Bad” in the sense that you can’t reliably make money that way.) But large institutional investors have enough money, time, and expertise to vet investments for the behavior of the companies and assets involved. CalPERS, the California Public Employees’ Retirement System, which manages a portfolio worth more than $400 billion, has, for example, a fairly detailed set of Governance & Sustainability Principles.

Last month, on November 13, I received the following message from the Society for the Study of Evolution (SSE), indicating that the Society wanted to develop its own set of principles. (I’ve removed links to SSE’s internal web pages and information about individuals.)

Dear Gregory,

Earlier this year, at the request of SSE Council and the Finance Committee, SSE formed a committee to develop a set of ethical principles to guide the Society’s investments, which have a current value of ~$4.3M (learn more on the Financial Reports page). The committee’s proposed ethical principles can be found in this document.

To develop this set of principles, the committee surveyed the principles of other societies and considered different elements of a policy that are consistent with the mission of the Society and the values of our community. The goal of the committee was to develop a set of principles, not to specify exactly how to invest, which will be determined based on these principles in consultation with our investment advisors. We also note that using ethical principles to guide investing does not necessarily reduce financial returns and may even increase them.

We value our members’ feedback on these proposed ethical investing principles. Your input will help Council decide how to manage SSE’s investments and overall financial plan moving forward. Would you be willing to review the 1-page principles document and provide feedback in a short survey?

Or click here to view the document:

Or click here to view the comments form:  We will be collecting comments until November 30th, 2020. Thank you for your participation!

The draft principles were as follows.

Ethical Investing principles

The Society for the Study of Evolution is committed to investing in ways that support companies whose business practices promote environmental sustainability and social justice and whose governance promotes transparency. The SSE will use these values to guide investment decisions.

Environmental values

The Society for the Study of Evolution favors investment in companies that

  • Foster the protection of biodiversity.
  • Contribute to preservation of a safe, liveable, and stable climate.
  • Avoid increasing the risks associated with climate change.
  • Limit the use and discharge of toxic chemicals and pollutants into the environment.
  • Encourage practices that avoid overexploitation of terrestrial, aquatic, and natural resources.

Social values

The Society for the Study of Evolution favors investment in companies that

  • Respect and promote human rights, dignity, and well-being.
  • Protect personal data and respect customer privacy.
  • Foster an inclusive environment for all workers regardless of background or personal characteristics.
  • Improve and adopt anti-racist policies.
  • Follow practices that respect the dignity and rights of workers, including the ability for collective action by workers.
  • Strive to reduce disparities in economic and educational opportunities.
  • Respect the autonomy and voices of local communities and indigenous people.

Governance values

The Society for the Study of Evolution favors investment in companies that

  • Have a Board of Directors reflecting the diversity of communities they serve.
  • Ensure that financial reports are regularly reviewed by external auditors overseen by Board members who are not employed by the company.
  • Follow international standards to guard against fraud and corruption.
  • Commit to fair wages and equitable sharing of profits.

I wrote back to the Society on December 23:

When I received your invitation to participate in a survey concerning proposed investment principles for SSE, I began filling out the survey, with my initial response being that it was OK, but with minor changes. But as I attempted to compose a response, my “but” overwhelmed my “OK”, to the point that it became hard for me to see how the entire notion of a set of principles could survive.

Some of the principles are straightforward accounting principles—using external auditors, for example. The “Environmental values” do touch on some areas in which SSE could be said to have expertise—evolutionary biology is, after all, the study of the temporal and spatial patterns of biodiversity and the processes that generate them. But I note that these values are expressed without any endorsement of specific policies. “Foster the protection of biodiversity” is sufficiently broad that it might be a worthy principle to endorse. Ariel Lugo, who maintains that introduced species foster biodiversity, and Peter Marra, who argues that introduced species are a great threat to biodiversity, could both agree. I happen to think Marra is nearer to being right on this, but I wouldn’t want the SSE to proclaim one or the other. On biodiversity, evolutionary biology does have disciplinary expertise, and intellectual standards for the evaluation of evidential support for propositions, so I could see some reason for the SSE to have something to say in this area.

But many of the proposed principles lie far afield from the values of science in general and evolutionary biology’s particular expertise. And it’s in those areas where evolutionary biology has the least to say that the principles try to say the most. Three examples.

“Improve and adopt anti-racist policies.” Anti-racism, sadly, is not opposition to racism, but a particular racialist ideology espousing an essentialist view of “races”, according to the ideology’s own notions of what “race” means. Anti-racism, which is fundamentally racialist, is sometimes even overtly racist. SSE should have no position on “anti-racism” whatsoever.

“Follow practices that respect the dignity and rights of workers, including the ability for collective action by workers.” I personally believe that the decline of unions has been a major factor in the growing concentration of wealth in the hands of a very few in the United States, and that this lack of equitable distribution of the fruits of economic progress is the greatest problem facing the United States. Its solution is of the utmost consequence for the future of the country as a democratic republic. But why would SSE have a position on this? If an evolutionary biologist in the United States supports his state’s anti-union “right to work” law, this has no necessary relation to his work as an evolutionary biologist. Why should such a person have his political views on an issue unrelated to evolutionary biology repudiated by SSE? I don’t expect or want SSE to endorse my views on “right to work” laws, and I equally don’t want it to condemn the opposite view.

“Respect the autonomy and voices of local communities and indigenous people.” This is perhaps the most bizarre of all. Why should SSE have a view about the proper relations between levels of government? Are local communities always, or even usually, right about political, social, and moral issues? South Carolina wanted local autonomy over slavery in 1860. During the civil rights era, it was Federal override of the autonomy of local communities that brought about social progress. And what about today? We’ve seen wealthy local communities put up gates and threaten protesters with weapons—is that what SSE respects? And what about the wider world? Does SSE have an opinion on the fate of Nagorno-Karabakh, many of whose local residents want to be annexed to Armenia? The indigenous people of Hungary seem eager to keep out refugees—it’s their blood and their soil—does SSE respect this, too? Whether local autonomy or higher-level intervention has done more to bend the moral arc of the universe towards justice has varied from time to time and place to place. SSE need not engage with the question of under what circumstances local autonomy promotes justice, because it should have no opinion or policy on the matter at all.

There is, of course, a sort of reductio ad Hitlerum—“You wouldn’t want to invest in Krupp gun works or the makers of Zyklon-B during World War II”—and, of course, you wouldn’t. But extreme cases make bad law, and do not provide a basis for the proposed principles.

A scientific society may legitimately advocate for science and for scientific values, especially as they pertain to the disciplinary expertise and intellectual standards of the society. But it is no part of the remit of such societies to take up positions on general social, political, and moral issues. Indeed, one of the values of science is to not allow popular passions, fashions, and prejudices to color what should be epistemic considerations.
I realize that this comes after the deadline of November 30, and thus will not influence the form of whatever policy you adopt; the response timeline, coming as my campus prepared to move to all-virtual instruction over Thanksgiving, did not leave time for a considered response given my other commitments at that time. With the semester now over and with time to address the issues, I share my views with you in the hope that they might add to your understanding of them; feel free to share this with your colleagues among SSE’s officers and committees.

I received a cordial reply saying that all feedback will be considered.

Some sort of policy, such as following accounting principles, is desirable, but that sort of policy almost goes without saying. (I belong to several scientific societies; I don’t know what, if any, formal policies the other societies have.) But such a policy should address universally accepted accounting and governance principles, and perhaps concerns for which the Society possesses special disciplinary expertise and intellectual standards.

Have any readers encountered similar quandaries with organizations and institutions with which they are involved?

Richard Dawkins on truth and “ways of knowing”

December 19, 2020 • 10:45 am

It’s been a while since I’ve seen an article by Dawkins appear in a magazine or newspaper, but now there’s a new one on the nature of truth and knowledge in The Spectator (click on screenshot for free access). Yes, it’s a rather conservative venue, but you’re not going to see The Guardian publishing critiques of theology and postmodern denial of objective truth. And Dawkins does take some pretty strong swings at Donald Trump, e.g. “For him, lying is not a last resort. It never occurs to him to do anything else.”

The article first defines what scientists mean by “truth”, and then attacks two areas that dismiss that definition—or at least offer alternative “ways of knowing”:

What is truth? Richard begins by analogizing scientific truth with “the kind of truth that a commission of inquiry or a jury trial is designed to establish.” He adds this:

I hold the view that scientific truth is of this commonsense kind, although the methods of science may depart from common sense and its truths may even offend it.

I like that idea—though Massimo Pigliucci will be enraged—because it shows there’s no bright line between scientific truth and the kind of truth that people establish using “common sense”, which I take to mean empirical inquiry whose results people generally agree on. Truth is simply what exists in the universe and can be found by common assent. That’s with the proviso, of course, that there is a reality to be found, and one that’s independent of us. I’ll take that as a given, and don’t want to argue about it. And, of course “common assent” means, in science, the assent of those capable of evaluating data.

Finally, while truth is always “provisional” in science, there are some truths so well established that we can regard them as “not really that provisional”. These are the truths whose reality you’d bet thousands of dollars on. It’s unlikely, as I say in Faith Versus Fact, that normal DNA will some day be shown to be a triple helix, or a water molecule to have two atoms of hydrogen and two of oxygen. This is a point that Richard makes as well, and one we should keep in mind when we debate those who argue that, “Well, science is tentative, and can be wrong—and has been wrong.” To wit:

It is true that Newton’s laws are approximations which need modifying under extreme circumstances such as when objects travel at near the speed of light. Those philosophers of science who fixate on the case of Newton and Einstein love to say that scientific truths are only ever provisional approximations that have so far resisted falsification. But there are many scientific truths — we share an ancestor with baboons is one example — which are just plain true, in the same sense as ‘New Zealand lies south of the equator’ is not a provisional hypothesis, pending possible falsification.

Bad thinkers. Finally, the two groups Richard excoriates for rejecting the notion of scientific truths are the theologians on one hand and the PoMo-soaked philosophers and Critical Theory mavens on the other. First, the theologians, who by now are low-hanging fruit:

Theologians love their ‘mysteries’, such as the ‘mystery of the Trinity’ (how can God be both three and one at the same time?) and the ‘mystery of transubstantiation’ (how can the contents of a chalice be simultaneously wine and blood?). When challenged to defend such stuff, they may retort that scientists too have their mysteries. Quantum theory is mysterious to the point of being downright perverse. What’s the difference? I’ll tell you the difference and it’s a big one. Quantum theory is validated by predictions fulfilled to so many decimal places that it’s been compared to predicting the width of North America to within one hairsbreadth. Theological theories make no predictions at all, let alone testable ones.

Nor has theology ever, by itself, established a single truth about the universe. I keep asking people to give me one, but they either can’t or bring in truths that are empirical and can be verified not by revelation or dogma, but only by observation and testing. Ergo, theology is not a “way of knowing.”

And then the poor PoMos and Critical Theorists get their drubbing (remember, the roots of Critical Theory are in the filthy humus of postmodernism):

A more insidious threat to truth comes from certain schools of academic philosophy. There is no objective truth, they say, no natural reality, only social constructs. Extreme exponents attack logic and reason themselves, as tools of manipulation or ‘patriarchal’ weapons of domination. The philosopher and historian of science Noretta Koertge wrote this in Skeptical Inquirer magazine in 1995, and things haven’t got any better since:

Instead of exhorting young women to prepare for a variety of technical subjects by studying science, logic, and mathematics, Women’s Studies students are now being taught that logic is a tool of domination…the standard norms and methods of scientific inquiry are sexist because they are incompatible with ‘women’s ways of knowing’. The authors of the prize-winning book with this title report that the majority of the women they interviewed fell into the category of ‘subjective knowers’, characterised by a ‘passionate rejection of science and scientists’. These ‘subjectivist’ women see the methods of logic, analysis and abstraction as ‘alien territory belonging to men’ and ‘value intuition as a safer and more fruitful approach to truth’.

That way madness lies. As reported by Barbara Ehrenreich and Janet McIntosh in The Nation in 1997, the social psychologist Phoebe Ellsworth, at an interdisciplinary seminar, praised the virtues of the experimental method. Audience members protested that the experimental method was ‘the brainchild of white Victorian males’. Ellsworth acknowledged this, but pointed out that the experimental method had led to, for example, the discovery of DNA. This was greeted with disdain: ‘You believe in DNA?’

You can’t not ‘believe in DNA’. DNA is a fact. . . .

While different groups of people have different interests, and that may lead them to work on areas that reveal truths heretofore hidden, that doesn’t mean that there are different ways of knowing. Barbara McClintock, for example, was touted by her biographer Evelyn Fox Keller as having a special female-linked “feeling for the organism” that led to her Nobel-winning studies on mobile genetic elements. I don’t buy that thesis, but there may be some truth to the claim that female evolutionists helped emphasize the important role of female choice in sexual selection. If so, that means that different aspects of a problem may appeal to different groups, but in the end the truth or falsity of ideas is established the same way by everyone. McClintock did her science the way everyone else did, as do those who study sexual selection. There may be many ways of thinking, but only one way of knowing.

And that way of knowing is what I call “science construed broadly”: the use of observation, testing of hypotheses, attempts to falsify your theory, experiments, and so on. Science has more refined methods than, say, an electrician trying to find a glitch in house wiring, but in the end they both rely on a similar set of empirical tools.

Richard will of course be faulted for attacking the beloved notion of “other ways of knowing”, but in the end he’s right. And of course there are all those people laying for him, who will claim he’s arrogant in giving science such hegemony over truth. He attacks this head on:

Some of what I have claimed here about scientific truth may come across as arrogant. So might my disparagement of certain schools of philosophy. Science really does know a lot about what is true, and we do have methods in place for finding out a lot more. We should not be reticent about that. But science is also humble. We may know what we know, but we also know what we don’t know. Scientists love not knowing because they can go to work on it. The history of science’s increasing knowledge, especially during the past four centuries, is a spectacular cascade of truths following one on the other. We may choose to call it a cumulative increase in the number of truths that we know. Or we can tip our hat to (a better class of) philosophers and talk of successive approximations towards yet-to-be-falsified provisional truths. Either way, science can properly claim to be the gold standard of truth.

Amen! I’ll finish with a quote I used to begin Chapter 4 in Faith Versus Fact. It’s from Mike Aus, a former preacher who left the pulpit after admitting his atheism on television. Since then he hasn’t fared well, but he did produce one quotation that I think is telling:

When I was working as a pastor I would often gloss over the clash between the scientific world view and the perspective of religion. I would say that the insights of science were no threat to faith because science and religion are “different ways of knowing” and are not in conflict because they are trying to answer different questions. Science focuses on “how” the world came to be and religion addresses the question of “why” we are here. I was dead wrong. There are not different ways of knowing. There is knowing and not knowing, and those are the only two options in this world.

h/t: Eli

American scientists are mostly Democrats, with almost no Republicans. Is this lack of diversity a problem?

December 10, 2020 • 9:45 am

A letter to the editor appeared in the latest issue of Nature, decrying the political uniformity of scientists (click on the screenshot below to access it, though I’ve put up the whole thing). And the link to the Nature poll described in the letter’s first line is here; the survey, however, was not of Americans but of Nature readers from throughout the world.

However, there’s no doubt that, among American scientists, Democrats still greatly outnumber Republicans. The latest data I can find are in a 2009 Pew poll showing that not only are American scientists mostly liberal, but that there’s a huge disparity between the politics of scientists and of the American public in general. I suspect that, given what’s happened under Trump, this disparity has only increased. The data in 2009:

Most [American] scientists identify as Democrats (55%), while 32% identify as independents and just 6% say they are Republicans. When the leanings of independents are considered, fully 81% identify as Democrats or lean to the Democratic Party, compared with 12% who either identify as Republicans or lean toward the GOP. Among the public, there are far fewer self-described Democrats (35%) and far more Republicans (23%). Overall, 52% of the public identifies as Democratic or leans Democratic, while 35% identifies as Republican or leans Republican.

This disparity exercised Andrew Meso, a British computational neuroscientist (he’s also black), who wrote this letter:

Now Dr. Meso is mistaken that the Nature poll was of “US scientists”, but it doesn’t matter, for the “misalignment” he describes is still true. As an academic, I have long been aware of this, for the disparity exists not just in science, but in academia as a whole.

Meso’s implicit argument that we need to increase political diversity doesn’t carry nearly as much weight as an argument for greater gender and ethnic diversity, for there’s not a good argument that Republicans were oppressed in the past, nor that there is discrimination against Republican students being accepted to grad school or being hired as professors—at least in science. I’ve been on many hiring and student-acceptance committees, and not once have I ever heard of a candidate being touted or dissed because of their politics. Indeed, we never even know their politics! (This may not hold for faculty in areas like economics or sociology.) And I’ve never heard of a scientist being denied promotion or tenure on the grounds of their politics.

So it’s hard to make an argument that the dearth of Republicans in American science is due to bias or discrimination. Nor does the ideological slant seem likely to affect science: as I read somewhere (but can’t lay my hands on the reference), scientists’ politics don’t affect the nature or quality of their research.

Why the disparity between scientists and the public, then, if it’s not bigotry? Well, perhaps it’s preference.

For reasons we can speculate about, perhaps those with a conservative bent are less likely to go into science, or to remain in science once they start studying it. Perhaps those with a liberal bent are more attracted to the empirical method and the techniques of science. I have no idea if this is right, but feel free to speculate. But I’ll make one point: if people think that the differential representation is due to preference rather than bias, and it’s a preference based on political affiliation (which may be correlated with other traits), why are we so eager to assume that other differential representations, like those involving gender or ethnicity, are based solely on bias and bigotry rather than preference? As we know, that kind of differential representation is automatically assumed to be based on prejudice, but I’ve always said that we can’t assume that without the needed research.

Finally, is Meso right in raising the alarm that the Democratic “elitism” of American scientists could turn other Americans—many of whom are Republican—against science or against going into science? (He conflates “judging science” with “going into science” in his final paragraph.) If he were right, this in itself would be a form of preference, but it could also involve bigotry if conservatives sense that scientists don’t like their politics. And yes, Republicans are more anti-science than Democrats, though the difference has been exaggerated, and it is not large enough to explain the differential representation in science. To me, it seems more likely that the disparity is based on a preference connected to political affiliation, but that’s just a guess.

Meso’s conclusion—that liberalism among scientists turns others against science and against going into science—presumes that the public actually knows how liberal scientists are. But they don’t seem to, at least according to that Pew report:

Most Americans do not see scientists as a group as particularly liberal or conservative. Nearly two-thirds of Americans (64%) say they think of scientists as “neither in particular”; 20% see them as politically liberal and 9% say they are politically conservative.

If there’s no evidence in science of bias against Republican scientists or students, then there is no need to engage in affirmative action to bring them on board—unless they somehow bring a scientific point of view absent among more liberal scientists. But I can’t see one. (It’s not that evident amongst ethnic or gender groups, either.) But the reason I’m in favor of affirmative action is not so much to bring a diversity of ideas as to act as a form of reparations for those who were denied equal opportunity. And the reparations view, while holding for women and people of color, doesn’t seem to hold for conservatives.

But my dislike of affirmative action for Republicans in science doesn’t hold for college students, for I think ideological and political diversity is an innate good among undergraduates, as it stimulates discussion and exposes students to other ways of thinking. So while I can’t support a case for “affirmative action” for more Republicans in science, I can do so for college students. As for professors outside of science, I’m not so sure. It’s useful for students to be exposed to various political views, or lines of thought, from their professors as well. I can’t see hiring professors because they’re Republicans, but I can see making an effort to incorporate conservative points of view into academic departments.  Since we scientists are supposed to keep our politics out of the classroom, though, we don’t need to make this effort.

Should scientific journals strive for “diversity” of reviewers and authors?

November 17, 2020 • 12:00 pm

The New York Times recently had a piece by their new and woke science reporter, Katherine J. Wu, which is basically an indictment of science journals for not keeping track of the “diversity” of authors and reviewers of the papers they publish or reject. The implicit message is that science journals are racist, discriminating against papers by minoritized authors.

Click on the screenshot to read the article:

Wu’s implicit assumption is twofold. First, that a paucity of diversity—which of course means ethnic diversity, but minus Asians since they are surely overrepresented among authors—reflects racism on the part of scientific journals and reviewers.  There is no consideration of whether a lack of diversity may represent simply a paucity of minority authors and reviewers. That itself may reflect racism, past or present, that narrows the opportunities of would-be scientists, but the article implies that it’s racism acting on Ph.D. authors trying to submit papers.

The second assumption is that more ethnic diversity in journals means better science. Well, that’s true in the sense that the more people who get the opportunities to become scientists, the higher the average quality of the science that is published. But I’m not at all convinced that members of any group, be they groups involving genders, religions, incomes, or ethnicity, have a special “point of view” based on their group identity that makes them do science differently. Science is science, and I don’t feel that Hispanics, say, have a different “way of knowing.” (There may be one exception here, which I’ve mentioned before: I think women scientists are responsible for shifting the focus in sexual selection from male traits alone to female preferences as well. But many men were also involved in this shift.) In the end, the best science comes from giving everybody equal opportunities, not practicing remediation based on race at the publication level.

But the question is whether journals should be publishing more papers by members of minority groups. That is, is there a bias against, say, black or Hispanic authors that needs to be rectified by that form of “affirmative action” on the publication level—taking steps to accept more papers by minority authors?

It’s my opinion that the answer is “no.” The question presumes that a paucity of papers by such authors is prima facie evidence of bias, when it may reflect only a paucity of minority-group members in the field, or of minority scientists submitting papers at all, or submitting fewer of them—rather than reviewers deliberately discriminating against papers by minority authors.

It may be worth investigating this issue, but I consider it hardly worthwhile for two reasons.  First, figuring out whether a paucity of papers from minority group members is due to racism at the reviewing level is very hard to do, though not impossible (see below). More important, it’s certainly true that the disparity between the proportion of minority-group members in the population and the number of papers published by members of that group is due largely not to racism but to an underrepresentation of Blacks and Hispanics in science. Figuring out why that disparity exists is the best way to achieve more proportionality in science, if that is your goal. And that’s really where our efforts should be going.

Here are the data given in the NYT piece from two scientific organizations showing disparities between population proportions and publication/reviewer proportions. The article makes the point that most journals, though, do not keep records of the ethnicity of authors and reviewers. (To clarify for non-scientists, when scientists submit papers to a journal, those papers are sent to several reviewers—usually two or three—who are experts in the area of research. Based on the reviewers’ assessment of the paper, the editor then decides whether or not to publish it. If the decision is “yes,” there is often some revision of the paper required, either in the discussion or the scientific analysis.)

I’m going to discuss authors here, not reviewers, because it is the quality of authors’ work that, by and large, constructs the quality of the journal. How do we know if a journal is discriminating against minority authors? You can’t simply use a difference between the proportion of people in the field, or the proportion of people submitting papers on the one hand, and the proportion of papers published on the other, as a criterion for bias. That’s because members of different groups may submit papers less often, or of lesser quality, and this would lead to differential representation that would not reflect racism. Bias must be proven, not assumed.

There are two ways to solve this problem. The first is the equivalent of doing “blind” auditions for orchestras—auditions in which those seeking an orchestral chair perform behind a curtain. That “blind” system removes any bias based on sex or race. To do this with a paper, you simply remove the names of the authors, their institution, and the acknowledgments from the manuscript, so the reviewers don’t know who wrote the paper. (There are, of course, ways to guess, like if an author cites herself repeatedly, but in many cases this will indeed lead to quality appraisal ignorant of the author’s race or gender.)

I hit upon that system in the late 1970s when I was a postdoc, full of piss and vinegar and concerned that papers were getting preferentially published not because of race, but because of reputation. My idea was that famous people had an easier time publishing their papers than small fish (like me!). I wrote letters—real letters—to the editors of about 30 journals in my field, proposing that manuscripts be reviewed blind this way. I got only one response, and that was from an editor who said that he preferred knowing the authors, because famous authors were more likely to submit better papers! That may be true on average, but it’s not the best way to ensure the quality of papers in a journal! In fact, famous authors may get by more easily with shoddier work because of their reputations.

At any rate, some journals have now wisely decided to adopt the blind-author technique, and more power to them! It seems to me a step in the right direction to eliminate animus not just against groups of people, but against your scientific “enemies” or in favor of your scientific “friends”. (Believe me, this kind of bias is rife in science.) While you can get around this system by guessing, I think it does help ensure objective reviewing and thus higher-quality papers. (I should add that the NYT music critic opposed blind auditions because he said that while they increased the proportion of women in orchestras, they didn’t eliminate racial inequities; his view was clearly that equity trumped orchestral quality.)

The other way would be to do an experiment submitting identical sets of manuscripts with fake names that give clues to the gender or ethnicity of the authors. If manuscripts with women or minority authors are rejected more often than the same manuscripts with “white” or “male” names, that surely indicates bias. This is what was done in a laborious study of grant reviewing, using made-up “black”, “white”, “male”, and “female” names on identical proposals. The study showed no evidence of racial or gender bias in grant evaluation. Needless to say, you don’t hear much about this study, even though it was a good one, as the results went against people’s certainty that there must be racial and gender bias in reviewing.
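The analysis of such an audit experiment is straightforward: compare acceptance rates between the two name conditions. Here is a minimal sketch in Python of a two-proportion z-test one might use (the function name and the counts in the example are mine, purely for illustration, and are not from the grant study):

```python
import math

def two_proportion_ztest(accept_a, n_a, accept_b, n_b):
    """Two-sided z-test comparing acceptance rates between two name
    conditions (e.g., identical manuscripts bearing different fake names).
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = accept_a / n_a, accept_b / n_b
    # Pooled proportion under the null hypothesis of no bias
    p_pool = (accept_a + accept_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers only: 60/100 acceptances in one condition
# vs. 40/100 in the other would be a significant disparity.
z, p = two_proportion_ztest(60, 100, 40, 100)
```

A null result (as in the grant study) would correspond to acceptance rates so similar that the p-value stays large.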

That experiment could be done with paper reviewing too, and really must be done before one starts making implicit accusations of bias. But I favor the blind-reviewing technique. You don’t have to do any experiments to see if that one makes things more equitable because, by eliminating a source of bias from the outset, it almost has to. It is my feeling that a “fake name” study wouldn’t show evidence of bias in publication, but that’s my feeling alone. Better just to practice blind reviewing than to speculate or do experiments.

In the end, my feeling is that affirmative action should not be applied to reviewing papers by people who already have doctorates, and, while I believe in affirmative action, I think it has to stop at some point in the hierarchy. My cutoff comes after junior-faculty hiring. I think it’s okay and useful to take race and gender into account when hiring junior faculty, as well as in college and grad-school admissions, but that’s where it stops. Ethnicity and gender should not be a consideration in getting tenure, full professorships, or in getting papers published—areas where merit alone should be the criterion. Again, this is my view, and others may disagree.

Some of those who disagree think the whole system of a scientific meritocracy is flawed—that there isn’t even a scientific meritocracy. The NYT article says that:

Publishing papers in top-tier journals is crucial scholastic currency. But the process is deeply insular, often hinging on personal connections between journal editors and the researchers from whom they solicit and receive manuscripts.

“Science is publicized as a meritocracy: a larger, data-driven enterprise in which the best work and the best people float to the top,” Dr. Extavour said. In truth, she added, universal, objective standards are lacking, and “the access that authors have to editors is variable.”

To democratize this process, editors and reviewers need to level the playing field, in part by reflecting the diversity that journals claim they seek, Dr. Kamath said. “People think this is a cosmetic or surface issue,” she said. “But in reality, the very nature of your scholarship would change if you took diversity, equity and inclusion seriously.”

The whole point of this section is to imply that there is little correlation between the merit of a paper and its chance of being published. I think that’s a foolish conclusion, with the caveats that Wu gives meant to imply a weak correlation at best. This is not my experience in reviewing papers or assessing published papers. Yes, sometimes a terrible paper gets published in a good journal, and a great paper gets rejected by a good journal, but there is surely a correlation between the quality of a paper and the chance that a. it will get published, and b. it will get published in a prestigious journal.

No, to democratize the process, just do blind reviewing. That will go a ways toward eliminating bias. But even in the absence of that procedure, journals would be hard pressed to construct a system that would give preferential publication to papers by ethnic minorities. Regardless of what Katherine Wu thinks, science largely is a meritocracy, at least when it comes to publication, and I don’t think it would be good for science as a whole to bump papers up or down based on the race of their authors.


New article: coronavirus lingers on surfaces longer than we thought

October 14, 2020 • 9:45 am

While most cases of Covid-19 are surely contracted via person-to-person contact (hugging, respiratory droplets, talking next to someone, handshakes, and so on), this new article from Virology Journal, produced by five Australian researchers, suggests that the virus can linger on various surfaces substantially longer than we suspected, and that those infection-bearing surfaces (called “fomites”) can carry a viral load large enough to cause infection. Remember when you thought that paper and cardboard could be “disinfected” by leaving them untouched for 24 hours, so that the virus would all die? That doesn’t seem to be the case, at least according to this paper.

Click below to read the screenshot; the pdf is here, and the reference is at the bottom.

The results can be conveyed briefly. The researchers inoculated live virus onto six types of surfaces that might be encountered by people on a daily basis: stainless steel (cookware, etc.), polymer currency (used in Australia), paper currency (no longer used in Australia but used in many other places), a glass surface (cellphones, touchscreens, etc.), vinyl, and cotton fabric. The materials were incubated at three temperatures (20, 30, and 40 degrees Celsius, corresponding to 68, 86, and 104 degrees Fahrenheit, respectively), and incubation was in the dark, as UV light kills the virus more quickly (hint: put your envelopes and packages in the light when disinfecting them). The relative humidity was 50%, though higher humidity also decreases viral survival.

The virus titer is said by the researchers to “represent a plausible amount of virus that may be deposited on a surface”. Samples were taken over 28 days, and the amount of living (i.e., infectious) virus measured by standard methods.

The decay of infectious virus over time was measured in three ways: the D value (the time at which only 10% of the original sample remained), the half-life (the time at which half the original sample remained), and the Z value (the increase in temperature required to reduce the D value by 90%, that is, to make the virus die off ten times faster).
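Under the simple exponential-decay model that these measures assume, the D value and the half-life are interconvertible: the half-life is just the D value times log₁₀(2), or about 30% of it. A minimal sketch in Python (the function names are mine, and the numbers below are illustrative, not the paper’s):

```python
import math

def surviving_fraction(t, D):
    """Fraction of infectious virus remaining after time t,
    given the D value (time for a 90%, i.e. tenfold, drop in titer)."""
    # Exponential decay: N(t) = N0 * 10**(-t/D)
    return 10 ** (-t / D)

def half_life_from_D(D):
    """Half-life implied by a D value under exponential decay:
    solve 10**(-t/D) = 1/2 for t, giving t = D * log10(2)."""
    return D * math.log10(2)

# Illustrative: a surface with D = 10 days retains 1% of its
# titer after 20 days, and has a half-life of about 3 days.
remaining = surviving_fraction(20.0, 10.0)   # 0.01
half_life = half_life_from_D(10.0)           # ~3.01 days
```

This is also why a D value is always about 3.3 times the corresponding half-life in the table below.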

The table below tells you everything you need to know: the D values and half lives (latter in parentheses) for all six materials at three temperatures, as well as the Z values:

Now what we don’t know about these values, and what is really important, is how much virus has to remain on the surface before it loses its ability to infect you (remember, probability of getting infected is proportional to the amount of virus you pick up and transfer to your nose, mouth, or eyes). This isn’t discussed in the paper, but I’d say a reasonable precaution is the D value: 90% loss of titer.  Perhaps readers in the know can tell us after they’ve read the paper.

But even if you use the half life, at 20°C, two days is a minimum for any surface save cotton (1.7 days). Paper loses half its virus load in three days, and glass in two. But remember, this is in the dark, and half-lives will be shorter in sunlight. Half-lives and Z values decrease dramatically at higher temperatures, though I think 20°C is what we should pay attention to because it’s close to room temperature.  If 10% of the original titer is not enough to infect you, you’ll have to wait 10 days for paper and 6 days for glass. Surprisingly, cotton cloth was the material that retained viable virus for the shortest amount of time.

The Z values show that an increase in temperature of about 15°C is enough to reduce the D value tenfold, that is, to make the virus at the higher temperature die off ten times faster.
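Taken at face value, a Z value implies that the D value scales exponentially with temperature. A hedged sketch in Python (function name and numbers are mine, for illustration only):

```python
def D_at_temperature(D_ref, T_ref, T, Z):
    """D value at temperature T, given a reference D value at T_ref and
    a Z value (the temperature rise that cuts the D value tenfold)."""
    return D_ref * 10 ** (-(T - T_ref) / Z)

# Illustrative: with a D value of 10 days at 20 C and Z = 15 C,
# warming to 35 C cuts the D value to 1 day.
D_warm = D_at_temperature(10.0, 20.0, 35.0, 15.0)
```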

The researchers also found that, except for cotton, viable virus was still present on all the surfaces after 28 days.

What’s the lesson for us? Well I can’t say (nor do I wish to purvey public-health advice!), because the crucial information—the amount of virus normally deposited on a surface, and how much of that must remain to give you an appreciable chance of getting infected if you pick it up—is missing. What the authors conclude is this:

The data presented in this study demonstrates that infectious SARS-CoV-2 can be recovered from non-porous surfaces for at least 28 days at ambient temperature and humidity (20 °C and 50% RH). Increasing the temperature while maintaining humidity drastically reduced the survivability of the virus to as little as 24 h at 40 °C. The persistence of SARS-CoV-2 demonstrated in this study is pertinent to the public health and transport sectors. This data should be considered in strategies designed to mitigate the risk of fomite transmission during the current pandemic response.

I guess we’ll have to leave it to the “considerers”, i.e., medical researchers and public health experts, to translate these results into recommended behaviors. But I think it’s smart to disinfect paper for two days instead of one after getting it, and to use as little currency as possible (currency is like a circulating Petri dish, carrying E. coli as well as coronavirus). Use your credit card instead, and wipe it off with ethanol or wash it with soap and water after you use it. Put it in the machine, and don’t hand it to anyone unless you have to. Oh, and don’t let anybody use your cellphone.


Riddell, S., Goldie, S., Hill, A., et al. The effect of temperature on persistence of SARS-CoV-2 on common surfaces. Virol J 17, 145 (2020).

Once again: the supposed need for the self-justification of science

September 23, 2020 • 12:00 pm

Reading the latest edition of The Chicago Maroon, our student newspaper, I saw an op-ed about self-care by Ada Palmer, an associate professor of history. I’m not going to write about that; her piece is pretty straightforward and empathic towards our students, who will be having a rather stressful semester. Rather, when I looked Palmer up, I saw that she’d written a review two years ago in Harvard Magazine of Steve Pinker’s Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Always interested in how my colleagues regard Pinker and in arguments for empiricism and rationality, and intrigued by the title of her piece, I read it. You can, too, by clicking on the screenshot below.

It turns out that Dr. Palmer likes Steve’s book, but has two reservations. The first is that Steve argues that humanism, which is a handmaiden of atheism, is the way forward, and that religion has only been an impediment to moral and material progress. I think he’s pretty much right on that one. But Palmer doesn’t like the atheism bit:

Pinker reviews what he sees as humanism’s intellectual adversaries, such as those who caricature it as cold utilitarianism, those who suggest that humans have an innate need for spiritual beliefs, and the classic accusation, ubiquitous in the Renaissance and Enlightenment, that there cannot be good or virtue without God. For some readers, it will be frustrating that 350 pages of useful and cheering data, the majority of which one could call faith-neutral, culminate in the declaration that only triumphant atheism can ensure that scientific progress will help instead of harm. But Pinker’s secular humanism is less militant than that of many contemporary atheist voices; he focuses on the benefits of caring about the earthly world, rather than on condemning religion. His conclusion, that progress simply requires us to value life over death, health over sickness, abundance over want, freedom over coercion, happiness over suffering, and knowledge over superstition, is one numerous theisms can and have embraced.

Thank God he’s not as militant as Dawkins! God forbid that anyone should condemn religion.

Yes, but of course many theisms have impeded science, reason, and morality, and continue to do so (I’m looking at you, Vatican), while atheism hasn’t impeded those things one bit. After all, atheism is simply lack of belief in gods. The lucubrations above look like either religion osculation or accommodationism. I doubt that anyone could argue cogently that science would be more advanced if everyone became religious. Palmer also mentions “secular evidence” below, as if there were a kind of “nonsecular evidence” for science.

But the main problem with her piece is a recurrent trope that we see among those who wish to minimize the importance of science. It’s the claim that reason itself, or logic, or science itself, cannot prove that science can actually help us understand the universe in a useful way. For philosophers and some in the humanities, the lack of a priori justification that reliance on empirical methods will work is somehow an indictment of science. Here’s how Palmer goes at it:

Pinker briefly reviews efforts to value other factors—love, passions, feeling—above reason, but declares such efforts self-defeating: as soon as they attempt to justify themselves, the very act of providing reasoned arguments for their beliefs admits that reasoned arguments are the strongest grounds for belief. Yet, as I reflect on this argument, I am reminded how science, during a critical moment in its history, was self-defeating in much the same way.

Why was it self-defeating? Because there was no a priori justification for going ahead with empirical observation, hypothesis-making and -testing, and so on as a way to understand nature:

Progress in the modern sense, as an intentional and human-driven process, was first fully articulated by Francis Bacon early in the seventeenth century, when he suggested that a collaborative community of empirical inquiry would uncover useful truths that would radically transform human civilization and make each generation’s experience incrementally better than that of the generation before. This was not the easy sell it seems, since Bacon had no evidence that this unprecedented project could wield such power—and even if he had found evidence, one can’t use reasoned evidence to prove that reasoned evidence can prove things. New discoveries were frequent—the moons of Jupiter, the magnification of insects, the circulation of the blood—but practical benefits were slow in coming.

Well, that’s not exactly true, because people had been using what I call “science broadly construed” to understand nature for millennia. I was impressed, on reading Beryl Markham’s West With the Night, how local trackers used scientific observation to find game: the depth of the tracks, how dry they were, where waterholes were, and so on. There was in fact every reason to think that empirical inquiry would lead to understanding, while prayers and revelation, which any chowderhead would know didn’t help much, weren’t a good way to find animals or decide which plants were edible vs. poisonous.

As for the “practical benefits being slow in coming”, well, I take issue with that. Is improved understanding of the world not “practical”? Maybe it won’t make you richer or healthier, but it makes you wiser and more appreciative of the marvels of nature.

In the end, though, I don’t care if you can’t use reason to prove that reason and empiricism “can prove things”. (Actually, they can’t: science doesn’t speak of “proof” but of more or less confirmed hypotheses.) What’s important is that, as Richard Dawkins said pungently, “Science works, bitches!”  The justification of empiricism, reason, and science is in its results: we find out what makes people sick, how to get to the Moon, how to cure disease, and so on. Only somebody hogtied with the strictures of philosophy could see a lack of a priori justification as an argument against the methods and validity of science. Yet we hear this all the time—often from theologians.

Palmer goes on:

 Yet Bacon did succeed in awakening a groundswell of enthusiasm (and funding) for reason and science, through an argument that often surprises my students: he appealed to the personality of God, arguing that a good Maker would not send humans out into the wilderness without the means to achieve the desires implanted in us. Thus, because reason is God’s unique gift to humankind, it must be capable of all we desire.

From time to time, particularly in the aftermath of the French Revolution, champions of secularized science have been embarrassed by this comment from Bacon—worrying what would happen if their atheist followers realized that science, at its inception, had no secular evidence to support its own faith in the power of evidence.

Well, the important thing is that nobody’s embarrassed by this argument any more, for the majority of scientists, and nearly all “elite” ones, neither believe in gods nor worry about “the lack of secular evidence” to support the power of evidence. As I noted above, long before Bacon we knew that we could understand things without needing “divine evidence.”

Palmer makes one more dig at atheism:

But with Pinker’s entire book in hand, Bacon would also have felt the tension between two arguments running through it: the inclusive argument that reason, science, humanism, and progress have made our present better than our past, and can make our future better still; and the less inclusive argument, however eloquently and intelligently presented, that the humane and empathetic humanism capable of turning our powers to good and away from evil must be secular.

Frankly, I don’t care what Bacon would think about the lack of need for “divine” as opposed to secular evidence for science, or about the power of humanism. There’s not an iota of evidence that religion makes people behave better, and often it makes them behave palpably worse. (Remember Steve Weinberg’s dictum: “With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion.”) And of course the more atheistic a country, the better off it is—by nearly any measure: gender equality, happiness, prosperity, well being, and so on.

But it doesn’t matter, for her main argument, which she reprises in her last paragraph, is both philosophical and a non-starter. Note what I see as a snarky bit in the following (I’ve bolded it):

Pinker is no more successful than Bacon at justifying science and reason without a recursive appeal to science and reason. Yet for those already confident in the persuasive force of evidence, it would be hard to imagine a more encouraging defense than Pinker’s of the reality and possibilities of progress.

What? Is there a large segment of humanity that isn’t confident in the persuasive force of evidence? If so, they shouldn’t be trusting any court decisions, or even their own observations, much less taking planes or swallowing antibiotics.  In my view, nearly everyone is confident in the persuasive force of evidence about most things, though some fraction of humans are confident in things that lack evidence. They include religious people, conspiracy theorists, and cranks. (Oh, and Donald Trump.)

Why does this argument against science keep coming up? It’s worthless!

Why science needs philosophy: an op-ed in PNAS

September 11, 2020 • 1:00 pm

Although some scientists (I believe Lawrence Krauss is one) have said that philosophy is useless to scientists, I’m not one of these miscreants. Although I recognize that philosophy can’t find out truths about the real world as opposed to “truths” within logical systems, it can certainly be an aid to thinking about science. Two examples are Dan Dennett’s ideas about consciousness (I don’t think his lucubrations about free will, though, have been helpful to science as opposed to philosophy itself) and Phil Kitcher’s critique of sociobiology (now “evolutionary psychology”) in his book Vaulting Ambition: Sociobiology and the Quest for Human Nature. 

Further, philosophers have been instrumental in helping discredit Intelligent Design theory and creationism; I’m thinking in particular of Rob Pennock’s book Tower of Babel: The Evidence Against the New Creationism and Kitcher’s anti-creationist book Abusing Science: The Case Against Creationism.  Surely dispelling an “alternative” theory to evolution is a real contribution to science and to science education.

My Ph.D. advisor, Dick Lewontin, was a big fan of philosophy, and some of his scientific papers, like the one on the units of selection, sit at the border of science and philosophy. We often had philosophers spending sabbaticals in our lab (Elliott Sober, one of the authors of the paper below, was one of them), and their presence was stimulating.

Now several scientists and philosophers have teamed up to once again make the case for the value of philosophy in science in this paper in the new PNAS. Click on the screenshot to read the piece, or download the pdf here.

It’s a short piece—3.5 pages long—and gives several examples, new to me, of how philosophers have helped guide research, mainly by clarifying concepts. Not all of the “helpful” aids from philosophy seem to have been all that helpful, though, including debates about the “modularity” of the brain, or the emphasis on the importance of microbes in the biosphere, which seems to me to have come from science, not philosophy. This is what the piece says about brain modularity, for instance:

Philosophy had a part in the move from behaviorism to cognitivism and computationalism in the 1960s. Perhaps most visible has been the theory of the modularity of mind, proposed by philosopher Jerry Fodor (10). Its influence on theories of cognitive architecture can hardly be overstated. In a tribute after Fodor’s passing in 2017, leading cognitive psychologist James Russell spoke in the magazine of the British Psychological Society of “cognitive developmental psychology BF (before Fodor) and AF (after Fodor)”.

Modularity refers to the idea that mental phenomena arise from the operation of multiple distinct processes, not from a single undifferentiated one. Inspired by evidence in experimental psychology, by Chomskian linguistics, and by new computational theories in philosophy of mind, Fodor theorized that human cognition is structured in a set of lower-level, domain-specific, informationally encapsulated specialized modules and a higher-level, domain-general central system for abductive reasoning with information only flowing upward vertically, not downward or horizontally (i.e., between modules). He also formulated stringent criteria for modularity. To this day, Fodor’s proposal sets the terms for much empirical research and theory in many areas of cognitive science and neuroscience (11, 12), including cognitive development, evolutionary psychology, artificial intelligence, and cognitive anthropology. Although his theory has been revised and challenged, researchers continue to use, tweak, and debate his approach and basic conceptual toolkit.

Well, modularity could have been true in principle, and surely the idea of brain modularity has stimulated a lot of discussion. But in the end, it hasn’t led anywhere, largely because the actions of the brain don’t seem to be separated into distinct, quasi-independent moieties but seem to be diffuse—and plastic enough to be influenced by other parts of the brain. You can read about this diffuseness in Matthew Cobb’s new book, The Idea of the Brain. And even the definition of modules isn’t precise enough for philosophers to have been able to propose good experiments to test the idea.

In the end, the authors offer some suggestions for how to make science and philosophy more like BFFs, and they’re reasonable, but nothing that wouldn’t come to mind—or hasn’t already come to mind—to others. For what they’re worth, here they are (my emphasis):

  • i) Make more room for philosophy in scientific conferences. This is a very simple mechanism for researchers to assess the potential usefulness of philosophers’ insights for their own research. Reciprocally, more researchers could participate in philosophy conferences, expanding on the efforts of organizations such as the International Society for the History, Philosophy, and Social Studies of Biology; the Philosophy of Science Association; and the Society for Philosophy of Science in Practice.

  • ii) Host philosophers in scientific labs and departments. This is a powerful way (already explored by some of the authors and others) for philosophers to learn science and provide more appropriate and well-grounded analyses, and for researchers to benefit from philosophical inputs and acclimatize to philosophy more generally. This might be the most efficient way to help philosophy have a rapid and concrete impact on science.

  • iii) Co-supervise PhD students. The co-supervision of PhD students by a researcher and a philosopher is an excellent opportunity to make possible the cross-feeding of the two fields. It facilitates the production of dissertations that are both experimentally rich and conceptually rigorous, and in the process, it trains the next generation of philosopher-scientists.

  • iv) Create curricula balanced in science and philosophy that foster a genuine dialogue between them. Some such curricula already exist in some countries, but expanding them should be a high priority. They can provide students in science with a perspective that better empowers them for the conceptual challenges of modern science and provide philosophers with a solid basis for the scientific knowledge that will maximize their impact on science. Science curricula might include a class in the history of science and in the philosophy of science. Philosophy curricula might include a science module.

  • v) Read science and philosophy. Reading science is indispensable for the practice of philosophy of science, but reading philosophy can also constitute a great source of inspiration for researchers, as illustrated by some of the examples above. For example, journal clubs where both science and philosophy contributions are discussed constitute an efficient way to integrate philosophy and science.

  • vi) Open new sections devoted to philosophical and conceptual issues in science journals. This strategy would be an appropriate and compelling way to suggest that the philosophical and conceptual work is continuous with the experimental work, in so far as it is inspired by it, and can inspire it in return. It would also make philosophical reflections about a particular scientific domain much more visible to the relevant scientific community than when they are published in philosophy journals, which are rarely read by scientists.

The first two are fine; as I said, Lewontin’s lab always had a philosopher about. Co-supervision of Ph.D. students would be practical only if one’s thesis had a big philosophical component. #4, a curriculum balanced in science and philosophy, sounds good but there is little time in graduate school for courses outside one’s area, so a roughly equal “balance” would be impractical. A single course in philosophy of science, however, would be useful for Ph.D. candidates, at least in evolutionary biology. Reading groups are great if they’re well supervised, and many science journals already adhere to #6, having some bits about philosophy.

In the end, philosophy is an extremely valuable adjunct to science, but useful largely for getting us to think hard and avoid blind alleys, not so much in providing answers or suggesting experiments. Giving answers to empirical questions is not, of course, the job of philosophy, which is why Francis Crick is supposed to have made this statement, which may be apocryphal:

“Listen to philosophers’ questions, but not to their answers.”

h/t: Bryan

Can scientific theories be falsified? One scientist says no

September 8, 2020 • 10:15 am

The provocative title of the Scientific American article below, by physicist Mano Singham, is, I think, deeply misleading. The idea that science progresses by eliminating incorrect explanations, which is what falsification is all about, seems to me not only a good strategy, but one that has historically worked very well. To say it’s a myth is not even wrong.

But let’s hear why Singham says that falsification can’t work. Click on the screenshot to read his piece.


Before we get to Singham’s argument, note that we can immediately think of scientific theories that have been definitively falsified. One is that the Earth is flat. That has been falsified by any number of observations, and now nobody except loons accepts a flat planet. Likewise, the Genesis story of creation, once a “scientific” explanation for the origin of life and, especially, humans, has also been falsified, also by any number of observations. It was replaced by a better theory: evolution, and you can see the process of falsification by reading The Origin, as everyone should. Darwin not only adduces evidence for evolution from biogeography, embryology, the fossil record, vestigial organs, and so on, but at the same time notes how these observations do not comport with creationism, the main competing hypothesis at the time. The falsification of creationism is why Darwin was so worried that religious people would reject his theory.

For if observations comport with both of two competing theories, this gives us no way to determine which is the better one. Darwin shows in his biogeography chapters, for example, how the distribution of animals and plants on Earth jibes with an evolutionary theory combined with the idea that organisms disperse, but cannot be explained by creationism. (Why would the creator not put native mammals, freshwater fish, and amphibians on oceanic islands?) The book’s falsification of creationism combined with its support of evolution meant that, within about a decade after 1859, nearly all educated people accepted that Biblical creationism had been falsified.

Why, then, given the above, does Singham think that falsification—the classic strategy of scientific advance limned by Karl Popper—is a “myth”?  He gives two reasons (he’s referring to Haldane’s “Precambrian rabbit” as a proposed falsification of evolution):

1.) Falsification is complicated. Singham says this:

But the field known as science studies (comprising the history, philosophy and sociology of science) has shown that falsification cannot work even in principle. This is because an experimental result is not a simple fact obtained directly from nature. Identifying and dating Haldane’s bone involves using many other theories from diverse fields, including physics, chemistry and geology. Similarly, a theoretical prediction is never the product of a single theory but also requires using many other theories. When a “theoretical” prediction disagrees with “experimental” data, what this tells us is that there is a disagreement between two sets of theories, so we cannot say that any particular theory is falsified.

Fortunately, falsification—or any other philosophy of science—is not necessary for the actual practice of science.

I don’t quite get this. If many lines of evidence (or many scientific fields) converge on a conclusion that contradicts an existing theory (evolution in this case), that doesn’t mean that falsification doesn’t work, just that sometimes it’s not so easy. In fact, in the case of a Precambrian rabbit, scientists wouldn’t take a single observation as overturning a theory supported by so much evidence in favor of a theory—creationism—supported by none. Scientists would work hard to make sure that the date wasn’t an anomaly, to check whether the rabbit had somehow been insinuated into Precambrian sediments, and so on. Further, we’d want more than one fossil, for a theory as well established as evolution would require a multiplicity of “wrongly placed” fossils to make us question it. This doesn’t mean that falsification is a myth, just that when you use it against a theory that’s very well supported, you have to use it many times.

And sometimes an experimental result is indeed a “simple fact” obtained directly from nature. The idea of a pancake Earth is simply refuted by sending a satellite around the planet and not finding an edge. This case also shows that Singham’s claim that “a theoretical prediction is never the product of any single theory” is wrong as well. A flat earth (some get the idea from the Bible) is a single theory, not depending on “many other theories.”

Another example is Meselson and Stahl’s lovely and definitive refutation of two models of DNA replication (“conservative” and “dispersive”), confirming the “semiconservative” model with a simple and beautiful experiment involving density-gradient centrifugation of isotopically labeled DNA (grown with heavy nitrogen) as it replicated.  Just because isotope chemistry, centrifugation, and biochemistry were involved doesn’t make the experiment any less of a falsification. And there were only two other credible theories being tested, not “many other theories.” Since then, the entire science of molecular genetics has depended on their 1958 result, and it’s held up. If this isn’t an instance of verification of a true theory by falsifying alternatives, I don’t know what is.
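The logic of that falsification is simple enough to sketch in a few lines of code. This is only a toy model of the three replication hypotheses, not the actual CsCl density-gradient procedure; it just tracks how much heavy nitrogen each DNA duplex retains over generations of replication in light medium:

```python
from collections import Counter

def replicate(duplexes, model):
    """One round of replication in light (14N) medium under a given model.
    A duplex is a pair of strand 'heaviness' values: 1.0 = all-heavy (15N)
    strand, 0.0 = all-light (14N) strand."""
    out = []
    for a, b in duplexes:
        if model == "semiconservative":
            # each parental strand templates one brand-new light strand
            out += [(a, 0.0), (b, 0.0)]
        elif model == "conservative":
            # the parental duplex stays intact; the copy is entirely new
            out += [(a, b), (0.0, 0.0)]
        elif model == "dispersive":
            # parental material is scattered evenly over all four daughter strands
            s = (a + b) / 4
            out += [(s, s), (s, s)]
    return out

def bands(duplexes):
    """Fraction of molecules at each buoyant density (0 = light, 1 = heavy)."""
    counts = Counter(round((a + b) / 2, 3) for a, b in duplexes)
    return {d: n / len(duplexes) for d, n in sorted(counts.items())}

pop = [(1.0, 1.0)] * 8            # start: all DNA fully heavy
for model in ("semiconservative", "conservative", "dispersive"):
    g1 = replicate(pop, model)
    g2 = replicate(g1, model)
    print(f"{model:>16}: gen1 {bands(g1)}  gen2 {bands(g2)}")
```

Only the semiconservative model predicts what Meselson and Stahl actually saw: a single intermediate-density band after one generation, then a 50/50 split between intermediate and light bands after two. The conservative model wrongly predicts a fully heavy band surviving at generation one, and the dispersive model wrongly predicts a single band at generation two. Two of the three theories are thereby falsified.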

Here’s Singham’s second reason why falsification is a “myth”:

2.) Pseudoscientists, cranks, and enthusiasts claim that they have data falsifying “consensus” theories, and this tactic makes falsification a dubious strategy. Again, I don’t quite get this, but here’s what Singham says:

A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his [Einstein’s] generation from which most scientists are suffering. . .

. . . this knowledge equips people to better argue against antiscience forces that use the same strategy over and over again, whether it is about the dangers of tobacco, climate change, vaccinations or evolution. Their goal is to exploit the slivers of doubt and discrepant results that always exist in science in order to challenge the consensus views of scientific experts. They fund and report their own results that go counter to the scientific consensus in this or that narrow area and then argue that they have falsified the consensus. In their book Merchants of Doubt, historians Naomi Oreskes and Erik M. Conway say that for these groups “[t]he goal was to fight science with science—or at least with the gaps and uncertainties in existing science, and with scientific research that could be used to deflect attention from the main event.”

But this no more refutes the value of falsification than it refutes the value of science itself. For the same zealots and pseudoscientists who use the idea of falsification also pretend to use the methods of science. I don’t think I need say more about this.

Finally, near the end of his article Singham comes close to admitting that yes, falsification works:

Science studies provide supporters of science with better arguments to combat these critics, by showing that the strength of scientific conclusions arises because credible experts use comprehensive bodies of evidence to arrive at consensus judgments about whether a theory should be retained or rejected in favor of a new one. These consensus judgments are what have enabled the astounding levels of success that have revolutionized our lives for the better.

But how do you go about rejecting a consensus theory, like creationism, in favor of a new one? You have to find evidence that comports with the new one and not with the consensus. And that is falsification.

Now it’s possible that there is no competing theory, and you’re just looking for evidence that comports with the only theory you have. But even that is, in some sense, falsification: falsification of the idea that your theory is wrong, even if you don’t have an alternative. If you think benzene has six carbon atoms, then your alternative theory is that benzene doesn’t have six carbon atoms, but more or fewer, and you look for evidence for falsifying one or the other of these theories.

I think Singham intended to support the value of science studies—the history and philosophy of science—at a time when some people denigrate them. Richard Feynman famously said “philosophy of science is as useful to scientists as ornithology is to birds”.  I don’t agree with him on either count—ornithology is useful to birds, by helping conserve them, and philosophy of science can help us think more clearly about our problems and methods. But sometimes science studies can be impediments by confusing people about the nature of science or making insupportable or useless statements, like there’s no external reality independent of our senses. And one of these impediments is the claim that falsification is a myth.

h/t: Barry

An innocent joke about worms triggers a scientific firestorm on Twitter

August 3, 2020 • 9:00 am

I’d heard about this kerfuffle, and wrote it off as a tempest in a petri dish until I saw this article in the Daily Beast. Surprisingly, the Beast, which I thought was on the liberal side of the spectrum, took sides against the Perpetually Offended, as it should have, given the ridiculous nature of the fracas.

You can read about it at the website below or just peruse my short take here (click on screenshot):

The ignition: Michael Eisen, a well-known professor of genetics at UC Berkeley, an advocate for “open” science publishing, and editor of the respected journal eLife, answered a Twitter question about the most overhyped animal.  He was clearly joking, as you can see below (Eisen’s also known for his sense of humor). Eisen suggested Caenorhabditis elegans, a roundworm that has been immensely useful in unraveling the genetics of development. It’s a “model organism,” which means that it’s studied in the lab rather than the wild.

This kind of mock dissing is applied to other “model organisms”, like the Drosophila I work on. That species, too, has taught us an immense amount about genetics and development, but throughout my career I’ve had to endure jokes about it not being a “real” species. I always laughed these off because (a) it is a real species found in nature (it’s now a human commensal), and (b) starting with T. H. Morgan in the early 1900s, it’s been the insect species used to study classical genetics, molecular genetics, and now evolutionary developmental biology (“evo devo”). From that species we’ve learned, for instance, about sex chromosomes, about gene duplication, about the linkage of genes on chromosomes, and so on—and that’s just the classical-genetics stuff.

I don’t think Eisen knew what he was getting into with his humorous response. (The worm is also a self-fertilizing hermaphrodite, which is what he means by “occasionally they fuck themselves”.)

The pushback began immediately, as if Eisen somehow didn’t realize the importance of the worm. He quickly made it clear that he was joking:

But he had to clarify himself again, for one clarification only leads to another if you’re facing the Woke.  Although scientists have previously not been that immersed in Wokeness, they’re starting to become that way big time, buffeted by the winds of social change and perhaps a bit peevish and restive from the pandemic.

Eisen even got faulted for using the word “fuck,” for his “frat boy humor” and for having a bit of fun on the Internet:

Some people, like Coleen Murphy, took umbrage because they had “grants and paper rejected based on *exactly* this reason.” I seriously doubt that this is literally true. Perhaps the rejections were based on a perceived lack of generality from results in C. elegans to other metazoan species, but they could have been rejected for other reasons. At any rate, that’s no reason to dump on Eisen. What we see here is animus that should be aimed at editors and reviewers being directed instead at Eisen:

It wasn’t long before the specter of racism insinuated itself into the discussion. But even black scientists pushed back:

The Beast gives a bit more information. (Ahna Skop’s tweets are now hidden.) The invocation of marginalized people is the new version of an old rule—Godwin’s Law—which says that any Internet argument will eventually devolve into comparisons with Hitler. Now it’s “systemic racism” instead of Hitler.

By far the most prolific poster in this vein was Ahna Skop, associate professor of genetics at the University of Wisconsin-Madison and previous recipient of a Diversity, Equality and Inclusion-based award in 2018. Dr. Skop—who did not respond to a request for comment by The Daily Beast—argued extensively that making jokes about worms was merely the tip of the iceberg when it came to making jokes about marginalized identities, or an example of a ‘bystander effect’, a psychological theory arguing that individuals are less likely to offer help to a victim in a crowd. (For is it not said: First they came for the worm people, and I said nothing, as I was not a worm person?)

In the resulting threads, Dr. Skop—who identifies as “part Eastern Band Cherokee” and “disabled with EDS”—and others consistently failed to publicly respond to Black scientists like herpetologist Chelsea Connor, who tried to point out that this was a ridiculous conflation.  In a private communication Connor shared with The Daily Beast, Skop doubled down, arguing that as she had previously been harmed by entrenched sexism, her concerns regarding the worm joke were justified.

Oy!  But sensible people like Dr. Berg tried to defuse the crisis with the correct claim “it was only a joke”. She included screenshots of Skop’s tweets:

Let us bring this ludicrous squabble to an end with a quote from the Beast (criticizing the Offended) and a cartoon encapsulating the gist of the battle:

In falsely equating the real oppression of people belonging to marginalized groups to a Twitter joke about a roundworm, Wormageddon 2020 offers a clear example of how white and white-passing women misuse the language of diversity, equality and inclusion, with little accountability and self-awareness, and without any interest in the hurt that such frivolous invocations cause the people they’re theoretically defending. Someone who took the struggles that marginalized people face in academia seriously, after all, would not invoke them to win a Twitter argument about whether a worm joke is rude. “That comparison should never have been the knee-jerk reaction for them,” Connor said. “And then the response [to criticism] should have been better… The harm done stays with us and they get to log out and forget that this ever happened and let it ‘blow over’ meanwhile we have to work to fix what they did.”

My take: Eisen and Connor 42, Offended Worm People 0.  In this case Eisen properly refused to be mobbed, and the attempts to demonize him backfired, so that people like Skop have come off looking ridiculous. I’m just wondering if this episode shows a pushback against cancel culture, as did Trader Joe’s refusal to eliminate the brand names of its ethnic foods.

It was just a worm joke!

h/t: John, Peter

Boudry on scientism and “ways of knowing”

July 27, 2020 • 10:30 am

It’s been a while since we’ve discussed either scientism or “ways of knowing” on this site (the two ideas are connected). I’ll reiterate my views very briefly. “Scientism” has two meanings, as Maarten Boudry notes in his piece below, but the most common pejorative meaning is that of science making claims outside of its ambit, something that almost never happens these days.

I’m more interested in the question of whether there are “ways of knowing” beyond those involving science or “science broadly construed” (“SBC”, i.e., any profession, including plumbing and car mechanics, that uses the empirical method and relies on hypotheses, tests, and confirmation as ways of understanding the cosmos). As far as I can see—and I’ve asked readers about this—I’ve found no way beyond SBC to ascertain what’s true about our universe.

The most common arena for claims that there are ways of knowing beyond the empirical is of course religion, but theology has never found a single ascertainable truth about the Universe that hasn’t been confirmed (or disconfirmed, as with the Exodus) by empirical research. You can’t find out what’s true about the Universe by reading scripture or waiting for a revelation. Even “scientific revelations” like Kekulé’s dream of a snake biting its own tail, which supposedly gave rise to the ring structure of benzene with alternating single and double bonds, had to be confirmed empirically.

Maarten Boudry has a new blog piece that discusses these ideas, but also highlights a new paper that, he says, puts paid to the notion that there are ways of knowing beyond science. Click on the screenshot to read it. (His piece has a good Jewish title though Boudry is a goy.) As you can see from the title, Maarten tells it as it is:

Boudry, by the way, is co-author of this collection of essays, which, though mixed in quality, is generally good and gives a good overview of the “scientism” controversy. (Click screenshot for Amazon link.) The co-author, Massimo Pigliucci, absolutely despises my including stuff like plumbing in “science construed broadly,” and has said so many times. Massimo is deeply preoccupied with demarcating “science” from “nonscience,” and sees me as having messed up that distinction.

Here’s Maarten’s link to the new paper and a useful classification of four flavors of scientism:

Now yesterday I read a clever new paper in Metaphilosophy – yes, there really is a journal by that name – in defense of scientism, which follows the second strategy. The Finnish authors, known as the Helsinki Circle, present a neutral definition of “scientism”, distinguishing between four different flavors represented by the quadrant below. The four positions follow from two simple choices: either you adopt a narrow or a broad definition of science, and either you believe that science is the only valid source of knowledge or that it is simply the best one available.

The difference between “natural sciences” and “sciences” here, as Maarten wrote me, is this:

“Natural sciences” is just physics, chemistry, biology, etc.

“Sciences” includes the human and social sciences (like “Wissenschaft” in German).

But I’d prefer the distinction to be between “science” (what is practiced by scientists proper) and “SBC”, or the use of the empirical method to ascertain truth (SBC includes the human and social sciences). Given that slight change, I’d fall into the lower-left square. The upper-left square, says Maarten, is occupied only by the hard-liner Alex Rosenberg.

But never mind. Boudry and I are more concerned with the criticisms of science that fall under the rubric of “non-pejorative scientism”, and he mentions two:

The authors want to draw attention to the other three versions of “scientism”, which are more defensible but nonetheless interesting and non-trivial. In the rest of the paper, they discuss how the different interpretations of scientism fare under two lines of criticism: (a) that scientism is self-defeating because the thesis itself cannot be demonstrated by scientific means; (b) that science inevitably relies on non-scientific sources of knowledge, such as metaphysical assumptions or data from our senses.

I’ve addressed both of these, but Maarten concentrates on the second. (My criticism of [a] is that you don’t need to demonstrate a philosophical or scientific underpinning of the methods of science to accept it, because science works—it enables us to understand the Universe in ways that both let us do things like cure smallpox and send rovers to Mars, and let us make verified predictions, like when an eclipse will occur or how starlight will bend around the Sun.) Justification of science by some extra-scientific method is not only futile, but unnecessary.

Maarten refutes (b) handily:

Here I want to focus on the second objection. Does science “presuppose” the existence of an external world, or lawful regularities, or the truth of naturalism, or other metaphysical notions? No it doesn’t. These are merely working hypotheses that are being tested as we go along. I’ve argued for this position at length myself, in a paper with the neurologist Yon Fishman and earlier with my Ghent colleagues. As the authors write:

“One does not have to assume that science can achieve knowledge of the external world. Science can merely start with the hypothesis that some kind of knowledge could be achievable. For all practical purposes, this hypothesis would merely state that there are at least some regularities to be found. This hypothesis could be tested by simply attempting to obtain empirical knowledge with scientific means. If it is impossible to achieve this kind of knowledge, then the efforts would just be in vain. But hoping that something is the case is not the same as believing that it is the case.”

Second, does the fact that scientists rely on their sense organs invalidate scientism? No, because that’s a trivial point. It’s obviously true that science could not even get off the ground without sensory data, but this input too is being refined and corrected as we go along.

All these arguments about science being “based” on some extra-scientific assumption or source of knowledge are guilty of what I call the “foundationalist fallacy”. The mistake is to think that knowledge is something that needs to be “grounded” in some solid foundation, and that if this foundation is not completely secure, the whole edifice will collapse. But this metaphor is deeply misguided, and it inevitably leads to infinite regress. Whatever ultimate foundation you come up with, you can always ask the question: what is that foundation based on? It cannot be self-evident, floating in mid-air. This reminds one of the old Hindu cosmology according to which we live on a flat earth supported by four big elephants. Pretty solid, but what are the elephants standing on? On the back of a giant turtle. And that turtle? On the back of an even larger turtle. And so it’s turtles all the way down, ad infinitum.

Boudry’s Argument from Turtles also goes, I think, for (a): if you must justify using scientific methods through philosophy, how do you justify the value of philosophy in settling such a question? But never mind. If people dismiss science as an activity because philosophy (or science itself) provides no foundation for the empirical method, I’ll just ask them, “Have you ever been vaccinated or taken antibiotics?” If they say “yes,” then they already trust in science regardless of where the method came from. (It comes, by the way, not from a priori justification, but through a five-century refinement of methods to hone them down to a toolkit that works. Remember, science used to include aspects of the Divine, as in creationism as an explanation for life on Earth or Newton’s view that God tweaked the orbits of the planets to keep them stable.)

I’ll be reading the Metaphilosophy paper (click on the screenshot below to access and download it), but let me finish by self-aggrandizingly saying that Boudry does agree that SBC is part of the nexus of empirical methodology that includes “real science”:

For me, an essential part of scientism is the belief in one unified, overarching web of knowledge, which was defended most famously by the philosopher Willard V.O. Quine. Take an everyday form of knowledge acquisition such as a plumber trying to locate a leak (I believe this analogy is due to the biologist Jerry Coyne). Now plumbing is not usually regarded as a “science”, but that doesn’t mean that my plumber is engaged in some “different way of knowing”. He’s also making observations, testing out different hypotheses, using logical inferences, and so on. The main difference is that he is working on a relatively mundane and isolated problem (my sink), which is both simple enough to solve on his own, and parochial enough not be of any interest to academic journals. Plumbing is not a science, but it is continuous with science, because it makes use of similar methods (observation and logical inference) and is connected with scientific knowledge, for example about fluid dynamics. The plumber or detective or car mechanic is not doing anything radically different from what the scientist is doing.

Take that, Massimo!

And here’s a reading assignment: