The two articles I want to discuss today are fascinating, for they raise a problem that’s now vexing many scientists (especially physicists)—the problem of testability. (Thanks to reader Mark H. for calling my attention to them.)
It all goes back to the philosopher Karl Popper (1902-1994). Popper’s views about what made a theory “scientific” were immensely influential. They’re summed up in the Wikipedia piece on him:
A theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be scrutinized by decisive experiments. If the outcome of an experiment contradicts the theory, one should refrain from ad hoc manoeuvres that evade the contradiction merely by making it less falsifiable.
In other words, a theory that can’t in principle be shown to be wrong isn’t a scientific theory. But I disagree with that characterization in two ways. First, a theory might appear at present not to be “falsifiable”, but it can still be considered “scientific” in that it explains a variety of phenomena and, more important, someday we might find a way to test it. Without supercolliders, for instance, we had no way to test the predicted existence of the Higgs boson. Once we managed to build such a machine, we could look for the particle, and we found it. That, more or less, is the current position of string theory and multiverse theory in physics. They are elegant and can explain some phenomena (the unification of the four forces for the former; the vaunted “fine tuning” of the constants of physics in our universe for the latter), but neither can yet be tested. But I still see them as scientific theories.
Second, a theory can be “patched up” and still retain its integrity, so some ad hoc-ing is acceptable. For example, we now know that some environmentally induced changes in the DNA can be inherited for one or a few generations, which appears to contradict an important tenet of neo-Darwinism. But all we need to do is realize that these changes are temporary and haven’t contributed to organismal adaptation in the long term. So “epigenetics” in this sense simply broadens a theory that initially maintained that no environmental modification could be passed on. But it doesn’t destroy that theory. It’s only when the patches on a theory become seriously pervasive that the theory must be abandoned. That is what happened to cold fusion, or the theory that vaccines caused autism. Both of those went away because they were falsified by data.
The way I construe falsifiability is like this: A theory for which there are no conceivable observations that could show it to be wrong is not a theory in which you can place much confidence (i.e., regard it as provisionally “true”).
I know I’m treading on dangerous ground here, as philosophers will circle that sentence like orcas around a wounded seal, but in fact that is the way science really works. String theory is elegant, a lot of people are working on it, and someday we may find a way to “test” it, but for the present it’s not regarded as a “true” theory—not in the way that the Standard Model of physics is: a theory that makes predictions that could have been falsified, but haven’t been.
Nevertheless, there’s a lot of Popper-dissing going around. I’ve already taken issue with two of his points, but I firmly adhere to the way I’ve construed falsifiability above. Scientists still behave as if they don’t accept something as “true” unless it’s passed tests that could have shown it to be false.
As expected, most of the Popper-dissing comes from physics, which now has theories so hard to test—as they involve things occurring on scales too small to observe, like the “strings” of string theory—that some physicists are resorting to other ways to confirm such theories. One is “beauty”: a theory that is explanatory and beautiful, like string theory, can provisionally be regarded as correct. I disagree. Einstein’s general theory of relativity was a beautiful theory, and Einstein seemed to regard it as correct on that count alone, but the physics community didn’t take it as true until it made predictions that could be tested, and those predictions were verified. These included the bending of light by celestial bodies (as in Eddington’s experiment) and the precise quantification of the advance of Mercury’s perihelion. Since then, other tests have amply confirmed both the general and special theories.
Other ways to “confirm” theories include Bayesian approaches that are said to give greater or lesser confirmation to a theory. I’m not familiar with how Bayes’ Theorem is used in this way, but I know that many physicists think that this isn’t a proper way to test something like string theory.
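Since Bayesian confirmation comes up again below, a minimal sketch may help (this is my own illustration, not something drawn from the articles; the function name is hypothetical). Bayes’ theorem updates our confidence in a theory T after evidence E via P(T|E) = P(E|T)·P(T)/P(E). The worry critics raise is visible in the arithmetic: evidence a theory riskily predicts raises its probability sharply, while evidence the theory merely accommodates barely moves it.

```python
def bayes_update(prior, p_e_given_t, p_e_given_not_t):
    """Posterior probability of theory T after observing evidence E.

    P(T|E) = P(E|T) * P(T) / P(E), where
    P(E)   = P(E|T) * P(T) + P(E|~T) * (1 - P(T)).
    """
    p_e = p_e_given_t * prior + p_e_given_not_t * (1 - prior)
    return (p_e_given_t * prior) / p_e

# Start agnostic about the theory.
p = 0.5

# Evidence the theory strongly predicts, but which is unlikely otherwise,
# raises confidence substantially (here the posterior works out to 0.9).
p = bayes_update(p, p_e_given_t=0.9, p_e_given_not_t=0.1)

# Evidence that is likely whether or not the theory is true barely moves
# the needle. If no conceivable observation is unlikely under the theory,
# Bayesian "confirmation" stalls near wherever the prior already was.
p = bayes_update(p, p_e_given_t=0.9, p_e_given_not_t=0.85)
```

On this picture, falsifiability reappears in Bayesian dress: a theory gains confidence only to the extent that it forbids observations, i.e., assigns some evidence a low likelihood.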
In 2014, two scientists, the cosmologist George Ellis and the astrophysicist Joe Silk, went after the increasing tendency of physicists to validate theories without any empirical observation. In their article, “Defend the integrity of physics,” they said this, concentrating on string theory and multiverse theory (all bolding in this post is mine):
This year, debates in physics circles took a worrying turn. Faced with difficulties in applying fundamental theories to the observed Universe, some researchers called for a change in how theoretical physics is done. They began to argue — explicitly — that if a theory is sufficiently elegant and explanatory, it need not be tested experimentally, breaking with centuries of philosophical tradition of defining scientific knowledge as empirical. We disagree. As the philosopher of science Karl Popper argued: a theory must be falsifiable to be scientific.
Chief among the ‘elegance will suffice’ advocates are some string theorists. Because string theory is supposedly the ‘only game in town’ capable of unifying the four fundamental forces, they believe that it must contain a grain of truth even though it relies on extra dimensions that we can never observe. Some cosmologists, too, are seeking to abandon experimental verification of grand hypotheses that invoke imperceptible domains such as the kaleidoscopic multiverse (comprising myriad universes), the ‘many worlds’ version of quantum reality (in which observations spawn parallel branches of reality) and pre-Big Bang concepts.
These unprovable hypotheses are quite different from those that relate directly to the real world and that are testable through observations — such as the standard model of particle physics and the existence of dark matter and dark energy. As we see it, theoretical physics risks becoming a no-man’s-land between mathematics, physics and philosophy that does not truly meet the requirements of any.
Ellis and Silk name Richard Dawid and our Official Website Physicist™ Sean Carroll among those asking to weaken the “testability” criterion for theories of physics. Ellis and Silk’s piece, which is easily readable by non-specialists, ends like this:
What to do about it? Physicists, philosophers and other scientists should hammer out a new narrative for the scientific method that can deal with the scope of modern physics. In our view, the issue boils down to clarifying one question: what potential observational or experimental evidence is there that would persuade you that the theory is wrong and lead you to abandoning it? If there is none, it is not a scientific theory.
Such a case must be made in formal philosophical terms. A conference should be convened next year to take the first steps. People from both sides of the testability debate must be involved.
In the meantime, journal editors and publishers could assign speculative work to other research categories — such as mathematical rather than physical cosmology — according to its potential testability. And the domination of some physics departments and institutes by such activities could be rethought.
The imprimatur of science should be awarded only to a theory that is testable. Only then can we defend science from attack.
I’m in nearly complete agreement with this, except that I’d still award the imprimatur of “science” to string and multiverse theories. They are scientific theories—just not ones that we can have any confidence in. They’re not even close to being as “true” as, say, Einstein’s theories of relativity or the theory of evolution.
At any rate, the conference called for by Ellis and Silk has taken place this month—at Ludwig Maximilian University in Munich. It’s described in Natalie Wolchover’s article in Quanta, “A fight for the soul of science”:
Once again there was the customary Popper-bashing:
But, as many in Munich were surprised to learn, falsificationism is no longer the reigning philosophy of science. Massimo Pigliucci, a philosopher at the Graduate Center of the City University of New York, pointed out that falsifiability is woefully inadequate as a separator of science and nonscience, as Popper himself recognized. Astrology, for instance, is falsifiable — indeed, it has been falsified ad nauseam — and yet it isn’t science. Physicists’ preoccupation with Popper “is really something that needs to stop,” Pigliucci said. “We need to talk about current philosophy of science. We don’t talk about something that was current 50 years ago.”
I have to disagree with Pigliucci here; I haven’t yet seen a theory accepted as true that hasn’t survived a test that could show it to be wrong. And yes, I do think that at one time astrology was a scientific theory: a theory claiming that one’s personality could be affected by one’s time of birth, connected with the configuration of stars and planets at that time. Astrology, like creationism, was once a scientific theory, but now it’s a falsified scientific theory, and can’t be accepted as true. It isn’t science any more, but it once was. To say that the falsification of astrology completely refutes the value of falsificationism or Popperianism seems to me a circular argument. At any rate, I will agree with Massimo that if a reasonable theory can’t yet be falsified, it can still be seen as “scientific”; but as the years pass and one can’t find a way to test that theory, it eventually passes into the hinterlands of “nonscientific.” What Pigliucci appears to be doing here (and I’ll grant that I haven’t seen his paper) is conflating what we regard as “scientific theories” with what we regard as “true scientific theories.”
I won’t recount the to-and-fros that occurred at this meeting, as you can read Wolchover’s piece for yourself, but it does give three alternatives to falsification, at least for string theory. Here are the first two: 1. no other theory that explains “everything” has yet been found; and 2. string theory came from the Standard Model, which for a long time itself had no alternatives, buttressing the possibility that a “no-alternatives” theory could be right on that ground alone.
I find neither of these arguments convincing, if for no other reason than a good alternative theory might some day surface. How can we have so much hubris that we think that no human will ever devise an alternative to string theory? And, of course, by now the Standard Model has been tested many times, and passed the tests. It wasn’t taken to be true until these confirmations. Why, then, do we behave differently with string theory? Only because it’s much harder to test.
Finally, string theory is said to have provided explanations for previously inexplicable phenomena, like the entropy of black holes. To me that is something worth considering, but not enough to confirm the theory. Explaining phenomena is one way to validate a theory (after all, that’s what Darwin did when using evolution to explain the peculiar distribution of species on oceanic islands), but you must be able to devise tests that could show a theory is wrong before you accept it as correct. To disprove neo-Darwinism, for instance, you might find a lack of genetic variation in species, fossils in the wrong places, one species with adaptations that increase the fitness only of a second species and not itself, and so on. These are potential falsifiers, but none of them have been seen. We have no similar falsifiers for string theory.
As for Bayesian ways of getting greater confidence in theories, I haven’t read anything about them, and so can’t weigh in here, but I have to say that I’m dubious.
If you have any interest in the history and philosophy of science, I’d recommend reading both the Nature and Quanta articles, as the debate is not only fascinating, but goes to the very heart of science: how we decide what is provisionally true, and how much confidence to apportion to our beliefs. I would add that we should be wary of those like Pigliucci who, on weak grounds, claim that “Popperism is dead.” It’s telling that the Quanta piece describes the outcome of the Munich conference like this (my emphasis):
The Munich proceedings will be compiled and published, probably as a book, in 2017. As for what was accomplished, one important outcome, according to Ellis, was an acknowledgment by participating string theorists that the theory is not “confirmed” in the sense of being verified. “David Gross made his position clear: Dawid’s criteria are good for justifying working on the theory, not for saying the theory is validated in a non-empirical way,” Ellis wrote in an email. “That seems to me a good position — and explicitly stating that is progress.”
In considering how theorists should proceed, many attendees expressed the view that work on string theory and other as-yet-untestable ideas should continue. “Keep speculating,” Achinstein [Peter Achinstein, a historian and philosopher of science] wrote in an email after the workshop, but “give your motivation for speculating, give your explanations, but admit that they are only possible explanations.”
“Maybe someday things will change,” Achinstein added, “and the speculations will become testable; and maybe not, maybe never.” We may never know for sure the way the universe works at all distances and all times, “but perhaps you can narrow the live possibilities to just a few,” he said. “I think that would be some progress.”
What I’ve put in bold is a tacit admission that string theory has not been verified in the way that the Standard Model, or the theory of evolution, has. Yes, string theory is still a scientific theory, and maybe we’ll find a way to test it. I certainly don’t think physicists should stop working on it just because they haven’t yet found a way to falsify it. That would be premature. But until they do find a way to test it, I don’t see it as a scientific theory in which we should place a lot of confidence; i.e., I don’t see it as “true.” Neither, apparently, do the participants in that conference! Karl Popper’s ideas are regularly declared dead, but they refuse to lie down.
I feel the pain of these physicists, for their theories are now so elegant and abstruse that it may be impossible to test them properly. They involve strings that are impossibly tiny, and theories with so many parameters that the notion of testability is elusive. In other words, their success has painted them into a corner. By changing the rules of science, they’re trying to make a virtue of necessity. But the way out of their corner isn’t to change the rules—the way we establish things as true. If physicists can’t find a way to falsify their theories, in that corner they shall stay. After all, every scientist admits that there are some things about the Universe that we simply will never know.
134 thoughts on “Is falsifiability essential to science?”
Great post. Many thanks!
As a person interested in science, but very far from an expert, I thank you for clearly laying out the issues of this major controversy in the field.
Yes, a fascinating and important topic; many thanks for a very clear presentation of the issues. It would be interesting to get Sean Carroll’s own thoughts.
Here’s a piece that Sean wrote setting out his views, at least for physics:
Many thanks Jerry; much appreciated.
If they think Popper is dead, imagine what they must think of Aristotle! For example, what about this:
1. truth: a property of statements, i.e., that they are the case.
2. validity: a property of arguments, i.e., that they have a good structure.
(The premises and conclusion are so related that it is absolutely impossible for the premises to be true unless the conclusion is true also.)
3. soundness: a property of both arguments and the statements in them, i.e., the argument is valid and all the statements are true.
Sound Argument: (1) valid, (2) true premises (obviously the conclusion is true as well by the definition of validity).
The fact that a deductive argument is valid cannot, in itself, assure us that any of the statements in the argument are true; this fact only tells us that the conclusion must be true if the premises are true.
Also, whatever happened to the label “hypothesis”? It has been disappeared, like a dissident in Argentina. Now we have “true or untrue theories.” This is a tragedy. A conjecture ought not be elevated to the title of theory until proven true. Until then, if interesting or beautiful, it can only reach the level of hypothesis.
I blame Popperism, but from the other direction. In the interest of upholding Hume’s attempted annihilation of induction, Popper created the idea of “falsifiability,” which attempts to declare deduction the only good truth test. This makes Popper vulnerable, since theories are proven with both induction and deduction. With induction dead, who can blame the radical string people for scoffing at the “old-fogeyness” of falsifiability/deduction?
“A conjecture ought not be elevated to the title of theory until proven true.” Absolutely. This is what allows us to counter the objection that evolution “is only a theory.”
I was always taught that an ugly fact could destroy a beautiful theory. In general this followed a demonstration that someone in the lab had a nice idea that was clearly wrong, where “hypothesis” would have been the more correct word than “theory.” Haldane’s flippant “Precambrian rabbit” example applied this at a larger level to a true theory: evolution.
It would seem to me, as an outside observer of physics, that an idea or construct of ideas based upon observations but not practically tested does not formally pass the level of hypothesis. I would think (this is a pre-breakfast thought of limited value) that the level of confidence in such an idea might be proportional to the number, nature and variety of observations that are explained, i.e., how easy or difficult it is to eliminate confirmation bias. I don’t think that theoretical physicists are going to change their descriptor to “hypothetical physicists” any time soon. More broadly, I’m glad that there are still some disciplines where one can look for the beauty in nature and try to explain it.
A link that complements this post quite well is here:
It’s theoretical physicist Ethan Siegel briefly recounting current alternatives to String Theory (I’ll be working on number 3 starting next January 🙂 ).
I have a volume by Popper titled, ‘Conjectures and Refutations.’ So can we not simplify and say there are Scientific Theories and theories in science? Conjectures.
Individual properties of natural categories (e.g., science, non-science) are rarely necessary and sufficient, unlike such artificial categories as triangle. A more current theory in cognitive psychology is the idea of categories viewed as prototypes or collections of probabilistically occurring properties. For example, flying is characteristic of most but not all birds, and is not exclusive to birds. Similarly, might it be that falsifiability is one of a number of properties that collectively distinguish science from non-science?
Very interesting. Like many here, I have ~ 0 understanding of string theory but that has never prevented anyone from having an opinion about it. I always felt uncomfortable with it having such great explanatory power if one assumes a specific but rather large number of extra dimensions. That has a post hoc taste to it, and what it also seems to mean is that we have an untestable string theory that depends on another untestable many-dimensions theory.
But extra dimensions are already well established in the Standard Model of physics. Forces arise because of symmetry breaking at high energies in the history of the universe that “eat” dimensions, leaving us with the four dimensions of space-time at human energy scales.
In that sense, string theory simply extends the Standard Model by postulating further dimensions.
Well, hmm, I am not sure if you are saying I am wrong. Are extra dimensions a known thing, like the moon and time and key lime pie?
Can you give a source for this?
I’m not quite sure what the “Standard Model” encompasses at this point, but it was my understanding that nothing in the SM involves extra dimensions. I think they arise in attempts to extend it to a theory of everything (Kaluza-Klein, string theory, etc).
In any event, I’m pretty sure no physicist would claim that there’s yet any empirical evidence for extra dimensions. See for example
See also here
So I think all of the stuff involving extra dimensions is considered beyond the SM, and there is no empirical evidence for any of it yet.
My unsophisticated understanding is that certain interpretations of QM entail additional dimensions (I mean, all those “many worlds” have to occur someplace besides the 3+1 dimensions we experience, right?) — but that SM theory itself, and the calculations that underlie it, don’t.
I could be mistaken, though. (As happened once before, a long, long time ago.)
No, QM interpretations have nothing to do with the extra dimensions hypothesized in Kaluza-Klein, SUSY, string theory. These are “compactified” extra dimensions.
QM “many worlds” is infinitely many parallel universes, whatever that means, not the same kind of thing as extra spacetime dimensions in this universe. But, in any case, an interpretation is by definition non-empirical, right?
Oh, I get the “compactified” distinction — the other dimensions in string theory are wrapped up tighter than Kelsey’s nuts, such that they are rendered too tiny to be detected — I just didn’t understand your “other dimensions” statement to be thus limited.
The alternate histories and futures in Everett MWI must occur somewhere if the wavefunction does not collapse, and it can’t be in another corner of the multiverse — at least not without requiring superluminal speeds, right? So, assuming for the sake of argument that the Everett interpretation is correct, these “parallel worlds” must occur in our universe, although not in the 3+1 dimensions that we experience. That suggests “additional dimensions” (although, indisputably, dimensions of a different sort than the 10 or 11 or 26 dimensions found in the various permutations of string theory).
And, yes, an “interpretation” is by definition non-empirical — otherwise it wouldn’t be an interpretation, but a theory (just as, if my aunt had testicles, she would be my uncle). 🙂
“So, assuming for the sake of argument that the Everett interpretation is correct, these “parallel worlds” must occur in our universe”
No, explicitly not. Each of the “many worlds” of Everett are separate universes, with a shared history up to the point of divergence, then mutually inaccessible.
This has no connection at all to the possibility of our universe having more than the canonical 3+1 dimensions, whether compactified under string theory or in any other sense.
So to return to Mark’s original question –
“Are extra dimensions a known thing…?”
So far as I’m aware, for any hypothesis that incorporates extra dimensions in any sense, and for all reasonable values of “known”, the answer is no.
OK, so wrong twice in my life. I can live with that … but no more than that. 🙂
Just recalled, Ralph, having read a few years ago this paper by Max Tegmark, wherein he offers a taxonomy of parallel/multiple universes. MWI is a Tegmark Level III, that much even I get. The string-theory landscape seems to spawn the Tegmark Level II type (bubble universes) and, maybe, the Tegmark Level IV type (even more-speculative stuff). Tegmark suggests an approach in the paper that might possibly harmonize these various types of parallel multiverses (although the pope will canonize Aleister Crowley before I could give you a cogent explanation of what it is).
Anyway, the reason I’m revisiting this now is that it refreshed my recollection as to where, per MWI, all those alternate histories and futures of the wavefunction disappear to when the wavefunction doesn’t collapse: something called “Hilbert space” — which, best I can tell, is where the live cat goes searching for its Whiskas and kitty litter when you find a dead feline in Schrödinger’s box.
Be that as it may, I’ve got to go now and feed my own live cat who’s busy scratching at my backdoor (as opposed to the other, parallel live cat that’s scratching next door at Hilbert’s house).
OK, it seems, like Ken, I am fallible.
My thought had been that the extra dimensions in the SM were something that I remembered from when I did my thesis (30 years ago! and it frightens me that the maths is quite opaque to me now), but rereading that and my professor’s text (Quarks & Leptons, Halzen & Martin) turns up no specific references.
Maybe I was thinking of the scalar degrees of freedom that are “eaten” in symmetry breaking as extra dimensions (influenced by KK?), and/or maybe I was glomming onto the idea of extra dimensions from more recent reading about string theory.
I sit corrected.
I think string theory is close to being rejected on Popperian terms. It requires (and therefore predicts) supersymmetry, and the LHC has not yet found any SUSY particles when it should have. Proponents keep raising the energy level required to see these particles, but that is getting old.
There are hints of a possible new particle, but this one may lie outside of what was expected in the Standard Model. The particle could also be a statistical fluke at this point.
To the best of my understanding, the 750 GeV bump, even if real, is not a particle predicted by SUSY in its current form. Most speculation I have read is that it might be a heavier boson. But I am sure string theory will soon be rerigged to “explain” it. The string theory community is anxious for something to take to the bank.
Why “should” SUSY particles have been found?
SUSY predicts partners (sparticles) for every particle in the standard model. None of these sparticles have been detected, but the simple or natural versions of SUSY predict some of the sparticles are light enough that they “should” be detected at energy levels accessible at the LHC (but haven’t been).
However, SUSY is a many-headed monster and may never be fully ruled out because other versions can be constructed for which the sparticles would be detected only at still higher energy levels. This game can be played practically to the Planck scale, making the sparticles all but impossible to detect, and raising the testability question.
OK, then, what’s the reason to prefer the “simple or natural versions of SUSY”?
I sense I am being baited here :). But if a theory allows its proponents to keep moving the goal posts, it is not likely to be a productive (or, I daresay, a correct) theory. I won’t say it is like the epicycles. Epicycles shifted the goal posts to make the Ptolemaic system consistent with observations. SUSY shifts the goal posts to make it consistent with non-observations.
I think (neutrally) it would be fair to say that string theory consists of a large amount of coherent and elegant mathematics, with immense potential for a formulation that has all of the good qualities you describe. However, I think the two-edged sword is that the mathematics is so compelling and powerful that it admits many possible formulations, and with most of it currently inaccessible to experimentation, it’s hard to know which formulation is the best candidate to apply to this universe.
I’m not quite sure what to think about the lack of SUSY particles. The positive-PR spin is to see string theory as a class of theories, some of which are falsified as energies are pushed higher.
by “good qualities” I was referring to your other post about DD, i.e.
“Good explanation–hard to vary in response to new data, simple and elegant. (No moving the goal posts.) Bad explanation–theories easily variable to accommodate any and all phenomena.”
A word in defence of physicists: Very few physicists are actually arguing for “other ways to confirm” theories. Most physicists accept that String Theory has not been established as correct and won’t be without much better empirical verification (which may be hard, or a long time coming, or impossible).
What physicists are thus saying is actually something very similar to what Jerry is saying here, namely that things like string theory are still scientific, even if we have no current way of testing them (and maybe no medium-future way either).
The argument is thus that Popperian falsification should not be interpreted too narrowly — for some of the reasons that Jerry explains here.
The upshot is that having no way of testing string theory makes it *unproven* (we can’t claim that it has been validated, but then string theorists do not claim that anyhow), but it does not make it unscientific.
In other words the standards for what makes a theory “proven true” and what makes a theory “scientific” are not the same.
I’d thus suggest that many of the people mentioned above — Jerry, Sean Carroll, Popper, Ellis, Pigliucci — are actually not that far from each other’s positions.
Yes, we should not over-interpret this debate in physics to mean that string theorists are suggesting that we have all been doing science wrong all these years, and that experiments are overrated!
Physicists are just struggling with the pragmatic question of how to move forward now that we’re into scales and energies where experiments are impossible.
I think if you look at it from too narrow a “falsificationist” perspective, you might only pursue theories where a real experiment is conceivable in the foreseeable future.
But is that a good approach? It brings to mind the joke about the drunkard looking for his car keys under the streetlamp, because the place where he dropped them is too dark.
(we should *NOT* overinterpret, obviously)
I seem to recall comments to the effect that fully examining subatomic reality would require atom-smasher energies so enormous that it is probable we will never be able to get there.
Right now China is about to build an enormous collider, bigger than CERN. We can probably expect more discoveries, but maybe there is a practical limit. Then, we’ll be stuck with pure untestable speculation.
<10^80 particles, fer shur.
Indeed, a lot of the disagreements seem to come from not fully distinguishing between being “scientific” and being “proven true”.
Plus of course in science, as against pure maths, “proven true” just means something like “accounts for the available evidence and makes predictions to a reasonable level of approximation”.
Many of the anti-Popper comments seem rather straw-mannish; IIRC, Popper offered falsifiability as a back-of-the-envelope way of spotting pseudoscience, by its repeated goalpost-moving, rather than as some formal definition of science.
However, I would certainly agree that the potential for falsifiability, along with the potential for out-of-sample prediction, are two of the hallmarks of anything we might want to call scientific.
In my opinion one should also take into account that the animosity against Popper among professional theorists is probably related to the widespread criticism of their work by non-scientists who typically turn out to be Dunning-Kruger heroes; the internet is full of them. These people regularly use Popper to discard whatever they do not understand or like.
In Germany we have a guy working as a physics teacher who publishes articles and books (even with Springer Verlag) against abstract science such as string theory and cosmology. He pines for Einstein, whom he declares an exemplary, even ideal researcher, never minding that his own charges of inappropriate and even crazy abstractness were leveled against Einstein in his own time. Such people even attend top scientific conferences and, in the discussion after a presentation, preach in favor of a simple-minded physics that they are completely incapable of formulating themselves (this happened in Munich a few years ago after a lecture by Edward Witten).
In my own working field (different from physics) I also have to deal with Dunning-Kruger heroes. Their typical procedure is to substitute general arguments for specific knowledge, and that is the point where Kuhn and Popper regularly enter. I can therefore understand why a philosophy that incompetent people use as an ever-available weapon is not held in high esteem among experts. This probably does not facilitate discussion of the criticisms made by competent scientists.
I’d be sorely disappointed to learn that top physicists spent any time concerning themselves with what Dunning-Kruger-inflated numbskulls said in online comment sections.
(… says a Dunning-Kruger-inflated numbskull.)
Sam Harris’ latest Waking Up podcast is a talk with David Deutsch – Surviving the Cosmos. Deutsch is a physicist who supports string theory and the multiverse. As a small part of the episode, Deutsch discusses Popper and testability. He claims science does not depend on empirical testing but is primarily a rational process. He also has an interesting view on knowledge and the future. Well worth listening to.
I listened to that, too. I’m wondering how Deutsch differentiates his “by reason alone vs empiricism” methodology from that of Thomas Aquinas and modern Thomists who think that their armchair reasoning alone is enough to constitute credible evidence for the existence of god.
I got from the podcast that he is just a rationalist. Maybe, like Jerry, he figures an untestable hypothesis could be considered part of science, just nothing to feel confident about (if I have that right).
I think you’d have to read his latest book to get a closer understanding.
I meant – he isn’t just a rationalist.
From The Fabric of Reality (and I think he goes into this further in The Beginning of Infinity), one of his “reasonable” criteria is “explanatory power”: given two competing models, the one that can explain more should be treated as the more likely true, until contradicted by the data. I think this is where string theory wins over the alternatives that Ethan Siegel discusses (link above).
Misrepresentation. Deutsch distinguishes good and bad explanatory power on pages 22-25 of Fabric. Good explanation–hard to vary in response to new data, simple and elegant. (No moving the goal posts.) Bad explanation–theories easily variable to accommodate any and all phenomena. Like string theory.
No; you need to be on p. 66 (Penguin ed.). And I don’t think string theory is as “elastic” as you assert.
Can you quote something? I have the Viking edition.
“theories that are capable of giving more detailed explanations are automatically preferred”
Since Deutsch is a proponent of string theory, I’d doubt that he thinks it fails to meet his own criterion.
And string theory has ~10^500 compactifications. If that is not elastic, what is?
*Both* rationalism and empiricism are needed, to various degrees. For one thing, rationalism emphasizes that all hypotheses (and hence also systems of same) are *inventions*, though ones that should be (empiricism) constrained by the world.
Falsifiability is not sufficient but it is necessary for *low level* hypotheses. All high level ones are “organizing principles” and need specific other hypotheses to produce consequences that can be confirmed or refuted. Existing big theories in physics are already like this; string theory as far as I can tell is no different. One simply must be clever to find appropriate consequences (or prove a theorem that it has none, which is hard).
On the last day of the Atheist Alliance of America convention this year in Atlanta, iirc there was a panel of scientists and philosophers addressing (and arguing) on this issue. I asked them a question during Q & A:
“What would it take to make String Theory a supernatural belief?”
That made them all laugh and think out loud. Yes, I was tossing my usual “insert pure mentality = supernatural” rock, but Taner Edis gave an interesting answer which most seemed to like:
“I’d call String Theory ‘supernatural’ if its proponents said you had to believe it ‘on faith.'”
I’ve encountered a fair share of very intelligent and scientifically astute theists using the old Argument from Analogy by comparing God to high level physics and suggesting that believing in God was therefore not much different than believing in String Theory. Soft apologetics (“it’s not unreasonable to believe”) as opposed to Hard apologetics (“belief is the most reasonable option.”)
But even a potentially unfalsifiable science theory is, as you point out, only potentially unfalsifiable. And the lack of testability is considered a flaw in the theory, not a glorious virtue which now separates those who can dream, feel, and think “deeply” about physics from those who are still stuck in the cold, empty, and superficial need for falsification. We know what discoveries would have or could have blown String Theory away into historical No-Man’s Land: even one critical misstep in the math would have been enough. But God is supposed to have existed before the universe existed — and it somehow “grounds” the entire field of logic and reason. So there’s not even a potential state of evidence or existence in the past which would have or could have falsified God as a hypothesis, even in principle. We’re looking at an entirely new level then of unfalsifiability.
Moreover, what constitutes “beauty” and “elegance” in science is not the same as what constitutes “beauty” and “elegance” in religion/spirituality. God is considered a beautiful hypothesis because it connects humanity to the cosmos and reassures us that we are fundamentally loved and valued. It’s considered “elegant” because it explains everything so simply and easily. “Like comes from like and we can’t inquire why or how.”
Science doesn’t interpret those terms that way.
Wonderfully said. What I consider ‘beauty’ in a scientific theory is one that is both simple in that it contains few components, and yet it also has great explanatory power. It seems especially beautiful if it comes out just when a scientific field might be getting a little bogged down. The Watson Crick DNA model stands out as an example.
It’s a great example. I think the beauty and power of a scientific idea is often proportional to the degree of modesty with which you can understate the implications!
The antepenultimate paragraph of the one-page paper:
“It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
Popperian falsificationism is widely considered to be obsolete because of Popper’s denial of the value of inductive inference, which is “the logic of science” – as E. T. Jaynes put it.
Any modern defense of Popperian falsificationism should probably address this main criticism of it.
For those who might want some insight into Richard Dawid’s ideas, Sabine Hossenfelder has a review of his book here
There are follow-up comments from physicists and from Richard Dawid himself below the review.
And don’t miss her 15-min video, featuring Odie the physicist and Garfield the philosopher.
Sabine also covers the convocation: “Why Trust A Theory? Physicists And Philosophers Debate The Scientific Method” in Forbes.
Re Pigliucci on Astrology:
“To say that the falsification of astrology completely refutes the value of falsificationism or Popperianism seems to me a circular argument….What Pigliucci appears to be doing here… is conflating what we regard as “scientific theories” with what we regard as “true scientific theories.”
To paraphrase what you’ve said, Jerry – isn’t Pigliucci’s mistake an elementary failure to distinguish that falsifiability is necessary, but not sufficient?
(I withdraw my ignorant comment, see kelskye below.)
Regarding: “I haven’t yet seen a theory accepted as true that hasn’t survived a test that could show it to be wrong.”
In the 18th century, Laplace calculated the orbit of Uranus and it deviated from the expectations of Newtonian mechanics. Following Popper’s falsifiability, astronomers should have rejected Newton’s theory of celestial mechanics on the basis that it didn’t account for the orbit of Uranus. However, astronomers in the mid-19th century “invented” a new planet, Neptune, and calculated its orbit, assuming that this orbit would cause the deviations seen in the orbit of Uranus.
If you think about it, this is really a spectacular example of scientific investigation; you use a theory to predict the existence of another entity that no one knew existed. So, Popper goes in the bin, right?
Well, the same problem occurred with the orbit of Mercury. And astronomers were so sure that the same trick would work that they calculated the orbit of the new planet and even named it Vulcan. However, in this case Popper was right and Newtonian mechanics did fail, since Mercury is so close to the Sun that relativistic effects become significant.
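The size of that relativistic effect can be checked with a back-of-the-envelope calculation. The sketch below uses the standard general-relativistic formula for perihelion advance per orbit, 6πGM/(c²a(1−e²)), with textbook orbital values for Mercury; the formula and the numbers are standard physics, not anything stated in this thread.

```python
import math

GM_SUN = 1.32712440018e20   # gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8            # speed of light, m/s
A = 5.7909e10               # Mercury's semi-major axis, m
E = 0.2056                  # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period, days

# GR perihelion advance per orbit, in radians
advance_per_orbit = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))

# Accumulate over a Julian century and convert radians -> arcseconds
orbits_per_century = 36525 / PERIOD_DAYS
arcsec = advance_per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec:.1f} arcsec per century")
```

The result comes out at roughly 43 arcseconds per century – exactly the anomalous precession that the hypothetical planet Vulcan was invented to explain, and that general relativity accounted for without any new planet.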
I think that this raises an interesting question: what do we do with Newtonian mechanics? Is it science or non-science? It’s clear that relativity accounts much more precisely for the movement of planets, but in certain cases Newton is good enough and it’s significantly easier to apply Newtonian mechanics to practical problems than the general theory of relativity.
Another example of the inadequacy of Popper’s falsifiability is Copernican heliocentrism. When Nicolaus Copernicus first came up with this model, scientists said “OK. Prove it.” According to the Copernican model, astronomers of that time should have detected stellar parallax (i.e. the position of a reference star, say Sirius, changes against the backdrop of more distant stars as Earth orbits the Sun, creating an angular difference called stellar parallax). When astronomers looked for stellar parallax they found nothing. No angular difference. Moreover, the geocentric model gave better predictions than the heliocentric one, for it helped merchants navigate across the seas for centuries. So, the Copernican model was false? In hindsight we can say that astronomers made two ancillary hypotheses. First, they assumed that the stars were close enough to Earth (i.e. that Earth’s orbit was large relative to stellar distances), and second, that their instruments were sensitive enough to detect the parallax. Both of them were false.
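The scale of the problem is easy to quantify. Assuming modern values – the roughly 1.34-parsec distance of the alpha Centauri system and arcminute-level precision for the best pre-telescopic instruments; both are my own illustrative figures, not numbers from the comment – the expected parallax was far below anything observable at the time:

```python
def parallax_arcsec(distance_pc):
    # Small-angle annual parallax: p (arcseconds) = 1 / d (parsecs),
    # by the definition of the parsec.
    return 1.0 / distance_pc

nearest = parallax_arcsec(1.34)   # alpha Centauri system, ~1.34 pc (modern value)
print(f"nearest star: {nearest:.2f} arcsec")

naked_eye_limit = 60.0            # ~1 arcminute: roughly Tycho-era precision
print(f"shortfall: {naked_eye_limit / nearest:.0f}x")
```

Even the nearest star shows a parallax under one arcsecond, dozens of times smaller than pre-telescopic precision – which is why parallax was not actually measured until Bessel in 1838.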
This is really the core of the Duhem-Quine thesis: when testing a theory, you are also testing a number of ancillary hypotheses. Hence, it is impossible to test a scientific theory in isolation as Popper thought.
And finally, although it is a popular opinion that Richard Feynman despised philosophers (which is partially true), you can see in this video that he was aware of the philosophical implications of choosing which theory is the correct one (he gives a quite entertaining example about Mayan astronomy).
Among the many reasons to question Popper, I’m puzzled that you would choose the Uranus/Neptune example. This seems to me the perfect example of when the principle of falsification DOES work extremely well.
Under the Newtonian model, observations of Uranus produce a strong prediction: an undiscovered planet Neptune MUST exist. If it does not exist, Newton is falsified absolutely. The strength of that potential falsification lends proportionate weight to the provisional confirmation of Newtonian mechanics when Neptune is subsequently discovered.
As I’ve stated, if you apply the same rule to the case of Mercury, you would expect to find another planet that accounts for this peculiar orbit. Newton’s model also gives a strong prediction, just as in the case of Uranus, that Vulcan MUST exist. But Vulcan doesn’t exist and Newton’s model is inadequate in explaining peculiarities of Mercury’s orbit.
Although this implies that Newtonian celestial mechanics is false, we still use Newton’s law of universal gravitation because it is more applicable than Einstein’s general theory of relativity. So is Newtonian mechanics science or non-science? Does it reflect reality or is it just a nice proxy for reality (i.e. provides good enough predictions)?
In my opinion, we have to settle for some form of pluralism, but that sounds more like Feyerabend and less like Popper.
But I think that the key question is this: If you were an astronomer just a day after Le Verrier announced the possible existence of Vulcan, would you spend your time looking for it or give up saying that the theory has been falsified? Applied to the case of Mercury, Popper would lead you to the right conclusion, but in the case of Neptune it would lead you off the path. Remember that these two cases both provide strong evidence for existence of another planet, and that you simply cannot say “Well, Mercury data seems flimsy and probably not as strong as in the case of Neptune”. The bottom line is that Popper’s criterion doesn’t work in both cases under equal initial evidence.
“Applied to the case of Mercury, Popper would lead you to the right conclusion, but in the case of Neptune it would lead you off the path. Remember that these two cases both provide strong evidence for existence of another planet, and that you simply cannot say “Well, Mercury data seems flimsy and probably not as strong as in the case of Neptune”. The bottom line is that Popper’s criterion doesn’t work in both cases under equal initial evidence.”
I’m sorry, but that just seems nonsensical to me.
In either case, the prior data under the Newtonian model leads to a strong and specific prediction.
In either case, you obviously go and look for another planet in the predicted position. Non-discovery constitutes falsification. Therefore, in either case, since falsification is a possible outcome, discovery of the predicted planet constitutes Popperian provisional scientific confirmation of the theory.
In the Uranus/Neptune case, Newtonian mechanics was provisionally confirmed. In the Mercury/Vulcan case, Newtonian mechanics was falsified.
This is archetypal, powerful, Popperian science in action!
You seem to be confusing the idea of a “scientific theory” with “scientific theory that is provisionally true”. There are also many scientific theories which have been falsified, but which are nevertheless scientific. Newtonian mechanics was one such scientific theory.
(There’s a separate question about whether you want to call Newtonian mechanics “false”, or “incomplete”, but that really has nothing to do with the falsification principle.)
The problem here seems to be that truth comes in degrees – a notion that Popper himself explored. (And did not succeed with – his friend and my teacher, Mario Bunge, has worked on it since, to little recognition; many philosophers are aware only of Popper’s attempt and its refutation.)
This debate holds immense significance for the future of physics and cosmology. It’s not just an arcane abstract philosophical discussion – ultimately it’s about money, potentially vast sums of money.
In the short term, the only question may be which theoretical physicists get grants to continue their theoretical work. That’s important, of course, but the sums are trivial for society as a whole (and in my opinion, theoretical physicists are so cheap that society should be funding everyone with the requisite skills and motivation!)
But we’ve already seen that evidence for the Higgs, and exploring new physics, required financing the LHC. I think within the physics community, the LHC was relatively uncontroversial – I’m not aware of any other propositions to spend that money that held any comparable promise to advance important areas of physics.
But that may not always be true. When we do reach a point of developing a potential test for some aspect of these ideas, it is highly likely that it will cost one or more orders of magnitude more than the LHC. So the questions will be:
(1) Can we convince society that it’s worthwhile to spend this money?
(2) If there are multiple experimental ideas, all grounded in as-yet untested theories, which one gets the trillion-dollar funding?
I’m not convinced, since, as you say, it’s not the theory, but rather the big machines such as the LHC that cost money. And, more or less by definition, these are not built to test theories that are not testable in the relevant regime.
Machines like the LHC are actually pretty general purpose, and so could probe all such theories that made predictions in the relevant energy range.
The headline about “built to find the Higgs boson” is rather a simplification to provide a simple hook for the public.
We’re not in disagreement about the LHC, perhaps you missed what I wrote:
“the LHC was relatively uncontroversial – I’m not aware of any other propositions to spend that money that held any comparable promise to advance important areas of physics.”
With the LHC, we had theories with varying degrees of empirical support from preexisting data, and there was broad consensus that the way to distinguish between them, and the best prospect to discover new physics, was to build the LHC.
My point was that the future may be different, and much more controversial. We may be entering a realm where we have to justify whether it’s worthwhile at all spending vast sums to test theories with (as-yet) no empirical support whatsoever; or, if society agrees to spend money, but there is no consensus on the best experiment, how best to decide between the experiments in the absence of any empirical justification?
So these philosophical debates on how we evaluate scientific theory may have very substantial real-world implications in the near future.
Isn’t a degree of confirmation required as well? Confirmation bias aside, we don’t just test new medicines to see that they don’t make you ill; we look for signs that they improve patient welfare. Most of the safeguards of scientific testing – double-blinding, representative samples, p-value judgements, etc. – are there just as much to single out the cause you want as they are to remove the causes you don’t want, with those unwanted causes including the research team’s expectations. There’s also the issue of measuring a continuum or shades rather than absolute, yes-or-no results: how does falsification work when you’re dealing with fluid and non-linear systems rather than measuring “is it not this?”
Lastly, falsificationism is surely only useful to the extent that it guides an investigation towards the correct answer. When we ask nature for cures for specific diseases, a long list of what doesn’t work is part of that answer – at least to stop us making the sickness worse – but at some point, you want the gold itself.
I don’t think that your understanding of what Popper meant about falsification is quite right here. All research methods that you describe would be part of the Popperian scientific process. Popper did not mean that the only valid research agenda is to approach the correct answer by trying to falsify everything else. He meant that a process of confirmation must entail the possibility, in principle, of an opposite result, otherwise the confirmation is not scientific.
Under Newton, the anomalous orbit of Uranus could only be explained by the presence of (as-yet undiscovered) Neptune. If no Neptune were ever found, Newton would be absolutely falsified; therefore the discovery of Neptune constituted valid provisional scientific confirmation of Newton.
In that example, no alternative hypothesis was proposed or falsified.
After eighty years looking for dark matter, and not finding it, but instead conjuring up gravitational waves, dark energy, inflation and other unseen, undefined, untested theoretical entities, the queen science of cosmology is a patchwork of ad hoc theory and failure.
All “proofs” of General Relativity can be explained alternatively: lensing by refraction through plasma that surrounds stars and galaxies – not gravitation. Red shift was disproved by multiple observations of high red shift quasars in front of low red shift galaxies. Orbit of Mercury can be defined without GR.
GR is full of paradoxes, it can’t be reconciled with quantum physics and it tells us 96% of the universe is stuff we can’t find, and can’t even define the state of the missing matter, or energy. What bunk! We need Popper today more than ever!
I find GPS works very well.
After it is corrected for the ionosphere’s electromagnetic influence, it does work well.
But that’s Maxwell, not Einstein.
fyi I assume that this is Andrew Hall, an Electric Universe crank, see here:
Anything beyond gentle mocking is unlikely to be productive.
You’re right, Snidely. I’m not out to get mocked – just wanted to express my opinion. I think EU is emerging as credible. It is based on classical physics, not woo-woo. Take an open-minded look at Birkeland currents and the plasma physics from applied science being used in EU and you might find it has physics behind it that Popper would have appreciated.
But didn’t Einstein use equations derived from Maxwell’s? Just because the math works, it doesn’t mean the constants and variables are understood.
Actually, it really does.
“Just because the math works, it doesn’t mean the constants and variables are understood by engineers.”
Have you personally been looking for dark matter for 80 years? If so, I can certainly understand why you’re a little tired, and, well, cranky.
I wouldn’t waste my time looking for it. I don’t think it exists.
Find your score here.
In fact, the “A fight for the soul of science” article says this:
“Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism”.
Just so. Saying “I haven’t read anything about [Bayesianism], and so can’t weigh in here” …doesn’t cut it – when you have internet access.
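For readers wondering what “Bayesian confirmation theory” amounts to in miniature: a theory gains credence in proportion to how much better it predicts the evidence than its rivals do, while a theory that accommodates every outcome equally well gains nothing. A toy sketch (the probabilities are invented purely for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior and the likelihoods of the evidence."""
    numer = p_e_given_h * prior
    denom = numer + p_e_given_not_h * (1 - prior)
    return numer / denom

# A theory predicts an observation with probability 0.9; the catch-all
# alternative predicts it with probability 0.3.
posterior = bayes_update(0.5, 0.9, 0.3)
print(round(posterior, 3))  # 0.75 - confirmation without outright falsification

# The flip side: a theory that predicts everything equally well never moves.
print(round(bayes_update(0.5, 0.5, 0.5), 3))  # stays at 0.5
```

On this picture, falsification is just the limiting case where the likelihood of the evidence under the theory drops to zero.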
Isn’t this just a pathetic smoke and mirrors backlash by living philosophers against the realisation of most scientists that “Philosophy is dead”?
Surely a more pressing issue for living philosophers is whether university philosophy departments should join with their colleagues in theology and become part of the history faculty?
What am I missing on the Pigliucci comments about astrology – it has been falsified, the way untrue things may be, and that’s a problem why?
I often don’t understand him, like also his dismissal of Sam Harris’ argument that moral public policies can be achieved scientifically to promote thriving. Here again, the problem is what? I don’t think he favors religion as a guide, so what then?
Jerry’s take on it all seems right to me.
“What am I missing on the Pigliucci comments about astrology – it has been falsified, the way untrue things may be, and that’s a problem why?”
It’s about what should count for the demarcation problem (what separates science and pseudoscience). The original formulation of falsifiability was that scientific ideas could be in principle falsified – something not true of pseudosciences like the Freudian account of mind. As the astrology example shows, you can have predictions that can be tested yet that still doesn’t make it a science.
What Pigliucci lays out in The Philosophy of Pseudoscience is a number of different criteria that are commonly found in science but generally lacking in pseudoscience. So a scientific activity isn’t just about whether it’s falsifiable, but also has other markers that indicate a successful science. That’s what he’s getting at.
“Here again, the problem is what? I don’t think he favors religion as a guide, so what then?”
Pigliucci’s answer there is that we’ve got 2500 years of people who have provided a secular alternative on the question of morality. That the work of philosophers and other thinkers throughout the ages have given multiple ways to think about and advance moral questions, and that science is just one of many tools in the toolkit for examining moral questions. That Harris’ example doesn’t offer anything new, that it’s speculative (and conveniently answerable by Harris’ own field of neuroscience), and full of outlandish statements.
Thanks, this is helpful. (I really should go and read it.) I could not really believe that Pigliucci was making a necessity/sufficiency error – so in fact, the necessity/sufficiency question is precisely what he was highlighting with the astrology example.
Maybe I’m dense but I’m still not understanding. Astrology has testable claims, as most any ideas do, but it fails the test (so it isn’t science). Are you saying that because it is testable, like science is, that it’s kind of special (though wrong)?
On the morality issue, Harris’ example is to apply science to the question with thriving as the desired result. That basic idea seems unassailable, except maybe to a religious person or someone political or with a philosophy that’s not evidence based.
Let’s have a system designed so people do objectively well.
No, it’s not.
Astrology was failed science. Since its failure, its proponents have pushed it into the realm of pseudoscience.
“Are you saying that because it is testable, like science is, that it’s kind of special (though wrong)?”
No, I’m saying that its being falsifiable doesn’t make it a science. The demarcation criterion would have astrology sit on the side of science by virtue of its being subject to testing. Pigliucci’s point (and I’ll fully back this up) is that there are many more things besides falsifiability that go into making something a science – so the criterion of falsifiability isn’t sufficient to establish astrology as science.
Pigliucci goes further in The Philosophy of Pseudoscience, where what constitutes a scientific theory is a case of family resemblance: there are multiple traits that make something a science, but no one trait is necessary or sufficient to make it a science. (Think of what makes a game – what do chess, car racing, solitaire, football, etc. all have in common? We’d call them all games, yet each of them is very different.)
“That basic idea seems unassailable”
No idea should ever seem unassailable, especially not when trying to get into a problem that’s meant to have been intractable for the last 2500 years. You’d have to either bet that Harris is a super-genius that has surpassed all the other super-geniuses, or perhaps there are certain caveats and flaws in the account.
Some basic questions:
1. How does Harris’ account of morality differ?
2. Which are the paradigmatic cases that set apart the judgements from other moral theories?
3. How does Harris’ account intersect with our intuitions?
4. How does Harris’ account deal with the often immediacy needed for moral judgements?
5. What areas does Harris’ account need improvement on?
6. Is there anything in the various other schools of metaethics that might apply to Harris’ line of work?
(bonus question): Does the way the brain processes fact and values dissolve the conceptual distinction between facts and values?
We agree that an idea isn’t scientific just by being testable. I am surprised that anyone would think a definition that brief would be sufficient and worth arguing about. Obviously the tests need to be passed, for instance.
Regarding morality through science, I remain baffled by MP’s scorn toward SH on it. What if it were suggested that medicine be based on science – would that be a bad idea too? My sense is that MP objects to the very idea of science as a guide to morality, not just issues about how to do it. Everything is difficult.
Your six questions are over my head. None contain the word science or any other form of inquiry, like religion or philosophy, so I’m at a loss as to what the objections to science are after this understanding-free exchange we’ve had.
“I am surprised that anyone would think a definition that brief would be sufficient and worth arguing about.”
The demarcation problem is real, and falsification is an attempt to answer that problem. As the subsequent 50 years of inquiry into the matter have shown, that answer is inadequate.
By no means do you have to get into the demarcation problem, but you need to understand the history of the problem to understand Pigliucci’s reply.
“Regarding morality through science, I remain baffled by MP’s scorn toward SH on it. [..] Your six questions are over my head. None contain the word science or any other form of inquiry”
Again, if you want to understand Pigliucci’s objections to Harris, then you need to be able to understand such questions. Those questions are at the heart of the objections given to what Harris is advocating, and why Harris’ book is seen by philosophers like Pigliucci as a poor attempt to weigh in on what makes for moral norms.
The bonus question, by the way, comes straight from Harris’ book. Page 10 of the first edition if I’m not mistaken. Do you worry that you called his account ‘unassailable’ when you’re not able to answer basic questions about it?
Another way to look at it is that your own inability to engage this topic in a straightforward way, instead of deferring always to foggy philosophical points, keeps us from having a meaningful exchange.
I said his account seems unassailable. Nothing you’ve said makes me worry about that.
“Nothing you’ve said makes me worry about that.”
And that’s really sad. If you want to understand the objections, then you have to understand where Pigliucci is coming from, and that involves engaging with the 2500-year history of the topic. If it’s not straightforward enough for you, then perhaps you should stick to Harris’ account for the same reason a creationist should stick to a literal reading of Genesis when talk of evolution goes over their head.
All I did was appeal to you to try to understand where Pigliucci is coming from. Yet, evidently, that is asking too much of you. Pathetic.
I think kelskye here describes Popper’s position well, and I think Pigliucci’s observations too. Popper thought that falsifiability was the defining characteristic of science; he was trying to escape the problem of induction, and the practice of accumulating confirmations, by using an apparently deductive argument (falsification).
I don’t want to diss Popper at all, since he was a brilliant man, but this particular attempt to demarcate science from pseudoscience doesn’t work, sadly. There’s a lot of to and fro in the subject. I wrote an essay on it a little while back with extensive links to source material, which I hope gives a flavour of that to and fro, and why it doesn’t work (at least, in my view!):
Jerry recasts falsifiability so that it’s not a deductive argument, so any problems with induction would still need addressing (many have addressed these problems and no doubt many will continue to do so!). Since Popper’s idea doesn’t work, I see no problem with that, and it seems sensible that some form of falsification be included in a list of things that point to a theory being scientific, whilst not defining it as scientific.
Thanks for the essay.
I’m starting to question whether I really understood how radical Popper’s position was.
I never saw him as rejecting inductive reasoning altogether, or claiming that in practice we only make truly “scientific” progress by progressive falsification.
Of course, there are many great examples where a powerful falsification supported dramatic progress – the UV catastrophe, precession of Mercury.
But surely Popper didn’t dismiss induction entirely – he didn’t think that intermediate periods of progressive confirmation that flesh out the domain of validity of a theory were NOT doing science?
I understood it more that he felt we should be just be inherently skeptical of induction. That we should be careful to treat all confirmatory evidence as provisional, and that our practical agenda should be, if you like, to constantly try as hard as we can to break our pet models and theories. And that we should weigh the inductive value of any confirmation according to how potentially effective our attempt to break the model really was.
I mean, I think that’s what good scientists actually do in practice. Do you think that conforms with Popper, or was he was saying something stronger?
Yes, I think you’re quite right about Popper’s view of induction. He did not dismiss confirmative evidence altogether; but he thought it only had value as part of a genuinely risky test of a theory. The theories he worried about had an explanation for every outcome. Hence the theory had to be falsifiable to be considered scientific. A theory that explains everything fails this test (which raises the question of what a scientific theory of everything would look like, but that’s another story).
He was also aware that a deductive argument to validate science would be extremely satisfying, and we don’t have one; falsification might have provided that deductive argument.
Kuhn described the accumulation of evidence (to support a paradigm) as ‘normal science’ and Popper acknowledged its existence, but said it was bad science. Good scientists would indeed do as you describe. In practice I wouldn’t know how much ‘good’ science is done compared with ‘bad’ science, though!
Mario Bunge has written for at least 30 years on how one can look at “degrees of pseudoscience” (or of scientificity in general) as a *vector* quantity. Plus ça change …
I’ll look into him. Thanks for the heads-up.
The most accessible version is in _Finding Philosophy in Social Science_ (1996).
Thank you for a very interesting article. I read most of Popper’s work before I started my science degree and found his work remarkably approachable for a philosopher. The only other philosophers whose work I have found to be so straightforward and honest are Bertrand Russell and Peter Singer (I am sure that there are others out there).
There are many points to be discussed in this post, but one quote that had been highlighted stood out for me:
“As we see it, theoretical physics risks becoming a no-man’s-land between mathematics, physics and philosophy that does not truly meet the requirements of any.”
Is this a problem? On what grounds do we demand that a particular human endeavour must conform to certain requirements all the time? By doing so surely we would not have a chance of expanding our knowledge and abilities.
New fields are created where the boundaries of old ones are broken. Whether those fields of endeavour are worthwhile is of course undecided until they prove to be useful. The edges of modern physics may be pushing up against the barriers of philosophy and science (I don’t think they break the rules of mathematics, as we generally just add more rules to mathematics in order to make a new field fit its requirements), but I think that is a good thing.
Nice piece (another to add to the #TIP reference base http://www.tortucan.wordpress.com along with the Wolchover piece, as relevant to the falsification issue antievolutionists often bang the drum about).
The crux involves distinguishing theories that cannot be falsified by any observation or circumstance (and wouldn’t those be rather meaningless, since they’d need to contend that A & !A are both consistent with their theory?) and theories that are provisionally or experimentally unresolvable (a point noted in the Wolchover chart) because no practical tool can be contrived to investigate the issue (too small, fast, high energy or big).
As the long-delayed proof of Fermat’s Last Theorem suggested (though that is a purely mathematical case, not a physics one), a proof can come from subsequent work not known in Fermat’s day (leading to the suspicion that Fermat himself couldn’t have had a correct proof of the theorem at the time). My suspicion is that a lot of these rarefied physics arguments will seem old hat (and, well, duh) once more is learned and resolved about, say, dark matter or dark energy.
For me, falsification is a cousin of utility. If a theory can’t ultimately bump into observations (say something practically useful about the world), then it is only trivially true, not helpful. Relativity and quantum theory and evolution all bump into the universe in ways that lead to new observations and discoveries.
It’s that “ring of truth” that “unfalsifiable” theories never end up having. And it will take some time before the chime of string theory either makes it through the din or not.
I agree with the argument that string theory or multiverse theory are ideas that should not be dismissed. They are legitimate subjects of scientific inquiry. But calling them theories is a stretch. I think a better label is hypothesis. I taught high school science for 22 years. In the nature-of-science unit, one of the recurring problems was getting students to understand the meaning of theory as used in science. I found this task complicated by the sometimes casual use of the term by scientists. It is problematic to try to convince a student that a theory is one of the big ideas in science, supported by a large body of empirically verified evidence, and then have to explain why scientists call the idea of strings or the multiverse a theory when both lack a body of empirically derived and verified evidence.
String theory qualifies from the richness of its flourishing mathematics.
From here, for example
“Over the years string theory has been able to enrich various fields of mathematics…. In fact, one can argue that this stimulating influence in mathematics will be a lasting and rewarding impact of string theory in science, whatever its final role in fundamental physics.”
So, a mathematical theory that is comprehensive and coherent enough to build a universe; but only yet qualifies as a hypothesis for the way OUR universe is built.
String “Theory” is most decidedly *not* a mathematical theory. If it were, we would not be having this discussion. There’s a fundamental difference between mathematics (formal proofs) and science.
We’re not saying dissimilar things, perhaps I could have been clearer:
Separately from any consideration of how our universe works, I just wanted to point out that really rich and important mathematics has been developed in string theory. I think there’s a very strong case that depth of the mathematics justifies the use of the word “theory”, if *only* in the sense of an abstract formally-derived “mathematical theory”.
I agree that it’s much more dubious to call it a “scientific theory” absent any empirical support. That’s pretty much what this whole debate is about, right? What do we do when we get into realms where empirical tests seem impossible? I think recategorizing as a hypothesis may be right, but there are more complex issues here than the theory/hypothesis distinction.
“That is what happened to cold fusion, or the theory that vaccines caused autism. Both of those went away because they were falsified by data.”
Yes, both “went away because they were falsified by data.” But I question your description of both of these as theories. Seems to me that they were at best hypotheses. It is not uncommon for hypotheses to be falsified. The University of California, Berkeley defines a theory as “a broad, natural explanation for a wide range of phenomena. Theories are concise, coherent, systematic, predictive, and broadly applicable, often integrating and generalizing many hypotheses.” (http://undsci.berkeley.edu/glossary/glossary.php?start=s&end=z) I accept this as a rather good description of a theory. It is the one I used in teaching science for 22 years. I fail to see how vaccines as the cause of autism, or cold fusion, ever rose to a level that justifiably comports with this description of a scientific theory. I question whether even the claim that strings are the fundamental root of matter complies with this description of a theory.
The problem with the demarcation issue – where is the line between what qualifies as science and that which is pseudoscience or non-science – is a thorny one that philosophers of science have been arguing about for decades now. Popper’s contribution to the debate is but one of many. When I taught science I found it more useful, though more complicated, to teach about the concept of scientific theories using a list of characteristics, rather than claiming that one characteristic alone defined it. Falsifiability is but one of the criteria that can and should be used in evaluating whether an idea is a scientific theory.
I think the question of how to proceed when hypotheses become very difficult to test is a legitimate part of science. Scientists don’t just do experiments and write papers, we design experiments and decide what counts as publication-worthy.
If current cosmological and particle physics hypotheses are untestable, then what scientists will work on is other analyses of those hypotheses, and that’s okay. I’m not supportive of the notion of beauty or elegance being taken as measures of credibility, but I am supportive of theorists exploring theory-space to discover novel predictions that may be testable. I’m also supportive of research into experimental methods that may be relevant to future testing.
Not every idea needs to be kept alive while we work; some (bad ideas) can just be thrown out. But IMO it’s perfectly fine to continue to explore some of our better untestable ideas, muddling away at questions of how to do better experiments and what else those ideas imply.
TL;DR version: when the science is too immature to test X, it remains scientific, and a part of science, to think, publish, and experiment on methods that might lead to a “how to test X” in the future.
Re your first quote, I would ask the questioners: what conceivable observations would falsify QM, or GR, or the Standard Model, but not string theory?
If there are none, then string theory is equally falsifiable. If there are some, then string theory is empirically testable against QM, GR and the SM – in principle.
This does not seem to follow logically the way you have laid it out.
Are you making other assumptions?
Am I? Could you elucidate?
“If there are some, then string theory is empirically testable against QM, GR and the SM – in principle.”
The falsification of X, without falsifying Y, only tells us that Y does not entail X. It tells us nothing else about the properties of Y. It does not exclude the possibility that Y is non-empirical and untestable.
Unless, perhaps, there are other unstated assumptions about the relationship between X and Y – hence my question.
So consider potential observation x which would falsify theory A but not theory B.
To say that x is potentially observable is to say that x is a potentially demonstrable empirical truth.
Theory A is duly falsified by x. Theory B is not.
GR or QM or SM is empirically falsified whilst ST is not. Empirical victory for ST.
No, because it could be that theory B is consistent with observation x, and also consistent with all other conceivable observations – i.e. untestable, unfalsifiable.
It seems to me that B is empirically in better shape than A if it can do everything that A can do but not vice versa i.e. remain unfalsified.
Explicitly not, unless you are begging the question (in the formal sense of assuming the moot issue) that theory B was, in principle, falsifiABLE by the experiment that generated observation x.
Otherwise, everything that you’ve said is consistent with theory B being, say: “Ineffable spirits move the cosmos in ineffable ways that humans cannot comprehend”.
In other words, what you’ve said is consistent with theory A being a scientific theory that was just falsified; and theory B simply being unscientific.
This is the core point of Popper, right? That confirmatory inductive evidence should only be given weight in proportion to the jeopardy of potential falsification to which the theory was truly exposed under the experiment.
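The set-theoretic point in this exchange can be sketched as a toy model (my own illustration, not anything from the thread; the three-observation space and the theory names are invented). A “theory” is taken to be the set of observations it permits; it is falsifiable only if some conceivable observation lies outside that set:

```python
# Toy model: a "theory" is the set of observations it permits, drawn from
# a finite space of all conceivable observations.
OBSERVATIONS = {"x", "y", "z"}          # all conceivable observations

def falsifiable(theory: set) -> bool:
    """True if at least one conceivable observation would refute the theory."""
    return theory != OBSERVATIONS

def falsified_by(theory: set, observed: str) -> bool:
    """True if the actual observation lies outside what the theory permits."""
    return observed not in theory

theory_A = {"y", "z"}        # forbids x: a risky, falsifiable theory
theory_B = OBSERVATIONS      # permits everything: unfalsifiable

print(falsifiable(theory_A))            # True
print(falsifiable(theory_B))            # False
print(falsified_by(theory_A, "x"))      # True  -- observing x refutes A
print(falsified_by(theory_B, "x"))      # False -- B survives, but vacuously
```

The last line is Ralph’s point in miniature: theory B “surviving” observation x carries no weight, because B was never exposed to any jeopardy.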
Thanks Ralph. I admit to pushing the boat out wondering if it could float.
String theory should be grounded upon the falsifiability of GR and QM. The maths should guarantee that if either one of these is ever falsified then ST is falsified. If the maths doesn’t guarantee it then there is a serious problem.
Our discussion has set my mind off musing. One last thought.
Is “Ineffable spirits move the cosmos in ineffable ways that humans cannot comprehend” really in the same class of theory as one which may encompass all conceivable potential empirical truths mathematically?
String theory comprehends that the cosmos operates, at least partially, as described by the equations of QM and GR. We may not comprehend what the equations of QM mean beyond their predictive power but we comprehend precisely what Einstein’s equations mean.
Does string theory express scientifically the ultimately utter factness of the nature of things? Things just are the way they are and this is how they are, may be the ultimate scientific truth. If string theory is falsifiable by way of observations which falsify QM or GR, perhaps so.
Yes, I should say I was not expressing any opinion about string theory in my comments above, I just was addressing the narrow point of your syllogism.
The point about the apparent richness and power of the math is an important one, I think. It’s a double-edged sword, in that it lends plausibility and potential, but it may point to a reality that will frustrate empirical investigation. It may be that string theory correctly describes the way all universes are built, that there are 10^500 valid formulations, all those universes do exist, and the reason we have our version is the annoyingly unsatisfying anthropic principle.
Falsifiability is a mechanism to keep us grounded in reality. I use the example of imaginary numbers. If after defining i we were to go off mathematically and never touch reality again, then that is the realm of pure mathematics. The fact that imaginary numbers can be used to address real problems tells us that the “imaginary” aspect of those numbers is an illusion, that they are real in some way, shape or form (not according to mathematical convention, but to human rationality).
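The point that quantities built on i nonetheless come back and touch reality can be shown in a line or two (a generic illustration, not from the thread): feed an imaginary exponent into Euler’s formula and a real number comes out, the same mechanism by which complex exponentials describe real oscillations.

```python
import cmath

# Euler's formula: e^{i*pi} = cos(pi) + i*sin(pi) = -1.
# An imaginary exponent in, a real answer out.
z = cmath.exp(1j * cmath.pi)
print(z.real)   # approximately -1.0
print(z.imag)   # approximately 0.0 (tiny floating-point residue)
```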
We can have our cake and eat it too. We can pursue theories that do not seem to be testable in the hope that they may become testable after a while. Or they may teach us different ways to look at nature that will prove useful in other areas, or … There is just no reason to restrict ourselves one way or another.
“We can have our cake and eat it too…. There is just no reason to restrict ourselves one way or another.”
Unfortunately there is: resources.
I don’t think physicists are having these discussions because they think philosophy is more interesting than physics. It’s because physics has moved into a realm where major funding decisions will depend upon the relative evaluation of competing untested theories.
And it’s not just money, of course – it’s also a battle for the hearts and minds of the brightest students and young researchers.
Why not just call “String Theory” what it actually is: a scientific hypothesis — the “String Hypothesis.”
What drives physicists to need to call it a theory? Ego? (e.g., can’t win a Nobel with “just” a hypothesis)
Am I missing something?
“Theory” is often used to mean “hypothetico-deductive system” (or, in the usual concrete form, set of propositions closed under a consequence relation), which is what it means in logic and pure mathematics. Physicists often use it this way (e.g., theory of relativity) as well. String theory is a theory in that sense.
A somewhat different reason for excluding the “prediction” of the precession of the perihelion of Mercury by GR from the list of effective Popperian falsifications: Einstein and his collaborators (I forget their names, but he didn’t do GR alone) actually used the known precession of Mercury as one of their input data points when trying to work out what the theory should be. So, really, you shouldn’t use it as an example of a prediction of a theory that it was instrumental in constructing. You get one bite at the cherry, not two.
Yes, the Mercury anomaly was discovered in 1859, so it should more properly be considered a motivating factor, not a test, for GR. But there was a whole lot more than data-fitting here. It wasn’t as though GR had a free parameter that was tweaked to fit the Mercury data. GR elegantly united Newton with SR, the precession of Mercury popped out quantitatively, and it didn’t mess everything else up as other proposed solutions had turned out to do.
In fact, I think everyone (not just Einstein) loved GR so much on aesthetic grounds that the accuracy and objectivity of Eddington’s celebrated eclipse data have been challenged for confirmation bias.
The wiki is a pretty good read; it takes you through all the other ways GR’s funky predictions have been tested since.
About philosophy: the book “Facing Up”, by Steven Weinberg.
“I think few philosophers of science take it (discussing questions about scientific knowledge) as part of their job description to help scientists in their research. . . . why this should be? Why should the philosophy of science not be of more help to scientists? I raise this question here not in order to attack the philosophy of science, but because I think it is an interesting question – perhaps even …” / page 84 /
“. . . it’s not the job of physicists or other scientists to define truth; that is the job of philosophers. If they haven’t done that job, too bad …” / page 104 /
“My point is rather that no sense can be made of the notion of reality as it has ordinarily functioned in the philosophy of science.”
“Fortunately we need not allow philosophers to dictate how philosophical arguments are to be applied in the history of science, or in scientific research itself, . . . .”
“Certainly philosophers can do us a great service in their attempts to clarify what we mean by truth and reality.”
We know what “truth” and “reality” mean in our everyday life (for example, we have no trouble using these words in a supermarket). But can we explain “truth” and “reality” in science / physics at that simple, “supermarket” level? Einstein, Rutherford, Bohr and other physicists were sure it was possible.
“Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things.” / Isaac Newton /
“If you can’t explain it simply, you don’t understand it well enough.” / Albert Einstein /
“A theory that you can’t explain to a bartender is probably no damn good.” / Ernest Rutherford /
“It is often claimed that knowledge multiplies so rapidly that nobody can follow it. I believe this is incorrect. At least in science it is not true. The main purpose of science is simplicity, and as we understand more things, everything is becoming simpler. This, of course, goes contrary to what everyone accepts.” / Edward Teller /
It seems that philosophers haven’t done their job.
“As we see it, theoretical physics risks becoming a no-man’s-land between mathematics, physics and philosophy that does not truly meet the requirements of any.”
O Feynman, where art thou? Doth thy ghost haunt these proceedings?
Consider a circle (according to Venn) infinitely large. Or at least unbounded. Or at least with no known limits. Inverted, something like a worm-hole.
Draw some circles of limited dimension. Outside the large circle lies falsity. Some of the circles encroach upon the truth/is circle slightly, some more than slightly. In theory (testable?), even a circle, if one exists, that is wholly within the infinitely large one, representing all true theories of theoretical “knowledge,” does not span the infinitely large circle representing all possible knowledge, not to mention knowledge that is not within the compass of human understanding.
Look at the diagram in the third and fourth dimensions.
I’d like to mention the philosophy of Imre Lakatos, Popper’s student. He claimed that any theory can only really be falsified by a new, better theory. In this light the early Popperian fallibilism was called “naive”, while Lakatos’ fallibilism is called “sophisticated”. I would prefer to call the former “fallibilism-due-to-scope-of-tolerance” and the latter “fallibilism-in-conditions-of-competition”. Popperism, in a slightly improved version, is still the best way of distinguishing a reliable theory from an unreliable one. Anyway, science also needs metaphysics in the Popperian sense of the term: a set of axioms fundamental to any reasoning, even if they are unprovable. Importantly, they should be limited to the most needed, unavoidable set of rules. In “metaphysics” Ockham’s razor must be even sharper than in “scientific theory”. Is string theory metaphysical? I don’t know. Popper distinguished between a phrase that can never be falsified (due to its nature) and one that is merely temporarily unfalsifiable.