Why Evolution is True is a blog written by Jerry Coyne, centered on evolution and biology but also dealing with diverse topics like politics, culture, and cats.
We have two batches left and this is one of them (i.e., send in your photos). Today we have some photos of the Aurora Borealis (and a few other things) taken by Ephraim Heller:
I was fortunate to be in Alaska on the night of Nov. 11-12 when the aurora borealis put on the best show of the year, widely visible across North America. I photographed the event at Birch Lake, about 50 miles southeast of Fairbanks. I shot the aurora from 10pm – 3am as the temperature registered -16° F (-27° C) and I coughed with a cold. It was worth it.
“Aurora borealis” is Latin for “northern dawn.” Despite my wasted BA in Physics, I have never understood what causes the aurora. I decided to look it up and share what I learned with readers of WEIT. Real physicists and astronomers should feel free to correct my errors. Thanks to NASA and to Akari Photo Tours for the nice explanations on their websites, from which I have borrowed liberally.
Our sun continuously emits a stream of charged particles (the solar wind) which flows outward through the solar system. When this plasma reaches Earth, it interacts with our planet’s magnetic field (the magnetosphere), depositing and accumulating energy there. During a geomagnetic storm, much of the accumulated energy in the magnetosphere flows down along Earth’s magnetic field lines. As these accelerated particles descend into the atmosphere they collide with oxygen and nitrogen molecules at altitudes between 100 and 300 kilometers. These collisions excite the atmospheric gases, causing them to emit photons.
The detailed version: coronal mass ejections (CMEs) are the eruptions of plasma and magnetic field from the Sun’s atmosphere that drive the most intense geomagnetic storms. If directed at Earth, fast-moving CMEs can reach our planet in as little as 15 hours traveling at ~6.2 million mph (10 million kph).
The critical process triggering auroras is called magnetic reconnection. The solar wind flows around Earth’s magnetosphere like a river rushing around a rock. This onrush of charged particles stretches Earth’s magnetosphere away from the Sun, creating a long wake known as the magnetotail. The magnetic shields of the Sun and Earth are polarized. The polarity of Earth’s magnetic shield is mostly stable, but the Sun’s can vary rapidly. When the polarity of the solar wind is opposite that of Earth’s magnetosphere, the field lines of the Sun and Earth interact strongly. The solar wind then pushes these connected field lines around Earth’s magnetosphere. Eventually, these field lines reach their limit and snap back, releasing energy. This energy accelerates charged particles – primarily electrons and protons – along Earth’s magnetic field lines toward the polar regions, injecting millions of amps into the atmosphere.
Oxygen excited to different energy levels can produce green and red light. Green occurs roughly between 60 and 120 miles (100-200 km) in altitude, and red occurs above 120 miles (200 km). Excited nitrogen gas from about 60 to 120 miles (100-200 km) glows blue. Depending on the type and energy of the particle it interacts with, nitrogen can give off both pink and blue light; below about 60 miles (100 km), it gives the lower edge of the aurora a reddish-purple to pink glow.
Aurora brightness depends on (i) the intensity of solar activity and (ii) the efficiency of energy coupling into Earth’s magnetosphere. The standard measurement for the intensity is the Kp index, which ranges from 0 to 9, derived from 13 magnetometer stations globally that monitor Earth’s magnetic field disturbances. A Kp index of 0 represents quiet conditions; a Kp of 9 represents an extreme geomagnetic storm capable of producing auroras near the equator.
Unfortunately, the Kp index has a serious limitation: it reports a 3-hour global average, so it is often too slow and too generalized to capture short-lived auroral activity, such as auroral substorms (which typically last 10-30 minutes). These powerful bursts can produce the brightest northern-lights displays, yet they are frequently missed when one relies only on Kp forecasts.
Aurora intensity also depends critically on the interplanetary magnetic field’s Bz component. When the solar wind’s magnetic field points southward (negative Bz), conditions favor magnetic reconnection and enhanced energy coupling, producing increased aurora activity. When it points northward (positive Bz), reconnection is suppressed, resulting in reduced activity. Bz values below -10 nT are considered effective for driving geomagnetic storms, and values in the range of -20 to -30 nT usually indicate strong conditions for auroral activity.
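The Bz thresholds above can be summarized in a toy classifier. This is purely illustrative (the function name and categories are my own, and real forecasting uses far more than one number), but it encodes the values quoted in the text: northward (positive) Bz suppresses reconnection, roughly -10 nT starts to drive geomagnetic storms, and -20 nT or below indicates strong conditions.

```python
# Toy aurora-outlook classifier based on the Bz thresholds quoted above.
# Illustrative only; the category names are invented for this sketch.

def bz_outlook(bz_nt: float) -> str:
    """Map an interplanetary-magnetic-field Bz value (nanotesla) to a rough outlook."""
    if bz_nt >= 0:
        return "quiet"       # northward Bz: magnetic reconnection suppressed
    if bz_nt > -10:
        return "weak"        # southward, but below the storm-driving threshold
    if bz_nt > -20:
        return "active"      # effective for driving geomagnetic storms
    return "storm"           # -20 nT or below: strong auroral conditions

print(bz_outlook(5))     # quiet
print(bz_outlook(-55))   # storm
```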
Unfortunately, there is no way to predict the Bz. So even when the Kp is high there will be no aurora if the Bz is positive… and you just won’t know until you’re standing next to a lake outside of Fairbanks in -16° F weather with a respiratory infection.
So what happened on the night of November 11-12? Two coronal mass ejections reached Earth, the Kp spiked to 8.67 (almost at the maximum value of 9), and the Bz reached a remarkable nadir of -55 nT!
Finally, I have been asked why photographs of auroras often appear more colorful and detailed than what observers perceive with the unaided eye. This is largely an artifact of low-light aurora viewing conditions. Auroras typically produce illumination levels comparable to moonlight, near the threshold of human vision. As you all know, there are two types of photosensitive cells on your retina: cones and rods. Under well-lit conditions, your eyes use cones to process light, known as photopic vision. The cones are responsible for color vision but are not responsive to low light. In very dark environments, rod cells dominate what you see, known as scotopic vision. Colors are barely perceptible under scotopic vision, leading to almost black-and-white, desaturated vision. Your eyes adapt to a low-light environment by transitioning sensitivity from the cones to the rods, which takes 20–30 minutes to complete.
While the eye’s photoreceptors are sensitive, they cannot accumulate light over time in the manner that digital sensors can. A camera sensor employs multiple mechanisms to collect more light than the eye: high ISO sensitivity amplifies the signal, long exposures integrate light over several seconds, and the full spectral response captures wavelengths across the visible spectrum.
Consequently, a faint aurora appearing as a low-contrast feature to the eye reveals more defined coloration in a photograph. As aurora intensity increases, the eye’s color perception approaches what cameras record. Additionally, wide-angle lenses compress perspective differently than human peripheral vision, altering the apparent geometry and apparent motion of auroral structures.
For the photography geeks among you, all of these photos were taken with a Nikon Z8 camera and NIKKOR Z 20mm ƒ1.8 S lens at ISO 3200, ƒ1.8, and shutter speeds of 1.0 second (occasionally up to 2 seconds).
And since this is supposed to be a post of readers’ wildlife photos, here are two images that I made near Fairbanks:
I am in fact surprised that two Iranian philosophers (yes, from the Department of Philosophy of Science, Sharif University of Technology, Tehran, Islamic Republic of Iran) are even allowed to publish this paper, which refers to God, not Allah, and doesn’t mention the Qur’an. Well, that’s a good question, but not the question masticated in this paper in the journal Open Theology (click the title to read, or see the pdf here).
What we have is the usual kind of Sophisticated Theology™: a paper raising a question based on unsupported premises (there is a god that is kind, omnipotent and loving), and which then goes on to make up an answer about how certain baffling phenomena in the Universe can comport with such a god. Normally the topic of such inquiry is theodicy: why there is evil (especially “natural evil,” like childhood cancer or earthquakes) in a world made and run by such a god. This time, though, the topic is randomness. How, the sweating pair of theologians ask, can true randomness, untouched by God, exist in his Universe? More than that: how can true randomness, as part of the evolutionary process, unerringly wind up producing a species made in God’s image? As the authors ask, pretending to be puzzled:
. . . . from a theological perspective, the randomness and lack of purpose in the evolutionary process appear to conflict with God’s power, sovereignty, and wisdom.
Theologians cannot let this stand, for nothing can be allowed to conflict with God’s assumed wonderfulness and power. Nor do they assume that the randomness and lack of purpose in evolution comes from—could it be?—Satan. No, in the end it’s all part of God’s plan.
The authors first discuss two types of randomness. The first is merely epistemic: stuff that appears random to us but in reality could be understood, or even predicted, if we had perfect knowledge. Whether a coin comes up heads or tails (or edge!) is this type of randomness.
The other type, which the authors take it upon themselves to comport with God, is fundamental, unpredictable (“ontological”) randomness—chance inherent in a system that cannot be predicted, even with perfect knowledge. Quantum-mechanical “randomness”, or quantum probabilistic outcomes, are of this type. As the authors say:
In contrast, the real challenge for the relationship between God and the world lies in the existence of ontological or metaphysical randomness, which suggests that chance is an inherent aspect of the world’s structure and is inseparable from its dynamic nature. Ontological randomness cannot simply be viewed as a reflection of our inability to gain a certain understanding or a cognitive deficiency in comprehending the physical world. In other words, ontological randomness suggests a type of randomness inherent in the fundamental indeterminacy of the natural world. When every explanation of cosmic, macroscopic, and even biological phenomena relies on the principles of particle physics – which itself is characterized by intrinsic indeterminacy and stochastic events – it appears that we are confronted with ontological randomness.
. . . Ontological randomness. . . refers to events that cannot be predetermined in principle. Contrary to the views of proponents of ID, evolutionists argue that randomness is inherently non-purposeful. It is not merely a matter of attributing randomness to mutations due to our limited epistemic capacity to analyze the complex systems involved in the causal processes – similar to our inability to fully understand the causes of earthquakes or the movement of airborne particles. Rather, the fundamental indeterminacy of these processes means that no one can predict when they occur, much like our lack of access to the origins of nuclear emissions from Uranium-238.
Now the authors assume that evolution is driven by ontologically random mutations (“random” meaning, in the evolutionary sense, that the chance that a mutation will occur has nothing to do with whether it will increase or decrease the bearer’s reproduction). This itself may not be a good assumption, for, if we had perfect knowledge, we might be able to predict when and where a change in the DNA might take place. The role of quantum phenomena in mutation (if there is such a role) is still unknown.
But let’s be charitable and assume that yes, mutations in the evolutionary process are like movements of electrons: ontologically unpredictable. How could such a process not reflect decisions of God and yet wind up with his most desired of all “creations,” Homo sapiens?
Here’s the authors’ answer:
Our preferred reconciliation does not view the relationship between God and the natural world as a dualistic one. Any dualistic perspective ultimately leads to the problem of interaction and, consequently, the “God of the gaps” fallacy. Instead, we embrace the open theistic view, which holds that the world exists within God. Although the divine transcends the natural world, it is also immanent within it; thus, the evolutionary process occurring in the world unfolds as a manifestation of God’s self-expression and self-consciousness.
The world is progressing toward God’s self-consciousness through the evolutionary process, which has culminated in human beings who exist within the natural world, are part of nature, and possess awareness of both their surroundings and of God Himself. In this perspective, the process of evolution becomes a revelation of God’s nature. God reveals Himself in the universe by becoming increasingly self-conscious, and this self-consciousness fosters freedom; true freedom arises from autonomy rather than heteronomy, and autonomy is rooted in self-awareness. The divine is indeed the sovereign designer and intelligent architect of the world, but does not merely create from a position of supreme distance. As Carl Schmitt notes, “The sovereign, who in the deistic view of the world, even if conceived as residing outside the world, had remained the engineer of the great machine, has been radically pushed aside. The machine now runs by itself.“
If you detect a whiff of pantheism here, you’re right, and the authors admit it (bolding is mine):
According to our panentheistic and open-theistic view, God is the designer of the world, which serves as a revelation of the divine mind and nature. God does not reside outside the world; rather, the divine is immanent within it and transcendent of it. The world does not operate on autopilot. The randomness we observe in the world signifies divine sovereignty and omnipotence, granting the world the necessary freedom to reveal its nature, which simultaneously unveils the nature of God.
Of course that last bit is totally made up, for the authors have no way of knowing that this is true of God (remember, they can’t even show us that there’s a God). This disproves the idea that the “clash” of ideas instantiated by freedom of speech will eventually arrive at the truth. Theology is one disproof of that idea, for it and its understanding of gods haven’t advanced one iota despite many clashes of ideas.
But of course God being all-knowing, somehow must have realized that the randomness He himself created would produce, with the help of natural selection, a creature made in His own image. Isn’t that special?
These lucubrations are part of what is called “open theology,” in which God grants the world freedom. Not just physical freedom, but its result, real free will (which of course the authors see as ontologically unpredictable, though it isn’t). In their drive to make up a concept of God that comports with ontological randomness, they hit on an answer that isn’t new: God wanted a world with maximal freedom because such a world is the best of all possible worlds:
The traditional view of divine sovereignty is often characterized by the notion of God having full control over every event, leading to the idea of eternal predetermination. This dominant perspective in the history of Abrahamic religions posits that the existence of ontological randomness implies that the entire system is not under God’s control, allowing for procedures that operate without purpose under divine sovereignty. However, according to open theism, we should comprehend God’s sovereignty in harmony with divine mercy. Thus, divine sovereignty does not imply a paternalistic control over all things; rather, it embodies the granting of freedom. The truly powerful agent bestows life and freedom, enabling others to flourish instead of confining and controlling them. The Almighty is not merely an omni-controller or authority but a liberator, allowing all creatures to choose their own paths according to their inherent potential and encouraging them to reveal their capabilities. This process of world disclosure is itself a manifestation of God and contributes to divine self-consciousness.
So God’s at the wheel after all, and the freedom he bestowed on the world includes the freedom of children to die of cancer and of the tectonic plates to cause death-dealing earthquakes and tsunamis. (This kind of theodicy the authors don’t explain.)
I can’t bear to go on much longer as I watch the sweat-sodden authors make a virtue of necessity, but I’ll quote one more bit to show how they do this. As one sees so often in Sophisticated Theology™, they simply attribute their solution to another theologian, as if citing yet another shill somehow justifies their own “solution”:
As Bradley eloquently explains, power, when understood in the context of mercy and love, does not necessitate complete control; rather, it signifies the full endowment of freedom and life. The omnipotent is the one who most effectively enables creatures to experience life freely, filled with love and happiness. Certainly, God has a distinct plan, a desired program, and a unique teleology for creation; however, this teleology unfolds through its manifestation in nature, as the natural world evolves through its history.
. . . From this perspective, the randomness present in mutations reflects the freedom that God grants to all creatures. Through the evolutionary process, the world progresses toward an outcome of self-consciousness. Consequently, human beings emerge as the result of this evolutionary journey, possessing the capacity to understand their place in the world and, as part of the natural order, becoming aware of the world itself.
It always amazes me that theologians who can offer no convincing proof of a god’s existence are so sure about god’s nature and his methods. How do they know this stuff? The answer is that they don’t: they are either making stuff up or stealing ideas from their predecessors.
You may have noted that yes, there is teleology here. There is surely not complete freedom, as a rerun of evolution, if quantum mechanics has any effect on mutations, would not necessarily produce either consciousness or humans. And yes, the randomness isn’t true ontological randomness because it is biased towards getting what God wants (my bolding):
. . . . if we see God as immanent in the world and, so, in a panentheistic view according to which God is transcendent of the world but is not separated from nature, then we can explain why nature is biased toward the marvelous. The reason is that nature is manifesting God’s marvelous beauty.
To that, all I can say is “oy vey!”
In the end, then, the authors have produced nothing new. They’ve espoused pantheism, in which the God-who-is-in-everything has set up the world so it produces “the marvelous”, i.e. H. sapiens. This is not novel, and it’s not even ontological randomness. It is hooey. And two Iranian philosophers of science have gotten paid to produce it. The only question is not why they go on about this stuff at such length, but how the journal Open Theology was willing to publish a paper with such a mundane answer. Do they apply no critical standards? The answer is in the second word of the journal’s title.
The biggest question, though, is how I can be on an Arctic trip and have time to go after such bushwah. The answer to that one is that today is a sea day, and I don’t have a book to read or wish to watch television.
Here’s an amazing video sent to me by reader Bryan Lepore. I didn’t quite understand what it showed, and he explained:
I think it is simply this:
1. Create a soap bubble from a soap solution that is sitting in a speaker/woofer.
2. Shine a light on the bubble. Here, you can see a ring of dots—that is simply a strip of LEDs in a ring. I have a light strip like this, and it produces unexpected results compared to an incandescent light.
3. Activate the speaker with different frequencies. This vibrates the bubble and the reflected image of the LED light strip.
Today we have a guest post from reader Coel Hellier, who does this kind of stuff for a living. His text deals with the recent kerfuffle about whether a nearby planet shows an atmospheric gas indicative of life. I particularly like the details about how scientists go about analyzing a question like this. His text is indented, and he’s added the illustrations.
Is the dimethyl sulphide in the atmosphere of exoplanet K2-18b real?
Everyone is interested in whether there is life on other planets. Thus the recent claim of a detection of a biomarker molecule in the atmosphere of an exoplanet has attracted both widespread attention and some skepticism from other scientists.
The claim is that planet K2-18b, 124 light years from Earth, shows evidence of dimethyl sulfide (DMS), a molecule that on Earth arises from biological activity. Below is an account of the claim; I try to include more science than the mainstream media does, but do so largely with pictures in the hope that the non-expert can follow the argument.
Transiting exoplanets such as K2-18b are discovered owing to the periodic dips they cause in the light of the host star:
And here is the lightcurve of K2-18b, as observed by the James Webb Space Telescope, showing the transit that led to the claim of DMS by Madhusudhan et al.:
If we know the size of the star (deduced from knowing the type of star from its spectrum), the fraction of light that is blocked then tells you the size of the planet.
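The arithmetic here is simple: the transit depth is the fraction of the star's disk covered by the planet, so depth = (Rp/Rs)², and hence Rp = Rs·√depth. A minimal sketch, using illustrative numbers rather than the measured values for K2-18:

```python
import math

# Transit depth = (Rp / Rs)**2, so Rp = Rs * sqrt(depth).
# The input values below are illustrative, not the K2-18b measurements.

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def planet_radius_earths(depth: float, star_radius_suns: float) -> float:
    """Planet radius in Earth radii, from transit depth and stellar radius."""
    rs_km = star_radius_suns * R_SUN_KM
    rp_km = rs_km * math.sqrt(depth)
    return rp_km / R_EARTH_KM

# e.g. a 0.3% dip in the light of a small red dwarf (0.45 solar radii):
print(round(planet_radius_earths(0.003, 0.45), 1))   # ~2.7 Earth radii
```

This is why the stellar radius matters so much: the same dip in front of a Sun-sized star would imply a planet more than twice as large.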
But we also need to know its mass. One gets that from measuring how much the host star is tugged around by the planet’s gravity, and that is obtained from the Doppler shift of the star’s light.
The black wiggly line in the plot below is the periodic motion of the star caused by the orbiting planet. Quantifying this is made harder by lots of additional variation in the measurements (blue points with error bars), which is the result of magnetic activity on the star (“star spots”). But nevertheless, if one phases all the data on the planet’s orbital period (lower panel), then one can measure the planet’s mass (plot by Ryan Cloutier et al):
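For a circular orbit with the planet much lighter than the star, the standard relation between the measured radial-velocity semi-amplitude K, the orbital period P, and the stellar mass Ms gives the planet's minimum mass: Mp·sin(i) = K·(P·Ms²/2πG)^(1/3). A sketch with illustrative inputs (not the actual K2-18 fit values):

```python
import math

# Minimum planet mass from the radial-velocity wobble, assuming a circular
# orbit and Mp << Ms:  Mp * sin(i) = K * (P * Ms**2 / (2 * pi * G)) ** (1/3)
# Input values below are illustrative, not the published K2-18 measurements.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86_400         # s

def min_planet_mass_earths(k_ms: float, period_days: float,
                           star_mass_suns: float) -> float:
    """Minimum planet mass (Earth masses) from RV semi-amplitude K (m/s)."""
    p = period_days * DAY
    ms = star_mass_suns * M_SUN
    mp = k_ms * (p * ms**2 / (2 * math.pi * G)) ** (1 / 3)
    return mp / M_EARTH

# e.g. a 3.5 m/s wobble with a 33-day period around a 0.5-solar-mass star:
print(round(min_planet_mass_earths(3.5, 33, 0.5), 1))   # about 11 Earth masses
```

Note that the Doppler method alone yields only Mp·sin(i); for a transiting planet like K2-18b the orbit must be nearly edge-on, so sin(i) ≈ 1 and this is essentially the true mass.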
So now we have the mass and the size of the planet (and we also know its surface temperature since we know how far it is from its star, and thus how much heating it gets). Combining that with some understanding of proto-planetary disks and planet formation, we can thus derive models of the internal composition and structure of the planet.
The problem is that multiple different internal structures can add up to the same overall mass and radius. One has flexibility to invoke a heavy core (iron, nickel), a rocky mantle (silicates), perhaps a layer of ice (methane?), perhaps a liquid ocean (water?), and also an atmosphere.
This “degeneracy” is why Nikku Madhusudhan can argue that K2-18b is a “hycean” planet (hydrogen atmosphere over a liquid-water ocean) while others argue that it is instead a mini-Neptune, or that it has an ocean of molten magma.
But one can hope to get more information from the detection of molecules in the planet’s atmosphere, a task that is one of the main design goals of the James Webb Space Telescope [JWST]. The basic idea is straightforward: During transit, some of the starlight will shine through the thin smear of atmosphere surrounding the planet, and the different molecules absorb different wavelengths of light in a pattern characteristic of that molecule (figure by ESA):
So one observes the star both during the transit and out of transit, and then subtracts the two, and the result is a spectrum of the planet’s atmosphere.
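A toy version of that subtraction, with made-up numbers, shows the principle: divide the in-transit spectrum by the out-of-transit spectrum wavelength by wavelength, and the transit appears slightly deeper at wavelengths where the planet's atmosphere absorbs.

```python
# Toy transit spectroscopy: compare in- and out-of-transit flux per
# wavelength bin. All numbers are invented for illustration; real data
# have noise and instrumental systematics comparable to the signal.

out_of_transit = [100.0, 100.0, 100.0, 100.0, 100.0]   # stellar flux
in_transit     = [ 97.1,  97.1,  96.9,  97.1,  97.1]   # deeper dip in bin 2

transit_depth = [1 - i / o for i, o in zip(in_transit, out_of_transit)]
# Depth is ~2.9% everywhere except the absorbing wavelength, where it is ~3.1%.
for d in transit_depth:
    print(f"{d:.3%}")
```

The entire "spectrum of the atmosphere" is that 0.2%-of-a-3% difference between bins, which is why the data-processing pipeline matters so much.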
If the planet is a large gas giant with a fluffy, extended atmosphere and is orbiting a bright star (so that a lot of photons pass through the atmosphere), the results can be readily convincing. For example, here is a spectrum of exoplanet WASP-39b with features from different molecules labelled (figure by Tonmoy Deka et al):
[I include a plot of WASP-39b partly because I was part of the discovery team for the Wide Angle Search for Planets survey, but also because it is pretty amazing that we can now obtain a spectrum like that of the atmosphere of an exoplanet that is 700 light-years away, even while the planet itself is so small and dim and distant that we cannot even see it.]
The problem with K2-18b is that the star is vastly fainter and the planet much smaller than WASP-39b. This is at the limit of what even the $10-billion JWST can do.
When you’re subtracting two very-similar spectra (the in- and out-of-transit spectra) to look for a rather small signal, any “instrumental systematics” matter a lot. Here is the same spectrum of K2-18b, as processed by several different “data reduction pipelines”, and as you can see the differences between them (effectively, the limits of how well we understand the data processing) are similar in size to the signal (plot by Rafael Luque et al):
The next problem is that there are a lot of different molecules that one could potentially invoke (with the constraint of making the atmospheric chemistry self-consistent). For example, here are the expected spectral features from eight different possible molecules (figure by Madhusudhan):
To finally get to the point, I show the crucial figure below. Nikku Madhusudhan and colleagues argue — based on an understanding of planet formation, and on arguments that planets like K2-18b are hycean worlds [with a liquid water ocean under a hydrogen-rich atmosphere], and from considerations of atmospheric chemistry, in addition to careful processing and modelling of the spectrum itself — that the JWST spectrum of K2-18b is best interpreted as follows (the blue line is the model, the red error bars are the data):
This interpretation involves large contributions from DMS (dimethyl sulphide) and also DMDS (dimethyl disulphide) — the plot below shows the different contributions separated — and if so that would be notable, since on Earth those compounds are products of biological activity—mainly from algae.
In contrast, Jake Taylor analysed the same spectrum and argues that he can fit it adequately with a straight line, and that the spectral features are not statistically significant. Others point out that the fitted model contains roughly as many free parameters as data points. Meanwhile, a team led by Rafael Luque reports that they can fit the spectrum without invoking DMS or DMDS, and suggest that observations of another 25 transits of K2-18b would be needed to properly settle the matter.
There are several distinct questions here: Are the details of the data processing sufficiently understood? (perhaps, but not certainly); are the relevant spectral features statistically significant? (that’s borderline); and, if the features are indeed real, are they properly interpreted as DMS? (theorists can usually think of alternative possibilities). Perhaps a fourth question is whether there are abiotic mechanisms for producing DMS.
This is science at the cutting edge (and Madhusudhan has been among those emphasizing the lack of certainty, though the doubts have not always been in news stories), and so the only real answer to these questions is that things are currently unclear. This is a fast-moving area of astrophysics and we’ll know a lot more in a few years.
Although I’ve read quite a few books on quantum mechanics—popular books, not books intended for physicists—I still don’t understand it. That is, I can understand the history, the controversies and some of the phenomena, as well as the various interpretations of quantum mechanics. But when it comes to stuff like entanglement, I’m baffled—not just by its existence, but what it really means physically and how it could be possible.
Sean Carroll (the physicist) has just published a paper in Nature that is about as clear an explanation of the weirdness of quantum mechanics as I can imagine. I still don’t understand entanglement, but Carroll does point out why people like me have difficulty grasping some of the concepts and predictions.
Since, as Carroll notes, Heisenberg “first put forward a comprehensive version of quantum mechanics” in 1925, it is in one sense the 100th anniversary of quantum theory:
Click below to read for free:
I’ll give a few quotes under headings that I’ve made up:
Why quantum mechanics is qualitatively different from classical mechanics.
The failure of the classical paradigm can be traced to a single, provocative concept: measurement. The importance of the idea and practice of measurement has been acknowledged by working scientists as long as there have been working scientists. But in pre-quantum theories, the basic concept was taken for granted. Whatever physically real quantities a theory postulated were assumed to have some specific values in any particular situation. If you wanted to, you could go and measure them. If you were a sloppy experimentalist, you might have significant measurement errors, or disturb the system while measuring it, but these weren’t ineluctable features of physics itself. By trying harder, you could measure things as delicately and precisely as you wished, at least as far as the laws of physics were concerned.
Quantum mechanics tells a very different story. Whereas in classical physics, a particle such as an electron has a real, objective position and momentum at any given moment, in quantum mechanics, those quantities don’t, in general, ‘exist’ in any objective way before that measurement. Position and momentum are things that can be observed, but they are not pre-existing facts. That is quite a distinction. The most vivid implication of this situation is Heisenberg’s uncertainty principle, introduced in 1927, which says that there is no state an electron can be in for which we can perfectly predict both its position and its momentum ahead of time.
On entanglement.
The appearance of indeterminism is often depicted as their [people like Einstein and Schrödinger’s] major objection to quantum theory — “God doesn’t play dice with the Universe”, in Einstein’s memorable phrase. But the real worries ran deeper. Einstein in particular cared about locality, the idea that the world consists of things existing at specific locations in space-time, interacting directly with nearby things. He was also concerned about realism, the idea that the concepts in physics map onto truly existing features of the world, rather than being mere calculational conveniences.
Einstein’s sharpest critique appeared in the famous EPR paper of 1935 — named after him and his co-authors Boris Podolsky and Nathan Rosen — with the title ‘can quantum-mechanical description of physical reality be considered complete?’. The authors answered this question in the negative, on the basis of a crucial quantum phenomenon they highlighted that became known as entanglement.
If we have a single particle, the wavefunction assigns a number to every possible position it might have. According to Born’s rule, the probability of observing that position is the square of the number. But if we have two particles, we don’t have two wavefunctions; quantum mechanics gives a single number to every possible simultaneous configuration of the two-particle system. As we consider larger and larger systems, they continue to be described by a single wavefunction, all the way up to the wavefunction of the entire Universe.
As a result, the probability of observing one particle to be somewhere can depend on where we observe another particle to be, and this remains true no matter how far apart they are. The EPR analysis shows that we could have one particle here on Earth and another on a planet light years away, and our prediction for what we would measure about the faraway particle could be ‘immediately’ affected by what we measure about the nearby particle.
The scare quotes serve to remind us that, according to the special theory of relativity, even the concept of ‘at the same time’ isn’t well defined for points far apart in space, as Einstein knew better than anyone. Entanglement seems to go against the precepts of special relativity by implying that information travels faster than light — how else can the distant particle ‘know’ that we have just performed a measurement?
Yes, I know that this cannot be understood in terms of everyday observation, but what I fail to understand—and perhaps some reader can explain this to me—is exactly what properties of a particle can be affected by ascertaining properties of another particle light years away.
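As far as the bare formalism goes, what changes when we measure the nearby particle is the conditional probability we assign to the distant one. Here is a toy Python sketch of the Born-rule bookkeeping for an entangled pair (my own illustration, not something from Carroll’s article; the state and labels are invented for the example):

```python
from math import sqrt

# One amplitude per JOINT configuration of the two particles --
# there is a single wavefunction for the pair, not one per particle.
amplitudes = {
    ("up", "up"): 1 / sqrt(2),
    ("down", "down"): 1 / sqrt(2),
    ("up", "down"): 0.0,
    ("down", "up"): 0.0,
}

# Born's rule: probability of a joint outcome = squared amplitude.
probs = {config: amp ** 2 for config, amp in amplitudes.items()}

# Before any measurement, the faraway particle B is 50/50:
p_b_up = sum(p for (a, b), p in probs.items() if b == "up")

# Conditional on measuring particle A as "up", B is certain to be "up":
p_a_up = sum(p for (a, b), p in probs.items() if a == "up")
p_b_up_given_a_up = probs[("up", "up")] / p_a_up

print(round(p_b_up, 3), round(p_b_up_given_a_up, 3))  # 0.5 1.0
```

The marginal probability for B stays 50/50 until A is measured; conditioning on A’s result makes B’s outcome certain. Whether that conditioning reflects a physical change in the distant particle or mere updating of our knowledge is exactly what the interpretations dispute.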
I’ll leave you to read the various interpretations of quantum theory, the most trenchant dispute involving whether it actually represents physical reality or is merely a theory meant to explain experimental results. I’m not sure where Carroll fits on this spectrum, though he does describe another interpretation, the “Everettian or many-worlds interpretation.” I thought that Carroll used to favor this explanation, which is of course deeply, deeply weird, as it creates a new but unobservable universe each time an observer measures something. His summary of the state of the field is this:
So, physicists don’t agree on what precisely a measurement is, whether wavefunctions represent physical reality, whether there are physical variables in addition to the wavefunction or whether the wavefunction always obeys the Schrödinger equation. Despite all this, modern quantum mechanics has given us some of the most precisely tested predictions in all of science, with agreement between theory and experiment stretching to many decimal places.
The big remaining problem. If you read even a bit about quantum physics, you’ll know this:
Then, there is the largest problem of all: the difficulty of constructing a fundamental quantum theory of gravity and curved space-time. Most researchers in the field imagine that quantum mechanics itself does not need any modification; we simply need to work out how to fit curved space-time into the story in a consistent way. But we seem to be far away from this goal.
What good is quantum mechanics? But of course quantum mechanics, even if not comprehensible by the standards of everyday experience, has been immensely useful, for we’ve long known that its predictions match observations about as closely as any theory can. Here are the benefits:
Meanwhile, the myriad manifestations of quantum theory continue to find application in an increasing number of relatively down-to-Earth technologies. Quantum chemistry is opening avenues in the design of advanced pharmaceuticals, exotic materials and energy storage. Quantum metrology and sensing are enabling measurements of physical quantities with unprecedented precision, up to and including the detection of the tiny rocking of a pendulum caused by a passing gravitational wave generated by black holes one billion light-years away. And of course, quantum computers hold out the promise of performing certain calculations at speeds that would be impossible if the world ran by classical principles.
And don’t ask me what “quantum chemistry” is, as I know it not.
These are just small excerpts. Go read about the theory in its centenary year.
I’ve mentioned before Robert Sapolsky’s recent book Determined: A Science of Life Without Free Will, a 528-page behemoth that at times is a bit of a slog and at other times an inspiration. (See here, here, here, and here for previous posts about it.) I found his argument against libertarian free will convincing, but of course I already believed that there is no good argument for libertarian (“you-could-have-done-otherwise”) free will (LFW), so I was on his side from the outset. I’m a hard determinist, and that’s based on seeing that the laws of physics obtain everywhere. But people are still maintaining not just that we can confect some form of free will despite the truth of determinism (these people are called “compatibilists”), but that we have real libertarian free will. They are wrong.
The video below, arguing for LFW, came in an email from Quillette touting their most popular articles of 2024. But this was a short (4.5-minute) video, not an article, and I don’t think the video was one of the top items. Perhaps the note referred to a Quillette article by Stuart Doyle (below) on which the video is based, but that article was published in 2023.
At any rate, listen to the video first, and then, if you want to see what I consider an unconvincing argument for free will (though it does make some fair criticisms of Determined), click on the headline below to read Doyle’s argument that we have “not disproven free will.”
The narrator of the video isn’t named, but she pretty much parrots what’s in Doyle’s essay, emphasizing an argument for free will that Doyle considers dispositive, but to me seems irrelevant.
You may notice some problems with the “rebuttal” described in the video. For example, it seems irrelevant to argue that “just because a neuron doesn’t have free will doesn’t mean that the bearer of a collection of neurons (a person) doesn’t have free will.” This is the argument that LFW can still appear as an emergent property even if neurons themselves behave according to physical law (an argument Sapolsky deals with at length in his book). Also, if quantum physics is truly and fundamentally unpredictable (and we don’t know this for sure), that itself, says the narrator, poses a problem for determinism, because it means that, at any given moment, a quantum event may change your behavior.
There are two problems with the quantum-indeterminacy argument. First, nobody ever maintained that quantum events like the movement of an electron can result from one’s volition (“will”), so unpredictability at a given moment does not prove volition. Further, we don’t even know (and many of us doubt) that a quantum event can change human behavior or decisions on a macro level. Some people have calculated that it can’t. So the whole issue of quantum unpredictability is irrelevant to the main problem: whether, at a given moment, you can, through your own agency, have behaved or decided differently.
This brings up the problem of predictability. The narrator’s (and Doyle’s) argument is that if you cannot predict someone’s behavior or decision, even with perfect knowledge of everything, then we have free will. As I just said, quantum physics may produce such fundamental unpredictability, but that doesn’t support the notion that we have LFW. Yet the video and Doyle suggest there is another source of fundamental unpredictability that persists despite perfect physical knowledge: computational undecidability. Both the narrator and Doyle accuse Sapolsky of complete ignorance of this concept, which, they say, constitutes “a major flaw in Sapolsky’s argument.” The narrator says that if human behavior is fundamentally unpredictable, then that supports the idea that free will exists. The premise of this criticism is, of course, that if you can’t predict human behavior and decisions, even with perfect physical knowledge, then you can’t say that we lack free will. But these predictability-based arguments are flimsy arguments against determinism, and, in fact, we’ll never have the perfect knowledge we’d need to predict behavior anyway.
The problem is that quantum mechanics can in principle wreck perfect predictability of behavior, but that possibility doesn’t support free will. So does “computational undecidability”, another thing that impedes prediction, leave room for free will? I don’t think so (see below).
The essay by Stuart Doyle on which this video is based can be accessed by clicking the link below, or you can find it (archived here). Doyle is a graduate student in psychology at the University of Kansas.
Let me start by saying that Doyle’s essay, while it makes its points clearly and strongly, seems almost mean, as if Doyle takes great joy in telling us how stupid Sapolsky is. And this is coming from someone (me) who’s been accused of the same thing. (I plead not guilty, at least for my published work.) But for a scholar publishing a rebuttal on a major site, it seems to me uncharitable to say stuff like this:
Sapolsky’s conclusions about morality and politics stand on nothing beyond his personal tastes. His book was marketed with such authoritative headlines as “Stanford scientist, after decades of study, concludes: We don’t have free will.” In contrast to the hype, Determined is ultimately a collection of partial arguments, conjoined incoherently. And Robert Sapolsky is to blame.
Sapolsky is to blame? Well, yes, of course he is: he’s the author. But blaming someone for writing a book you don’t like, and accusing him of incoherence (I disagree), is not civil discourse. Let’s move on.
The observation that every object in the universe obeys physical law does directly imply that there is no amorphous “will” that can affect the laws of physics, something that physicist Sean Carroll (a compatibilist) has emphasized. To me, this puts the onus on those who accept LFW to tell us what aspect of human volition is independent of the laws of physics. What form of nonphysical magic can change the output of our neurons? So far, nobody has done this. Thus, to a large extent, I think, one can tentatively accept determinism simply from knowing that every physical object obeys well-known laws and, as Carroll has written, “The laws underlying the physics of everyday life are completely understood.” Carroll:
All we need to account for everything we see in our everyday lives are a handful of particles — electrons, protons, and neutrons — interacting via a few forces — the nuclear forces, gravity, and electromagnetism — subject to the basic rules of quantum mechanics and general relativity. You can substitute up and down quarks for protons and neutrons if you like, but most of us don’t notice the substructure of nucleons on a daily basis. That’s a remarkably short list of ingredients, to account for all the marvelous diversity of things we see in the world.
So yes, Carroll is a determinist in a way that refutes libertarian free will, but in the link saying he’s a compatibilist, you’ll see that he says we have a sort of free will instantiated in the emergent properties of humans acting as agents and expressing preferences. (Of course our tastes and preferences are also formed in our brains by the laws of physics.) Well, there is no real emergence that defies the laws of physics: emergent behavior may not be predictable from lower-level phenomena, but it is consistent with and derives from them. Saying, as Doyle does, that “The ‘mechanism’ that produces deliberative choices is the whole person” is to say nothing that refutes determinism.
As I reread Doyle’s paper, I realized that although he does point out some contradictions in Sapolsky’s arguments, Doyle does nothing to dispel determinism. The central contention of his essay is that there is another way, beyond quantum mechanics, that physical objects can behave unpredictably, and that way is computational undecidability. But that supports LFW no more than does the unpredictability of quantum mechanics.
Here’s what Doyle says:
So what could give us the ability to surprise Laplace’s demon? Computational undecidability. This is a term describing a system that cannot be predicted, given complete knowledge of its present state. This fundamental unpredictability shows up in algorithmic computation, formal mathematical systems, and dynamical systems. Though an unpredictable dynamical system may evoke the concept of chaos, undecidability is a different sort of unpredictability. As described by one of the greatest living information theorists, C.H. Bennett:
For a dynamical system to be chaotic means that it exponentially amplifies ignorance of its initial condition; for it to be undecidable means that essential aspects of its long-term behavior—such as whether a trajectory ever enters a certain region—though determined, are unpredictable even from total knowledge of the initial condition.
If a system exhibits undecidability, then it is unpredictable even to Laplace’s demon, while a system that is merely chaotic is perfectly predictable to the demon. Chaos is only unpredictable because the initial conditions are not perfectly known. So it would be fair to dismiss that kind of unpredictability as mere ignorance—an epistemological issue, not an ontological reality. But the delineation between the epistemic and the ontic falls apart when we talk about what Laplace’s demon can’t know. An issue is “merely” epistemological when there is a fact of the matter, but the fact is unknowable. There actually is no fact about how an undecidable system will behave until it behaves. For a fact to exist, it must be in reference to some aspect of reality. But nothing about present reality could ground a fact about the future behavior of an undecidable system. In contrast, the exact actual state of present reality grounds facts about the future of chaotic systems. We just can’t know the exact actual state of present reality, thus unpredictability is “merely” epistemological in the case of chaos, but not in the case of undecidability.
Arguably, human behavior is undecidable, not just chaotic. And that would mean that human choice is free in exactly the way we’d want it to be; determined—by our own whole selves, with no fact of the matter of what we’ll choose before we choose it. But Sapolsky seems unaware of undecidability as a concept. He mislabels cellular automata as chaotic, rather than recognizing the truth that they exhibit undecidability. This is a major factual error on Sapolsky’s part.
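The chaotic half of Bennett’s contrast is easy to see numerically. Below is a minimal Python sketch (my illustration, not from Doyle or Bennett) using the logistic map, a standard chaotic system: a tiny error in the initial condition is amplified exponentially, so the unpredictability really is just ignorance of x0, and Laplace’s demon, knowing x0 exactly, predicts perfectly.

```python
def logistic(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1 - x), 'steps' times."""
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

# The demon knows x0 exactly; we are off by one part in 10^12.
x0, err = 0.3, 1e-12
gaps = [abs(logistic(x0, n) - logistic(x0 + err, n)) for n in (0, 10, 20, 40)]
print(gaps)  # the gap between the two trajectories grows exponentially

# That is chaos: unpredictability born of imperfect knowledge of x0.
# Undecidability is different: even with x0 known EXACTLY, no algorithm
# can answer certain long-run questions (e.g., "does the trajectory ever
# enter region R?") for every possible starting point.
```
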
First of all, from what I’ve read of computational undecidability, it is a phenomenon not of physical objects, but of philosophy combined with mathematical concepts and models. As Wikipedia says (and yes, I’ve read more than that article):
There are two distinct senses of the word “undecidable” in contemporary use. The first of these is the sense used in relation to Gödel’s theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either “the answer to A is yes” or “the answer to A is no”.
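The canonical undecidable problem in the computability sense is the halting problem, and the standard diagonal argument can be sketched in a few lines of Python (textbook material, not something from Doyle’s essay): assuming a perfect halting decider exists leads straight to contradiction.

```python
def paradox(f, halts):
    """Do the opposite of whatever the claimed decider predicts for f(f)."""
    if halts(f, f):      # decider says f(f) halts...
        while True:      # ...so loop forever
            pass
    return "halted"      # decider says f(f) loops, so halt at once

# Suppose claimed_halts(f, x) correctly decided halting for EVERY f and x.
# Ask it about paradox(paradox):
#   - if it answers True,  paradox(paradox) loops forever -> it was wrong
#   - if it answers False, paradox(paradox) halts         -> it was wrong
# Either way it errs, so no total, always-correct decider can exist.

def claimed_halts(f, x):   # a stand-in decider, necessarily fallible
    return False

result = paradox(paradox, claimed_halts)
print(result)  # "halted" -- claimed_halts predicted "loops", and was wrong
```
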
Two points here. First, Doyle gives not one example of a biological system in which “computational undecidability” would obtain. If there were one, why didn’t he mention it? It seems to me solely a mathematical/logical concept, and my (admittedly cursory) readings have turned up nothing in biology or physics that seems “computationally undecidable”, much less in a way that would give us free will.
Second, even if there is a fundamental and non-quantum form of unpredictability in physics and biology, that doesn’t open up the possibility of free will. That would depend on whether our “will” could, in some non-physical way, affect the behavior of molecules. If it cannot happen with quantum mechanics, then how can it happen with computational undecidability? Unless Doyle tells us how this mathematical/logical idea can somehow affect our behavior according to our “will”, he has no argument against determinism and thus has no argument for free will.
Now it’s true that belief in “physical determinism” (folding into that term quantum and other unpredictable effects not affected by our volition) is largely a conclusion from observing nature. And although we cannot absolutely prove determinism of behavior from science, we can still raise determinism’s priors with various experiments. These include recent studies showing that you can predict, using brain scanning, binary decisions that people make before they are conscious of having made them. For example, if people are given a choice of adding or subtracting two numbers, scanning their brains shows that you can, with substantial probability (60-70%), predict whether they’ll add or subtract up to ten seconds before they are conscious of having made a choice. And this is with crude methods of measuring brain activity (e.g., fMRI). Perhaps by measuring individual neurons or groups of neurons we could predict even better. But the experiments so far imply that decisions are made before people are conscious of them, and that raises the Bayesian priors that people’s behaviors are determined by physics, not by their “will”.
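The “raising the priors” reasoning can be made explicit with a toy Bayes calculation (the numbers below are my illustrative assumptions, not figures from any study):

```python
# Start agnostic about "decisions are fixed before conscious awareness".
prior = 0.5

# Assume, for illustration, that the brain-scanning results (above-chance
# prediction seconds before the conscious choice) are three times as
# likely if decisions are determined pre-consciously than if they are not.
likelihood_ratio = 3.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)  # 0.75
```

The evidence doesn’t prove determinism, but it shifts the probability in its favor, from 0.5 to 0.75 under these made-up numbers, and each further experiment shifts it again.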
And there are various other experiments showing that you can either increase or decrease people’s sense of volition. Electrical stimulation of the brain can make people think that they made a decision when in fact it’s purely the result of stimulating certain neurons. This causes people to make up stories about why they did things like raise their hand when a part of their brain is stimulated (“I decided to wave at that nurse”). But that sense of volition is bogus. This kind of post facto confabulation, which occurs very soon after you decide or do something, is what makes us think that we have LFW. Further, there may be evolutionary reasons why we think we have libertarian free will, but I won’t get into those. Suffice it to say that I think our feeling of having LFW is merely a very powerful illusion—an illusion that may have been installed in our brains by natural selection.
On the other hand, you can make people think that they didn’t have volition when in fact they did. A Ouija board is one example: people unconsciously move the “cursor” around to make words while thinking that it’s moving independently of their will. There are other experiments like these, all showing that you can either strengthen or weaken people’s sense of volition using various psychological tricks. And they all go to refute the idea of libertarian free will.
So yes, I think Sapolsky is right. His determinism agrees with nearly all the scientists (including compatibilists) who think that the notion of libertarian free will is bogus. To think otherwise is to believe that there is some kind of non-physical mental magic that can change the laws of physics.
One final point. Arguments about free will are not just philosophical wheel-spinning, for they play directly into an important part of society: reward and punishment—especially punishment. If the legal system truly embraced determinism of behavior, we could still have punishment, but it would be very different. We would punish to keep bad people off the streets, to give people a chance for rehabilitation (if they can be rehabilitated), and to deter others. But what we would not have is retributive punishment: punishment for having made the wrong choice.
Legal systems are grounded on the notion that we are morally responsible, but under determinism we’re not. Yes, we can be responsible for an act, but “moral” responsibility is intimately connected with libertarian free will; it’s the idea that we have the ability, at any given time, to act either morally or immorally (or to make any other alternative decision, even one that doesn’t involve morality). Yes, I know there are some who think that the justice system already implicitly accepts determinism, but they are wrong. For if it did, we wouldn’t have any form of retributive punishment, including capital punishment.
As for rewarding good behavior, well, yes, you couldn’t have done otherwise than, say, saved a drowning person. But rewarding people who do good is a spur for other people to do good. Even if the rewarded people don’t “deserve” plaudits in the sense that their accomplishments didn’t come from LFW, handing out rewards for things that society approves of is simply a good thing to do—for society.
Oh, a p.s. Because people feel so strongly that they do have libertarian free will, I have faced more opposition when touting determinism than when touting the truth of evolution. As I always say, “It’s much harder to convince a free-willer of the truth of determinism than to convince a creationist of the truth of evolution.” People feel so strongly that they have LFW that I have suffered two unpleasant consequences for touting determinism. I’ve told these stories before, but a big jazz musician nearly attacked me for implying that his solos were not truly extemporaneous, and that he could not have played a different solo, and on another occasion an old friend kicked me out of his house because he couldn’t abide the notion of determinism. No creationist has ever treated me in those ways!
Yes, I deliberately misspelled “trifecta” in the heading (actually, it was a typo, but turned out to be appropriate). First is a short video from The Dodo about a woman who tamed a very wary feral cat—with her hair! It took three years.
The mystery is why Spooky liked the smell of the woman’s hair but was wary of the smell of her body.
*******************
And an article from IFL Science (you know what the “IFL” stands for) discussing an experiment by seven Chinese researchers who created a superposition state lasting for 23.3 minutes. You probably remember that the Schrödinger’s Cat thought experiment involved an implication of the “Copenhagen interpretation” of quantum mechanics. Here’s a description of the experiment by Wikipedia and a drawing of it:
In Schrödinger’s original formulation, a cat, a flask of poison, and a radioactive source are placed in a sealed box. If an internal radiation monitor (e.g. a Geiger counter) detects radioactivity (i.e. a single atom decaying), the flask is shattered, releasing the poison, which kills the cat. The Copenhagen interpretation implies that, after a while, the cat is simultaneously alive and dead. Yet, when one looks in the box, one sees the cat either alive or dead, not both alive and dead. This poses the question of when exactly quantum superposition ends and reality resolves into one possibility or the other.
Although originally a critique on the Copenhagen interpretation, Schrödinger’s seemingly paradoxical thought experiment became part of the foundation of quantum mechanics. The scenario is often featured in theoretical discussions of the interpretations of quantum mechanics, particularly in situations involving the measurement problem. As a result, Schrödinger’s cat has had enduring appeal in popular culture. The experiment is not intended to be actually performed on a cat, but rather as an easily understandable illustration of the behavior of atoms. Experiments at the atomic scale have been carried out, showing that very small objects may exist as superpositions; but superposing an object as large as a cat would pose considerable technical difficulties.
The “Copenhagen interpretation” of quantum mechanics, the one held by Bohr, Heisenberg, and many other founders of the field, sees q.m. as inherently indeterministic, involving only probabilities that certain states will occur. Schrödinger apparently thought the idea that a cat could be in a superposed state of being both alive and dead at the same time was ludicrous, so the feline Gedankenexperiment was really a critique of the Copenhagen interpretation. But the cat has become part of the Copenhagen picture as far as I understand it, with the boxed cat both alive and dead at the same time, and the superposition (wave function) resolved only when the box is opened and the cat observed.
There are other interpretations of this experiment, notably the “many-worlds” theory, in which there is no superposition, but a splitting of the universe when the experiment is conducted, with the cat alive in one universe and dead in the other. Don’t ask me which interpretation is “right”!
(Caption and attribution from Wikipedia): Schrödinger’s cat: a cat, a flask of poison, and a radioactive source connected to a Geiger counter are placed in a sealed box. As illustrated, the quantum description uses a superposition of an alive cat and one that has died. Dhatfield, CC BY-SA 3.0, via Wikimedia Commons
No cats were superposed in any experiments. But in the study described below, superposition was achieved with an isotope of the metallic element ytterbium, which was kept in superposition (having opposite spins at the same time) for over 23 minutes. IFL Science gives a summary (click headline to read):
Excerpt:
States in quantum superposition are notoriously fragile but researchers in China have reported creating such a state that lasted for a whopping 23 minutes and 20 seconds. This record-breaking result is exciting in itself but the team believes that it could open new ways to high-precision measurements and even information processing for quantum computers – possibly even allowing scientists to probe the limits of physical theories.
The study, which is yet to be peer-reviewed, conducted by scientists at the University of Science and Technology of China, saw 10,000 atoms of ytterbium cooled down to a few thousandths of a degree above absolute zero and trapped using light. Each atom could be controlled with great accuracy and was put into the superposition of two very different spin-states. This is known as a “quantum cat” state.
In the famous Schrödinger’s cat thought experiment, we see a cat closed in a box with a poison activated by a random quantum process. Without opening the box we cannot ascertain the state of the cat, so it is both alive and dead, two contradictory states in the non-quantum reality we experience. In the quantum world, quantum cat states are superpositions where a quantum state can exist in several ways at once, although it’s impossible to tell which one it really is so it’s effectively all of them at once.
In the new experiment, it is the length of this quantum cat state that is astounding. In nature, the superposition will collapse into one or the other in a fraction of a second, but here it persisted for 1,400 seconds. The team thinks that with a better vacuum system, it can be made to last even longer.
“It’s a big deal because they’re making this beautiful cat state in an atomic system and it’s stable,” Barry Sanders, from the University of Calgary who was not involved in the study, told New Scientist. “A probe gets jiggled and pushed and nudged and prodded, and then by seeing what happens, you learn about the things that interact with it.”
Here’s a screenshot of the paper’s title from arXiv (click to read). I’ve put its abstract below, but good luck understanding it!
The abstract:
Quantum metrology with nonclassical states offers a promising route to improved precision in physical measurements. The quantum effects of Schrödinger-cat superpositions or entanglements allow measurement uncertainties to reach below the standard quantum limit. However, the challenge in keeping a long coherence time for such nonclassical states often prevents full exploitation of the quantum advantage in metrology. Here we demonstrate a long-lived Schrödinger-cat state of optically trapped 173Yb (I = 5/2) atoms. The cat state, a superposition of two oppositely-directed and furthest-apart spin states, is generated by a non-linear spin rotation. Protected in a decoherence-free subspace against inhomogeneous light shifts of an optical lattice, the cat state achieves a coherence time of 1.4(1) × 10³ s. A magnetic field is measured with Ramsey interferometry, demonstrating a scheme of Heisenberg-limited metrology for atomic magnetometry, quantum information processing, and searching for new physics beyond the Standard Model.
***************
Finally, one more example, from ZME Science, of a cat aiding scientific discovery. This wouldn’t have happened if the cat’s staff hadn’t included a scientist, who identified a new virus in a mouse caught by his moggie. Click to read:
An excerpt:
It’s not uncommon for cats to bring home “spoils” from their hunt — usually a mouse, lizard, or some other unlucky creature. So, it wasn’t a shock when Pepper, a Florida cat, came back with a cotton mouse (Peromyscus gossypinus). But Pepper’s owner, John Lednicky, is a microbiologist and virus hunter at the University of Florida. So rather than toss out the rodent, Lednicky brought it to his lab. And there, he made an unexpected discovery: a previously unknown virus.
. . . Lednicky and his team initially tested the mouse to see if it carried mule deerpox virus (MDPV), a pathogen that has recently spread through Florida and a few other US states. Instead, they discovered something completely new.
By using next-generation sequencing technology, the researchers decoded the virus’ genome and classified it within the paramyxovirus lineage.
Paramyxoviruses belong to a larger group called Jeilongviruses. This family includes viruses responsible for measles and mumps in humans, as well as severe animal diseases like Hendra and Nipah. The newly identified virus, named GRJV1, exhibited an ability to infect various mammalian cell types, from rodents to humans. This broad “cell tropism” suggests the virus could potentially spill over from animals to humans or other mammals, which raises some concerns.
The lesson: don’t handle wild mice! And don’t buy mice in the wet market!
Here’s a three-minute video of the discovery:
***************
Lagniappe: A 6-7 year old injured street cat makes a miraculous recovery:
Street cat was as light as a kitten when he was rescued — watch him get chubby and gorgeous and knead on his new mom nonstop 🧡 pic.twitter.com/jrKPmyWjIv