Biology isn’t really like physics: we don’t have “laws” that are always obeyed, but instead have generalizations, some of which hold across nearly all organisms (but even the “law” that organisms have DNA as their genetic material is flouted). The only “law” I can think of is really a syllogism that Darwin used to show natural selection: a). if there is genetic variation among individuals for a trait, and b). if carriers of some of the variants leave more copies of their genes for the trait than carriers of other variants, then c). those genes will be overrepresented in future generations, and the trait will change according to the effects of the overrepresented genes.
But even that is not a “law” but a syllogism. After all, natural selection doesn’t have to work. There may be no genetic variation, as in organisms that are clonal, and different variants may not leave predictably different numbers of copies of themselves in future generations; such variants are called “neutral”. So there is no “law” saying that natural selection has to change organisms.
In this paper (click on screenshot below, or find the pdf here), evolutionary geneticist Michael Lynch from Arizona State University goes after two papers (cited at the bottom of this post) that, he says, are not only failed attempts to concoct “laws” of evolution, but are flat wrong because their proponents don’t know squat about evolutionary biology. I’ll try to be very brief because the arguments are complex, and unless you know Lynch’s work on the neutral theory, much of the paper is a tough slog. What is fun about the paper, though, is that Lynch doesn’t pull any punches, saying outright that the authors don’t know what they’re doing.
Here’s the abstract followed by an early part of the paper, just to show you what Lynch is doing. Bolding is mine:
Abstract: Recent papers by physicists, chemists, and geologists lay claim to the discovery of new principles of evolution that have somehow eluded over a century of work by evolutionary biologists, going so far as to elevate their ideas to the same stature as the fundamental laws of physics. These claims have been made in the apparent absence of any awareness of the theoretical framework of evolutionary biology that has existed for decades. The numerical indices being promoted suffer from numerous conceptual and quantitative problems, to the point of being devoid of meaning, with the authors even failing to recognize the distinction between mutation and selection. Moreover, the promulgators of these new laws base their arguments on the idea that natural selection is in relentless pursuit of increasing organismal complexity, despite the absence of any evidence in support of this and plenty pointing in the opposite direction. Evolutionary biology embraces interdisciplinary thinking, but there is no fundamental reason why the field of evolution should be subject to levels of unsubstantiated speculation that would be unacceptable in any other area of science.
. . . we are now living in a new kind of world. Successful politicians and flamboyant preachers routinely focus on the development of false narratives, also known as alternative facts, repeating them enough times to convince the naive that the new message is the absolute truth. This strategy is remarkably similar to earnest attempts by outsiders to redefine the field of evolutionary theory, typically proclaiming the latter to be in a state of woeful ignorance, while exhibiting little interest in learning what the field is actually about. Intelligent designers insist that molecular biology is too complex to have evolved by earthly evolutionary processes. A small but vocal group of proselytizers clamoring for an “extended evolutionary synthesis” continues to argue that a revolution will come once a critical mass of disciples is recruited (7–9), even though virtually every point identified as ignored has been thoroughly evaluated in prior research; see table 1.1 in ref. 6. More than one physicist has claimed that all of biology is simply physics. But 2023 marked a new level of advocacy by a small group of physicists, chemists, and geologists to rescue the field of evolutionary science from obfuscation, and to do so by introducing new theories and laws said to have grand unifying potential.
Note Lynch’s criticism of the “Extended Evolutionary Synthesis”, a program (and associated group of investigators) claiming revolutionary new ways of looking at evolution, ways that, as Lynch notes, have already been discussed under conventional neo-Darwinian theory.
There are two theories Lynch criticizes in this paper:
1.) Assembly theory. This is the complicated bit from the paper of Sharma et al. (see references below). It involves an equation that supposedly gives a threshold beyond which the assembly of components indicates life that evolved via natural selection (I won’t define the components either, as they aren’t important for the general reader’s purposes):
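For readers who want a look anyway, the assembly equation of Sharma et al. has (as best I can transcribe it from their paper) the form

$$A \;=\; \sum_{i=1}^{N} e^{a_i}\left(\frac{n_i - 1}{N_T}\right)$$

where $a_i$ is the assembly index of object $i$, $n_i$ is its copy number, $N$ is the number of unique objects, and $N_T$ is the total number of objects in the ensemble.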
According to Lynch, this equation is totally bogus because it neglects all the forces that can impinge on gene forms during evolution. An excerpt:
However, this is not the biggest problem with assembly theory and its proposed utility in revealing the mechanistic origins of molecular mixtures. A second, more fundamental issue is that the authors repeatedly misuse the term selection, failing to realize that, even in its simplest form, evolution is a joint function of mutation bias, natural selection, and the power of random drift. There is a fundamental distinction between the mutational processes that give rise to an object and the ability of selection (natural or otherwise) to subsequently promote (or eradicate) it. In the field of evolution, drift refers to the collective influences of stochastic factors governed by universal factors such as finite population size, variation in family sizes, and background interference induced by the simultaneous presence of multiple mutations; via the generation of noise, the magnitude of drift modulates the efficiency of selection. For the past century, these processes have been the central components of evolutionary theory (reviewed in refs. 5 and 6).
Because this theory neglects forces like mutation and genetic drift, which can change the frequencies of gene forms in addition to natural selection, Lynch deems it “a meaningless measure of the origins of complexity.”
2.) The notion that organismal complexity is an inevitable result of natural selection. This goes after the paper of Wong et al., and you should already know that this can’t be true: evolution is not, in any lineage, a march towards more and more complex species. The immediate refutation is the existence of parasites like fleas and tapeworms, which have lost many of their features to pursue a parasitic lifestyle. If you make your living by parasitizing other organisms, natural selection can actually favor the loss of complexity. Tapeworms, for example, have lost many of their sensory systems, their digestive system, and features of their reproductive system. By any measure of complexity, they are much simpler than their flatworm ancestors.
Lynch points this out, and adds that there are lineages of microbes (very simple one-celled organisms like bacteria) that have not become more complex over the billions of years they have existed. There may have been a burst of complexity when the lineages arose, but clearly bacteria haven’t been on a one-way march to primates. They are doing a fine job as they are:
Despite their substantially more complex ribosomes and mechanisms for assembling them, eukaryotes do not have elevated rates or improved accuracies of translation, and if anything, catalytic rates and degrees of enzyme accuracy are reduced relative to those in prokaryotes (with simpler homomeric enzymes). Eukaryotes have diminished bioenergetic capacities (i.e., growth rates) relative to prokaryotes (21, 22), and this reduction is particularly pronounced in multicellular species (23). Finally, it is worth noting that numerous organisms (parasites in particular, which constitute a large fraction of organisms) typically evolve simplified genomes, and many biosynthetic pathways for amino acids and cofactors have been lost in the metazoan lineage.
Another bit of evidence against Wong et al. is their adducing of “subfunctionalization”, whereby genes duplicate and the duplicate copies assume new functions, as support for some “law” of increasing complexity. (The divergence of hemoglobins occurred in this way.) But Lynch suggests that genes don’t duplicate to make an organism more complex, and, moreover, the differential functions of duplicate genes can arise from selection being relaxed:
Subfunctionalization does not arise because natural selection is striving for such an endpoint, which is an energetic and a mutational burden, but because of the relaxed efficiency of selection in lineages of organisms with reduced effective population sizes. How then does one relate gene number to functional information?
Lynch winds up excoriating these new “theories” again:
For authors confident enough to postulate a new law of evolution, surely some methodology and supportive data could have been provided. Science is littered with historical fads that became transiently fashionable, only to fade into the background, with a nugget of potential importance sometimes remaining (e.g., concepts derived from chaos theory, concerted evolution, evolvability, fractals, network science, and robustness). But usually when the latter happens, there is a clear starting point. This is not the case with the “law of increasing functional information,” which fails to even provide useful definitions of function and information.
. . . . To sum up, all evidence suggests that expansions in genomic and molecular complexity, largely restricted to just a small number of lineages (one including us humans), are not responses to adaptive processes. Instead, the embellishments of cellular complexity that arise in certain lineages are unavoidable consequences of a reduction in the efficiency of selection in organisms experiencing high levels of random genetic drift.
I would take issue only with Lynch’s claim that only a “small number of lineages” have become more complex than their ancestors. Most multicellular organisms are this way. In the end, though, Lynch’s lesson is that people should learn more about evolutionary theory, which has grown quite complex, before they start proposing “revolutionary laws of evolution.”
The two papers at issue (I’ve provided links):
10. A. Sharma et al., Assembly theory explains and quantifies selection and evolution. Nature 622, 321–328 (2023).
11. M. L. Wong et al., On the roles of function and selection in evolving systems. Proc. Natl. Acad. Sci. U.S.A. 120, e2310223120 (2023).


Because I live in an apartment, not a house, I lack a lawn from which to chase kids. So I’m grateful to Jerry for this virtual lawn discussion, and to Lynch for scolding the authors of those two papers.
I’ve never understood the argument about subfunctionalization. It’s true that genes get duplicated and then one of the duplicates can be recruited into some other gene expression network. But the reverse happens too: functional genes (important ones) can die and the corpse (a pseudogene) can linger in the genome for a long time. This can happen because few genes are essential, and many functions have 2+ redundant genes doing more or less the same thing for the organism. I don’t understand why astronomers and philosophers focus on one but not the other process.
There are some problems with the term “subfunctionalization” that make the logic difficult to follow.
IMHO, “neofunctionalization” is the correct term for a gene duplication and subsequent evolution of a new function for one of the copies.
Subfunctionalization is when both of the copies acquire mutations that reduce their expression and/or functionality so that now both copies are needed whereas before only one copy was required.
Here’s a case where the two processes might give the same result.
https://sandwalk.blogspot.com/2017/06/on-evolution-of-duplicated-genes.html
Given the general area in which Jerry lives, I would be surprised if HE had a lawn!
On a similar topic, Jerry, have you seen this article on Dire Wolves and a different way of defining species (in terms of mitochondrial DNA)?
The article doesn’t give a coherent definition. The sentence that is supposed to be the definition is
“…an organism belongs to a species if its two sets of genes – those in the nucleus and those in the mitochondria – are optimized to work together to generate life-sustaining energy.”
The part of the definition after the “if” does not express a relation between an organism and a species, because it does not mention the species. Nor does it express a relation between two organisms that could be interpreted as “being of the same species”. At most it defines a property of organisms.
Could there be a meaningful definition that the author has oversimplified (fatally distorted) for publication in The Conversation?
Jerry,
I agree with you that your attempt to define a “law” is a syllogism. I think the important distinction between population genetics and physical laws is that population genetics is about probabilities, not laws.
For example, if a trait confers a fitness advantage then population genetics describes the PROBABILITY that it will become fixed in a population under a given set of circumstances (e.g. population size).
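To make that concrete, here is a minimal Python sketch of Kimura’s classic diffusion-approximation formula for fixation probability (the function and parameter names are mine, for illustration only):

```python
import math

def fixation_probability(N, s, p=None):
    """Kimura's diffusion approximation for the probability that an
    allele with selection coefficient s and initial frequency p
    eventually fixes in a diploid population of effective size N."""
    if p is None:
        p = 1.0 / (2 * N)   # a single new mutant copy
    if s == 0:
        return p            # neutral allele: fixation probability equals its frequency
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

# Even a beneficial new mutation usually goes extinct: with s = 0.01 in a
# population of 10,000, the fixation probability is roughly 2s = 2%, not 100%.
print(fixation_probability(N=10_000, s=0.01))   # ~0.02
```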
Some popular writers have implied that natural selection (for example) is more like a physical law because they seem to be saying that as soon as an allele confers a fitness advantage it will inevitably become fixed.
Yes. Selection is not gravity.
I wonder if it’s a bit more like gravity than one might initially think. After all, the greater the gravitational force, the greater the effects on the body, and the greater the selection pressure, the greater the effects on the organisms.
Well, but many laws of physics are indeed about probabilities. I don’t think there is any fundamental difference between the laws of physics and chemistry versus the fundamental principles of biology. The only difference is the number of particles or instances; in physics and chemistry the law of large numbers virtually ensures that the mean value is realized almost exactly (atoms are really numerous), while in biology the numbers are often very low. But as Lynch and many others have shown, common organisms like bacteria follow population genetic laws with much more exactitude than, say, mammals.
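A toy simulation makes the point (a pure-drift Wright-Fisher sketch; all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

def drift_spread(N, p0=0.5, generations=100, replicates=1000):
    """Simulate pure genetic drift in many Wright-Fisher populations of
    size N and return the spread (std. dev.) of final allele frequencies."""
    p = np.full(replicates, p0)
    for _ in range(generations):
        # each generation, 2N gene copies are resampled binomially
        p = rng.binomial(2 * N, p) / (2 * N)
    return p.std()

# The expected frequency never changes under pure drift, but how tightly
# outcomes cluster around that expectation depends on population size:
for N in (100, 10_000, 1_000_000):
    print(N, drift_spread(N))   # the spread shrinks as N grows
```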
I’m not supporting any of the nonsense targeted by Lynch here. I just don’t think that the distinction you propose is valid.
Yes, when I studied physics as a student 40+ years ago, I remember being taught that “laws” should be thought of as generalizations based on repeated observations.
I think that is a different issue: the fact that all “laws” are based on what we know today, and so are not necessarily the real, absolute laws of nature. My point is different; it is that the laws themselves, as we know them today, are mostly about probabilities. You may remember statistical mechanics, which shows how the macroscopic properties of objects and gases are determined by probabilistic processes involving atoms. Quantum mechanics adds another level of fundamental probabilities. Schrödinger’s and Dirac’s equations, the fundamental laws of matter, describe waves that give only the probability of detecting a particle at a given place and time.
Yet another black eye for Nature. How the mighty have fallen.
I can’t help but think that some attempts to read directionality into evolution are derived from an unstated belief that evolution has to have a purpose or that evolution must inevitably lead toward us. I don’t know about the details of this case, but the law-like claims I’ve read about over the years almost always had embedded within them some deep misunderstandings of how evolution works.
Evolutionary theory is not like theory in physics, with practitioners (often) seeking to simplify and reduce nature to one or a few equations or to a standard model. Evolutionary theory includes a broad set of generalizations, of which the theory of natural selection is central. One must be willing to read the copious literature in order to understand the length and breadth of the subject. I left active research in 1996, but I still marvel at what Darwin and his successors have wrought.
This is all above my pay grade, but I have a question about “random”. In a deterministic world, the concept of randomness is suspect. So when people use phrases like “random mutation”, wouldn’t “chaotic mutation” be more accurate?
Random presupposes an indeterministic world in some sense.
I suspect you know, but in this context it means that mutations occur in DNA without respect to their utility.
Exactly right.
In addition to what Mark and Edward said, the premise is wrong. The world is fundamentally random, and mutations in particular are often caused by high-energy UV light or by energetic particles that were generated by quantum processes.
You make an interesting point. I’ve long thought the same about shuffling and crossing over during meiosis. These processes are often described as “random”, but that randomness is only epistemic, i.e. outcomes are unpredictable and chaotic due to complexity and sensitivity to initial conditions, yet ultimately deterministic in a classical sense.
That said, I imagine ontological randomness could creep in, e.g. through quantum effects like proton tunnelling in enzymes. If such events influence molecular interactions at the right moment, they could introduce genuine indeterminacy that could propagate up to influence the deterministic biochemistry of meiosis. I’m just speculating here, though, and it’s definitely above my pay grade.
As Edward M rightly says, “random” mutations usually refer to their functional unpredictability, because they occur without regard to the gene’s function. However, many mutations are certainly random in the stronger, ontological sense. For example, ionising radiation can cause mutations via quantum mechanisms and radioactive decay (which can result in DNA damage) is a quintessentially quantum event and therefore fundamentally indeterministic.
Well, to get further into the weeds, ionizing radiation doesn’t cause mutations directly; error-prone DNA repair mechanisms do. Ionizing radiation damages DNA in such a way that DNA repair mechanisms are activated. But these repair mechanisms make errors: the mutations.
I am not a fan of Philip Ball — he attacked Anna — but here is an article he wrote for Quanta Magazine on these topics:
https://www.quantamagazine.org/why-everything-in-the-universe-turns-more-complex-20250402/
According to the article, there is another “arrow of time” in addition to the second law:
They have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics — the law that dictates an inevitable rise in entropy, a measure of disorder.
“More than one physicist has claimed that all of biology is simply physics.”
Is he really arguing that biological entities do not obey the laws of physics? If not, in what sense is biology not physics?
I was wondering too. The claim could be improved by removing the word “simply”.
As a mathematician, I feel that this cartoon sums up the relationship between the sciences clearly and succinctly:
https://xkcd.com/435/
(In my defence, I had no choice but share this. As our esteemed host has often pointed out, there is no such thing as free will. The laws of physics compelled me.)
Late to this, but I think he states his objection more clearly here:
“Living organisms are not equivalent to inanimate objects endowed with a special power for self-replication. The peculiar details of life’s structures and functions are legacies of historical contingencies, laid down prior to LUCA, which dictate all aspects of molecular assembly and breakdown. This is why biology is not simply chemistry or physics.”
The way I read it, he is saying that initial states and contingencies contribute so much to the diversity and complexity of life that the principles of physics are not sufficient to explain it.
Sara Walker (one of the lead authors of the Sharma paper) has been promoting Assembly Theory for a while now. I have heard her discuss her book “Life As No One Knows It” on podcasts, and have read a couple of reviews, but have struggled to follow what she is saying. Your summary of Michael Lynch’s article is really helpful – thank you, Jerry (and Michael)!
She was on Michael Shermer’s podcast not too long ago. What she seemed most interested in was how to detect if there is life on other worlds. She was arguing that if certain complex molecules are found with much greater frequency than expected from random interactions, this was evidence that biological processes were at work. I think the basic idea is pretty reasonable.
Sure, but the spectral signature would be a complicated bunch of peaks, generally lost in the signal noise of other molecules.
There is a recent discovery of a simple organic molecule on a distant planet (I don’t remember what it was) that is only known to be made by bacteria here on Earth.
I think that if astronomers find a good signal for O2 gas, then that should be very strong evidence of life, since, as far as I’ve heard, nothing but photosynthesis makes it in quantity.
Interesting. I had not read of these “new theories”, just your refutation today. As a general purpose bet I put my money on your expertise. You’re seldom wrong in this field.
So I didn’t know about the increasing complexity idea, but (in my own amateur understanding) I have told people that “improving” is the wrong word when it comes to evolving. I gave Dawkins’ evidence of animals/insects in caves losing their eyes as they weren’t needed.
I think that ape-to-standing-human cartoon has a lot to answer for.
But hey – a belief in evolution with some errors is better than creationism. hahhaa
D.A.
NYC
Given the apparent deficiencies in the two cited papers, how in the world, then, did they pass peer review and get into Nature and PNAS?
Opinions on this may differ, but I don’t mind that these things can sometimes pass peer review. Let the scientific public “have at it” to sort things out, all out in the open. If “out there” publications could be blocked by 3 or 4 anonymous peer reviewers, who’s to say that a really important but outside-of-the-box idea isn’t also being blocked?
That was amusing. There was much laughter and buzz around a similar silly paper about a year or so ago, with graphs of “complexity” on the Y axis, and dramatic curves swooping up the axis of complexity over time. I wish I could find the reference. Not that I need to keep it. It was over my head, but still obviously off the mark.
There was brief reference up there to sub-functionalization, where proteins that operate in molecular machines (like eukaryote ribosomes and bacterial flagella) wind up having their functions narrowly specified, so that what could be the work of 1 protein winds up being done by 2 or more, more-specialized proteins. This is part of an interesting idea called “constructive neutral evolution”, where molecular machines get complexified not by natural selection — so the more complicated machines are not more “fit” machines — but by randomness. It’s like a ratcheting process that builds in complexity, and natural selection can’t simplify it because the fitness of the more complicated machines is about the same as that of the less complicated machines.
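Here is a toy sketch of that ratchet in Python (the model and all numbers are invented for illustration; they are not taken from the constructive-neutral-evolution literature):

```python
import random

random.seed(1)

def neutral_ratchet(steps=100_000, p_add=0.001, p_loss=0.001):
    """Toy model of a complexity ratchet: parts are added to a molecular
    machine neutrally, but each addition creates a dependency that locks
    the new part in place, so losses can only remove the rare
    dependency-free parts and complexity creeps upward without selection."""
    parts, locked = 1, 0
    for _ in range(steps):
        r = random.random()
        if r < p_add:
            parts += 1      # a neutral addition...
            locked += 1     # ...that another component now depends on
        elif r < p_add + p_loss and parts > locked:
            parts -= 1      # only dependency-free parts can be lost
    return parts

print(neutral_ratchet())    # ends with many more parts, and no fitness gain
```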
“a). if there is genetic variation among individuals for a trait, and b). if carriers of some of the variants leave more copies of their genes for the trait than carriers of other variants, then c). those genes will be overrepresented in future generations, and the trait will change according to the effects of the overrepresented genes.”
That is not a syllogism; it is a conditional statement. A syllogism consists of two premises and a conclusion. For example:
“a). There is genetic variation among individuals for a trait. b). Carriers of some of the variants leave more copies of their genes for the trait than carriers of other variants. c). Therefore, those genes will be overrepresented in future generations, and the trait will change according to the effects of the overrepresented genes.”
That would be a syllogism, but a blatantly invalid one, as the conclusion is not a logical consequence of the premises.
The logical concept of a syllogism is inapplicable and irrelevant to the Darwinian claim that you are talking about. It is just a compound conditional statement.
Ex-teacher of logic here.¹ ISTM this is straining at syntactic gnats². Your “if”-less syntax seems just fine when interpreted as a syllogism.
(Syllogistic logic always has at least an implicit “if”: the conclusion is true if both premises are true. This can be expressed using various syntaxes: “if (a) and (b) then (c)”; “(a) and (b) therefore (c)”; “(a),(b) |- (c)”; “the syllogism (a),(b),(c)”.)
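To put that point in symbols (the standard deduction-theorem equivalence, not anything specific to this thread):

$$a,\; b \;\vdash\; c \quad\Longleftrightarrow\quad \vdash\; (a \land b) \rightarrow c$$

An argument with premises $a$ and $b$ and conclusion $c$ is valid exactly when the single conditional $(a \land b) \rightarrow c$ is a logical truth, so the two phrasings differ only in syntax.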
I may have misunderstood your presentation; if so please point out the misstep(s).
. . . . .
¹ Thanks to Barbara Piper for the template.
² Pedantry is a good servant, but a bad master. Servus servorum.
I don’t see how you have proved anything. I taught logic for six years, and one of the first things I tried to get students to grasp is the distinction between a conditional statement and an argument. In a conditional statement, neither the antecedent nor the consequent is asserted; in an argument, all the premises and the conclusion are asserted. I don’t see how you can teach logic without getting students to recognize the distinction.
As for your claim that my ‘ “if”-less syntax seems just fine when interpreted as a syllogism’, either you believe that invalid arguments are “just fine” or you fail to see that the argument in question is invalid, in which case, I seriously doubt your claim to have been a teacher of logic.
From Wikipedia:
Comment?
What comment is needed? The argument is invalid, by this or by any other plausible definition. If you think that this argument is valid, show me how it has a logical form such that the conclusion follows from the premises.
“a). There is genetic variation among individuals for a trait. b). Carriers of some of the variants leave more copies of their genes for the trait than carriers of other variants. c). Therefore, those genes will be overrepresented in future generations, and the trait will change according to the effects of the overrepresented genes.”
Please read https://en.wikipedia.org/wiki/Validity_(logic), particularly the distinction between logical “validity” and “soundness”.
Over and out.
The Lynch article reminds me of the pushback Steven Pinker received when he published “Enlightenment Now.” A whole bunch of Enlightenment historians basically said: “Pinker is not an expert on the Enlightenment. He should stay in his lane.”
As I (possibly mis-) understand assembly theory, it was motivated by the problem of abiogenesis:
The crux of the abiogenesis problem is that a molecular configuration with the capacity to self-reproduce – which is the minimum necessary for life to arise – is so incredibly complex that the odds it would form by chance are infinitesimal.
And so, Cronin, Walker, et al. appear to have invented assembly theory as some sort of solution to this problem. The theory doesn’t just try to quantify complexity – it hints at the existence of some ‘law’ of nature that facilitates the increase of complexity in the universe, thus making it inevitable that extremely complex molecules and increasingly complex life forms will arise via evolutionary-like(?) processes over time…
Or maybe assembly theory is something else entirely. Despite all the words I’ve read about it, the core of the theory remains a big fuzzy cloud of abstractions, metaphors, and allusions that I have been unable to make sense of. Usually when I don’t understand something scientific, it’s because I’m too stupid. But in this case, I’m beginning to think that the problem is that assembly theory isn’t really scientific. It’s more of a sciency vibe.
Quantification seems essential. Highly improbable things happen all the time, in a large enough sample. A factor of 100,000 could make a big difference, between “that’s surprising” and “you gotta be kidding”.
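A back-of-the-envelope sketch in Python (all numbers invented):

```python
# Probability of at least one occurrence of a rare event in n independent trials
def at_least_once(p, n):
    return 1 - (1 - p) ** n

n = 1e9                          # a billion independent trials
print(at_least_once(1e-12, n))   # ~0.001: "that's surprising"
print(at_least_once(1e-7, n))    # ~1.0: "you gotta be kidding" if it never happened
```

Here the per-trial odds differ by exactly a factor of 100,000, and the verdict flips from “almost never” to “essentially guaranteed.”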
Glad to hear of Michael Lynch’s well-considered pushback, and Jerry’s excellent description of it.
Joe, I asked ChatGPT who is the greater evolutionary biologist, you or Lynch, and it replied: Joe Felsenstein 🙂
Just a nitpick that stood out to me: how do you get “clonal” as having no genetic variation? What about mutations?
I’d also argue that selection must happen. Always. The probability that any two variants have absolutely identical odds of reproductive success in any given environment is essentially zero. It might take ten billion generations for the reproductive differential to be detectable, but so be it.
It is possible that evolution will be glacially slow where variation is minimal and selection pressure infinitesimal. But there will still be evolution.
Maybe Earth has been optimal for fastest possible evolution, hence we don’t see anyone else out there. Because we’re first.
Did you read what I wrote? I said that “if there is no genetic variation in organisms that are clonal”. IF. And that is true until there is a germline mutation. Yes, mutations will eventually happen if you wait long enough, but they may not be of the right type. You also seem unfamiliar with the neutral theory, in which two variants DO have identical reproductive success. There could be a mutation in a third codon position, or an effectively neutral mutation for which the selection pressure is less than the reciprocal of the effective population size.
This “nitpick” is more like a lecture, and I stand by what I said.
Sorry. Didn’t catch the “IF”. Didn’t mean to lecture. I had a point somewhere but was sleepy.
Populations of clonal organisms evolve, by mutation. Every mutation is a new lineage subject to differential reproductive success. Single-celled organisms are clonal. Pre-cellular self-catalyzing molecules were by definition clonal. Clearly, for these phases of evolution, mutation rates in clonal populations mattered a great deal.
I don’t see the relevance that some clonal species might be regarded as evolving extremely slowly.
Seems to me that mutation (any random change introduced prior to or during replication, when a replicator copies itself or gets itself copied) is the underlying driver of evolution. Other sources of variation (sexual recombination, horizontal gene transfer, any modulations which bias the “randomness” in some way) are all emergent.
Sorry, I don’t want to continue this argument. It’s still a lecture, and it’s muddled, especially saying that mutation is the UNDERLYING DRIVER OF EVOLUTION. It doesn’t drive anything; it produces the raw material that is disposed of via natural selection, drift, or other processes. You are calling natural selection or drift “emergent” forces. Not true, they don’t emerge from anything.
But I say let the discussion end here.
I am a chemist/physicist and am quite sympathetic towards these attempts to find some quantitative laws in biology, even if it is not something I work on. Many laws of physics (e.g. in condensed-matter physics) are so-called emergent laws: simple formulas describing complex systems, which hold for many particles as the microscopic details get smeared out.
So even if these two papers are wrong on technical grounds (and I have seen assembly theory harshly criticized from the side of information and complexity theory as well), I wouldn’t throw out this whole direction of inquiry.
https://physics.aps.org/articles/v12/2
Lynch’s work is hugely valuable. From the philosophical perspective I am defending, it is implicit in the way human beings think to automatically assume deeper meanings and purposive systems when there is no evidence whatsoever for anything like that in nature. As social primates we are obsessed with tribalistic status-seeking (we want our kind to be at the top of the hierarchy), and that would be helped if we could find laws (especially if they were God-given) that were designed to get us there. Jerry is right. Evolution progresses according to syllogistic logical necessity. It is to make the mistake of thinking like a human being to want meaning, equating in this case to deeper laws that are driving evolution towards complexity.