Dembski and Ewert wish upon a star
In my post of 12 June, I commented on the first part of William Dembski and Winston Ewert’s new book, the second edition of Dembski’s 1998 book The Design Inference: Eliminating Chance Through Small Probabilities. I noted that to make the argument for a Design Inference they had set aside Dembski’s previous criterion of Complex Specified Information, replacing it with Algorithmic Specified Complexity (ASC).
This is a measure of the difference between the length of a bitstring and the length of a shorter bitstring that describes it. It originates from the mathematical work on “randomness deficiency”, where the bitstring is a binary number and the shorter bitstring is a computer program that computes it. In that field, numbers that are considered truly random are those that can only be generated by programs almost as long as the number itself. A number that can be generated by a short program is not considered random.
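To make the idea concrete, here is a toy sketch (my illustration, not anything from Dembski and Ewert’s book) that uses zlib compression as a crude, computable stand-in for the uncomputable shortest-program length: a highly patterned string shows a large “randomness deficiency”, while a random string does not.

```python
import os
import zlib

def compressed_len_bits(s: bytes) -> int:
    # length in bits of a zlib-compressed copy of s -- a crude,
    # computable stand-in for the (uncomputable) shortest program
    return 8 * len(zlib.compress(s, 9))

def deficiency_bits(s: bytes) -> int:
    # raw length minus compressed length: large for patterned
    # strings, near zero (or negative, because of zlib's fixed
    # overhead) for incompressible ones
    return 8 * len(s) - compressed_len_bits(s)

patterned = b"AB" * 500        # 1000 bytes of obvious structure
random_ish = os.urandom(1000)  # 1000 bytes from the OS random source

print(deficiency_bits(patterned))   # large and positive
print(deficiency_bits(random_ish))  # near zero or negative
```

Real Kolmogorov complexity is uncomputable, so any such compressor-based measure is only a lower-bound approximation, but it captures the short-program-versus-long-string contrast the theory is built on.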
Dembski and Ewert intend to apply this to biology. The longer number somehow specifies a phenotype (or maybe a genotype), while the shorter number “describes” it. In my previous review, I had reached Chapter 7, “Evolutionary Biology”. Here, let’s consider how they argue the ASC criterion can be applied to biological adaptations. (Spoiler – we’re going to be disappointed).
I was full of expectations when I started reading Chapter 7. I had a number of questions in mind, ones I have asked before in posts and comments about their ASC criterion for design inference (for example, here, here, and here).
- What did the bitstring represent? A genotype? A phenotype?
- Was the “description” string a program for computing the larger string? Was it perhaps some genetic encoding of developmental instructions?
- When the difference between the lengths of the two is large enough, this is supposed to be astronomically improbable. Under what distribution?
- Is the probability somehow able to take into account the possibility of natural selection building up such large difference step by step?
Chapter 7 has a number of sections. Let me describe briefly what each covers, and what it accomplishes. I will summarize their arguments. I will add comments of my own, in parentheses, or otherwise explicitly identified:
Insulating Evolution against Small Probabilities (pages 321-326)
Dembski and Ewert describe the success and wide acceptance of their design inference in evolutionary biology. They then say “And now back to the real world”. The success and acceptance, they acknowledge, has not happened in reality. This is because evolutionary biologists have been resistant to accepting that evolution of biological systems involves probabilities that small. The evolutionary biologists have done so by invoking mechanisms of cumulative selection. As an example, Dembski and Ewert describe Richard Dawkins’s teaching example of a simple form of simulated natural selection, which starts with a random 28-letter sequence of letters and spaces and reaches the target phrase “METHINKS IT IS LIKE A WEASEL” in hundreds or thousands of steps, instead of the roughly 10^40 steps it would take if one simply drew new 28-letter sequences at random.
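For readers who have not seen it, Dawkins’s scheme is easy to reproduce. The sketch below is my own minimal version — the population size, mutation rate, and keep-the-best rule are my assumptions, not Dawkins’s exact settings:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # number of positions matching the target phrase
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # each character independently has a small chance of being replaced
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)  # deterministic run
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
gen = 0
while current != TARGET and gen < 10_000:
    gen += 1
    # cumulative selection: breed many mutants, keep the best one
    best = max((mutate(current) for _ in range(100)), key=score)
    if score(best) >= score(current):
        current = best

print(gen, current)
```

Run it and cumulative selection finds the phrase in on the order of hundreds of generations, whereas drawing whole 28-character strings at random would take about 27^28 ≈ 10^40 draws.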
Variations on the WEASEL (pages 326-330)
Evolutionary biologists are obsessed by the Weasel example (I would say it is creationists and advocates of Design that are actually the ones obsessed with it). “Perhaps design theorists should shift focus to the origin of life”. (The infamous OTOOL diversion – Off To the Origin Of Life). There cumulative selection cannot play a role, they argue. Simulations like the Weasel are declared to be “suggestive but misleading, so removed from biological reality that they cannot decide the matter”.
Resetting Darwinian Evolution’s Bayesian Prior (pages 331-336)
Considering the choice between design and natural selection as a Bayesian inference, it is important what prior probabilities one assigns these two hypotheses. If one gives design a low enough prior probability, it cannot have any reasonably high posterior probability. A design inference can be used to “sweep the field clear of all relevant chance hypotheses”. (Actually they could only do that if they could show that the probability of the data, given natural selection, was zero, but Dembski and Ewert don’t quite mean that; they mean showing that the data is astronomically improbable). Biologists argue that there may be many other “chance hypotheses”, but this is said to be an argument from mere possibility, as the biologists do not produce these hypotheses.
Who Is Arguing from Ignorance? (pages 337-339)
The biologists are, since they essentially never produce the exact sequence of events needed to evolve these biological systems. However, the biologists accuse the design theorists of arguing from ignorance.
John Stuart Mill’s Method of Difference (pages 340-345)
Mill argued that if two situations differ in only one circumstance, and a phenomenon occurs in one and not in the other, then that circumstance is the effect, or the cause, of the phenomenon. Biologists such as Kenneth Miller say that all that is needed for natural selection to work is the presence of selection, replication, and mutation. But these do not automatically lead to “complexity” or something “interesting”. The evolution experiments of Sol Spiegelman and of Richard Lenski do not show these, but instead have genomes that rapidly decay.
The Challenge of Multiple Simultaneous Mutations (pages 345-352)
This is basically a discussion of Michael Behe’s argument that major innovations require multiple mutations, each of which is individually disadvantageous, with an advantage only when they get together. It is extremely improbable that these can be brought together by natural selection.
What to Make of Bad Design? (pages 353 to 368)
Bad design arguments are not plausible, for a number of reasons. They quote “some naturalistic thinkers”, including James Shapiro, who do not think there is actual design but are impressed with the design of the cell. There is a discussion of whether there is junk DNA, giving great credit to the 2012 (supposed) refutation of junk DNA by the ENCODE Consortium. Dembski and Ewert cite an embarrassing quote from Richard Dawkins in 2012 accepting the ENCODE result and arguing that it is compatible with his views.
Doing the Calculation (pages 368-378)
(At last, are we to be shown how? Well, partly.) This section calculates the probability of a “biological system”, using protein folds as one example, invoking Douglas Axe’s experiments, which estimated that only one in 10^77 randomly constructed proteins would have enough of the proper structure and sequence to make a minimally functional β-lactamase enzyme. Axe’s paper was in Journal of Molecular Biology, volume 341, issue 5, page 1295, in 2004. The argument is based on the view that to get started evolving a β-lactamase enzyme one must have random assembly of such a protein before natural selection can act. The example uses the protein sequence, with a 20-letter alphabet, as the string, equivalent to about a 648-bit string. Dembski and Ewert do not try to quantify the length of the “description” needed. They acknowledge that evolutionary biologists would argue that there might be other paths to the same functionality, without using these specific structures.
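The arithmetic behind those two figures is easy to check. The sketch below is my own reconstruction of where the numbers come from; the 150-residue count is my assumption, inferred from the 648-bit figure at about 4.32 bits per amino acid:

```python
import math

# Assumption: a ~150-residue protein domain, which at log2(20)
# bits per position gives the book's 648-bit string length.
residues = 150
bits_per_residue = math.log2(20)       # 20-letter amino-acid alphabet
sequence_bits = residues * bits_per_residue
print(round(sequence_bits))            # 648

# Axe's estimate: one in 10^77 random sequences is functional.
functional_fraction = 1e-77
surprisal_bits = -math.log2(functional_fraction)
print(round(surprisal_bits))           # 256
```

Note that the two numbers measure different things: 648 bits is the information needed to specify one exact sequence, while 256 bits corresponds to the (estimated) fraction of all sequences that are functional at all.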
The Breakdown of Evolvability (pages 378-381)
“whence the confidence that Darwinian evolution should allow untrammeled interconnectivity from any one living form to any other, whether by direct evolution or by indirect evolution from a common evolutionary precursor?” They talk about protein alphabets, then use a word game of changing one letter in a word, or deleting a letter, or adding a letter, so as to change it into another English word. They argue that one cannot go from any word to any other this way. Therefore, although we have not proven that such interconnectedness in biological organisms cannot exist, we have shown it may not exist. Therefore “there are indeed good reasons to think that a gradualist Darwinian form of evolution is very limited in what it can accomplish in biology, and that some features of biological systems not only resist Darwinian explanations but also invite design inferences”. (This last part, the “some features” part, does not follow.)
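Their word game is easy to simulate. The sketch below is my own illustration, using a made-up miniature lexicon rather than the book’s word list: a breadth-first search over single-letter substitutions, deletions, and insertions shows that, within a fixed lexicon, some words are simply unreachable from others.

```python
import string
from collections import deque

# Assumption: a tiny illustrative lexicon, not the book's word list.
WORDS = {"CAT", "COT", "COG", "DOG", "CAR", "CARS", "CATS",
         "FIG", "FIT", "FAT", "SYZYGY"}

def neighbors(w):
    # all single-edit variants (substitution, deletion, insertion)
    # that are themselves words in the lexicon
    out = set()
    for i in range(len(w)):
        for c in string.ascii_uppercase:
            out.add(w[:i] + c + w[i+1:])   # substitute one letter
        out.add(w[:i] + w[i+1:])           # delete one letter
    for i in range(len(w) + 1):
        for c in string.ascii_uppercase:
            out.add(w[:i] + c + w[i:])     # insert one letter
    return (out & WORDS) - {w}

def reachable(start):
    # breadth-first search over the word graph
    seen, queue = {start}, deque([start])
    while queue:
        for n in neighbors(queue.popleft()):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

print(sorted(reachable("CAT")))      # CAT's connected component
print("SYZYGY" in reachable("CAT"))  # False: an isolated word
```

Of course, whether this models biology depends entirely on whether fitness landscapes are as sparsely connected as a dictionary, which is exactly the point in dispute.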
Where to Look for Small Probabilities in Biology? (pages 382-394)
This, the last section in Chapter 7, talks about Michael Behe’s arguments about Irreducible Complexity. Lots of discussion of how the bacterial flagellum could or could not have arisen. Evolutionary biologists never seem to come up with a detailed series of steps.
What have Dembski and Ewert accomplished?
They have not answered most of the questions I asked above. The one example that they give that involves a sequence of digital symbols asks about the probability of a particular protein sequence arising from random sequences of amino acids. What they give is really not an Algorithmic Specified Complexity argument, but a much older argument. Usually that argument takes the lengths of modern proteins as a necessary requirement for protein function, and does not take into account that many random protein sequences (and random RNA sequences) show a variety of weak enzymatic functions. And small changes in those sequences often lead to similar amounts of function. It is therefore not impossible for evolution to follow steps uphill on a fitness surface.
The difficulty with Dembski and Ewert’s definition of Algorithmic Specified Complexity is that when the quantity is large, it means that there is a long sequence (of something) which is “described” by a short sequence. And there is simply no point in their argument that explains (a) why natural selection cannot achieve that, step by step, or (b) why achieving that is needed to result in any adaptation. The matter is left without discussion.
There is also a large difference between the new argument and the previous specified complexity arguments. As I noted in the first section of this review (12 June), the new concept is not a Functional Information argument. The “probability” is not the probability that a random sequence has this much or more of the desired trait. It is just the probability of this sequence. There is simply no discussion of how to compute a tail probability in some relevant distribution. Or even in some nonrelevant distribution.
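The distinction matters numerically. Here is a toy contrast (my illustration, with made-up numbers) between the point probability of one exact sequence and the tail probability of meeting or exceeding a functional threshold, which is what a Functional Information measure would use:

```python
import math
from math import comb

# Assumption: toy binary sequences of length 20, where "function"
# is simply the number of 1s, and 16 or more counts as functional.
n = 20
k = 16

# Point probability: this one exact sequence under uniform sampling.
point_p = 0.5 ** n

# Tail probability: any sequence with k or more functional positions.
tail_p = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

print(-math.log2(point_p))   # 20.0 bits: identical for every sequence
print(-math.log2(tail_p))    # far fewer bits: many sequences qualify
```

The point probability assigns the same 20 bits to every sequence, functional or not, which is why it cannot by itself say anything about how hard function is to find; the tail probability can, but computing it requires exactly the null distribution and threshold that Dembski and Ewert never specify.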
So we are left with the questions I asked, and re-asked, above. How does this argument work? What are the relevant distributions? What is the “sequence”? How does the “description” relate to some relevant probability? Is it a tail probability under some relevant null distribution? Is it a probability that ordinary evolutionary processes could achieve something?
A burden of proof
By not explaining how their arguments work, in spite of 400+ pages of explanation, Dembski and Ewert are engaged in wish-fulfillment. They have “wished upon a star”. But it is not true, as the song claims, that if you do, “your dreams come true”. Dembski and Ewert do not fill in the blanks, not even close. They do go off sideways into irreducible complexity arguments, and they do repeatedly claim that evolutionary biologists argue that natural selection will always succeed in meeting challenges. Ask the passenger pigeon about that, or the Great Auk, or the Dodo, or the Carolina Paroquet, or the mammoth. Or the trilobites, or the seed ferns, or the ammonites.
It is still possible for them to answer my objections by filling in the blanks. Protesting that this is unfair, because biologists have not outlined every last step of the evolution of a complicated biological adaptation, is no excuse. I’ll be happy to hear from them how their arguments actually work. We’re waiting …