Applying Information Conservation to Biological Origins

Intelligent Design the Future
Episode 2166 | January 26, 2026 | 00:23:12

Show Notes

Nothing's free in life. It's a sobering reality we all come to realize sooner or later. And this cold, hard truth also applies to the realm of biology. On today's ID The Future, host Andrew McDiarmid continues his four-part discussion with mathematician and philosopher Dr. William Dembski. The topic is Dembski's work on the law of conservation of information, a principle asserting that information within a search process is redistributed from pre-existing sources rather than materializing from nothing. In addition to being used in computer science and physics, the law can also be applied to theories of biological origins to evaluate which theory best comports with the reality that all information comes with a cost, and that cost must be adequately explained. This is Part 3 of a four-part conversation.

Episode Transcript

[00:00:01] Speaker A: In a sense, Dawkins's book probably should have been called Climbing Mount Probable. It's the mountain that makes it probable to get to the top, but that mountain itself is highly improbable.

[00:00:18] Speaker B: ID the Future, a podcast about evolution and intelligent design. Well, welcome to ID the Future. I'm your host, Andrew McDiarmid. Today, my in-depth conversation with Dr. William Dembski about his new monograph unpacking the law of conservation of information. And if you don't know much about this concept yet, you're going to want to stick around and get as much of it as you can today. And I'll point you to how you can read his monograph for free. Now, for starters, if you don't know Dr. Dembski yet, let me give you a few details. He's a founding and senior fellow with Discovery Institute's Center for Science and Culture, as well as a distinguished fellow with the Institute's Walter Bradley Center for Natural and Artificial Intelligence. He's a graduate of the University of Illinois at Chicago, where he earned a bachelor's in psychology and a doctorate in philosophy. He also received a doctorate in mathematics from the University of Chicago in 1988 and a Master of Divinity degree from Princeton Theological Seminary in 1996. He has held National Science Foundation graduate and postdoctoral fellowships. Dr. Dembski has published in the peer-reviewed mathematics, engineering, biology, philosophy, and theology literature. He's the author or editor of more than 25 books, most recently a brand-new edition of The Design Inference, his classic work, completely updated in a revised edition co-authored with Winston Ewert. Now, in our first episode, we established the foundations of what Dembski calls nature's missing law: the Law of Conservation of Information. We explored the historical roots of this concept, going back to thinkers and scientists in the 19th and 20th centuries who applied the idea in physics and computer science. We also learned that this law acts as a rigorous accounting tool for search processes, proving that information cannot materialize out of nothing but is instead redistributed or shuffled from pre-existing sources. Now, today we shift from the theoretical to the practical, looking at how the law applies to the study of biological origins and the theory of intelligent design. We'll explore the displacement fallacy and how famous evolutionary simulations, like Richard Dawkins's "methinks it is like a weasel" or the Avida program, surreptitiously smuggle in the very information they claim to create from scratch. We'll also examine the staggering rarity of functional biological targets, such as functional protein folds, and why this rarity implies a fundamental lack of evolvability that purely material mechanisms cannot overcome. All this will lead us to the ultimate conclusion of Dembski's work on the law of conservation of information: the existence of an irreducible intelligence capable of creative innovation that transcends the limits of mindless algorithms. All right, let's get started. Well, Dr. Dembski, thanks again for joining us.

[00:03:16] Speaker A: Good to be with you, Andrew.

[00:03:19] Speaker B: Okay. Just as a person uses Google or their preferred search engine to find a specific website in the vastness of the Internet, neo-Darwinism suggests that nature must search for functional biological structures within a near-infinite space of largely non-functional possibilities. But some biologists, like H. Allen Orr, argue in response to your work on the No Free Lunch theorems that evolution has no preset target and thus isn't a search. You contend that if evolution isn't a targeted search, we can't really regard it as science. Can you explain the problem there?

[00:03:59] Speaker A: Yeah. I think some biologists, I wouldn't say all. You've got somebody like Stuart Kauffman, for instance, who's totally comfortable with the idea of evolutionary search, or the term evolutionary search; it's widely used. But Orr, I think, and others see search as inherently teleological, requiring some sort of preset target set out by a purposive intelligence. And so what they want to say is that because there's no purpose really in biology, no purpose-giver behind it, this very terminology is illegitimate. But I would say it isn't. And the reason is that there are still fundamental biological functionalities, things that you need to be alive, things without which you're dead, that are just basic to biology. And so biology has to be able to get to them. Insofar as biological evolution is a complexity-increasing process, those increases in complexity have to be accounted for. They form targets as well, if you will. So biologists, I think, could get more comfortable with this idea if they just adopted a view of these being natural kinds, or just being there. There are features in the biological world that are required for biology to do anything interesting at all, for biology to be biology. So I think it's really more of a terminological quibble with Orr. How do you get the first life? How do you get something that has a genetic code, that has a machinery that produces proteins from genes? How do you get a bacterial flagellum? How do you get various functional tissues? It seems to me it's legitimate to refer to those as targets, and that they can be referred to as targets without importing a whole teleology. And then we can let the teleology be decided later, based on the sort of science that we do and the inferences we draw. Anyway, I don't want to get too defensive about it, but it seems to me that the language of evolutionary search is entirely legitimate. And on the flip side, if you don't have something like search, what is evolution doing? What you didn't quote from that Allen Orr article is where he says that evolution is pure cold demographics; it's just gene frequencies. Well, gene frequencies aren't going to explain how you get these complex biological structures. Genes can fluctuate any number of ways. So there's a lot that his approach to biology leaves unexplained. Let me just put it this way: if it's not targeted search, what is it? Give me a model that will account for the sorts of complexities we're looking at. And he's not offering that. And I would say a lot of biologists are comfortable with this idea of search. Even Daniel Dennett, who is as hardcore a naturalist as you could find, would refer to evolutionary search.

[00:07:43] Speaker B: Yeah. Now, you've analyzed computer simulations like Richard Dawkins's weasel and the Avida simulation. How do these programs tend to smuggle in information when they claim to create it from scratch?
[00:07:59] Speaker A: Yeah, I don't know if Dawkins would claim to create it from scratch, but I think the people behind Avida did, because they specifically had their eyes on intelligent design. What they're claiming is that they're getting some result that's supposed to be remarkable, that would be unexpected, and yet here it is. Now, Dawkins's is the simplest one out there. He has a phrase, METHINKS IT IS LIKE A WEASEL. I think it's 28 characters, if you include spaces as characters; it's a phrase from Shakespeare's Hamlet. And if you've got 28 characters with 27 possibilities each (26 letters plus the space), that's 27 to the 28, which is about 10 to the 40 possibilities. So getting that sequence among a sea of 10 to the 40, that's 10,000 billion billion billion billion. That's a lot. That's probably the number of grains of sand in the world multiplied by itself. So how do you find that one grain of sand? Well, he says you can't do it just by chance. But imagine you take a random sequence of 28 characters, and then you start varying them randomly and rewarding those that are point for point closer to this target sequence. He doesn't necessarily call it a target sequence, but that's what METHINKS IT IS LIKE A WEASEL is. Then you can in short order evolve to that sequence. Now, computationally, all he's got is a hill-climbing algorithm with a straight-up slope, no local minima or maxima anywhere. And it evolves; I think in his simulation he got there in about 40 to 50 steps. So he got there, but the question is, what did he actually show? Well, he had the target sequence built into the program, and then had these comparisons between the sequence that's currently in place and the target sequence. The sequences get varied randomly, and they just naturally get closer and closer until they finally hit that target sequence. So all the information was built in. You could just see it. Now, what you have in this case, if you will, is a game of hide the pea. With Dawkins it was pretty easy to spot where that pea was. But in the shell game you can add more shells to cover the pea and start moving things more quickly, and I think that's what happens with these more sophisticated programs. So Avida, which was published in 2003, I believe, in Nature, was trying to evolve Boolean functions of increasing complexity. And they were able to evolve Boolean functions of increasing complexity. But one reason they were able to do it is that they rewarded increasingly complex Boolean functions. That was explicit in the program, so it wasn't really a big surprise; it was baked into the program. And it's like that over and over again with these evolutionary computing scenarios, which were supposed to show that evolution has all this power. In fact, they weren't doing anything like what evolution is supposed to be doing, which is to succeed without having all the information built in.
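[Editor's note: to make the mechanics concrete, here is a minimal Python sketch of the kind of weasel-style hill climber described above. The mutation rate, population size, and function names are our own illustrative assumptions, not Dawkins's original parameters. The thing to notice is that the target phrase is written into the program from the start, and the fitness function consults it at every step.]

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # 28 characters, spaces included
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 27 possibilities per position
# Blind search would face 27**28, roughly 1.2e40, possible sequences.

def score(candidate):
    # Count point-for-point matches against the built-in target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, randomly varying each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

random.seed(0)  # reproducible run
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while current != TARGET:
    generation += 1
    # Keep the best of a batch of offspring: a simple hill climb toward
    # a target the programmer supplied up front.
    current = max((mutate(current) for _ in range(100)), key=score)

print(f"Hit the target in {generation} generations.")
```

[Run as written, this typically reaches the phrase in well under a hundred generations, because score() compares every candidate against the pre-loaded target, which is exactly the point being made here about where the information actually resides.]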
Richard Dawkins, one of his claims is that what makes evolution such a great theory is that you can get all this complexity from, as he calls it, primordial simplicity; you can move up in complexity. And with these complex Boolean functions, it was baked in. So no surprise, really, and you could spot where the information was put in. Now, in biology it becomes much more difficult, because life is so much more complicated. We can't look under the hood to see the computer code, as we can here, in terms of how things have evolved. So I think often what the evolutionist can do is simply throw up his or her hands and say, prove me wrong; show that it didn't happen by some blind evolutionary process. And so then the challenge becomes to find systems that resist these Darwinian explanations, where we can do the necessary sorts of calculations and show that indeed they aren't evolvable by Darwinian means. I think Doug Axe's work is particularly important in that regard. And it's important to keep in mind that for the Darwinist, the straight-up materialistic, naturalistic biologist, no aspect of biology can be designed; everything needs to be explained by some sort of naturalistic story. So the burden on the design theorist is not to show that every aspect of biology is designed. There can be aspects of biology that are due to chance: a genome gets randomly hit by some gamma rays, some sort of mutation takes place, and it's preserved and transmitted. Okay, that was the result of chance. But how did you get that genome in the first place? So the challenge for the design theorist is not to show that every aspect of biology is designed, but that some aspect is designed. That's just how negation works: if the claim is that for all X, such and such is the case, its negation is that it's not the case that for all X, which is equivalent to saying there exists something for which it's not the case. So showing there's something that is designed is enough. And if there were just one clear example of design in biology that couldn't be ascribed to human or embodied intelligence, that would be enough to settle the matter. So that's what we're facing. And then we have to look at these questions of evolvability, and the information that goes into them, and the sorts of naturalistic processes that are supposed to be involved, and then see if there are some real barriers to these evolutionary processes. And again, it takes more of a negative line. We talked in the previous program about proscriptive generalizations, which say that certain things can't happen: you can't build a perpetual motion machine, because of the second law of thermodynamics. So can we show that there are certain types of evolutionary transitions that can't happen, or are very unlikely to happen? If they still must have happened, well, that becomes evidence against a certain theory of evolution. But the way I think things tend to be done these days by the naturalists is that the deck is stacked in their favor.
If they can show that some evolutionary transition could happen easily, and there are examples of this, because you may have some knockout experiment where it just requires one or two mutations to recover the gene function, then it can happen by chance, and presto chango, you're back with that function. But how did you get that function in the first place? Those are the sorts of questions the design theorists look at. I think often we're seen as being this wet blanket, showing what couldn't happen. Well, part of the reason we're a wet blanket is that we have these completely credulous evolutionists who believe that Darwinian evolution can do anything and have no appreciation for its limitations.

[00:16:54] Speaker B: Yeah. Well, speaking of Dawkins, he uses the metaphor of climbing Mount Improbable to show how small steps lead to complexity. So why does the law of conservation of information suggest that the topography itself, the path heading up Dawkins's Mount Improbable, is what actually requires an information source?

[00:17:17] Speaker A: Yeah, well, he needs Mount Improbable to be scalable by these baby steps, because the Darwinian form of evolution is a very gradualistic form of evolution. You don't get hopeful monsters; you can't have things just magically materializing without any sort of precedent. But in terms of conservation of information, just to review, the basic idea is that when you have some highly improbable event, if you're going to explain how it happened in terms of something else, you need to increase the probability of that event, because its sheer improbability otherwise leaves it unaccounted for. Just getting lucky is not going to be a good scientific explanation. So you have to have some mechanism, something that allows that probability to increase; let's call it a probability amplifier. The probability of the event given the amplifier now suddenly goes up. But there's a fundamental probabilistic inequality which says that the probability of the event is going to be greater than or equal to the probability of the event given that amplifier, times the probability of the amplifier. And so if the probability of the event given the amplifier goes up, the probability of the amplifier has to go down. It's like, if you've got the number 0.01, and it's greater than or equal to a million times something, well, that something is going to be way less than 0.01; it's got to be one in a hundred million or smaller. Now, probabilities are always numbers between 0 and 1, so a probability couldn't literally be a million; I'm just making a point about how inequalities work. If you've got something that's greater than or equal to a product, both factors can't be too big; if one factor is big, the other is going to need to be small to maintain the inequality. And that's what we're dealing with in conservation of information.
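[Editor's note: in symbols, the inequality described here can be sketched as follows, where E is the improbable event and A is the probability amplifier. The derivation step through the joint probability is our gloss; it is a standard fact of probability rather than a quotation from the monograph.]

```latex
% Since the joint event (E and A) is contained in E:
\[
  P(E) \;\ge\; P(E \cap A) \;=\; P(E \mid A)\, P(A)
\]
% If conditioning on the amplifier A makes the event nearly certain,
% so that P(E|A) is close to 1, then P(A) <= P(E): the amplifier is at
% least as improbable as the event it was invoked to explain. With the
% weasel numbers above, P(E) ~ 1e-40 forces P(A) to be about 1e-40 or less.
```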
So take Dawkins's example of climbing Mount Improbable. Suppose you've got a mountain that's scalable in this way, where the path up increases the probabilities. If it's highly improbable to get to the top of that mountain directly, then according to conservation of information, when you consider all possible mountains that take you up to the top, most of them are not going to have those gradual paths. Most of them will not make things highly probable. Most of them will be sheer, or will have paths that go up some way but then can't be bridged. And that actually makes good sense; that's what conservation of information is telling you. Now, Dawkins might say, but Mount Improbable has to be scalable, because we're here, and evolution was so successful. Okay, but how did you get that scalable Mount Improbable in the first place? That's the problem pushed further back. And you might say, well, the environment then had to be specially tuned for that. Okay, but where did that tuning come from? That itself needs explanation; that itself is highly improbable. And that's the point that's always unappreciated. We talked about this in the last program as well: this sense of filling one hole by digging another, where often you're just not aware that you are digging another. It seems like you're actually explaining something, but you're not. So here we've got a mountain, a Mount Probable if you will, that makes getting to the top actually probable. But the mountain that overcomes that improbability is itself highly improbable. In a sense, Dawkins's book probably should have been called Climbing Mount Probable. It's the mountain that makes it probable to get to the top, but that mountain itself is highly improbable. There are lots of ways of being a mountain. Why do you have the special mountain that made it probable?

[00:22:00] Speaker B: That was Dr. William Dembski explaining how the law of conservation of information can be applied to a critique of Darwinian evolutionary proposals. Now, I hope you're starting to get a good grasp on this concept of the conservation of information and how it applies to theories of the origin of life and of the universe. Don't miss the conclusion to this conversation in a separate Part 4 episode, where Dr. Dembski and I will discuss the ultimate origin of information, something he calls irreducible intelligence. And remember, you can learn more about all of this with a free download of Dr. Dembski's monograph. Check out the show notes for this episode for a link to that. It's at the BIO-Complexity website; that's a peer-reviewed scientific journal, and that's where you're going to find Dr. Dembski's monograph. But the link is in the show notes. Well, for ID the Future, I'm Andrew McDiarmid. Thanks again for tuning in. Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.
