[00:00:01] Speaker A: All you need is certain pressures of mindless material entities that are competing in some ways, that have some heredity, that have some variability, and presto changeo, you can get everything that was previously ascribed to intelligence. So I think, you know, most Darwinists, if you press them, would have to say that natural selection is the designer substitute. It's what does the work of intelligence, but without anything like a real metaphysical intelligence as we've traditionally conceived it.
ID the Future, a podcast about evolution and intelligent design.
[00:00:47] Speaker B: Well, chances are you're already familiar with specified complexity, one of the mathematical pillars of the theory of intelligent design.
But there's another pillar that is much less well known but, I would say, equally vital: the law of conservation of information.
Welcome to ID the Future. I'm your host, Andrew McDermott. Well, today I'm joined by mathematician and philosopher Dr. William Dembski. Dr. Dembski is a founding and senior fellow with Discovery Institute's Center for Science and Culture, and he's also a distinguished fellow with the Institute's Walter Bradley Center for Natural and Artificial Intelligence. He's a graduate of the University of Illinois at Chicago, where he earned a bachelor's in psychology and a doctorate in philosophy. He also received a doctorate in mathematics from the University of Chicago in 1988 and a Master of Divinity degree from Princeton Theological Seminary in 1996. He has held National Science Foundation graduate and postdoctoral fellowships. Dr. Dembski has published in the peer-reviewed mathematics, engineering, biology, philosophy, and theology literature. He is the author or editor of more than 25 books, most recently (and I hope you have this on your shelf, or plan to) a brand new edition of The Design Inference, co-authored with Winston Ewert. Well, Dr. Dembski, welcome back to ID the Future.
[00:02:07] Speaker A: Always good to be with you, Andrew.
[00:02:10] Speaker B: Our conversation today centers on your recent monograph on what you call nature's missing law, the Law of Conservation of Information, or LCI, as you sometimes refer to it for short. Now, this concept can trace its lineage back to the 19th century with thinkers like Ada Lovelace, and you've expanded on these insights to create a definitive mathematical framework. I'm hoping that we can sort of teach our listeners this new framework, unpack it, and explain how it applies to intelligent design and evolution, for that matter. All right, well, first things first, let's define a few terms just so we can make sure we have clarity.
What do you mean by information? And can you share your Dallas analogy with us as a helpful way to understand the concept?
[00:02:57] Speaker A: Yeah.
Information fundamentally is about narrowing possibilities.
It takes many forms. You know, often we think of it in terms of bits or alphanumeric characters. But when you write a given set of alphanumeric characters, you've narrowed down to one sequence against all the other possible sequences.
But the space of possibilities that you're narrowing down doesn't have to be a collection of alphanumeric symbol sequences. It can be just about anything. So I'm here outside of Dallas, in Denton County.
If I told you I'm in the world and on planet Earth, I really wouldn't have told you anything, because, leaving aside some astronauts that are out there, you would know that as a human being, I'm on Earth. So I haven't really conveyed any information.
If I tell you I'm in the U.S., I've conveyed some information, because I've ruled out all these other places in the world where I might be.
If I tell you that I'm in Texas, I've narrowed things down further. If I tell you I'm near Dallas, I've narrowed things down further. If I tell you I'm in the town of Aubrey, Texas, I've narrowed it even further. And so the more you narrow down, the more information you give. Now the thing is, when you narrow down, one typical way of characterizing how much you've narrowed things down is by probabilities.
The probability of me being in the world is, you know, probability one. If you look at the entire land mass of the world and think of me as possibly being any place, with a uniform probability distribution, then being in Texas would narrow things down quite a bit. I don't know what the land mass of Texas is compared to the rest of the world, but let's say it's maybe 1 in 500. Well, then the probability of me being in Texas is one over 500. And then narrow it down to Aubrey, a little itty bitty town, and we're probably talking 1 in 10 million, 1 in a hundred million.
So probabilities often correspond to information. It's one way, not the only way, but one way of measuring information: the amount that you narrow things down corresponds to the amount of information.
And in terms of probabilities, more information corresponds to smaller probability. Information theorists usually transform that, because you want the measure to grow with more narrowing down.
What you do is transform the probability by applying a negative logarithm, so smaller probabilities become bigger numbers. And if the logarithm is to the base two, it becomes a matter of bits. So imagine, for instance, I flip a coin three times and get three heads in a row. That's one possibility out of eight. So I've reduced eight possibilities to one.
How many bits does that correspond to? Three bits of information, because one over eight is one over two to the three, and logarithms are exponents: that exponent, three, is the amount of information. And you can get fractional amounts of information. So if it's a 1-in-10 event, you take the negative logarithm to the base two and get something like 3.32 bits. So probabilities and information measures correspond.
And so that's the idea with information. You narrow down possibilities, and as you narrow down possibilities, the probabilities become smaller, but the amount of information becomes bigger.
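To make the arithmetic above concrete, here is a minimal Python sketch of the information measure Dembski describes, I(p) = -log2(p). The function name and the location probabilities are illustrative assumptions, not figures from the monograph.

```python
import math

def info_bits(p: float) -> float:
    """Information in bits carried by an event of probability p: -log2(p)."""
    return -math.log2(p)

# Three heads in a row: one outcome out of 2**3 = 8 -> exactly 3 bits.
print(info_bits(1 / 8))           # 3.0

# A 1-in-10 event gives a fractional number of bits.
print(info_bits(1 / 10))          # ~3.32

# Narrowing location (rough, made-up probabilities for illustration):
print(info_bits(1 / 500))         # "in Texas" vs. the whole world, ~8.97 bits
print(info_bits(1 / 10_000_000))  # "in Aubrey, Texas", ~23.25 bits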
[00:06:43] Speaker B: Okay, yeah, that makes a lot of sense. And I do like the commitment aspect of information.
You're committing to one thing as opposed to another.
So saying one thing can eliminate 99 other possibilities. And you've just made a commitment.
So I kind of like that aspect of it, too. Now, closely related to all this is intelligence, and that's a key term here, obviously. But tell us, what is your definition of intelligence and how does it relate to information?
[00:07:13] Speaker A: Yeah, I mean, I guess there are various ways you can go. But for the purposes of information, I go back to the very root meaning of intelligence, where you break it down into the Latin. Intelligence is two words in Latin: inter, the preposition meaning between, and the verb lego, meaning to choose or select.
And so what you're doing is you're choosing between. That's what an intelligence does: it makes choices, it makes decisions.
And that, you know, makes good sense. As purposive agents, to accomplish purposes, we have to make decisions. If I'm going to drive from here to, let's say, Oklahoma City, it's about 150 miles, I have to make certain decisions. I have to decide to go there. But then, as I'm driving, I have to decide: am I going to turn right here, turn left, keep going straight? There are all these decisions, these choices, being made to accomplish a purpose. But the idea of intelligence, just from its etymology, is to choose between.
And so, you know, you're making these choices, and when you choose, you're always settling on one thing to the exclusion of another.
So that's where you get information. A choice always narrows possibilities.
Once I miss one turnoff, you know, I'm committed to wherever that leads me.
So that's where you do have a very tight connection between the ideas of choice or decision and intelligence. They do match up quite nicely.
And, you know, just to take the etymology of the word information: in, meaning into, and then form. It's to shape something, to put shape into something. If you shape a piece of marble so it looks like Michelangelo's David, well, then it's not going to look like Michelangelo's Moses. When you shape something, it's not that you've got a formless shmoo that can be anything. It becomes this rather than that.
[00:09:37] Speaker B: Yeah. So the moment you start making choices, you're shaping something, specifying it to a form, or to a function, I should say.
And that's sort of how they're related. And of course, you need intelligence to do that.
Now, it seems like you enjoy etymology as much as I do, because you're sharing all of this. And you also broke down the Latin meaning of decide, which is related to this: de, down from, and caedere, to cut off or kill.
Very interesting, very explicit language there. Decide.
Kind of like homicide.
[00:10:13] Speaker A: Right.
[00:10:13] Speaker B: You're killing other possibilities by choosing a particular one.
So it's a very vivid understanding we get of the commitment here with information.
Now, when you make a decision, why is it both an informational act and an act of intelligence?
It is both, isn't it?
[00:10:34] Speaker A: Yeah, I mean, they come together.
But, you know, I think in some ways, what we've described as intelligence here is more instrumental. You could see something making what seem to be decisions, narrowing things down one way or the other. But in a sense, that whole decision process can be delegated; you can delegate it to a machine. And then there's also the question of what's behind it: is this thing we're calling intelligence really a mind in the full sense, a consciousness that's aware of itself, that has purposes? I think this is where we come into conflict, really, with this naturalistic Darwinian point of view.
I would say Darwin's greatest rhetorical coup, his triumph, was to coin the phrase natural selection. I don't think it's really even a scientific concept. I think it's a word game, really, because selection is the same idea as choice or decision.
The root in selection, the "lect," is there. Well, it's the same root, "leg," that's in the lego of intelligence, inter plus lego. It's the same root that in the Greek is logos, reason or word. So what he does is he basically says that what we had previously ascribed to a full-blooded intelligence, ultimately understood in terms of an ethical monotheism, a God that is able to pull all of reality together, who created ex nihilo,
that's now something that nature can do without anything like an intelligence as we've traditionally conceived it. All you need is certain pressures of mindless material entities that are competing in some ways, that have some heredity, that have some variability, and presto changeo, you can get everything that was previously ascribed to intelligence. So I think, you know, most Darwinists, if you press them, would have to say that natural selection is the designer substitute; it's what does the work of intelligence, but without anything like a real metaphysical intelligence as we've traditionally conceived it. That's where I see intelligent design really facing its biggest challenge. I think it's more of a philosophical challenge than even a scientific challenge.
[00:13:35] Speaker B: When you look at that term, natural selection, we're definitely hitting the nail on the head when we come up against it. And I love the background you've given there on Darwin's use of it, because it ascribes the ability to choose to a natural process, and everything we know about unguided evolutionary processes says that they don't have that ability.
And so coming at it from the perspective of what is information and what is intelligence and what can decide, I think is a great way to do it.
Now, speaking of definitions, let's just review the concept of the conservation of information because that's what we're here about. It's a term used in both physics and computer science.
Tell us how you would put it: what does the phrase conservation of information mean, and what would a law describing it say?
[00:14:29] Speaker A: Yeah, as you note, there are two very different uses of it. In the physics community, it's basically that you can reconstruct anything based on some previous state, or some future state for that matter. Imagine, for instance, that the Library of Alexandria burns down.
From a practical vantage it's destroyed, it's lost; we're not going to recover that. Yet from the vantage of physics, most physical laws can be run forwards or backwards in time, and all of the information that was in the library is still there, if we just knew what all the particles were doing. It's almost a Laplacian view, where if you know exactly what's going on at any one moment, you can retrodict and predict the entire course of the universe. And in a sense, even with quantum mechanics, we can do that.
And so the idea is that, from the vantage of physics, no information is really ever lost. It's recoverable if we just know enough about the present states, or know enough about the boundary conditions and the equations of dynamics that govern how things work themselves out.
So that's the view of conservation of information in physics.
I'm applying it in the context of probabilities and search.
Basically, information out from a search process is never going to exceed information in. It may get degraded, but you're not going to get out more than you put in.
People use the term no free lunch. I'm careful about that, because the no free lunch theorems are their own thing in some ways. They're related, but I don't want to conflate them.
I think one way to look at it is you can't get something for nothing.
And the idea with conservation of information is that often we want to increase our probabilities of something. So let's say you're doing an Easter egg hunt, okay, and you've got this huge field. Now, there's one Easter egg there.
The field is so large, the egg so small and so well hidden, that it's very improbable you're going to find it.
So how are you going to increase your probability of finding that Easter egg? That's what you want to do. And we pay money to increase probabilities. If you're a student and you want to get into a good college, you may pay a college consultant to help you raise your test scores and increase your probability of getting into an elite school. Okay, so now we're trying to find that egg. Well, one way you could increase your probability is if somebody who knows where the egg is talks to you through a little earpiece: warmer, colder, warmer, warmer, warmer. Or: turn right, turn left, straight, straight, ten paces, now turn left. Giving these instructions, and then suddenly you find yourself at the egg. Okay, now, have you overcome that improbability? Has a miracle happened?
Well, no, you know, you got instructions. The probability's now changed because of those instructions. Your probability of finding that egg has gone way up.
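To see how sharply instructions change the odds, here is a toy sketch under assumptions of my own, not from the episode: a field of 1,000 hiding spots, and feedback from someone who knows the location, modeled as a bisection.

```python
import random

N = 1_000
egg = random.randrange(N)  # where the egg is actually hidden

# Blind search: a single random guess succeeds with probability 1/N.

# Guided search: feedback from someone who knows the egg's location
# lets us bisect, finding the egg in about log2(N) ~ 10 guesses.
lo, hi = 0, N - 1
guesses = 0
while True:
    guess = (lo + hi) // 2
    guesses += 1
    if guess == egg:
        break
    elif guess < egg:
        lo = guess + 1   # feedback: "warmer is to the right"
    else:
        hi = guess - 1   # feedback: "warmer is to the left"

print(f"found the egg in {guesses} guesses (vs. ~{N} expected blind guesses)")
```

The roughly log2(1000) ≈ 10 guesses reflect the ~10 bits of information the instructions inject, which is exactly what the next question asks about: where did those bits come from?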
But then you have to ask yourself, okay, where did those instructions come from? You know, let's say right, right, left, straight, right, left gets you there.
Well, but I could have said right, straight, left, straight, right, left. I could have made any number of possible combinations of such instructions. So how did you pick one that got me there?
Okay. And the problem is, whenever you try to figure out what it was that allowed you to raise your probability, it turns out to be improbable itself.
Whatever allowed you to raise your probability is as improbable as the original thing. Now, that may sound a little convoluted, so let me try to say it again. You raised your probability, but what allowed you to raise it is that you had to engage in still another improbable action. And the way we usually put this is that the search for the search is no easier than the search. The search for the search is: you're searching for an instruction set that's going to get you to the egg. The original search is just finding the egg. Now you're looking for an instruction set that will get you there.
But finding the right instruction set is no easier than finding the original egg. And this is where the Darwinists, I think, live in a la-la land, because they don't appreciate this point. And I've witnessed this in real time. For instance, I was on a program, this was 25 years ago, with Eugenie Scott. Part of the reason I'm seeming a little bit snippy is that I was just looking up, you know, is intelligent design a science, and looking at what AI was generating. It was just saying, no, it's not science, and giving all these reasons. And it's like, we've been talking about these ideas till the cows come home. But it's just a willful ignorance on the part of the Darwinists.
I mean, it's almost like: I'm going to become a millionaire. How are you going to become a millionaire?
Because my friend is going to become a millionaire and give me $1 million. Okay, how's your friend going to become a millionaire? Well, his friend is going to become a millionaire and give him $1 million, and then he's going to give me $1 million. It doesn't account for how anybody gets the business savvy to make $1 million in the first place. So, back to Eugenie Scott. I was on a program on the Stanford campus, at the Hoover Institution, with Peter Robinson, a name that's probably familiar to you.
And he's interviewing us, and he raises the old trope about monkeys typing Shakespeare: given enough monkeys randomly typing, you can produce the works of Shakespeare. The thing is, you're going to need more monkeys than there are elementary particles in the universe, raised to the tenth power, or whatever. It's so vastly improbable.
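As a back-of-the-envelope check on that claim, here is a hedged sketch assuming a simplified 27-key keyboard (26 letters plus space) and one unpunctuated line of Hamlet; the exact numbers are illustrative, not from the episode.

```python
import math

phrase = "to be or not to be that is the question"
alphabet = 27  # 26 letters plus the space bar: a simplified keyboard

# Probability that one random keystroke sequence of the right length
# matches the phrase exactly: (1/27) per keystroke, independently.
p = alphabet ** -len(phrase)
print(f"{len(phrase)} keystrokes, p = 27^-{len(phrase)} ~ 10^{math.log10(p):.0f}")
# -> 39 keystrokes, p = 27^-39 ~ 10^-56
```

One short line already sits around 1 in 10^56; whole plays push the improbability far beyond any realistic population of typing monkeys.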
But Eugenie Scott was head of the National Center for Science Education; I would quip that that's really the National Center for Selling Evolution.
But she said, you know, you really shouldn't think of it that way, as monkeys randomly typing. The way you should really think of how natural selection works is like a lab tech at the monkey's shoulder. Whenever the monkey makes a mistake, the lab tech has a bottle of white-out and can white out the mistake. And then all we see are the correct keystrokes: to be or not to be, that is the question, or whatever.
And it's like, gee, that sure sounds great, I guess Darwin has solved the problem. But it's like: how did that lab tech know what to white out? The question doesn't get asked.
And this is how it always goes. You just push the problem further back, but you never solve it. So what conservation of information is saying is: you can't get something for nothing. When you claim to have gotten something for nothing, in fact you've had to pay for it elsewhere. The whole point of the lab tech whiting out the monkey's mistakes is that the lab tech shouldn't need to know Shakespeare. But of course the lab tech had to know Shakespeare. Or we can push it back further: the lab tech whites out the monkey's mistakes, but the lab tech is actually having his arms moved by strings. And so now you push it back still further. That's the point of conservation of information. There's this regress: every time you try to increase the probability, you have to pay for it with a probability that's at least as bad, at least as small.
So now you try to increase that probability; well, now you've got to pay for that. The cost never disappears. Imagine the Easter egg example again: you have the instruction sets. Well, maybe you get those instructions from a map that marks the location of the egg. So now you go to a map maker who will mark the X on the map, and then the person giving the instructions to the person trying to find the egg will look at the map. But that map, with the X marking where the egg is, could have been marked any number of other places. How did you get the right map, which allows you to give the right instructions, which then allow you to find the egg? That's the problem. So that's really the essence of conservation of information. I'm at the point of thinking I need something zippier; no free lunch has been used, and so has can't get something for nothing. But it's not a difficult concept. I think conservation of information works as a name because we do have to cash this out mathematically, in information-theoretic terms. So there is that.
But, you know, the core idea is that if you're raising a probability, you have to pay for it. You can either pay for it with knowledge, or you pay for it with some other improbable event, which doesn't then explain how you get the outcome you're after. Because these are always needle-in-a-haystack problems, right? If something is highly probable, just run the experiment; just do the chance process and eventually you'll get it. Ten heads in a row? Spend an hour or so tossing a coin and you'll see ten heads in a row. A hundred heads in a row? No. If all the human beings who ever lived did nothing but spend their time flipping coins, they would not witness, except very improbably, a hundred heads in a row.
So if you're trying to account for how an event that improbable happens, it's because the probability was raised. But if it was raised, then how did it get raised? You basically have two choices. Either you have to say it's because some other highly improbable event happened, in which case you haven't explained anything, or it's because there was knowledge, and an intelligence had the knowledge that allowed this overcoming of the improbability.
Yeah, the bacterial flagellum is highly improbable for chance processes to produce, but for a super-tech engineer, it may not be. A mousetrap is going to be highly improbable to assemble by bringing all the parts together if you just have those pieces and are shaking them up in a tumbler.
But if you've got an intelligence that's able to put them together with knowledge, then the improbability disappears.
So that's the core idea. And then the math is basically just showing how these probabilities work: that as you regress, as you go back trying to account for how what seemed to be a small probability actually got raised, you find the probabilities never get easier. They can get worse, much worse. But there's no net improvement. There's no free lunch by going back.
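Here is a minimal Monte Carlo sketch of the search-for-the-search point, under a toy model of my own rather than Dembski's formal apparatus: picking an instruction set blindly does no better than picking a hiding spot blindly.

```python
import random

N = 1_000          # number of possible hiding spots for the egg
TRIALS = 100_000
egg = random.randrange(N)

# Direct search: guess a hiding spot at random.
direct_hits = sum(random.randrange(N) == egg for _ in range(TRIALS))

# Search for the search: pick a random instruction set instead. Each
# instruction is modeled as "move ahead k spots"; with no knowledge of
# the egg, the walk's endpoint is still uniform over the N spots.
def follow_random_instructions(n_steps: int = 10) -> int:
    pos = 0
    for _ in range(n_steps):
        pos = (pos + random.randrange(N)) % N
    return pos

meta_hits = sum(follow_random_instructions() == egg for _ in range(TRIALS))

print(f"direct search success rate:       {direct_hits / TRIALS:.3%}")
print(f"random-instruction success rate:  {meta_hits / TRIALS:.3%}")
# Both hover around 1/N = 0.1%: the regress buys nothing without knowledge.
```

In this toy model, instruction sets chosen without knowledge of the target are just a relabeled blind search; only knowledge of the egg's location (as in the bisection sketch earlier) raises the success probability.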
[00:28:21] Speaker B: Yeah, well, that makes it very clear, I think there's no free lunch.
The bill has to be paid. The bill for a higher probability, the bill for complex specified information. It's got to get paid. And we can't brush that aside.
That was the first segment of my conversation with Dr. William Dembski as we unpack his new monograph on the law of conservation of information.
An important concept that can be applied to two key issues that we're concerned with: evaluating evolutionary processes and making a case for intelligent design at work in the history of life and the universe. Now stay tuned. We've got a total of four episodes in this series.
In part two, Dr. Dembski continues to explain the law by reviewing the history of the idea in modern science.
We discuss several thinkers in the 19th and 20th centuries who acknowledged the law of conservation of information in their work, from Ada Lovelace to biologist Peter Medawar.
So stay with us as we continue to explore this important law and its implications for science.
Until next time, I'm Andrew McDermott for ID the Future. Thanks for joining us.
[00:29:33] Speaker A: Visit idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.