[00:00:01] Speaker A: You know, I think what they're doing is they're looking for some sort of positive result: this is how we can do it without intelligent design, without intelligence.
And what I would say is, no, you can't do it that way.
And so conservation of information, as I'm developing it, this law of conservation of information, it's not a positive law that says, oh yes, this is how you get it. It's rather: this is how you can't get it, so you'd better look elsewhere.
ID the Future, a podcast about evolution and intelligent design.
[00:00:42] Speaker B: All of us have a built in intuition to detect the hallmarks of intelligent design.
And that intuition also allows us to understand why you can't get something for nothing in this life.
But in recent decades, some scientists have been claiming that you can get a whole lot from nothing: the entire universe and all of biological life, to be exact.
It's been said that the fundamental unit of life is the cell, or the atom, or even the quark. Actually, it's information.
And information isn't free. There's a cost attached. And my guest today has developed a mathematical way to measure that cost and make sure it gets accounted for wherever it applies. Welcome to ID the Future. I'm your host, Andrew McDermott. Today I continue my conversation with Dr. William Dembski, mathematician and philosopher, as we unpack his new monograph on the law of conservation of information.
Dr. Dembski, in case you don't know him, is a founding and senior fellow with Discovery Institute's Center for Science and Culture and a distinguished fellow with the institute's Walter Bradley Center for Natural and Artificial Intelligence. He's a graduate of the University of Illinois at Chicago, where he earned a bachelor's in psychology and a doctorate in philosophy. He also received a doctorate in mathematics from the University of Chicago in 1988 and a Master of Divinity degree from Princeton Theological Seminary in 1996.
He has held National Science Foundation graduate and postdoctoral fellowships. Dr. Dembski has published in the peer-reviewed mathematics, engineering, biology, philosophy, and theology literature. He is the author or editor of more than 25 books, most recently a brand-new edition of The Design Inference, co-authored with Winston Ewert.
This is part two of a four-part series exploring the law of conservation of information.
Let's jump back into the conversation now.
Now, let's place this law within the context of what scientists have been looking for. For a while now, scientists have been thinking: there's a missing law that explains how formational order can arise in nature, something separate from, but complementary to, the second law of thermodynamics. You think those scientists have identified the right problem but are coming up with the wrong solutions, usually proposing a materialistic, bottom-up explanation for formational order. So is this law of conservation of information the answer? And how is it different from what's already been proposed out there?
[00:03:21] Speaker A: Well, I would say what's been proposed has not really been a proposal; I think it's the notice of a problem. Take nature's hidden law of formation. This is the sort of thing that Robert Hazen, the origin-of-life researcher, has looked at, and he notes that when you look at functional things in the sea of possible things, most things are non-functional. So you have this very big narrowing. And whenever you see a narrowing of possibilities, you should think information.
And if you think information, lots of information, you should think low probability.
So he's sensitive to that. The thing is, as a naturalist, well, as an observer, he'll say: yes, the functional stuff in all these different contexts, whether it's the origin of life or whatever, is minuscule compared to the other things that are out there.
But it happens; we do see this functional stuff. So there has to be some law, something that accounts for how this could come about.
And so they're looking for something like natural selection, quasi-natural selection, some physical process, some yet-unknown law that will produce that. But the problem is that laws are basically conditionals, if-then statements, and they are amenable to probabilistic analysis. So what's going to be this law that increases the probability of something functional forming in this sea of non-functional stuff? Well, you only increase probabilities in two ways: either through knowledge, or by some other highly improbable thing happening, which means you haven't really explained anything. You've filled one hole by digging another, and often you end up digging an even deeper hole than the one you filled.
So I think what they're doing is they're looking for some sort of positive result: this is how we can do it without intelligent design, without intelligence.
And what I would say is, no, you can't do it that way.
And so conservation of information, as I'm developing it, this law of conservation of information, it's not a positive law that says, oh yes, this is how you get it. It's rather: this is how you can't get it, and so you'd better look elsewhere. So it's, if you will, a proscriptive generalization. It's a generalization that proscribes something, that says this can't happen.
Now, there are plenty of proscriptive generalizations in science.
Take perpetual motion machines.
Patent offices don't accept proposals for perpetual motion machines any longer. It's not that every possible perpetual motion machine has been tried; it's that the second law of thermodynamics tells us that such a thing is not going to be possible.
So we can have reasons to think that something can't happen.
What conservation of information is saying, then, is that there's no way you can explain an increase in probability by some process that doesn't face at least the same sorts of probabilistic obstacles. So it's a proscriptive generalization; it says this sort of thing can't happen. You can't just have a naturalistic law of formation that overcomes probabilities, because these naturalistic laws themselves have to be stochastic.
And they don't want to start by assuming that there is a purposive agent that is acting and able to overcome probabilities.
Once you have that, you're in a completely different regime.
So they don't want to go there.
And yet they want to think that it's still possible. It's basically, how do you get design without a designer?
That's always the impulse, and they go broke on it, but they don't realize it. And one reason they don't realize it is that they can keep getting government funding. They can keep going through these various epicycles of how this might happen in some bizarre, delusional scenario. But they do it, and this law holds. It's really just a very simple probabilistic relationship on which this law is based.
Think of it this way.
You have an event, and it's highly improbable.
You have something that amplifies the probability of the event, so the probability of the event given this amplifier is much higher. The amplifier could be an instruction set for finding that Easter egg. But then you have to ask: what's the probability of the amplifier? And it turns out the probability of the event is always greater than or equal to the probability of the event given the amplifier times the probability of the amplifier. And because of that probabilistic relationship, the more the amplifier raises the probability of the event, the lower the probability of the amplifier has to go.
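To spell out why that inequality always holds, here is a minimal derivation, writing E for the event and A for the amplifier. The notation is ours, not a quotation from the monograph:

```latex
% E = the improbable event, A = the amplifier (notation assumed here).
% By the law of total probability:
P(E) = P(E \mid A)\,P(A) + P(E \mid \neg A)\,P(\neg A)
% The second term is non-negative, so dropping it yields the inequality:
P(E) \ge P(E \mid A)\,P(A)
% Rearranged: the better the amplifier, the scarcer it must be.
P(A) \le \frac{P(E)}{P(E \mid A)}
```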
Doug Axe, my colleague, gives this example: you have an arrow hitting a target.
Highly improbable just by random chance. But the probability of the arrow hitting the target, given that there's a favorable wind that guides it there, is very high. Now, what's the probability of that favorable wind? Well, it goes way down.
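To put hypothetical numbers on that arrow-and-wind example (the figures below are ours, purely for illustration):

```python
# Illustrative numbers only; these values are assumptions, not from the monograph.
p_hit = 0.001           # P(E): arrow hits the target by blind chance
p_hit_given_wind = 0.9  # P(E|A): hit probability given a perfectly favorable wind

# The conservation inequality P(E) >= P(E|A) * P(A) bounds the amplifier:
p_wind_max = p_hit / p_hit_given_wind
print(f"P(favorable wind) can be at most {p_wind_max:.5f}")  # ~0.00111
# The wind "explains" the hit only by being nearly as improbable as the hit itself.
```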
An even simpler example: the probability of it being wet outside.
Call that event E: it's wet outside.
Maybe that probability isn't very small, but the precise numbers aren't so important; it's the probabilistic relationship that matters. So let's say we amplify the probability of it being wet outside. What would be a good amplifier? Well, that it's raining outside. If it's raining outside, it's going to be wet outside.
So the probability of it being wet outside, given that it's raining, raining being the amplifier, is high; it's one. Now, what's the probability of it raining outside?
Well, it's going to be less than the probability of it being wet outside, because it could be wet in other ways. Your sprinkler system could be on, or any number of things. So the probability of that amplifier is actually less, strictly less in this case, than the probability of it being wet outside.
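A quick numeric check, with made-up figures, shows how the inequality becomes strict in this case:

```latex
% Assumed figures: P(rain) = 0.2, and P(wet | no rain) = 0.0625
% (sprinklers and such). Then, by total probability:
P(\text{wet}) = 1 \cdot 0.2 + 0.0625 \cdot 0.8 = 0.25
% So P(wet) = 0.25 > P(wet \mid rain)\,P(rain) = 0.2: strictly greater,
% because a non-amplifier (no rain) can still make the event happen.
```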
This is just the most elementary probability theory.
But when you follow the logic through of what's involved in raising probabilities, you go broke if you think you can magically raise them by finding some other stochastic scenario in which the probabilities are better. It just doesn't work.
[00:11:47] Speaker B: Right. And yet they continue to look for a bottom-up explanation, because frankly there's nothing else when you're in a materialistic framework. And I liked your point that materialism keeps getting rewarded with grant money, promotions, and awards of other kinds. That's why they're not turning from it: they can stay in this materialistic framework quite safely and not have to risk a different view.
[00:12:19] Speaker A: Yeah, I think you're right. And you can also expend a lot of effort that gets rewarded and that seems like you're doing something productive. So you've got a hole to fill.
So you dig another hole and you fill that one. Now you've filled the hole, but you've got another hole. Okay, now I've got this hole; let me fill it by digging another. And now let me dig another. So you're doing work, and if you can get people to admire your work and think you're doing something productive, I guess it's one way of whiling away your life in meaningless pursuits. People do other meaningless things, but this has the full imprimatur of the naturalistic scientific community.
It's just nonsense, and I'm going to keep calling it that. But these people still take themselves seriously. And our large language models are filled with the text they've been trained on, which takes these people seriously. They shouldn't be taken seriously, and the math is against them. But the thing is, I've been mooting these ideas for 25 years.
The basic intuitions have been there; a number of theorems have been proved in special cases. But the general formulation of it I've finally nailed down. It's a steel trap; it's rock solid. Prove me wrong, please. If naturalism is true, if materialism is true, then here we are, so either we're here as an entirely random fluke, and just getting super lucky isn't that great a scientific explanation, though you do hear it in terms of multiple universes: things just happen. But when it comes to biology, the origin of life, the subsequent history of life, just getting lucky doesn't cut it. So you've got to do something else, and so you have to play-act. And that's this filling one hole by digging another. And I'm not just making this up. The Eugenie Scott example was particularly clear and intuitive, but you find this in the whole evolutionary computing literature, where biologists try to use evolutionary computing to justify that complexity can emerge from simplicity through some sort of evolutionary algorithm. And it can't. They're always smuggling in information; they're always doing something surreptitious. We've shown this at the Evolutionary Informatics Lab: Bob Marks, George Montañez, Winston Ewert, and I. But if you're a committed materialist or naturalist, you're going to think, well, Dembski and his comrades must have missed something, but we know they're not even worth listening to, so we're not going to read their stuff.
Last I looked, the Wikipedia article on conservation of information was about 20 years out of date. It's very hard to get it even updated.
[00:16:15] Speaker B: Yeah.
[00:16:16] Speaker A: So we press ahead. The ideas are good. And good luck with your missing law of formation, showing how these things actually came together.
We've got good reason to think that there is a proscriptive generalization called the law of conservation of information, which shows that this can't happen.
Right.
[00:16:43] Speaker B: Well, it's been said that science advances one funeral at a time. So indeed, these things take a while, and for some they may never get through.
Because hasn't it also been said that when you're under a materialistic framework, no amount of evidence, no matter what you come up with, is going to shake that belief until one lets it go voluntarily?
[00:17:10] Speaker A: Yeah, I mean, no, I'm not hopeless.
I think I'm coming across a bit more testy than I usually am. Maybe I'm doing this more for effect, just to try out a new persona, as opposed to the kinder and gentler Bill.
But I think we still need to keep pressing ahead, speaking the truth, getting the ideas out, and just keep revisiting this idea, trying to make it simpler, more straightforward.
This sense that you can't get something for nothing.
Maybe I'll write another paper on this, just to make it clearer, and maybe some big podcast will pick it up at some point. But it is tricky, because, as we said, you've got all this funding going toward a materialistic understanding of biology.
There are just a lot of entrenched interests. And that comment, that science progresses funeral by funeral:
that's Max Planck. He lived into the 1940s; he had a long life. He witnessed the entire quantum revolution.
And that was a revolution that took place within about ten years, start to finish. You think of the mid-twenties, Heisenberg, Schrödinger; by the mid-thirties it was a done deal.
We've been struggling with this for thirty years. The Discovery Institute Center for Science and Culture, I think, is just celebrating its 30th anniversary this year. It's just much tougher to get these ideas across, because materialism is so entrenched in the scientific world.
[00:19:11] Speaker B: You mentioned history. You actually review the history of the law of conservation of information in your monograph, bringing up several thinkers of the 19th and 20th centuries who made this point in their work.
So this is not a new idea. You're combining things and adding mathematical rigor.
Let's zoom in on a few of the thinkers who entertained these ideas before you. One of the earliest references to the law that you found in the literature was from the French physicist Léon Brillouin, who stated in a 1962 book that "the computing machine does not create any new information, but it performs a very valuable transformation of known information."
I like that quote. It highlights something called the data processing inequality. So how does that relate to the law?
[00:20:02] Speaker A: Well, I think it's an earlier, perhaps pre-theoretic version of it. Pre-theoretic in the sense that he doesn't try to formulate it as a mathematical theorem, although he clearly had theoretical chops and knew what was going on with computing machines: how the output was always subservient to the input, and how the output couldn't in some sense exceed the input. And in fact, the very term conservation of information, I think it arises in this sort of context.
I think his was the first use of it that I found.
And then law of conservation of information: I think Peter Medawar, the biologist, wrote about that in the eighties, and I think he was also thinking in a computational context. But in a computational context, those machines were deterministic, whereas conservation of information as I've developed it also applies stochastically, where you've got chance processes. When you've got determinism, probabilities are always 0 and 1. So in the inequality I described, the probability of E is greater than or equal to the probability of E given the amplifier times the probability of the amplifier, all those probabilities will be 1 or 0 in a deterministic context.
There it's actually much easier to see. It's when the probabilities are small, when you really do have contingency and not just things operating by necessity, as they do in a deterministic computational context, that things become more complicated.
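One way to read the deterministic case, using the same notation as before (ours, not the monograph's):

```latex
% Deterministic case: all conditional probabilities are 0 or 1.
% If the machine (amplifier A) is certain to produce the output E, then
P(E \mid A) = 1 \quad\Longrightarrow\quad P(E) \ge P(A)
% The output can be no more improbable, and so carry no more information,
% than the input that deterministically produced it.
```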
[00:21:54] Speaker B: And it goes further back, too. In 1843, Ada Lovelace, commenting on Charles Babbage's Analytical Engine, made the famous observation that the machine has "no pretensions whatever to originate anything."
So does that further lay the groundwork for what you're putting together here?
[00:22:13] Speaker A: Yeah, it's all in the same vein. It's this idea of input and output: you're getting this output, but is that output doing something fundamentally different from the input?
You can ask that in terms of creativity, but I put it in probabilistic terms, because that's where you can actually do some mathematics. I think her point is still valid, and it applied as much in her day. Babbage's Analytical Engine was meant to be a mechanical device; because he didn't have electronics, he could never actually get it working. But he and Ada Lovelace at least saw the potential in that machine.
And they saw that there was nothing fundamentally new that you could get out of that machine that had not been inputted. Now, you can mine it, right? And I'm very much of the view that large language models do interesting stuff. They can draw connections; they can show you things you may not have suspected. And yet, at the end of the day, I would say they're not fundamentally creative. If you doubt that, ask one to prove a new theorem that will get you tenure someplace. It's never answered that question for me. I'm being a bit facetious here, but I think all these systems have limitations. And this is another thing we're facing now: this sense that AGI, artificial general intelligence, and ASI, artificial superintelligence, are right around the corner; that somehow these systems will turn in on themselves, learn so much from themselves, and just become superintelligences. I would say Ada Lovelace's insight applies, and I would argue that that will not happen. I would say that conservation of information actually shows that it can't happen.
You're not going to get those new spikes in creativity, those highly improbable, high-surprisal things that you would not expect, coming out of these systems.
And there's, I think, a growing literature on these systems collapsing when they're trained on their own outputs.
Because the idea with these systems is that they're going to get so good that they can basically just learn from themselves and find things on their own; that they can become self-contained systems that just get better and better. In fact, it seems that they still need to be trained on what humans actually produce.
So I think the preliminary indications are that, whatever your aspirations for AGI and ASI, LLMs are not going to be the way to get there.
[00:25:49] Speaker B: Yeah, definite limitations there.
And over time the information contained therein can degrade,
especially if you're just taking the output and training the next model on it.
The same principles will apply in the end. Now, you've described the law of conservation of information as an accounting tool that can be used to track the cost of success in any search.
Kind of like a balance sheet, if you will; a balance sheet for information.
So, in plain English, before we end this episode, can you explain the workings of that? The difference between the uncertainty we start with, which is called endogenous information, and the informational cost of finding a shortcut to solve a problem. How do these measures prove that if a search seems to be beating the odds, someone had to make a prior deposit of information to make that possible?
[00:26:44] Speaker A: Yeah, I think what you're describing is just another way of putting what I framed in terms of the probability of an event and the probability of that event given an amplifier. We called these endogenous, active, and exogenous information.
That was in some articles I was writing with Bob Marks and others in the late 2000s and early 2010s. That's the language we used because we were also working in a strictly information-theoretic context.
I'm now putting it much more in terms of probabilities, because transforming things into information measures isn't as clear, especially when I'm trying to reach a general audience, as I am through this podcast. But that's the basic idea; that example captures it. You've got a low-probability event that you have to account for. If you don't think you have to account for it, if you don't have to explain how you found that Easter egg, how you got that needle out of the haystack, fine.
But usually people want to know what's going on. Did you have a magnetic glove that pulled that needle out of the haystack, or were you just searching for it? Because if you were just searching for it, basically straw by straw, you'd probably never find that needle. So you've got that improbable event, and then, because the event happened, something amplified the probability of that event. And then: what's the probability of the amplifier?
And the probability of that amplifier, multiplied by the probability of the event given the amplifier, will never exceed the probability of the event by itself. So there's this diminution. The reason we speak of strict conservation of information is that the best you can do is equality: the probability of the event equals the probability of the event given the amplifier times the probability of the amplifier. But the product can actually be less than that.
And it becomes less than that when a non-amplifier, something that didn't amplify the event, has positive probability, and the probability of the event given that non-amplifier is also positive.
So the inequality can be strict, with the product strictly less, or it can be an equality. If it's an equality, then conservation of information holds exactly: to simplify a little, the probability of the amplifier matches the probability of the event in question. But it can be strictly less. There are often frictional costs in bringing about the event through an amplifier, and those frictional costs also need to be on the balance sheet. That's part of this conservation of information inequality; that's the basis of the idea.
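As a sketch of that balance-sheet accounting in bits (the function name, variable names, and framing here are ours, purely illustrative, not from the monograph):

```python
import math

def information_balance_sheet(p_event, p_event_given_amp, p_amp):
    """Report the conservation-of-information inequality P(E) >= P(E|A) * P(A)
    as a balance sheet in bits. Names and framing are illustrative assumptions."""
    endogenous = -math.log2(p_event)            # cost of hitting E by blind chance
    exogenous  = -math.log2(p_event_given_amp)  # residual cost once the amplifier is in hand
    amplifier  = -math.log2(p_amp)              # cost of obtaining the amplifier itself
    # In bits, P(E) >= P(E|A) * P(A) becomes: endogenous <= exogenous + amplifier.
    slack = (exogenous + amplifier) - endogenous  # frictional cost; >= 0 if the law holds
    return endogenous, exogenous + amplifier, slack

# The wet/rain example with the illustrative numbers used earlier:
start, total, friction = information_balance_sheet(
    p_event=0.25, p_event_given_amp=1.0, p_amp=0.2)
print(f"endogenous: {start:.2f} bits, total cost: {total:.2f} bits, "
      f"friction: {friction:.2f} bits")
# endogenous: 2.00 bits, total cost: 2.32 bits, friction: 0.32 bits
```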
[00:30:09] Speaker B: And to be honest, I think everyday folks can grasp this. We can smell a rat. We can use our common sense and apply what Doug Axe has called our design intuition.
[00:30:21] Speaker A: Right?
[00:30:21] Speaker B: We're built to be able to detect design, and also to detect the cost of things.
So it's just the learned ones, the ones who went to college and got trained on a reductionist view of biology, whom you have to really retrain.
But I appreciate the effort you've put into this, making it as clear as possible and as simple as possible.
And I appreciate your time today unpacking this. In a separate episode, we're going to come back and continue the conversation.
In particular, how to apply it to the evaluation of evolution, how it can illuminate the rarity of function in a sea of genomic search space, and how the law can support arguments for intelligent design. So I hope you'll come back and join us for that; don't miss that conclusion. In the meantime, do yourself a favor and read Dr. Dembski's monograph, The Law of Conservation of Information. It's like a short book that will give you all the detail you need to properly wrap your head around this provocative concept. And best of all, it's free: there's a free download at the website of the journal BIO-Complexity. We'll include a link to it in the show notes for this episode. Until next time, I'm Andrew McDermott for ID the Future. Thanks for joining us.
[00:31:41] Speaker A: Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.