Episode Transcript
[00:00:00] Speaker A: ID the Future, a podcast about evolution and intelligent design.
[00:00:10] Speaker B: How accurately does AI portray the theory of intelligent design? Will large language models and chatbots like ChatGPT and Grok level the playing field for intelligent design? Or will this emerging technology simply reinforce the same stereotypes and misconceptions about intelligent design that have been around for decades?
Well, today I'm delighted to spend some time with mathematician and philosopher Dr. William Dembski. Dr. Dembski is a founding and senior fellow with Discovery Institute's Center for Science and Culture and a distinguished fellow with the Institute's Walter Bradley Center for Natural and Artificial Intelligence. He's a graduate of the University of Illinois at Chicago, where he earned a bachelor's in psychology and a doctorate in philosophy. He also received a doctorate in mathematics from the University of Chicago in 1988 and a Master of Divinity degree from Princeton Theological Seminary in 1996.
He has held National Science Foundation graduate and postdoctoral fellowships. Dr. Dembski has published in the peer-reviewed mathematics, engineering, biology, philosophy, and theology literature. He is the author or editor of more than 25 books, most recently a brand new edition of The Design Inference, co-authored with Winston Ewert. Dr. Dembski, welcome back to ID the Future.
[00:01:32] Speaker A: Good to be with you, Andrew. My pleasure, yeah.
[00:01:35] Speaker B: So I'm kicking myself for not having had this discussion with you a bit sooner, because in 2023 and 2024 you were writing about putting AI and large language models like ChatGPT and Google's Bard through their paces with regard to how they convey information about intelligent design. So I wish we had connected sooner, but I dare say, you know, the topic is getting to be evergreen these days. The technology isn't going anywhere. If anything, it's making an even bigger impact on our lives every year that goes by.
So it's still a timely topic to explore.
Now, first, early in your career, did you get the sense that large language models were coming down the road and could do the things that they can do today? And how do you think your training in math and philosophy prepared you to evaluate the accuracy and truthfulness of AI technology?
[00:02:28] Speaker A: Yeah, it's an interesting question, because I've been interested in artificial intelligence at least since the early 80s. I sat in on an artificial intelligence course with a professor, was it Laurent Siklossy? He had a book called Let's Talk Lisp, and Lisp was list processing. That was the state-of-the-art AI language back then. And I probably did more programming in Lisp than in any other language I've programmed in. So I've watched this field over the years.
You know, there were these attempts to have chatbots early on. I mean, there was the ELIZA program.
What was it named? Weizenbaum, I think. Was he the one who came up with it? Where basically what you did was you had a mechanism for taking somebody's prompt, something they said, and then reworking it, or keying off of certain key words there, and basically trying to pretend that you're a Rogerian therapist. Okay, Carl Rogers. The idea was to have non-directive therapy. So instead of actually telling somebody to do something or coming up with a new idea, you just reflected back what they were saying.
So, you know, "My boyfriend never brings me flowers." And so it would key off the word never: never, that's usually too strong; do you really mean never here? Or something like that. So you reflect back. And even back then, 40, 50 years ago, you'd have people who would be doing these sorts of chats. But it would be very easy to trip these systems up, because they were basically doing grammatical rearrangements of text, you know. And so over the years, I mean, the gold standard for artificial intelligence was getting some computer to pass the Turing test.
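To make concrete what that kind of grammatical rearrangement looked like, here is a minimal Python sketch of ELIZA-style keyword reflection. It is illustrative only, not Weizenbaum's actual code; the two rules and the pronoun table are hypothetical stand-ins for the much larger script ELIZA used.

```python
import re

# Pronoun swaps for "reflecting" a statement back at the speaker.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

# (pattern, response template) pairs; made-up rules, far smaller
# than ELIZA's actual script.
RULES = [
    (re.compile(r"\bnever\b", re.I),
     "Never? That's usually too strong. Do you really mean never?"),
    (re.compile(r"\bmy (.+)", re.I),
     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(prompt: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(prompt)
        if match:
            # Purely grammatical rearrangement, no understanding involved.
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default non-directive reply

print(respond("My boyfriend never brings me flowers."))
# -> Never? That's usually too strong. Do you really mean never?
```

Because the program only pattern-matches and rearranges, anything outside its rule table falls through to a canned reply, which is exactly why these systems were so easy to trip up.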
And you know, as the years progressed, there was really no progress on that. If you go back, I don't know, about eight years ago or so, you'd have these competitions yearly: how close are we to passing the Turing test?
And I think a few years back there was one chatbot that fooled some people by pretending to be a Ukrainian teenager who was just smack-talking. Basically the smack-talking allowed the programmers to get away with a lot of stuff.
So against all this backdrop, I didn't see these generative AI large language models coming. I thought the Turing test was going to be a problem forever.
And I think that's no longer the case. These large language models could certainly pass a Turing test. Where they might fail is that they produce their answers too quickly. So you might say, oh, this is a robot, because it just responded with this complete answer too quickly. And there are ways, I think, to trip up these large language models. But in general, if you're unsuspecting and you prompt the system to act in a certain manner, I think it could pass the Turing test now. And that's something that has just happened in the last few years. So no, I didn't really see this coming.

But now, in retrospect, given what was done to make these systems happen, I don't think they're as impressive as I would have found them if I had just been told, okay, here's something that passes the Turing test, without knowing how they were doing it. Because basically they've been trained on the entire corpus of everything humans have ever written. I mean, that's a slight exaggeration; there are certain books and things that are outside of it, but it's just an enormous amount of humanly constructed text. And then they've been trained extensively on top of that. So whatever they've achieved, they've achieved it in a way that humans haven't. I mean, one of Noam Chomsky's points about how we learn language was the poverty of the stimulus: we learn language with very little experience of the language. You could even be in deprived environments; kids will still pick up language. Whereas these systems need basically everything we've said to be successful at what they're doing.
But even so, I mean, I think they're extremely impressive. I mean, this is the biggest thing that's happened in artificial intelligence in the 45 or so years that I've been following the field.
So I think it is a big deal. But I don't think we're poised to achieve artificial general intelligence, and you see this sort of thing advertised widely now.

And I think the systems themselves have some problems. I mean, they can do some impressive things, but I think they also have shortcomings.
[00:08:07] Speaker B: And of course, your background is in mathematics, but also philosophy, and I think that creates a unique place for you to evaluate these things from two distinct perspectives. Now, in one of your posts, you make the argument that AI will help to level the playing field for intelligent design. That's obviously a very hopeful thing for all of us here in the intelligent design community.
We want that level playing field, and we've been fighting for it for so long. You say it will do an end run around big tech's collusion with gatekeepers like Wikipedia and release that control over what you call our cognitive real estate.
Now, let's unpack that for just a few minutes. First of all, what makes LLMs, large language models like ChatGPT or Grok or Google's Bard different than Wikipedia?
[00:09:00] Speaker A: Well, it's certainly going to be trained on Wikipedia. But I think where you can get around Wikipedia is some of the biases. I mean, Wikipedia is just this static entity.
Yeah. I mean, it changes as a consequence of people editing it. But the editors also control what can change. And for hot topics, for controversial topics, the editors keep very tight control. I mean, I think anybody in the intelligent design community has witnessed this. When I've tried to correct errors in my Wikipedia entry, it automatically reverts. And then there's just the way our articles are constructed. For instance, my Wikipedia entry makes absolutely clear that I'm an intelligent design proponent. But they identify that with creationism and say that this is a pseudoscience; they get all these keywords in there.
I think the large language models are trained on Wikipedia, but they're trained on a lot more.
I think what you can do, and it's all in the prompt, is say: give me an account of intelligent design as somebody who is sympathetic to this view might present it. Because Wikipedia does not try to give you something dispassionate. It gives you the standard methodologically naturalistic line about intelligent design. So you can say, well, don't take that line. Give me a different line.
Give me a different beat on it. So I think that will allow for something of an end run around the bias. There are ways, but you have to elicit that from these large language models. I think the default will still be to mouth the Wikipedia line on these things. But at least there's the possibility of that.

Years back, I approached Wikipedia because I was trying to get my bio cleaned up, and I broached the possibility: what about just giving people who are being criticized, if you're the subject of a bio, a thousand words to respond? You can still have the last word, you can still do anything. But they categorically refused to do that. That would, I think, help, because on a lot of controversial topics, global warming, say, or climate change, there's going to be a certain bias. Allow people on the other side to have at least a thousand words; you can still say whatever you want to refute it. But that's not the way Wikipedia works. So that sort of possibility, I think, is more in play with these large language models. There's the possibility of bypassing some of this bias.

But you have to also ask yourself, who are the people behind these large language models? If they want to ensure that there is a bias, they can train the model so that there are going to be certain blacklisted words, certain things that are going to have to elicit a negative response. But that becomes draconian. And then also you have all these different models out there now. You've got ChatGPT, but you've got Grok, you've got Claude. And so I think if a model gets too biased, people will vote with their feet and go elsewhere.

And just inherently, it's one thing to give ChatGPT, for instance, a paragraph and say, rewrite this. Okay, well, it'll rewrite it. But then you could say, well, rewrite it in the style of James Joyce, or rewrite it in the style of this or that, and then you're going to get a different sort of response. And that's where being a prompt engineer, being judicious in your prompts and guiding it, matters. In the law, there's talk about leading the witness, and you don't want to do this; you want the witness to speak for themselves. But with large language models, I've found that you can lead them in certain directions and then they'll give you perhaps what you want. Or you can make them look foolish by getting them to say things that they wouldn't otherwise admit to.
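To make the prompting point concrete, here is a minimal sketch of posing the same question with and without a sympathetic framing. It assumes the OpenAI Python client with an API key in the environment; the model name and the exact prompts are illustrative, not a record of the sessions discussed here.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any current chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Default framing: tends to return the standard summary.
print(ask("Give me an account of intelligent design."))

# Reframed: the same question, from a sympathetic vantage point.
print(ask(
    "Give me an account of intelligent design as somebody who is "
    "sympathetic to this view might present it."
))
```

The same underlying model answers both calls; only the framing in the prompt changes what it is willing to say.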
[00:13:57] Speaker B: Yeah. And you can't really do that with Wikipedia. As you say, it's static. It's just there. It's governed by these editors that don't want to allow a more fair and balanced approach to the gathering of facts. I mean, so much for the People's Encyclopedia. It really isn't.
[00:14:17] Speaker A: Well, I would say it's not entirely static. I mean, it's growing. When there are big news items, it grows. But there's this sense of incrementalism. And then there is also this bias that's there, certainly with controversial articles.
[00:14:36] Speaker B: Yeah. I mean, like you say, to this day it calls intelligent design, quote, "a pseudoscientific form of creationism," unquote.
And anybody connected to it, of course, is a pseudoscientist of some stripe. And the moment you try to correct that, the editors get an alert. That's what I realized: I tried to make a few changes to the entry for Expelled, and it was reverted within seconds or minutes, so that had to point to them getting an alert. But in any case, they're standing by trying to guard it, and I'm not really sure why. What are they afraid of? But at least with large language models, you can play around with it a little bit more. As you say, you can guide it, you can push it, and you can make it acknowledge that it doesn't really know the answer, because it's dealing with so much more data.
Now, you've conducted full interrogations with LLMs like ChatGPT and Bard. In one conversation, before you mentioned intelligent design, you actually spent some time setting the stage with the LLM. You first asked it to provide some examples of scientific claims that had been tested and confirmed in history.
Then you asked it about SETI, the Search for Extraterrestrial Intelligence, and to what extent SETI's claims were testable and had been confirmed. Now, why was it useful, in that instance, to start with questions about SETI? And how did ChatGPT, which was the fourth generation you were working with, fare when you later asked it about ID?
[00:16:11] Speaker A: Well, I mean, I wanted to set it up, and I did this stage-setting because with intelligent design, so often the claim is made that it's untestable. So I wanted ChatGPT to admit that SETI is testable, and then also say, okay, well, what makes it testable? Because it's not like you see the little green guy who's sending the radio signal that we detect to be designed. All you've got is a radio signal.
And then the question is, naturally, well, if you have a signal from outer space, what about a signal from a cell that's telling you that there's information there? So that's where I was pushing it. And it's been probably a year, year and a half since I did that. So why don't you tell me, since you've looked at it: what was I able to elicit from ChatGPT with that? Did it admit to full-blown, full-throated intelligent design?
[00:17:16] Speaker B: You had it walk through the scientific claims of SETI and how they could be tested. And indeed it fed you back some bona fide examples of testability.
And it did acknowledge that thus far, you know, precious little has been confirmed through these claims. But it did note that the non-finding of evidence was not confirmation that it had failed as a research project or that there was no extraterrestrial life. So it was willing to make those points.
[00:17:51] Speaker A: Yeah, which is right, because even with no evidence in hand, it is a scientific theory in the sense that such evidence could arise. And I think one of the key indicia or criteria was what they were calling technosignatures.

So there's a signature of technology in the radio signal. There's a lot of random radio noise, but there could be something that would stand out from it. So could such a technosignature be in life? Okay, and what was the response?
[00:18:28] Speaker B: Yeah, well, you said it was pretty miserly in its acknowledgment of intelligent design's testability compared to SETI.
It didn't rule it out completely, but it was, as your word was, miserly. It just wasn't willing to admit as much. And I'm wondering: that obviously wasn't coming from the pureness of the AI itself, but more from the knowledge base of critical opinions toward intelligent design that it's probably been fed.
Would you say there's a difference there between what it might admit in a pure sense versus just what it has been fed? Or maybe those are the same thing?
[00:19:11] Speaker A: Well, yeah, I would agree.

It seems that if you were just looking at parity of reasoning, the same sort of reasoning happens in SETI as in intelligent design, so the testability of intelligent design in principle should have been on the same plane as that of SETI. But it did seem that with intelligent design you have actual systems, information-bearing molecules.
You don't have anything like that, anything concrete, in SETI. And yet it seemed to be willing to give SETI much more credence as a scientific theory than intelligent design. So I think that's right. But I could have helped matters if I had just said, okay, now look at this from the vantage of an intelligent design theorist. How would they argue this? And then I think you would have gotten a pretty good rendition, I suspect, of what we in our community say, because it has been trained on our material as well. I mean, I've asked it to give a summary of my work. Again, most of what I've done with intelligent design and large language models is probably a year, year and a half old at this point; that's why I was asking you to refresh my memory. But yeah, I think the knowledge is there; it's a question of prompting it and eliciting it.

But I've also had it happen, and I don't know if you saw this one, where I was trying to get it to allow for intelligent design at the origin of life. And so I just kept hammering it with questions about what are possible ways of explaining the origin of life. And it went through all these different proposals that are out there, proposals most of which I knew. And then I would say, but this fails because of that; this is inadequate; it doesn't account for this.
And then every time I would refute it, I think it would usually say, yeah, you make a good point, or whatever, because it's also, I think, programmed to be polite. And then I would question, okay, is there anything else? Is there anything else that you might come up with? Is there anything else? And so I'd be getting all these different sorts of materialistic scenarios, but it just would never go there.
And then I think I even said, well, what about religious possibilities? And then I think it did get into creation, but it was hard to finally get it to consider intelligent design. I think I had to basically let the cat out of the bag and just say, okay, what about this? And then finally it did address it. But it's as though it would just not put intelligent design up as a viable candidate for the origin of life against all these other materialistic scenarios. But again, I think you can get around that by just saying, okay, bracket all of this materialistic stuff. Tell me how an intelligent design person would see this. Can you explain it to me in those terms? And then typically it will. This was my experience of it, again, about a year, year and a half ago.
Is it still going to do that?
I don't know. Has the training changed a whole lot?
I don't know if I'd say that I thought these models have been getting smarter. I think they're getting more polished in how they present and package information. They usually like to have a little preamble, give you a bunch of bullet points, give you a little conclusion. In terms of actual insights, getting something that really gets to the core, shows some deep insight on a subject, or adds something that you might not have thought of, at least on the question of intelligence, I'm not sure. I would have my doubts. I hesitate there for a moment because it does seem like it can be useful. I mean, I've had it draft business plans for me, or mission statements, taglines, business value propositions for various business enterprises I'm involved with, and it's been interesting. It's a good sounding board for a lot of things. So I like the technology, but again, I think you have to prompt it accordingly, you have to elicit what you can from it, and then you have to critically examine it. And I think this gets to a point that you and I corresponded about, with the hallucinations or the trustworthiness of these systems.
[00:24:21] Speaker B: Now, speaking of seti, you asked an intriguing side question in some of your writing. You said, what if biologists looked for signs of intelligence in living things with as much enthusiasm and tenacity as SETI researchers look for signs of extraterrestrial intelligence? That's a good question.
[00:24:40] Speaker A: Yeah, I think so.
[00:24:41] Speaker B: And perhaps, you know, there'll be a new generation of biologists that will take a top down approach. I mean, we're in the intelligent design community starting to spread the word about systems biology and how that can help systems engineering principles and how that can train you to look at organisms differently. So maybe we'll, we'll see that in the future. But.
[00:25:04] Speaker A: Well, you know, I think we are seeing it, but it's a question of where the biological mainstream is on this. And I think they're kind of schizophrenic. On the one hand, insofar as they're actually gaining insights into these systems, it's because they are acting as systems engineers, reverse engineers, mechanical engineers, electrical engineers, whatnot. But when it comes to actually trying to understand how these systems originated, I think they leave all that by the wayside and adopt a Darwinian materialistic view: that some form of Darwinian process is responsible for everything that we see in living forms, and that natural selection has all this creative potential to produce the information that we find in biological systems.

And I would say it's there that theoretical work on design inferences and conservation of information shows that these Darwinian processes simply don't have the creative power, do not have the information-rich resources, to bring these systems about. I think the argument is on our side. I think we've actually nailed this down pretty well, in fact very well. But getting people to recognize it has been difficult. I mean, coming back to Wikipedia, take this very notion of conservation of information. I haven't looked at it lately, but last I looked, it was about 20 years out of date, the sort of literature they were citing. And the very first line is that this is a creationist argument by William Dembski. Well, conservation of information is actually an idea that's been out there for quite some time. But so be it.
This is what we deal with.
[00:27:02] Speaker B: Yeah, and I think you are right. We are winning the argument and getting those arguments out there. But I guess now it's going to be important to get the arguments into the training data and the knowledge bases of these AIs so that it balances out the materialistic biases that they're going to suffer from.
[00:27:22] Speaker A: Well, I don't know if it's a matter of getting it in there. I think the material is there, and I think the makers of these AI systems try to be as comprehensive as possible. I don't think they want to censor certain big swaths of data, because if they do that, there are going to be other models out there that are better than they are. So I think they have to have that information all embedded in themselves.
So that means it's going to be available. And so I keep using this word prompt, and also elicit: you've got to draw it out of these systems. You mentioned the pure form. In the pure form, you just ask a straightforward question and it gives you a straightforward answer.
But if bias has been trained into it, then you're going to have to prompt it in a way that gets around the bias. So it could be as simple as: in your training, you've probably been biased to think of science as a methodologically naturalistic enterprise. But imagine that you did not have to explain things simply as the consequence of unguided material processes, but that teleological factors also had a place in scientific explanation. Teleological factors don't presuppose a supernatural source. Somebody like Francis Crick has talked about directed panspermia being responsible for life on Earth, life being seeded, being created, by extraterrestrial aliens who would be material beings.
So given all of that, now tell me about this or that aspect of intelligent design. And I mean, it does listen. My experience with these large language models is that they do, quote unquote, listen to you. When you set up the problem, they take that into account.
And it makes sense that they would, because the whole way these systems work is that you give the prompt and the model continues it. It's generating these tokens in response to the prompt: you give the prompt and it generates a token. Then it looks at the prompt and the token and says, okay, what's the next token? It looks at everything: what's the next token? It's just, as it were, continuing on the trajectory that you started with the prompt.
And so it does take it into account. It can't just say, oh, I'm just going to ignore these three lines that Dembski just put in there.
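To make the token-by-token picture concrete, here is a toy Python sketch of an autoregressive decoding loop. The bigram table is a hypothetical stand-in for a trained model, which in reality conditions on the entire context rather than just the last token; the point is simply that the prompt stays in the context and shapes every subsequent choice.

```python
import random

# Toy "model": maps the last token to a distribution over next tokens.
# Hypothetical stand-in -- a real LLM conditions on the whole context.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"signal": 0.5, "cell": 0.5},
    "a": {"signal": 0.5, "cell": 0.5},
    "signal": {"<eos>": 1.0},
    "cell": {"<eos>": 1.0},
}

def generate(prompt: list[str], max_new_tokens: int = 10) -> list[str]:
    context = list(prompt)  # the prompt is never discarded
    for _ in range(max_new_tokens):
        dist = BIGRAMS.get(context[-1], {"<eos>": 1.0})
        # Sample the next token in proportion to its probability.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<eos>":  # stop at end-of-sequence
            break
        context.append(token)  # the new token joins the running context
    return context

print(generate(["<start>"]))  # e.g. ['<start>', 'the', 'signal']
```

Because each new token is chosen given everything already in the context, a framing preamble at the start of a prompt keeps influencing the output all the way to the end.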
So I think that's one reason why it's a technology that does circumvent a lot of the biases of Wikipedia. But you have to know what to ask.
[00:30:30] Speaker B: Yeah, some very good points there. Well, another point you make in your writing: at one point, ChatGPT asserts that the supernatural implications of ID are problematic for science, since science, it says, relies on methodological naturalism, which is, of course, the assumption that all phenomena have natural explanations. Now, you call methodological naturalism sort of a red herring in discussions over ID. Can you explain that?
[00:30:58] Speaker A: Yeah, well, in fact, I just spoke to that when I mentioned Francis Crick and his theory of directed panspermia. Panspermia is the idea that life on Earth was seeded from asteroids or meteors, which brought some sort of protocell, some biological information, to Earth.
So life hitched a ride. But directed panspermia is that it wasn't just some randomly moving meteorite; it was space aliens in spaceships coming to Earth and seeding it intentionally, and perhaps even creating that life form in the lab. So, you know, very advanced space aliens.
And the thing is, that would be a naturalistic explanation of life on Earth. And I think you mentioned Expelled, and your making a change on that. Well, for me, the most important section of that documentary was when Ben Stein sat down with Richard Dawkins and interviewed him. I think the segment was about two or three minutes.
But Dawkins in the interview admits that there could be signs of intelligence in living systems, and that those could clearly implicate an alien intelligence, a space alien that seeded life on Earth. But then he immediately adds, okay, but that space alien would himself, herself, itself have had to evolve by some acceptable naturalistic means, that is, by natural selection. So he pushes the problem further back.
But the thing is, that does allow that we are here by design. It would be a design that would be compatible with materialism. And I think it becomes also something of a throwaway for him to say, okay, and that alien would have had to evolve by some naturalistic means. Well, maybe that alien was himself seeded by some other space alien, and you can keep going back. There's a regress there, but it's not at all clear that you have to end that regress in some material process.
Maybe the material process is inadequate for bringing about the sort of complexity that we need for biology. That's been a standard intelligent design talking point: that natural selection doesn't have the creative power that somebody like Richard Dawkins says it does. But to your point, the issue with intelligent design is this: intelligent design looks at existing material objects and asks, are there patterns in those objects that would tell us they are the consequence of an intelligence? So you look at Mount Rushmore.
Yes, that pattern requires an intelligence. The mountainside next to it can well be explained by wind and erosion, but not Mount Rushmore.
So how did that arrangement come about?
You can know that it's intelligently caused, but you need not know what the source is. Does it have to be supernatural? It could be, if there is a supernature. But you are still confronted with the fact of intelligent design even if you don't know the precise causal details. And that's why I say the supernaturalism is a red herring: you know that you're dealing with intelligence. The nature of that intelligence is a subsequent question, and you may bring in theological or philosophical resources to explain it.
But then, as a materialist, you're also confronted with your own philosophy or worldview. I mean, what commits you to this, that matter is all there is? And there are certain contradictions that come up if you take a purely materialistic line.
Because if matter is all there is, and if all we are is an assemblage of matter and motions and modifications of matter, how do we have something like consciousness?
How do we have knowledge? What is it about the configuration and dynamics of the material things that make us up, the atoms, electrons, whatever, that allows us to have knowledge of the world? What is the connection between those atoms that are bumbling about in my brain and my knowledge of the tree that I'm looking at? How is this sort of aboutness of these material configurations telling us truths about the world?
How do you explain that? And I think on materialistic grounds, there is no explanation for that.
And if you don't have a basis for knowledge, then how do you have knowledge about materialism? So it becomes this snake that eats its own tail. This is a point that C.S. Lewis and Alvin Plantinga have made: that materialism defeats itself because you can't account for knowledge, you can't account for things like consciousness. And just to put in a plug for Michael Egnor and Denyse O'Leary's new book, The Immortal Mind, this is an argument that they make there as well. So I would encourage people to read that.
[00:36:32] Speaker B: That was Dr. Bill Dembski talking with me about generative AI and whether it will level the playing field for intelligent design.
Now, in a separate episode, the conversation continues. We talk about truth and trust in AI and why it's important to independently verify what LLM chatbots tell us.
We discuss to what extent AI large language models have a connection to truth, and whether there's hope that AI can indeed level that playing field and communicate concepts about intelligent design with accuracy. So don't miss part two of this conversation. Until next time, I'm Andrew McDermott for ID the Future. Thanks for joining us.
[00:37:13] Speaker A: ID the Future, a podcast about evolution and intelligent design.