[00:00:04] Speaker A: ID the Future, a podcast about evolution and intelligent design.
Can evolutionary processes take credit for human creativity?
Welcome to ID the Future. I'm your host, Andrew McDermott.
Today I'd like to share with you a conversation that first aired on Mind Matters News, another podcast from the Discovery Institute that focuses on the intersection of artificial and natural intelligence.
In this episode, guest host Pat Flynn welcomes Dr. Eric Holloway and Professor Robert J. Marks to discuss the information cost of creativity.
The conversation is based on a chapter in the recent volume Minding the Brain, authored by Dr. Holloway and Dr. Marks.
Essentially, they are addressing the following: can the marvels of human creativity, like novels, speeches and ideas, really be explained by random processes and brain chemistry alone? As Holloway and Marks explain, even allowing for the computational capacity of the entire universe and a hypothetical multiverse, the probability of randomly generating a short, meaningful phrase is astronomically low. This suggests that human creativity cannot be fully explained by natural random processes and may require a non-material or external source of information and guidance.
Let's listen in as Flynn and his guests climb the metaphorical mountain of information to address the origins of human creativity.
[00:01:39] Speaker B: Hello everybody and welcome back to the Mind Matters podcast. I am the guest host today. My name is Pat Flynn and we will be discussing the information cost of creativity.
This is based on a chapter in the book authored by Dr. Eric Holloway and Robert J. Marks, the book, of course, being Minding the Brain, of which we've had many great conversations through the past several months. And essentially they are addressing the following question: can the marvels of human creativity, novels, speeches, ideas, really be explained by random processes and brain chemistry alone? Let us climb the metaphorical mountain of information and find out. My guests today are, as I mentioned, Dr. Eric Holloway and Professor Robert J. Marks. Gentlemen, it is a pleasure to be with the two of you today.
[00:02:27] Speaker C: Thank you, Pat.
[00:02:28] Speaker D: Yep, thank you. Great to be with you.
[00:02:29] Speaker B: Yes. So, you know, it's been a while since Dr. Marks and I have spoken. Eric, this is the first opportunity to connect with you, and I'm grateful for that. I think we should do some of the usual formalities, if the two of you wouldn't mind just briefly introducing yourselves, who you are, what you do, before we get into today's episode. I think that would be just swell. So, Eric, why don't we start with you, if you don't mind.
[00:02:51] Speaker D: Hi, my name is Eric Holloway. I'm currently working as a consultant for machine learning and artificial intelligence, and I have a doctorate in electrical and computer engineering from Baylor University.
[00:03:05] Speaker C: And my name is Robert Marks. I'm the co-editor of Minding the Brain. I'm a distinguished professor of electrical and computer engineering at Baylor University and the director of the Bradley Center for Natural and Artificial Intelligence at Discovery Institute.
[00:03:21] Speaker B: Excellent. Well, this is going to be a fascinating conversation. This is something that, again, is pretty much on everybody's mind, especially with the advent of artificial intelligence and large language models. This idea of creativity. Before we start to dive too deeply into this topic, I think we should spend at least a little bit of time talking about what we mean by creativity. At least a sort of minimal working understanding so we know exactly what the target is that we're after. So I'll just throw that out there and whichever one of you wants to help answer that question first, please go for it.
[00:03:54] Speaker C: Yeah, I'll give it a try, and then I'd like to hear what Eric says. I like the definition of creativity offered by Selmer Bringsjord at Rensselaer Polytechnic. He's been a guest on Mind Matters and is a contributor of two chapters in Minding the Brain. Very brilliant guy. But he proposed a number of years ago the idea of the Lovelace test. And this is creativity for computers.
And a computer, including a computer running generative AI, is going to be creative if it does something beyond the intent or the explanation of the programmer, or, I think in modern terminology, the programmers, because there are lots of people contributing to the overall program that's being generated.
And this kind of puts a little buckle in the idea of AI writing better AI, because if AI writes better AI, this new AI that's being generated has to be beyond the intent of the original programmer.
And it does turn out that computers do write other computer programs, but nothing which is more creative, nothing that is beyond the explanation or intent of the original programmer. So that's how I would define creativity: through the Lovelace test, Selmer Bringsjord's Lovelace test.
[00:05:13] Speaker B: Yeah, I had an excellent conversation with him not too long ago. I encourage people to check that episode out if they haven't listened already.
[00:05:19] Speaker C: Oh, that's right. You did interview Bringsjord. That was a great episode, by the way.
[00:05:24] Speaker B: Yeah, yeah. Really enjoyed it. So highly recommend people check that out.
[00:05:27] Speaker C: Yes.
[00:05:28] Speaker B: Eric, is there anything you want to add to what Dr. Mark said there?
[00:05:30] Speaker D: Yeah, I think that's a great definition of creativity, one that I would completely agree with. And the one addition I'd make is that it's important to distinguish creativity from randomness, because on the naturalist side, they'll say, oh, we get our creativity through random mutations and so on. But creativity is fundamentally different from randomness. Randomness in one sense you could actually say is still deterministic, because all stochastic processes will follow some kind of probability distribution. But in that case you still fall prey to Bringsjord's definition of creativity, because once it's following a probability distribution, in some sense you know what to expect. But creativity can't even be defined by a probability distribution, so it's also not random. So that's the one addition I'd make.
[00:06:20] Speaker C: I think it was Aristotle that said there's one way to get something right and billions and billions of ways to get it wrong.
So randomness in terms of creativity follows that Aristotle premise that yeah, there's one way to get it right, but man, there's billions of ways to get it wrong. So if you just do processes by flipping a coin, it ain't going to happen.
[00:06:41] Speaker B: Yeah. So just to clarify where this conversation is going, it's not going to be entirely focused on artificial intelligence, although we can say a little bit more about that as we move along. It's really focused on naturalism at large, and whether we can get, well, not the full suite of creativity; the two of you really focus on the generation of just meaningful phrases, which I think is smart, to restrict the target a little bit.
And you're going to ultimately be staging a sort of probabilistic argument against it, to say that this is more or less virtually impossible. I do have another thought that I'd like to just kick to the two of you, one that came to mind as I was reading through your excellent article.
And it comes from philosopher Richard Taylor originally. And he's focusing on meaning or semantic content. Right? And obviously this is what we pick up in sentences, that there's a sort of clear determinate meaning, semantic content, real intentionality, that these are about things.
And so what Richard Taylor proposes is the following example. Say you're on a train riding to Wales, and you ride by a collection of rocks that somehow form, or seem to form, the sentence: Welcome to Wales.
I mean, so naturally you would think, oh, okay, this means something. It means that I'm being welcomed to Wales, for example. But then Richard Taylor has us imagine that there's no mind behind it, that the rocks, just through a random process, however improbable, fell into that arrangement. What Taylor argues, I think very cleverly, is that even if that could happen, so set the probabilities aside, which you two are going to talk about in detail here, but even if that could happen through entirely random processes, you cannot accept both that it occurred through random processes and that it's actually conveying the meaning that you think it is. He argues that those things are strictly incompatible. So if you think it came from random processes, you have to give up the idea that it's actually trying to convey some sort of meaning to you, specifically that it's giving you any sort of welcome message to Wales. Or you have to give up that it came through completely random processes and say that it is something that comes from the realm of mind. Right.
He thinks the realm of mind just is the realm of meaning. His point is not probabilistic; it's strictly metaphysical. But it was in the back of my mind as I was reading your excellent article, which will of course make a more probabilistic case. Curious if the two of you have any thoughts on that more metaphysical point, if you want to call it that.
[00:09:17] Speaker D: Yeah, I think that's a pretty good point. You can think of it like with clouds in the sky. You've probably done this sort of experiment where you look up at the clouds and you can pick out different clouds that resemble, say, a face or a rabbit or any number of different things.
And so that kind of follows his same scenario, where you see some kind of message in the clouds. But since you know that the clouds are just forming randomly, you don't expect those shapes, whether they be a face or anything else, to actually mean anything. So if you see a face in a cloud, you don't think that means, oh, you're going to see that same person the next moment or something.
On the other hand, when you see, say, a skywriter write, today is so-and-so's birthday, then, because at that point you don't believe it's purely the wind blowing the letters into shape but something actually written by someone intentionally, you do expect it to correspond to something externally.
So, yeah, I agree that meaning definitely depends on there being a mind behind whatever you're interpreting.
[00:10:23] Speaker C: This actually is a comment on current large language models like ChatGPT and Grok. It turns out that they are trained on syntax, i.e., the statistical relationships among words, and they do an incredible job. If you've ever used ChatGPT or Grok, you've just got to be amazed at what it does. But again, this is all syntax. It doesn't relate to semantics. Human beings look at these things and we see semantics; we see the meaning behind the words, not just the interrelationship between these and other words.
The other thing I would mention is that in the chapter that Eric and I wrote, we look at a much more general question than Welcome to Wales, or the one that comes to my mind from Hamlet, where Hamlet's looking up in the sky and somebody says, what does that cloud look like? And he said, methinks it looks like a weasel. This has turned out to be a very common quote in intelligent design.
We can actually look at probabilities of individual phrases occurring. But what Eric and I did was something a little bit different, more general. We looked at the chance of anything of meaning coming out of randomness. Anything of meaning. Now, what do we mean by meaning? We had, as our base, the letters of the English alphabet plus the space, so 27 characters, all capitals. We didn't even consider lowercase, because that would get too ugly. And we assumed we had a dictionary. And then the question is, what is the chance of anything meaningful coming out of that? We were even sloppy in determining what meaningful meant. We said that any words that came out that were in the dictionary would count as having meaning, even if the phrase was stupid and didn't have any semantic meaning, as long as the words were in the dictionary. And that's the problem that we looked at, and I think one of the interesting contributions of the chapter is the idea: what is the probability of anything, any phrase, coming randomly out of the alphabet, given a dictionary?
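A minimal sketch, in Python, of the setup described here: draw random strings from the 27-character alphabet and check whether every space-separated token is a dictionary word. The word-list path is an assumption; any plain one-word-per-line list works. The hit rate collapses rapidly as the string gets longer, which is the "billions of ways to get it wrong" problem in miniature.

```python
import random
import string

# Word-list location is an assumption; substitute any plain word list.
with open("/usr/share/dict/words") as f:
    words = {w.strip().upper() for w in f if w.strip().isalpha()}

ALPHABET = string.ascii_uppercase + " "  # 27 characters, as in the chapter

def is_meaningful(s: str) -> bool:
    """True if the string splits into tokens that are all dictionary words."""
    tokens = s.split()
    return bool(tokens) and all(t in words for t in tokens)

def hit_rate(length: int, trials: int = 200_000) -> float:
    """Fraction of uniform random strings of a given length that are 'meaningful'."""
    hits = sum(
        is_meaningful("".join(random.choices(ALPHABET, k=length)))
        for _ in range(trials)
    )
    return hits / trials

for n in (2, 5, 10):
    print(f"length {n:2d}: about {hit_rate(n):.1e} of random strings are all words")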
[00:12:25] Speaker B: Yeah, great. So that's helpful because it really sort of builds upon Taylor's metaphysical point. You're looking at, well, what are the chances that the boulders could actually fall in this sort of arrangement, so to speak.
[00:12:35] Speaker C: Right, yes.
[00:12:36] Speaker D: Good.
[00:12:36] Speaker B: All right, so let's get into that. Now, you begin by framing the challenge with a certain mountain climbing metaphor. I think this might be helpful to get into the minds of the audience to situate things. So, Eric, would you mind explaining that analogy for us?
[00:12:50] Speaker D: Yeah. We can think of meaning as being, say, the summit of a mountain.
And then you can talk about whether a natural process could possibly reach that kind of meaning by whether it can climb the mountain. So if you have a nice smooth mountain with many paved, straight roads all the way up to the top, then you can expect a natural process, or a computer algorithm, to get up there. And so in that case, you don't need to attribute anything special to reaching that summit. But then, on the other hand, if it's like a really craggy mountain, lots of holes and crevices and windy paths, and maybe it's not even a mountain at all but, say, a single pole sticking straight out of the ground and climbing up miles into the air, well, nothing's going to be able to climb up that to reach the top. So if something's at the top, then it has to have reached the top through some other means that is not a natural process, like the airplane of intelligent design.
[00:13:54] Speaker C: Yeah, let me expand on Eric's idea a little bit. In our case, the summit of the mountain was generating a meaningful phrase.
And it does turn out that in order to get to that top, to the top of the mountain, there are many paths which you can take.
There might be a path that leads directly to it. There might be a path that doubles back on itself. There might be a path that leads you to an elevator that kind of helps you up the mountain. And this is something we call active information, because that elevator was put there by some intelligence to help you get to the summit. So that's the metaphorical reason that we use climbing the mountain because again, the peak of the mountain is generating a phrase that consists of words in the English language.
And there was a great article published in 2019 called the No Free Lunch Theorem.
And it basically says, in relationship to the mountain metaphor, that if you're standing at the bottom of the mountain and you have all of these paths that you can take to the top and you know nothing about anything, in other words, you're starting kind of ex nihilo, you know nothing, then one of the paths on average is as good as any other path. There might be paths which lead directly to the top. There might be some that double back on themselves, and there might be ones with little elevators, if you will, mid-mountain that you can take up. So there are many paths to that top, but the no free lunch theorem says, yeah, if you don't know anything, if you're starting out with nothing, and by nothing we mean just the letters of the alphabet and the dictionary,
which path do you take? And what the no free lunch theorem says is that randomness, on average, is as good as any other path. So just, you know, choose a path. And that's what Eric and I are trying to measure in the paper.
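The flavor of that no free lunch result can be shown with a toy Python experiment, a sketch rather than the theorem itself: when the target is uniformly random, a "smart" fixed guessing order does no better on average than a freshly shuffled one, because there is no structure for the smarts to exploit.

```python
import random
import string

ALPHABET = string.ascii_uppercase

def avg_guesses(make_order, trials=100_000):
    """Average number of guesses needed to hit a uniformly random target letter."""
    total = 0
    for _ in range(trials):
        target = random.choice(ALPHABET)
        total += make_order().index(target) + 1
    return total / trials

# A "clever" fixed strategy (English letter-frequency order) versus
# a freshly shuffled random order on each trial:
FREQ = list("ETAOINSHRDLCUMWFGYPBVKJXQZ")
print("frequency-order strategy:", avg_guesses(lambda: FREQ))
print("random-order strategy:  ", avg_guesses(lambda: random.sample(ALPHABET, 26)))
```

Both averages come out near 13.5 guesses; letter-frequency knowledge only helps when the target actually follows English letter frequencies.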
[00:15:44] Speaker B: Okay, good. All right, what other bits of background information do we need to get your argument up and running? You guys discuss the idea that generating meaningful phrases is a high-information-cost task. Perhaps that would be another good place to start. Yeah, let's get the foundation set so we can see exactly the issue, if you don't mind.
[00:16:06] Speaker D: Yeah, so one important piece of context here is what we mean by information, or in this case, we're talking about Shannon entropy.
And Shannon entropy is a measure of possibilities, how many choices you have.
And let's say you had a book that just consisted of the letter A over and over and over again. So in that case, you don't have any choice. It's just the letter A. So there's very, very low entropy, and you have no creativity or any meaning in that book.
On the other hand, let's say you have a book that's written with the full Alphabet and has words and paragraphs and so on. So in that case, you have loads of choices. You have many different letters, punctuation, symbols, words and paragraphs and so on. You can choose at any point.
So in that case, you have really high entropy. And so you can see that entropy is a necessary precondition for what we call meaningful information.
So that's kind of a key concept in the paper we wrote.
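The contrast Eric draws can be computed directly. A minimal sketch of character-level Shannon entropy, H = -sum(p * log2(p)), in Python:

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Character-level Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("A" * 1000))  # 0.0 bits: one symbol, no choice, no surprise
print(shannon_entropy("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"))  # roughly 4 to 5 bits/char
```

The all-A book scores zero; varied English text scores several bits per character, which is the necessary (though not sufficient) precondition for meaning that Eric describes.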
[00:17:10] Speaker C: Yes.
And the summit of the mountain. How high is the mountain? You've got to measure it in some sort of unit. You know, we might measure it in how many feet high the mountain is.
We measure it in bits, the computational bits that are required to reach the summit of the mountain. And we want to argue that the number of bits to reach the top of the mountain is incredibly high.
And in order to do that, I'm going to take you through a little scenario now.
[00:17:38] Speaker B: Yeah, please do.
[00:17:39] Speaker C: Which I think is kind of fun. We want a number so big that it reflects the computational resources of our universe.
And if the summit of that mountain is higher than the computational resources of the universe, we're never going to reach it, at least not without some assistance.
[00:17:59] Speaker B: Just to clarify, the summit of the mountain, is it just one meaningful phrase, or more than that? What's the summit in this case?
[00:18:06] Speaker C: It could be any meaningful phrase. And we actually go through a number of different mountains of how long that phrase is.
[00:18:12] Speaker B: Okay, so it's a pretty minimalist, generous target that you're putting out there.
[00:18:17] Speaker C: Exactly.
[00:18:18] Speaker B: I got it.
[00:18:18] Speaker D: Yeah. You can think of our mountains more as if they're plateaus, which, by the way, is good news, because a plateau is the highest form of flattery.
Okay, moving on.
[00:18:29] Speaker B: Yeah, we got it in.
[00:18:33] Speaker C: So we wanted to give in the paper an idea of how high this summit was. So we wanted to just choose a big number, a big number that people could relate to. So it turns out there's 10 to the 80th atoms in the universe.
That's a one followed by 80 zeros. Now, there was a physicist named Seth Lloyd, and he calculated, based on physics, that the universe's computational capacity is 10 to the 120th operations on 10 to the 90th bits. Now, these numbers get so big that they're kind of meaningless, but we'll shed light on the meaninglessness in a minute. So that combines to 10 to the 210th bits.
Now, what we wanted to do is to even eclipse Seth Lloyd's basis. So here's what we did. There's something called a Planck length. It's very small; it's the smallest measurable length that you have. If you're into string theory, that's the scale at which you have to work in order to evaluate strings in string theory and physics. To give a relatively precise perspective on how big a Planck length is: if you took a Planck length and you scaled it to an inch, are you ready for this, then a proton would have a diameter of more than 5,000 light years.
So the Planck length is incredibly small. It's unbelievably short.
So imagine taking a Planck length and making a little cube out of it. We call it a Planck cube, if you will.
And we look at our universe. Our universe has a radius of 47 billion light years. And we ask how many Planck cubes can fit in the universe.
Well, it's a one followed by 184 zeros, 10 to the 184th. That's really big. But we wanted an even bigger number. And so we talk about Planck time. What is Planck time? That's the time it takes light to travel one Planck length. You can imagine that's very, very short also. It turns out it's like 10 to the minus 44 seconds, if you're into that sort of thing. And so we're going to take the history of the universe, assuming the Big Bang here, and we have a 14-billion-year estimate of the age of the universe. And that translates to about 10 to the 61st Planck time units. So if we divided up the history of the universe since the Big Bang into Planck times, we would have 10 to the 61st of them. So imagine for each one of those little Planck times, we fill the universe with all of these Planck cubes. Well, it turns out there would be 10 to the 244th Planck cubes over every Planck time in the history of the universe, and this corresponds to 10 to the 244th bits. Now, that is pretty big. And I would maintain, well, it's not 10 to the 244th bits, it's 10 to the 244th Planck cubes over the history of the universe. But again, we're not trying to get Planck cubes. We're just trying to get a number which nobody can argue that we have the computational resources to exceed.
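These orders of magnitude can be checked with quick arithmetic. A sketch using standard published values (Planck length about 1.6e-35 m, Planck time about 5.4e-44 s); it lands within a power of ten or two of the figures quoted, which is all the argument requires:

```python
from math import pi, log10

LY = 9.461e15              # meters per light-year
PLANCK_LENGTH = 1.616e-35  # meters
PLANCK_TIME = 5.391e-44    # seconds

radius = 47e9 * LY                        # observable-universe radius, m
volume = (4 / 3) * pi * radius**3         # m^3
planck_cubes = volume / PLANCK_LENGTH**3  # about 10^185

age = 14e9 * 365.25 * 24 * 3600           # seconds since the Big Bang
planck_times = age / PLANCK_TIME          # about 10^61

print(f"Planck cubes in the universe:    10^{log10(planck_cubes):.0f}")
print(f"Planck times since the Big Bang: 10^{log10(planck_times):.0f}")
print(f"cubes x times:                   10^{log10(planck_cubes) + log10(planck_times):.0f}")
```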
[00:21:26] Speaker B: I see.
[00:21:27] Speaker C: So imagine we have 10 to the 244th bits. Now, we're using all caps, 27 letters. Remember what we want to do: we want to climb this mountain, if you will, and get to the top, and we're allowed to use anything in the dictionary. And this is amazing. Suppose that we do just random selection of letters. Guess how long a phrase we can generate: only 268 characters.
And if you don't believe that, well, it's an astonishing result, but the math is there, it's solid, and I don't think you can argue with it. Only 268 characters. Now, people who don't think the universe is big enough say, well, you know, we actually have these parallel universes, and maybe that would increase the chance of generating this. Well, there are different models of parallel universes, but in one of them, the maximum number of universes is 10 to the 1,000th. Well, that gives, it turns out, 10 to the 1,244th bits.
And that can generate only 1,380 characters of meaningful text that corresponds to words in the dictionary. So we see that with the computational resources of the universe, or even the multiverse, we are unable to generate more than a short phrase that is meaningful, and meaningful only in a very lax sense. We just want the words to be in a dictionary; they don't even have to line up to be a sentence. And this, I think, is an astonishing result. It shows that if we have the capability of generating many meaningful phrases, then the mountain that we have climbed to generate those meaningful phrases is enormously high, and certainly getting there requires more than the computational resources of the universe or the multiverse. Astonishing.
[00:23:16] Speaker D: Yeah. And one thing I want to add on there.
[00:23:18] Speaker B: Yeah, go ahead, Eric.
[00:23:20] Speaker D: Something to notice in what Dr. Marks just said: he lays out one scenario, and then someone says, oh, what if we add all these other resources? But notice that even when you add exponentially more resources, you only slightly increase the number of meaningful characters you can generate. So that's kind of the core problem here: we're dealing with exponential difficulty. If you double everything, you only get one more character, or one more bit, and so on. So that was my huge takeaway: it's just exponentially difficult to make any progress.
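A sketch of that exponential-cost point in numbers. The entropy rate assumed below for dictionary-word text, about 1.74 bits per character, is a value chosen here to reproduce the chapter's figures, not a number taken from the chapter itself; each meaningful character then costs roughly log2(27) minus that rate, about 3 bits of search.

```python
from math import log2

ALPHABET_BITS = log2(27)  # bits per raw character of the 27-symbol alphabet
H_MEANINGFUL = 1.74       # assumed entropy rate of dictionary-word text, bits/char
COST = ALPHABET_BITS - H_MEANINGFUL  # search cost per meaningful character

def max_chars(log10_resources: float) -> int:
    """Longest phrase reachable with about 10**log10_resources random trials."""
    bits = log10_resources * log2(10)
    return int(bits / COST)

print(max_chars(244))   # ~268 characters: the universe's resources
print(max_chars(1244))  # ~1370 characters: add 10^1000 parallel universes
```

Multiplying the resources by 10^1000 buys only about a thousand more characters, which is the exponential wall Eric describes.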
[00:23:56] Speaker B: So, just to summarize, there's a big problem here, a big mathematical problem for the naturalist who needs to reach the top of this mountain, for sure. And I think it's also important, while we're focusing on easy examples, common examples, like words, not even complete sentences necessarily, for people to reflect upon just how meaningful our entire experience is. I mean, our perceptual experiences are filled with semantic, intentional content. When I see the computer, I'm having the experience that there is a computer in front of me. That's meaningful. That's intentional. Right.
And so it's easy to miss that. But everything about us is really suffused in a realm of meaning, isn't it?
[00:24:45] Speaker D: Right.
[00:24:46] Speaker C: Yes. And in fact, the reason that we chose this problem is because it's mathematically tractable.
[00:24:52] Speaker B: Right.
[00:24:53] Speaker C: If you look at, for example, what's the chance of forming the human brain or something? Well, it might be mathematically tractable, but I don't know how to do it. But this is something that Eric and I were able to get a handle on and actually generate the results, which are astonishing.
[00:25:10] Speaker B: Yeah, I think so, because you're working with, I think, very modest starting points, which makes your project all the more forceful. Okay, so let us think about the idea of modest guidance. You talk about active information; you brought that up at the beginning of this conversation. Let's return to that, because ultimately you don't think that this is actually going to help at all. So can you re-familiarize the audience with what you mean by active information and why this isn't going to be enough to destabilize your argument?
[00:25:41] Speaker D: So, active information.
Let's go back to our mountain climbing metaphor, and let's say you have a couple of different kinds of mountain climbers. You have one guy who's just like me, who just sits around messing on his computer, and the difficulty for that mountain climber to reach the top is going to be pretty high, maybe insurmountable. Then you have another mountain climber who's practiced a whole lot, has all the equipment and so on. He's going to have a much easier time. And then maybe you have a Sherpa who's, like, genetically predisposed to be able to sprint up mountains.
So those guys correspond to different amounts of information about the mountain. There's a correspondence between their mental, physiological, and resource makeup that makes it easier or harder for them to get up the mountain.
And so active information corresponds to how much of all that you need to reach the top of the mountain. If the mountain is like a skyscraper with an elevator to the top, you need very little or almost no active information to make it to the top. Whereas if it's like Mount Everest, if you're going to survive with a high probability, you pretty much need to be a Sherpa.
So that's what we mean by active information: how easily can we hit these targets? If it's really difficult, then you need loads of active information to hit them.
And also, just to bounce back to what I was talking about regarding exponential difficulty:
even if you have more active information, it only helps you a slight amount. And Dr. Marks can go into more detail on just how little progress you can make. You need constant influxes of active information to make any progress.
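For reference, in the published Dembski-Marks framework active information has a concrete definition: if p is the probability that a blind search succeeds and q the probability that the assisted search succeeds, the active information supplied is log2(q/p) bits. A minimal sketch, with an illustrative value for q:

```python
from math import log2

def active_information(p_blind: float, q_assisted: float) -> float:
    """Active information in bits: how much the assistance boosts the search."""
    return log2(q_assisted / p_blind)

# Blind search for one specific 10-letter phrase over a 27-symbol alphabet:
p = (1 / 27) ** 10
# Suppose some guided search succeeds half the time (an illustrative assumption):
q = 0.5
print(f"{active_information(p, q):.1f} bits of active information supplied")
```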
[00:27:34] Speaker C: Yeah, Eric's points are excellent.
Let me expand on them a little bit. Here's an idea of active information. Somebody has hidden an Easter egg in the state of Wyoming.
And your job is to find that Easter egg. Now, if you start out with no active information, no assistance, you're never going to find that Easter egg, much like reaching the summit. But if you have a friend, an expert, who comes in and keeps telling you, you're getting warmer, you're getting colder, you're getting warmer, you're getting warmer, there's a good chance that you're going to find that Easter egg. Now, what is this external source of information doing? It's giving you active information to assist in your search. So you might think, well, you know, we always have active information. Well, no, we don't. You cannot choose your source of information randomly. You might have one person who doesn't even look at where you're at and just flips a coin and says, you're getting colder, you're getting warmer, and it has nothing to do with where you are, and that isn't going to help you very much. Or you can have the total contrarian who, when you're getting warmer, says you're getting colder, and that only increases your search time. So you can't choose your sources of information randomly.
You have to have the ability to make sure that your source of information assisting you up the mountain is accurate. And if you get a bad source of information, you're never going to find that egg in Wyoming.
If you have a good one, yeah, it's going to help. But you do not have the ability to choose one of these sources at random. This is why the no free lunch theorem says that one piece of advice, on average, is as good as another piece of advice if you have no idea of what you're trying to do, if you have no domain expertise that you can bring to the problem.
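A quick simulation of the warmer/colder point, a number-line stand-in for Wyoming: an accurate oracle makes the search trivially fast, while a coin-flipping oracle leaves you with blind luck.

```python
import random

SPACE = 1_000_000  # possible hiding spots on a number line

def search(oracle_accurate: bool, max_steps: int = 200):
    """Hunt for the egg, steered by higher/lower hints from an oracle."""
    egg = random.randrange(SPACE)
    lo, hi = 0, SPACE - 1
    for step in range(1, max_steps + 1):
        guess = (lo + hi) // 2
        if guess == egg:
            return step
        truth = "higher" if egg > guess else "lower"
        # An accurate oracle reports the truth; a useless one flips a coin.
        hint = truth if oracle_accurate else random.choice(["higher", "lower"])
        if hint == "higher":
            lo = guess + 1
        else:
            hi = guess - 1
        if lo > hi:  # bad hints boxed us in; start the search over
            lo, hi = 0, SPACE - 1
    return None  # never found

steps = [search(True) for _ in range(100)]
print("accurate oracle, average steps to find the egg:", sum(steps) / len(steps))
found = sum(search(False) is not None for _ in range(100))
print("coin-flip oracle, searches that found it within 200 steps:", found, "of 100")
```

With accurate hints the egg is found in about 20 steps (a binary search); with random hints it is essentially never found, even though hints keep arriving. The hints themselves are not the help; their correlation with the target is.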
[00:29:22] Speaker D: A quick analogy. So let's say you're trying to figure out tomorrow's winning lottery number and you don't have any idea. It doesn't help your chances at all if, instead of just going off your own numbers, you decide to poll a thousand people. Even if you poll, like, the entire world's population, it gives you zero help for finding the winning lottery number for tomorrow. So, yeah, it's basically the no free lunch theorem. You can't just randomly get active information to help you out.
[00:29:49] Speaker B: Okay, great.
[00:29:50] Speaker C: Yeah. So sometimes it doesn't exist unless you have an insider.
Like there was the McDonald's Monopoly sweepstakes. I don't know if you've heard of that, but they were giving out tokens every time you bought something.
[00:30:03] Speaker B: I remember that.
[00:30:03] Speaker D: Yeah.
[00:30:04] Speaker C: So what were your chances? Well, you'd think they were random. No, they weren't random. They were predetermined, because there was a guy at McDonald's who gave out the winning pieces. I think it was the pieces for Boardwalk and Park Place. Yeah, he gave them out to friends, and there's a great source of active information in there.
[00:30:24] Speaker B: Yeah, but that's obviously not something that would be available to the naturalist in this case. Right.
[00:30:28] Speaker C: Yeah, that's true. That's true, Pat. Yep.
[00:30:31] Speaker B: Yeah. Awesome. All right, so we have to cover the always-lurking evolution objection. Right.
The vague appeals to, well, look: evolution, natural selection, brain evolution, this will get us what we need.
Why isn't that going to help, Bob?
[00:30:49] Speaker C: Well, it gets back to this idea of creativity.
Darwinian evolution in its purest form assumes creativity comes from random sources. Well, okay, and where does creativity come from? It comes from mutations and survival of the fittest and repopulation, and you repeat that again and again. That's the purest form.
We have written papers that actually show that this process doesn't work unless you have a target in mind, because you have to make sure that a given mutation makes you better. You don't see mothers lining up saying, hey, I want to mutate my baby, because maybe if I mutate my baby, it'll turn out to be better. No, that's kind of ridiculous. Mutations are many times deleterious; in fact, they are typically deleterious as opposed to advantageous. So if we do have this improvement over time, as evolution says, how is that guided? It has to be guided. Just like the path up the mountain, it has to be guided by somebody with knowledge of where the next step in the evolutionary process occurs. And that has to be infused into the natural process in order to get it to work. So active information is a necessity in order to make evolution even make sense, right?
[00:32:10] Speaker D: Yeah.
[00:32:10] Speaker B: So just to be clear, I mean, your general argument, when you're assessing the computational resources of the universe, that's going to encompass what evolution can do, correct?
[00:32:20] Speaker C: Yes, in fact.
Yeah, but yes, that is true. And we can get a analytical handle on the problem that we talked about. Evolution is kind of a more of amorphous topic and people can argue against you. But I think that what we did, I think is parallels this idea of evolution and.
Yeah, so you're not going to get the pushback from our analysis that you, you do from evolution.
[00:32:47] Speaker D: Well, and also to build on Dr. Marks's point: basically, the point is evolution adds nothing by itself. And you can see this with the lottery example. Let's say, instead of just picking tomorrow's lottery number, you want to have an evolutionary algorithm that evolves better and better predictors. So you poll a thousand people, and the guys who get one of the right numbers, you keep them around, and then you get rid of all the others that got bad numbers. So eventually you'll get a pool of people who have picked good numbers. But will those guys help you pick the next week's lottery number? No, they won't help at all.
So that's basically the position that evolution is in.
No matter what process you use, whether you have mutation, natural selection, and so on, if there's no information in those processes about the actual target you're trying to hit, they're completely useless. Your lottery evolution algorithm only works if you happen to have an insider feeding you information.
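Eric's lottery-evolution scenario is easy to simulate. A sketch: select "predictors" that matched past draws, keep a surviving pool, then test the pool on a new, independent draw.

```python
import random

DIGITS = 6  # a six-digit lottery draw

def draw():
    return [random.randrange(10) for _ in range(DIGITS)]

def matches(guess, target):
    return sum(g == t for g, t in zip(guess, target))

# "Evolve" a pool of predictors against past draws: keep the best half,
# refill with fresh random guesses, repeat.
pool = [draw() for _ in range(20_000)]
for _ in range(5):  # five past lotteries
    past = draw()
    pool.sort(key=lambda g: matches(g, past), reverse=True)
    pool = pool[: len(pool) // 2]
    pool += [draw() for _ in range(len(pool))]

# The real test: tomorrow's draw is independent of everything above.
tomorrow = draw()
evolved = sum(matches(g, tomorrow) for g in pool) / len(pool)
fresh = sum(matches(draw(), tomorrow) for _ in range(len(pool))) / len(pool)
print(f"evolved pool, average digit matches per guess: {evolved:.3f}")  # ~0.6
print(f"fresh random, average digit matches per guess: {fresh:.3f}")    # ~0.6
```

Both pools average about 0.6 matching digits out of six, exactly what chance predicts: selection on past draws carries no information about the next one.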
[00:33:48] Speaker B: Just to bring us back to Richard Taylor's point again, I think these arguments are compatible and in some ways reinforce one another. You know, his point was strictly metaphysical. Right. You can't get meaning from a meaningless random process.
And thinking that evolution can solve that issue would be something like saying, oh, well, we can get a round square from evolution. Well, no, you can't. Right. This is a strict impossibility.
It doesn't matter how useful being a round square might be for survival. I mean, nothing can make an impossibility happen. This is something that just cannot, in principle, happen.
So I think these objections are often misguided. This sort of objection against your argument is already accounted for in your general assessment, and against the more metaphysical types of arguments it really just completely misses the point at hand. But it's one that always needs to be addressed, because it's typically an objection that is brought against the sorts of proposals that we're offering here. Which, in fact, brings us to the final point.
What are we getting from all this? What does all this imply ultimately about the source of creativity?
Could there be a non-material or external source? Is that what we're after? What can we infer from all this, gentlemen?
[00:35:05] Speaker C: Look, I think in the narrow sense this says that we have the ability to generate meaningful phrases. How did we climb that mountain?
Clearly we can talk about evolution, we can talk about learning from other people, we can talk about how we're born; we're born pre-wired in certain instances.
But if you can't generate meaningful phrases at random with the computational resources of the universe, then the idea that human beings climbed that mountain and generated meaningful phrases that way really doesn't work.
And so I think that that's the narrow takeaway. The more general takeaway, I think, is the idea that you can't get something for nothing.
You have certain limits. If you think about physics, there are certain limitations in physics: you can't go faster than light, because at the speed of light your mass would become infinite. In mathematics, you can't trisect an angle with compass and straightedge.
Pat, you mentioned squaring a circle, or circling a square. I guess you could circle a square, but you can't square a circle. Okay? So you can't do that. And it's the same thing with computers. Computers don't have the ability to be creative as we've defined it via the Lovelace test.
And I think the same holds for evolution just happening and creating information. The concept of evolution, by the way, is very similar to this Lovelace test: as we go down generations and they get better and better and better, that betterness has to come from somewhere.
If it is not inborn, not part of the original organism that's evolving, then the guidance has to come externally, from active information.
And I think maybe that's the larger takeaway.
[00:36:56] Speaker B: Excellent. Well, I want to thank both of you gentlemen so very much for this excellent conversation. This has been a great climb up the informational summit. Indeed, the evidence seems clear: the natural world alone may not be able to explain the genius of the human mind. I invite everybody to join us next time as we turn from the limits of chance to the illusions of machines. Thank you all for listening to the Mind Matters podcast. Talk to you guys soon.
[00:37:31] Speaker E: This has been Mind Matters News.
Explore more at mindmatters.ai. That's mindmatters.ai.
Mind Matters News is directed and edited by Austin Egbert. The opinions expressed on this program are solely those of the speakers. Mind Matters News is produced and copyrighted by the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute.
[00:38:03] Speaker A: That was Professor Robert J. Marks and Dr. Eric Holloway with guest host Pat Flynn discussing the origins of human creativity.
The conversation first aired on Discovery Institute's Mind Matters News podcast, hosted by Robert J. Marks. Catch more episodes of Mind Matters News at mindmatters.ai/podcast.
That's mindmatters.ai/podcast. Or you can look it up anywhere you find podcasts. For ID the Future, I'm Andrew McDermott. Thanks for listening.
Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.