Bob Marks on Why AI Won’t Destroy the World, or Save It

Episode 1903 | May 17, 2024 | 00:20:09
Intelligent Design the Future

Show Notes

Today’s ID the Future from the vault dives into the controversial realm of artificial intelligence (AI). Will robots or other computers ever become so fast and powerful that they become conscious, creative, and free? Will AI reach a point where it leaves humans in the dust? To shed light on these and other questions, host Casey Luskin interviews computer engineering professor Robert J. Marks, head of the Walter Bradley Center for Natural and Artificial Intelligence.

Episode Transcript

[00:00:04] Speaker A: ID the Future, a podcast about evolution and intelligent design.

[00:00:12] Speaker B: Is artificial intelligence going to take over the world? Hello, and welcome to ID the Future. I'm Casey Luskin, and today I'm speaking with Professor Robert Marks, a distinguished professor of electrical and computer engineering at Baylor University with a PhD in electrical engineering from Texas Tech University. He's the founder of the Evolutionary Informatics Lab with William Dembski and the author of many technical papers that have explicated how information makes a case for intelligent design. He's also the author of a great book on the topic of evolutionary informatics, appropriately titled Introduction to Evolutionary Informatics. And he's also a director and senior fellow of the Bradley Center for Natural and Artificial Intelligence at Discovery Institute. So, Doctor Marks, thank you so much for coming on the show with us today.

[00:00:58] Speaker C: It's always great to talk to you, Casey.

[00:01:01] Speaker B: Well, we're here today to discuss your contribution to a new book that's coming out in October of 2021, titled The Comprehensive Guide to Science and Faith: Exploring the Ultimate Questions About Life and the Cosmos. Along with William Dembski and Joseph Holden, I'm a co-editor of the book. I hope you'll check it out. It's available on Amazon.com. Doctor Marks contributed a chapter to the book titled "Will Intelligent Machines Rise Up and Overtake Humanity?" And I want to say, Bob, that as I was editing basically all 48 or 50 chapters, whatever it was, in this book, yours was one of the most enjoyable to edit. I just really found it very enlightening and interesting and a lot of fun to read.

[00:01:40] Speaker C: Well, thank you a lot, Casey. I reviewed the chapter in preparation for this podcast. It's really good. It's one of the better things that I've written. So, yeah, thank you. Thank you for the compliment.

[00:01:53] Speaker B: Well, I'm glad you think so. So, Bob, we've seen huge advances in AI over the last few years, and lots of different opinions expressed on what the consequences of AI will be. What are some of these views about the potential implications of AI, and which position do you take?

[00:02:09] Speaker C: Well, I think that there are two different viewpoints. One group I call the dystopian Chicken Littles. These are people who believe that AI will eventually write better AI, which will eventually write better AI, and eventually achieve the capabilities of human beings. And this, of course, I think is ludicrous. The reason is that a lot of these people, including some who are really smart in other areas, just don't have an understanding of the foundations of artificial intelligence and computer science, and they don't realize the concrete ceiling that sits above anything that is going to be done with computers. This dystopian AI simply isn't going to happen. There are others, such as myself, who identify a number of limitations of artificial intelligence which are backed by computer science, the philosophy of mind, and a number of other areas. This is not a globally accepted view, but hopefully, through this podcast and through the chapter, we'll make a little bit of a dent in this arena.

[00:03:23] Speaker B: Well, and also through the Bradley Center, where you guys are doing great work to talk about these issues. So, obviously, you think that there are limitations to AI. Does that mean that AI can produce something that's equivalent to human cognition, even though it isn't really, behind the scenes and in the nuts and bolts, equivalent? Or what do you think we will be able to accomplish through AI, and why do you think those limitations are there?

[00:03:46] Speaker C: Well, the fundamental limitation is that there are things which are provably non-computable. All artificial intelligence runs as computer software, and computer software always follows algorithms, step-by-step procedures. That's what computer code is. And this is ubiquitous. Even if you look at a web page, if you do a right click and click on "show source," it'll show you the program that is being used to generate your web page. This is true for web pages all the way up through artificial intelligence. Therefore, in order for something to be done, it must be computable. A fancier word for that is algorithmic. An algorithm is a step-by-step procedure for doing something, and all computer code is nothing more than an algorithm. In the 1930s, it was first shown by Alan Turing that there were some problems that were non-algorithmic. He specifically showed something called the halting problem, which is the question of whether a computer program can be written to analyze any other arbitrary program and tell whether that program is going to stop or run forever. He applied some really clever mathematics based on the work of Kurt Gödel, and he showed that, no, this was impossible. Since then, we've had great scientists, such as Gregory Chaitin, who have shown that there are many, many other things which are non-computable, and therefore you cannot write a computer program to do them, because they are provably, and again, I say provably, non-computable. This is hard mathematics; it's something that is provable beyond any doubt. So there are things which computers can't do, and there are a lot of them. Are there things that a human does which are not computable, which do not follow these algorithms? Yes, and some of the things that we maintain and talk about in this chapter are sentience, qualia, creativity, and understanding. Those are a few things which appear to be non-algorithmic. It has also escaped AI thus far to have any common sense. AI has no common sense. I'm not sure whether that's going to be algorithmic or not, but it doesn't look like there's much headway being made by the people that are doing research in AI. So that's the concrete ceiling that I talked about in terms of the capabilities of artificial intelligence. There are things that humans do that artificial intelligence, being a computer program, simply cannot do.

[00:06:30] Speaker B: So, Doctor Marks, can you define certain terms that you just used, like qualia? What is qualia, and how does that relate to the ability of a computer to truly emulate or become like a human being?

[00:06:43] Speaker C: Well, one example is, look around the room that you're in and see if you can find something that is red. If you see something which is red, you're having an experience. You're experiencing what that redness is. There's something going on in your head which says, this is red, and this is something that you can experience. Now, imagine trying to explain what redness is to somebody who has been blind since birth. You can do things such as talk about the wavelength of red. You can say apples are red and give other examples. But as far as duplicating your experience, and again, I underline duplicating, the experience of your perception of red, you will not be able to explain it to the person who has been blind since birth. Qualia has to do with the things that we experience. And if you can't explain that simple instance of qualia to a man who has been blind since birth, how are you going to explain it and duplicate that process in artificial intelligence? It's, I think, obviously not doable. Again, you can talk about the characteristics of red and put in the wavelength and all sorts of description, but duplicating the experience you're having when you look at red is simply not possible. It's the same thing with any of the sensory pleasures that you have. If you bite into an orange, you feel the orange flavor explode in your mouth, you feel the delicious orangeness of it. You begin to chew on the orange, you swallow it, and with that, you are having an experience. That's another example of qualia. So taste, aroma, smell, these sorts of things are referred to as qualia. They're kind of a subset of sentience, and qualia is, I think, obviously something which is non-algorithmic and cannot be duplicated by a computer program, and therefore cannot be duplicated by artificial intelligence.

[00:08:47] Speaker B: What about understanding, Bob? Can a computer achieve understanding in the way that a human being does?

[00:08:53] Speaker C: This was addressed, I think, in a quite compelling way about 40 years ago by a philosopher named John Searle, who came up with the illustration of the Chinese Room. John Searle doesn't speak Chinese, but imagine, he says, that he's in a room that has a bunch of file cabinets in it. Somebody comes up to a slot in the door leading to the room and slips in a question which is written in Chinese. Searle takes this slip of paper and begins to search through the file cabinets, and eventually he finds a match. He says, ooh, look, these little symbols which I don't understand match this. He pulls out the card, and underneath is the answer, or the solution, if you will, to the question which is being asked. So he writes this down, he refiles the card, and he goes over and slips the card out through the door. Now, to somebody on the outside, it looks like John Searle is just an expert in Chinese. No, he's following an algorithm. He's following a procedure to do something which on the outside looks like understanding, but is not in itself understanding. A few years ago, gosh, I think it was ten years ago or so, on the famous quiz show Jeopardy!, IBM's Watson, which was a computer, took on a couple of Jeopardy! champions, and Watson didn't always get the right answer, but he ended up beating them in the end. Watson was a humongous Chinese Room. Watson had available to it everything on the Internet, for example, everything in Wikipedia, and probably a lot more than that. Therefore, when there was a query, Watson could respond with an answer on Jeopardy! Now, did Watson understand the question? No. Did Watson understand why he, and I'll use "he" because Watson is kind of a male name, gave that response? No. And even more fundamentally, a computer can do things like add the numbers seven and three, but the computer does not understand what the number seven or the number three means. It does not have understanding. It is simply blindly following an algorithm, a procedure, if you will.
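To make Searle's point a bit more concrete, here is a minimal sketch in Python of the kind of lookup-table "Chinese Room" Marks describes: the program returns correct-looking answers by matching incoming symbols against a stored table, with no understanding of either the questions or the replies. The questions and answers in the table are invented purely for illustration.

    # A toy "Chinese Room": answers come from matching symbols against a
    # fixed rule book, not from understanding them. The entries are made
    # up for illustration only.
    RULE_BOOK = {
        "What is 7 + 3?": "10",
        "What color is a ripe apple?": "Red",
        "Who wrote 'The Emperor's New Mind'?": "Roger Penrose",
    }

    def respond(question: str) -> str:
        # Pure symbol matching: no meaning is attached to any string.
        return RULE_BOOK.get(question, "No matching card in the file cabinet.")

    if __name__ == "__main__":
        print(respond("What is 7 + 3?"))                # looks like arithmetic
        print(respond("What does the number 7 mean?"))  # nothing behind the lookup

On this account, Watson on Jeopardy! was the same procedure at enormous scale: a vast store of text plus matching, which can look like expertise from the outside while involving no comprehension at all.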
[00:11:09] Speaker B: Obviously, Bob, there are limitations to artificial intelligence, but does that mean that there is no way that AI will ever become dangerous? Is it consistent to say that there could be limitations, and yet AI could still become something dangerous?

[00:11:24] Speaker C: I think, yes, indeed. I think that AI is like any tool, something that could be used for good or evil. I would point, for example, to our use of electricity. We find it so convenient, and we use it all the time. But does electricity still have dangers? Yes. If you have frayed wiring in your house, it can burn down your house. And if you have a downed power line, you could go out and touch it, or have a line worker go out and touch it, and be electrocuted. So no matter what tool you have, there are dangers associated with that tool, and that's something that one has to mitigate. And as we go into the future, we are going to have to mitigate the dangers that we see in artificial intelligence.

[00:12:06] Speaker B: So, Bob, one of the most interesting phenomena about the debate over AI is that over the last few years, it seems like everyone has an opinion on it. We've seen tech gurus like Elon Musk, Bill Gates, and Mark Zuckerberg, and leading scientists like Stephen Hawking, expressing opinions on whether AI is going to take over the world or whether it poses no threat and is something we should just let right into our living rooms. Obviously, everybody thinks that Elon Musk and Bill Gates and Mark Zuckerberg and Stephen Hawking are very smart guys, and they certainly have a lot of interesting things to say when it comes to artificial intelligence. But are these the authorities we should be listening to, or is this an example of the Dunning-Kruger effect at work? What do you think about that?

[00:12:53] Speaker C: Well, unfortunately, in our society, expertise and celebrity in one area is extrapolated to all areas. We had the game show host Bob Barker, who used to be the host of The Price Is Right, testifying in front of Congress about the Elephant Preservation Act. What does Bob Barker know about the Elephant Preservation Act? He has no expertise in that area. Now, with artificial intelligence, yeah, we have some pretty big names here. Elon Musk, for example, is clearly a great entrepreneur. I can quote here the co-founder of Discovery Institute, George Gilder, who said Elon Musk is an incredible entrepreneur, but he's kind of a retarded thinker, and if you look in detail at some of the things he claims, you will see that. We also have Elon Musk going head to head with Zuckerberg about the dystopian Chicken Little future of AI, and the two of them disagreed on that, which I think is telling. So we have some people that are more educated than other people. The case of Stephen Hawking is really interesting. Stephen Hawking was just a brilliant physicist, and he came out and made a statement. I'm paraphrasing here, but he said that AI is going to write better AI that writes better AI that writes better AI, and pretty soon computers are going to take over, and we're going to end up being the pets of the computer. Now, one of the assumptions in Stephen Hawking's claim is that AI can be creative, because if AI writes better AI that writes better AI, it must be creative. Creativity is another one of the aspects of artificial intelligence and computers that will never be achieved. And we have to be careful here. We don't have time to go into it, but in the chapter we define, for example, something called the Lovelace test, which is better than the Turing test as a measure of creativity. No, AI will never achieve anything which reaches the level of creativity needed to write better AI that writes better AI that writes better AI. Now, Stephen Hawking worked very closely with a guy named Roger Penrose. Roger Penrose recently received the Nobel Prize for his work with Stephen Hawking. Penrose is not only a physicist and a cosmologist, he is also a brilliant mathematician. He wrote a couple of books; let's see if I can remember what the names were. One is The Emperor's New Mind, and the other one is Shadows of the Mind. In fact, that was the first place I read a compelling case that artificial intelligence and computers would never be creative, and that there were things which are non-algorithmic that computers couldn't do. So even though Hawking believed this, he had a very close colleague, a super genius, who was one of the first to recognize the limitations of computers. So, yeah, we have a lot of people, and I don't want to diss these people, because they're clearly brilliant, such as Elon Musk and Stephen Hawking and Penrose. But I maintain that these people should look closer into an area called algorithmic information theory. I tell people that if you want to have fun and you're a math nerd, look at algorithmic information theory; it's more fun than any science fiction that you've ever listened to. And in algorithmic information theory, you will find the arguments that do not allow computers, for example, to be creative, or to do any of the other non-algorithmic things that I talked about which are characteristic of humans.

[00:16:49] Speaker B: Okay, well, Doctor Marks, for the mathematicians who are listening, you can go check out algorithmic information theory. I'm sure you talk about that in your Introduction to Evolutionary Informatics. Is that right?

[00:16:58] Speaker C: If you do go and buy the book, and I would recommend it, it's written at a level that somebody with a modicum of understanding of mathematics can follow. The book is Introduction to Evolutionary Informatics, with William Dembski and Winston Ewert. In there we cover algorithmic information, and we cover the reasons that computer programs can't do some of the great things that people are forecasting for the future. And we're nowhere near this. Nobody has really made a dent in these non-algorithmic things, such as understanding, such as creativity. There hasn't been a dent in them. Basically, computer programs do what they were written to do, and some of the results might be surprising, some of the results might be unexpected, but they are not indeed creative. As far as the dangers of AI, I believe one of the greatest dangers of AI is going to be AI doing something that is unexpected, unexpected contingencies. And we have a big list of them in the chapter, for example, of where there are unintended contingencies. Some of them are innocent, like Alexa not being able to pick out my doo-wop song, and then it gets very serious, up to the point where people's lives are being taken, such as the Uber self-driving car that killed a pedestrian. So does AI have dangers? Yes. I think one of the big things is making sure that the AI does exactly what it was designed to do, and does no more. And this is a requirement that I think is probably going to be legislated or imposed on AI at some point.

[00:18:37] Speaker B: Doctor Marks, thank you very much for your time talking to us today about artificial intelligence and whether it's going to rise up and take over the world.

[00:18:45] Speaker C: Thank you very much, Casey. I enjoyed the chat with you.

[00:18:48] Speaker B: I'm Casey Luskin. I hope you'll check out the new book which Doctor Marks has contributed to. It's titled The Comprehensive Guide to Science and Faith. There are other contributors to the book, including Jonathan Wells, Stephen Meyer, Michael Behe, Douglas Axe, Guillermo Gonzalez, Walter Bradley, Brian Miller, and myself. I certainly hope you will check out The Comprehensive Guide to Science and Faith. I'm Casey Luskin with ID the Future. Thanks for listening.

[00:19:13] Speaker D: Did you know that Doctor Marks has also done excellent work in the area of evolution and intelligent design? If you want his insights into how computer evolution simulations actually disprove evolutionary theory and support intelligent design, check out his new video course at DiscoveryU. Find it at discoveryu.org. That's Discovery, the letter U, dot org. You'll also find other excellent video courses there by leading ID thinkers such as Stephen Meyer and Michael Behe. Check it out at discoveryu.org.

[00:19:54] Speaker A: Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.
