Episode Transcript
[00:00:00] Speaker A: Welcome to ID the Future. I'm Andrew McDermott. Today's episode comes to us from our sister podcast, Mind Matters News, a production of the Discovery Institute's Walter Bradley Center for Natural and Artificial Intelligence.
You can learn more about the show and access other episodes at mindmatters.ai.
Hello everybody and welcome back to the podcast. My name is Pat Flynn. I normally host the Philosophy for the People podcast, but today I'm teaming up with Mind Matters to talk once again with Dr. Selmer Bringsjord on all things related to AI, consciousness, and, I should say, rationality.
Dr. Bringsjord has done a lot of really fascinating work on this front. If you missed our previous episode, I'm going to encourage everybody to check it out, because we discussed in some depth one of my favorite arguments for the immateriality of the intellect, the seeds of which go back at least to Aristotle, but were further developed by one of my favorite philosophers, James Ross. And then Dr. Bringsjord has furthered that conversation as well. So it was one of my favorite conversations, and I want to make sure that nobody misses it. But here we are for a continued conversation. So, Dr. Bringsjord, it is great to be back with you again. How are you?
[00:01:20] Speaker B: I'm doing very well, and on my end it's great to be back too. I did enjoy that very much, that prior interaction on that topic.
[00:01:28] Speaker A: So for people who may have missed that, if you wouldn't mind, let's just start with some of the usual preliminaries. Who is Dr. Bringsjord?
What do you do, and how did you get into all this business concerning AI and the nature of consciousness?
[00:01:42] Speaker B: Sure. Well, I'm fresh from a grad student dinner last night where I was asked, understandably, in fact predictably and almost obligatorily, to recount my career up to that point, in broad strokes obviously, so that perhaps it could provide some guidance. And summing it up, I'd say I've never failed to be interested in the mind and cognition, and probably at the same time logic. And that got really, really intense in high school, in my Spanish class, when the unforgettable Mr. Ruyak introduced me to, well, it would be a, b, c, d and we'd run out of time, but he introduced me to the comical yet penetrating logic in Don Quixote, the first novel as far as we know, where there are, to take a modern-era comparison, Princess Bride type puzzles, like in that movie, which I'm sure many of your listeners will have seen: these opportunities to ask a key question so that you can deduce what's the safe track ahead. So he introduced me to that, and then I became more and more interested. And he said, you know, you can actually study this stuff when you leave my class, and I don't mean just high school. He was referring, of course, to college. And he had a former student come in who was majoring in philosophy, and that was really quite something for me.
And so I decided I wanted to do that, but really I felt that the best thing to do would be to use logic in law, so I wanted to be a trial lawyer.
Love debate. Still do. I don't think a day goes by when I'm not debating, and I always win the debates I'm in.
[00:03:36] Speaker A: Oh, naturally. Of course. Yeah, of course.
[00:03:37] Speaker B: Thank you. Thank you.
But then I pulled a fast one on my mother when she came to Penn to visit one time, and I said, Mom, I'm sorry, no law school.
I'm going to get a PhD in philosophy and go on and continue to study logic and philosophy of religion and put those two things together. And she was just flabbergasted, but it wore off after about 30 minutes, and she said, which was par for the course, thankfully: if that's what you want to do, more power to you. And so that's what I did. And then when I got to Brown University, my advisor, Roderick Chisholm, compressing greatly here, said, oh, you crazy guy, you're not going to get a job if you're doing pure logic, and pure logic in the realm of philosophy of religion as well. You'll never do that. So our conversations quickly ended up with AI, because I said, well, hey, I was actually really interested in computation at Penn. And he said, well, now we might be getting closer. So I said, well, I'll put it in the hopper, and in the next conversation I came back and said, what about this AI thing? This is, you know, this is '82.
This is just about the time when people in AI were saying the counterpart of what some are saying today, which was: oh, this hot stuff everybody is excited about, which was expert systems, is just going to replace every human expert on planet Earth. So he said, good, go for it, start studying. And I did. And I think, really, I'm a very boring person.
I've never been far from logic and cognition and computation for as long as I can remember. Really.
[00:05:30] Speaker A: Yeah. Well, that's super, because you've been in this conversation, in this debate, for a long time, and today we're going to speak specifically about something called Integrated Information Theory. But before we get to that, for somebody who's been involved in these conversations and thinking about AI for as long as you have, maybe this will be an interesting question to start with. From your understanding of AI and its inherent limitations, back at the beginning up until now, were there any predictions you had originally concerning the limits of AI that have turned out to be correct? Were there any that turned out to be incorrect? And then, from where you stand now, I'd be curious to hear, Dr. Bringsjord, what your current predictions are, given your understanding of AI, if there even are any measurable predictions. I think that's something interesting, and something important to have. And I think it's an area where good philosophical thought about the nature of consciousness, rationality, formal thinking, and what AI is can help us avoid having a certain amount of egg on one's face. Right? Because there have been many predictions made by people who are a bit skeptical of AI that were very false. You'll never pass the Turing Test, or something like that. So you probably see what I'm getting at. I'd just be curious to hear your thoughts on all of that.
[00:06:49] Speaker B: Yeah, it's a great question, at least for me.
I don't think any of my predictions have turned out to be false. The darker predictions aren't yet fully true, but I think the empirical evidence for them is becoming close to overwhelming. So, on the darker side first, and I discussed this at length in my first book, What Robots Can and Can't Be: there's coming a time when, over a given finite interval of time, and putting anybody in the judge's seat, sort of like the Turing Test, armed with state-of-the-art technology to unmask what today we would mostly call deepfakes, we're not going to be able to tell the difference.
So that book is a defense of two propositions.
We're heading toward a time when Blade Runner, the original Blade Runner movie, and that kind of detective work is actually a well-defined career. I see that starting to happen, believe it or not, in my own case, which is really ironic; I never had that thought. But we're now needing to unmask these deepfakes. We're now needing to have something better than CAPTCHA, et cetera. So we're heading toward a time when, over a finite interval of time, we're not going to be able to tell the difference between a human person and the appearance, on the spot or through video, of an AI or an android robot. You're just not going to be able to tell the difference. And yeah, people are starting to get worked up about this, right? And it's still easy to tell the difference, and people are still worked up about it. I don't like the fact that they're worked up about it; I think with five minutes with a search engine, going back to some ground truth, you'll be all set. It's a matter of a lot of laziness, perhaps, today. But it is starting
[00:08:46] Speaker A: To happen, yes. And it is alarming, isn't it? I mean, deepfakes have always been around, but they used to face certain technological hurdles, right? Now those technological hurdles are being overcome. That seems absolutely right to me. And it is concerning.
[00:09:01] Speaker B: Yeah, right, right. And gradually overcome; we're slipping and sliding into this period of time. But if the judge is an expert...
So I don't think the Turing test has been passed, or anything close, at this point, because I allow for putting experts in the judge's seat, and I can unmask a deep neural network, as can many, or a predominantly deep-neural-network system like a chatbot of today, in a few minutes.
But this time is coming. Yet if you turn to the infinitary: any systematic reasoning involving the infinite is still utterly and totally out of the question for an AI. I mean, an AI can reason over some logical and mathematical content up to a point; it's generally exactly the same point as was issued as a challenge four decades ago. By the by, I'm sort of two generations younger than the original generation of logic-based AI folks, right? So they were saying: I'll give you a sentence, a single sentence in English.
You show me how that's going to work in an artificial neural network. And if you say, well, we'll put the data in for it, fine. I still get to ask my follow-up questions about what follows from what I tell you. So if I tell you "everyone likes anyone who likes someone who likes at least four dogs," good luck.
No way. Very little data on that.
Minuscule amounts of data. A deep neural network cannot reason over that, but mathematicians and logicians can. Again, it's a bit esoteric, but I made the prediction: this is an insuperable barrier.
By the way, that example is a modification of one from the great cognitive scientist of reasoning, Philip Johnson-Laird. Yeah.
So, you know, that prediction, that infinitary reasoning will be something AI strikes out on, holds true.
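For readers who want to see the shape of the dog-likers challenge, the sentence can be formalized, on one reasonable reading (my formalization, not necessarily Bringsjord's or Johnson-Laird's), as: for all x and y, if y likes some z who likes at least four dogs, then x likes y. Over a small finite model this can be brute-force checked; over the unbounded domains mathematicians and logicians work with, no such enumeration exists, which is the point of the challenge. A minimal sketch:

```python
# Hedged sketch: checking "everyone likes anyone who likes someone who
# likes at least four dogs" over a tiny finite model by brute force.
# The formalization and the model are illustrative assumptions only.
people = {"a", "b", "c"}
dogs = {"d1", "d2", "d3", "d4"}
likes = {("b", "d1"), ("b", "d2"), ("b", "d3"), ("b", "d4"),  # b likes 4 dogs
         ("c", "b")}                                          # c likes b
# For the sentence to hold here, everyone must like c, since c likes
# someone (b) who likes at least four dogs.
likes |= {(x, "c") for x in people}

def likes_four_dogs(z):
    """True if z likes at least four dogs in this model."""
    return sum((z, d) in likes for d in dogs) >= 4

def sentence_holds():
    """Evaluate the universally quantified sentence over the finite model."""
    for y in people:
        if any((y, z) in likes and likes_four_dogs(z) for z in people):
            # y likes someone who likes >= 4 dogs, so everyone must like y.
            if not all((x, y) in likes for x in people):
                return False
    return True

print(sentence_holds())  # True for this model
```

The enumeration works only because the domain is finite and tiny; nothing like it answers the follow-up questions about what follows from the sentence in general.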
The other prediction, which is in the same book, concerns beliefs that are layered, in terms of things like believing, knowing, intending, desiring, communicating. So, you know, if I say Austin believes that Selmer believes that Pat is talking to Selmer, that's actually true, and we know instantly that that's true.
[00:11:50] Speaker A: Yeah.
[00:11:50] Speaker B: But that is really beyond these systems right now.
And humans have to sort this out to read even a basic detective novel.
[00:11:57] Speaker A: Right.
[00:11:57] Speaker B: Okay. So, believe it or not, I could amplify that, but there's a chapter on that kind of thing.
We're nowhere. Just last night I saw a test by my close colleague on the consciousness front as well, Naveen Sundar Govindarajulu. I told my wife about it this morning. I said, you wouldn't believe the test.
Well, the test is ingenious. But you also wouldn't believe the results of Naveen applying the test to state-of-the-art systems like, in OpenAI's case, o1-preview.
It's completely unable to figure out how to put together a plan to affect another agent's mental states if there's any layering.
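The "layering" at issue, attitudes nested inside attitudes, can be pictured as a recursive data structure. This is an illustrative sketch under my own naming (Bringsjord's actual cognitive calculi are far richer); it just shows what a depth-two nesting like "Austin believes that Selmer believes that Pat is talking to Selmer" looks like as a term:

```python
# Illustrative sketch (my own representation, not Bringsjord's logic):
# layered attitudes as a recursive term, with a helper that reports the
# nesting depth, i.e. the "layering" said to trip up current systems.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    text: str  # a base proposition, e.g. "Pat is talking to Selmer"

@dataclass(frozen=True)
class Believes:
    agent: str
    content: Union["Believes", Atom]

def depth(formula) -> int:
    """Number of belief operators stacked above the base proposition."""
    if isinstance(formula, Atom):
        return 0
    return 1 + depth(formula.content)

claim = Believes("Austin",
                 Believes("Selmer", Atom("Pat is talking to Selmer")))
print(depth(claim))  # 2
```

Humans track this kind of nesting effortlessly, as the transcript notes; representing it is easy, and the hard part, which this sketch does not attempt, is reasoning and planning over it.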
So the other thing: echoing John Searle, I'm still sticking with the prediction that these machines understand nothing. And you could view a lot of these failures as directly in line with that implicit prediction. Searle has his issues and his problems, and he's certainly not to be venerated across the board, even just intellectually speaking. But by golly, he was right 100% about this.
[00:13:27] Speaker A: Yeah.
[00:13:27] Speaker B: If anything is just manipulating symbols with no understanding, it's got to be an artificial neural network in any form because ultimately the data is just numbers swimming through time. So lack of understanding.
[00:13:42] Speaker A: Yep.
[00:13:42] Speaker B: Yep. It's still there; we know it's there. The one thing I would say that certainly does surprise me is that we let ourselves get to this point in terms of what's sometimes called hallucinations on the part of these systems.
I never saw this coming, because my focus was really on the question: could an AI be premeditatedly and ingeniously deceptive? And that's still a serious interest of mine. I have a recent paper about that with Micah Clark, a former student and now an AI scientist specializing in autonomy, and we look at that issue. But this hallucination thing... I mean, we all know why it's happening in general. But that we let ourselves get to this point in the human race... look at what happened, if you're not aware. There was a German group of scientists who started using an LLM to write some papers on the very topic that they were experts on.
Yeah. It would generate a paper almost instantly, replete with nice formatting, an abstract, and a list of references. And then they were like, yeah, there's only one small problem.
We looked at the reference list, and some of the wonderful things that, according to it, we wrote, we actually didn't write.
[00:15:20] Speaker A: They don't exist. Yeah, right.
[00:15:22] Speaker B: They don't exist. And so, I mean, they were justifiably flabbergasted. And yet we still have these systems out there.
I think we've started to realize we've got to compartmentalize them to some degree, so that they're operating in domains that are not, quote, mission-critical or serious. But even there, I'm not sure we've clamped down as we rationally should. So I did not see that coming; I had no idea that was going to happen.
[00:15:53] Speaker A: Yeah, okay, that's really helpful. And one more question on this before we start to talk about Integrated Information Theory. I'd be curious where you think the line is between certain arguments that give us an in-principle reason to see that AI is really not a thinking thing, and whether or not those give us clear predictions, right, or certain limits that we can expect. So maybe it'll be helpful if we just go back to James Ross's argument as an example. Ross's argument, and people can listen to our previous conversation for the support of this, is something like this: he claims that all formal thinking, at least, is determinate, like logical thought and mathematical thought, but no physical process is determinate. So Ross argues that no formal thinking is a physical process. We supported all that in the previous episode, and there's actually a considerable number of papers out there right now.
I think that's a really good argument. In fact, I think it's the closest thing to a demonstration.
Right. The closest thing to a demonstration, to use a traditional scholastic term, that's in philosophy, you know, on the scene. It totally convinces me.
But does that set certain expectations, Dr. Bringsjord, that particular argument at least? Because for me it's always been a little bit fuzzy thinking about that. Like, okay, I think that that's definitely true:
AI is not a thinking thing; an impressive, we might even want to say, magic trick in some respects. But coming at it from that angle doesn't give me a totally clear idea of exactly where the lines will be drawn and what I can actually expect these systems to be capable of. Does that make sense?
[00:17:36] Speaker B: Oh, more than makes sense.
It's a great question.
I'll try to channel, so to speak, Ross.
I didn't get my PhD under him; he got his PhD under my advisor. But I think he would answer your question with a yes, and then the follow-up will be: okay, well then what should we expect, and has it arrived?
I think he would say, well, why don't you test it with some pretty taxing challenges,
the right answer to which, and the justification for that right answer, center around determinateness,
and maybe, it's fair to say, maximally determinate reasoning, if the problem does require precision at this high level, and if we're to verify it, so that we know the status of what you just produced in terms of an argument or a proof. Because again, as I think we discussed, just like there are computer programs that are still programs even though they're invalid, there are proofs that people produce, actually more than anybody would like to admit,
[00:19:07] Speaker A: Yeah.
[00:19:07] Speaker B: That are invalid. So that's the expectation, and we also need to verify it, and we can. We would expect that this would be, minimally, extremely tough through the 21st century for really any AI that is not engineered, that is not based essentially on the science that Ross points to. I mean, yes, it's a philosophical argument, and as you said, it's as close to a demonstration as you can get in the area, I think, of philosophy of mind, et cetera. I'm with you 100%.
I mean, the thing about it that's amazing is when he says determinate, precise, rigorous formal reasoning, and we start helping him out with some synonyms: we've got the first specimens of that in Aristotle. This is something, right? If anything in human intellectual commerce, if you will, has been going on so long that we should know what we're doing, it's that. And goodness gracious, these machines are horrible at it.
Unless, as a scientist or engineer in AI, you give them a token of the very thing that Ross is talking about. So, you know, good old-fashioned modus tollens.
[00:20:33] Speaker A: Yeah.
[00:20:33] Speaker B: You know: if P, then Q; not Q; therefore, not P. Well, you can give them modus tollens, but you have to program it in. Then they do wonderful things, but you're just giving them a token of it; you're not giving them literally modus tollens itself. So we've got that, and we understand it, because we know we can actually change the code, change the system, change the way it's tokenized, change the way it's encoded, if you will. So you would expect, based on Ross's argument, to see exactly what we currently see with systems that are not explicitly and carefully engineered to conform to the abstract structures he's talking about. Now, will you be able to show up and knock on the door at OpenAI and say, hey, I want to come in, because I want to show you basically why your neural-network-based processing has no chance of doing careful, rigorous reasoning in a verifiable form? They might not let you in, and if they let you in, I don't know if they'll let you hang out at the whiteboard all that long. But if it was Ross, he'd be done in two minutes on the whiteboard. And if there were honest people in the audience who weren't aware of this, this being Ross's reasoning and original argument, which we tried to extend, I think in their heart of hearts they'd say, oh gosh, I never saw this before. So that would be an expectation, I think. And it has turned out to materialize.
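What "programming in a token of modus tollens" might look like, as a minimal sketch (the representation and names are my own assumptions, not any system Bringsjord has built): the rule is explicitly coded, so the machine applies an instance of the abstract form rather than possessing the form itself.

```python
# Toy propositional setting: modus tollens as an explicitly programmed rule.
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    name: str
    negated: bool = False

    def neg(self):
        """Return the negation of this literal."""
        return Prop(self.name, not self.negated)

@dataclass(frozen=True)
class Implies:
    antecedent: Prop
    consequent: Prop

def modus_tollens(rule: Implies, fact: Prop):
    """From (P -> Q) and not-Q, conclude not-P; otherwise conclude nothing."""
    if fact == rule.consequent.neg():
        return rule.antecedent.neg()
    return None

p, q = Prop("P"), Prop("Q")
concl = modus_tollens(Implies(p, q), q.neg())
print(concl)  # Prop(name='P', negated=True)
```

The point in the transcript survives the sketch: the `modus_tollens` function is a hand-coded token of the inference pattern, and nothing in the system grasps why the pattern is valid.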
[00:22:19] Speaker A: Yeah, that's great. Dr. Bringsjord, thanks for entertaining me on that front. And I think the general question of, well, what should we expect given our positions, what predictions does our position make, is an important one. And I think you've given us some reason to see that for some time these predictions have not always been optimistic, but they have been fruitful. Right. And I'm with you on some of the darker, more concerning sides.
Now we're going to take a look at another recent contribution from Dr. Selmer Bringsjord, the title of which is "Can Consciousness Be Explained by Integrated Information Theory or the Theory of Cognitive Consciousness?" So we're going to take our time with this. This is not an area that I have specialized in. It's a fascinating paper.
So to start out, Dr. Bringsjord, let's do this. This is the way I always like to try to present material that can be quite technical to people who are interested and willing to work hard at understanding certain concepts, but who maybe are not specialists. Explain this to us like we're five years old, if you don't mind. What is Integrated Information Theory, and what is your general argument in this paper? Let's start really broad, and then we can dive into the tall grass.
[00:23:41] Speaker B: Yeah, I'll try five.
A little...
[00:23:45] Speaker A: How about nine? How about nine?
[00:23:46] Speaker B: How about nine. You know, I can always particularize to these sorts of lower ages; it's a challenge of presentation, elucidation, and teaching now, courtesy of my granddaughters, whom I of course appreciate as totally brilliant.
I appreciate you moving up to nine, because I'm not sure at six she's going to get it. Although, I don't know; the Ross stuff, as I think we got into, that line of argument and the expansion of it,
she's pretty darn good at arithmetic, number theory if you will. And that's quite amazing.
[00:24:33] Speaker A: It might surprise you. I have, soon to be, six children of my own, and I'm constantly surprised at what they can grasp at a young age. But we'll give you a nine, or maybe up to high school; take whatever you need. Let's just try and get the simple statement out there.
[00:24:44] Speaker B: Yeah, yeah, nine is great, because, to rather barbarically encapsulate it: an example we have for the exposition of IIT and its connection to consciousness in the paper in question is consideration of a robot. Actually, two robots. And nine-year-olds are probably starting to play with robots, at least in virtual environments. It's possible there's science in school early on, which in this age almost inevitably means the science teacher is talking about AI and robots. It's motivating, it's cool, et cetera; the kids love it. So look at the challenge of getting yourself a Robby the Robot for the household, because maybe there's some activity in your home.
Of course, you've got a large family; you can divide all the chores, and maybe it all gets done, but there's a lot of drudgery, perhaps. So you'd like your robot to come in there and do things in the home. And that's not just what we primarily see today, vacuuming the rugs. It's: all right, we're all too tired to cook tonight, and is Daddy going to go out and get takeout, which is four miles away, and it's cold, or whatever? Maybe we'll just have the robot whip up some food. So cooking, laundry, all kinds of cleaning beyond vacuuming, so dusting. And don't break the cherished family pictures on these tables. And if you find something shiny on the floor, it might be a ring or another piece of jewelry; save that, put it aside, but still get the floor done, et cetera.
I think IIT says the following about a comparison between two kinds of household robots that you could get.
In one case, Robot Inc. could ship you a robot that has a whole bunch of modules that are separately engineered for the tasks I mentioned.
So if it really is going to make a dinner and then do the laundry, it engages two separate modules.
Now, there are probably dedicated algorithms that you would always want to have in the case of a household robot for these two spheres of activity.
But these modules that I'm imagining are really separate. You know, there was the laundry team at Robot Inc., and then there was the cooking team, and they just decided, hey, for all kinds of reasons, we really want to modularize this robot. And then there's the other case.
Maybe Robot Inc. sells two kinds, or maybe it's Robot 2 Inc. that
creates a robot based on a belief that interoperation across these different activities would be much better, because now they can ship a robot that demands many fewer specialists for these dedicated algorithms. If someone says, well, I don't want a household robot for these purposes; I need a new robot for my factory floor; what can you do? Maybe Robot 2 Inc. says, well, yeah, easily: this more general-purpose robot can be exactly what the doctor ordered for you. So the first robot would have lower consciousness, all things being equal, than the second robot, by virtue of the fact that information in the second is integrated across its, quote, intelligence, or its processing.
And this is a humble example, but I think about this diagnosis of a severe differential between these two robots having spent some time with the originator of IIT, Giulio Tononi, and with Christof Koch as well, in a two-summer program on technology and consciousness, which we may get a chance to talk about. It was quite an experience.
All right, is it a little presumptuous of me, maybe? But stripping the math out of it, getting down to brass tacks and asking, look, what's your basic intuition?
I would say not only would they say, but they would have to say, that the robot that's, quote, integrating information across these modules has a higher degree of consciousness. Most, I don't want to say sane, but most thoroughly rational people without a horse in the race, hearing what I just said, would say: are you joking? Neither one of these robots has anything like consciousness. What are you talking about? And that's right, exactly. But IIT says something very different. And if you keep scaling this up and look for more movement of information at higher and higher intensity levels, and then at a higher and higher level of integration across the system, and again, we don't need to, and don't want to, get into the mathematics, you're going to have more consciousness. So Naveen and I say, look, show us first in the robots how: if Boston Dynamics sends the first kind of robot to the house, and Google, which shed Boston Dynamics and now has its own robotics division, sends the second robot, can you really explain how one is more conscious than the other? They're both just doing fundamentally nothing that warrants descriptions of consciousness. But that's what they would say, and that's what they are forced to say.
And, you know, I say, well, it's nonsense. But there's the nine-year-old version of that. And then, if it's a nine-year-old, you could spook them out. Maybe we're getting close to Halloween, so it's appropriate; maybe I should try. You could say, well, what if it's not a robot? What if it's your iPhone versus your Samsung smartphone?
They are both conscious, you see, but they vary in terms of the readings you would get if you could measure them with IIT. And now, since actually Searle himself pointed this out: given that ultimately they're made of the same physical stuff, going far enough down, do we say that everything that's determinate as a system, maybe we'll put that minimal constraint on it, is conscious to a certain degree? And they have swallowed that.
[00:32:07] Speaker A: I would think that they would have to swallow that. Okay, so this is really interesting.
And to me it does seem, at the end of the day, to be based on a sort of fundamental redefinition of consciousness.
And it also seems like it commits them to this idea that consciousness is something that is graded, right, which seems to me not entirely correct. I mean, even think about intentional states: while it makes sense to say that I might have states that represent less, there's no state that is itself less of a representation.
[00:32:36] Speaker B: Right.
[00:32:37] Speaker A: Of anything, right? Either there's a representation or there's not. It seems to me that this is an on-off thing. Same thing with consciousness: there might be different feelings of what it is like, maybe more feelings than not, but there's either a what-it-is-likeness or there isn't. And help me clarify this, because like I said, I haven't spent a whole lot of time on this, but it seems their measure
is entirely quantitative but not qualitative. And I know in your paper you do differentiate between two notions of consciousness, so maybe that's where we should spend some time: between phenomenal consciousness and access consciousness. And I think what's of interest to most people is that first one, right, the what-it-is-likeness understanding of consciousness. So yeah, talk to us about that, and maybe that'll help clarify the discussion a little bit.
[00:33:27] Speaker B: Yeah. So we can back up a little bit. Pedagogically speaking, for our nine-year-old or twelve-year-old or whatever, and for the audience, we have to.
[00:33:39] Speaker A: We got up to high school pretty quick here. Yeah.
[00:33:41] Speaker B: Oh, we are, we are, and your audience is brilliant. So we really do. But we still need to back up, because we didn't say what we mean by consciousness. And you're right: what they mean by consciousness is so-called P-consciousness, to use the abbreviation philosophers have really adopted. So it's phenomenal consciousness, and the phrase that you used is right on target: there's something it's like to, and then we fill in, get pinched severely.
Taste coffee. Right? Taste coffee for me.
Or, maybe I should reduce it a bit, but: carve a ski turn at a high rate of speed with perfect equipment, skis designed for that race. So that is just one kind of consciousness, but it's the one that they're definitely after. And I do think, with one caveat, that you're right.
It is there or not; it's either on or off. To use the phrase I think Stevan Harnad used many years ago, there's either someone home or not.
Now, the caveat, though, is really important, and it's this: there's self-phenomenal consciousness. So you are phenomenally conscious, or P-conscious, but you also know that you are; you know that you're a subject which is in pain when you are, et cetera. I'm not entirely sure about that as we go down through non-human animals. Yeah, less and less.
It'll correlate.
So, not to identify the two, but it'll correlate with neurocomputational complexity as we go down.
I'm not sure that it's self-phenomenal consciousness, though. So Harnad's phrase may be a little bit infelicitous. But there's clearly pain being experienced by the creature, whether or not it has the self part. And we have that part, which, unfortunately for us, makes sometimes, when things go off the rails, for mental illness.
Because the thing is, we have it at two levels all the time. So we say things to ourselves, or we rue the fact: well, I am in pain now; I don't want to be in pain in the next hour; so I am going to take acetaminophen. So we reflect on the fact; we go one level up and say, I believe that I'm perceiving my pain. That's two levels. For them, they've yet to show how any kind of this gradation would work. Our approach to consciousness, cognitive consciousness, allows exactly that, with our own measurement scheme, lambda. And the other challenge to them, I issued this on the spot, because that was really the only sort of behavioral prompt we had for coming up with the concept of cognitive consciousness to pit against theirs: they have a measurement scheme called phi, and phi in theory can measure the level of phenomenal consciousness in any system, including in our two robots. We could apply phi to them; in fact, we can work that out mathematically with some rigor to show that phi ought to give a different readout, higher consciousness in the case of the robot that uses integrated information. But we say that humans stand on the other side of a severe gap, a chasm or canyon, from non-human animals. Cognitive consciousness predicts that and says that, and they're forced to go the other way. They end up saying everything is conscious. Now, there might be a higher phi reading for some things, but everything gets one. I'm looking at my Apple AirPods right in front of me now, so their case is going to end up with a phi reading. It's actually computing right now; I opened it up, and it sent me a green signal saying, I'm fully charged, or at least I have power enough to charge your pods. And look, is there something it's like to be my AirPods? Goodness gracious, I don't...
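IIT's phi is defined over a system's full cause-effect structure and is notoriously expensive to compute, and nothing below is the real phi. But the intuition that "integration" is a measurable quantity can be illustrated with a toy proxy of my own choosing, mutual information between two units: a modular system whose parts run independently scores zero, while a coupled system scores above zero.

```python
# Toy illustration (NOT IIT's actual phi): mutual information between two
# binary units as a crude stand-in for "integration". The modular system,
# whose units are independent, scores 0 bits; the coupled one scores more.
from math import log2

def mutual_information(joint):
    """joint: dict mapping (a, b) -> probability. Returns I(A;B) in bits."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# "Modular robot": the two modules' states are independent coin flips.
modular = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

# "Integrated robot": the two modules always agree (fully coupled).
integrated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(modular))     # 0.0 bits
print(mutual_information(integrated))  # 1.0 bit
```

The objection in the transcript is untouched by any such calculation: a number like this, however rigorously computed, does not by itself warrant any claim about there being something it is like to be the system.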
[00:38:05] Speaker A: The panpsychist might say yes, but I think most of us still want to say no. Right?
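To make the "whole exceeds the sum of its parts" intuition behind phi concrete, here is a drastically simplified toy sketch. This is not IIT's actual phi, which involves minimizing over partitions of a system's cause-effect structure; every function name below is invented for illustration. For a tiny two-node Boolean network, we ask how much the whole current state predicts the whole next state, beyond what each node alone predicts about its own next state:

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (bits) of an {outcome: probability} table."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_info(pairs):
    """Mutual information between x and y over uniformly weighted samples."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return entropy(px) + entropy(py) - entropy(pxy)

def phi_like(update):
    """How much the whole state predicts the next whole state,
    beyond what each node alone predicts about its own next state."""
    states = list(itertools.product([0, 1], repeat=2))
    whole = mutual_info([(s, update(*s)) for s in states])
    part_a = mutual_info([(s[0], update(*s)[0]) for s in states])
    part_b = mutual_info([(s[1], update(*s)[1]) for s in states])
    return whole - (part_a + part_b)

def xor_net(a, b):
    # integrated: each node's next state depends on both nodes
    return (a ^ b, a ^ b)

def copy_net(a, b):
    # modular: each node just copies itself; no integration
    return (a, b)

print(phi_like(xor_net))   # 1.0: the whole exceeds the sum of its parts
print(phi_like(copy_net))  # 0.0: the parts already carry everything
```

On this toy measure the integrated network scores above zero and the modular one scores zero, which conveys the flavor of the dispute in the conversation: a measure of this kind assigns some nonzero reading to any sufficiently interconnected system, whether or not anyone thinks there is something it is like to be that system.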
[00:38:09] Speaker B: Yeah, I think so. Now, you know, one thing that I wanted to get into with you, and I don't know how it will accord or not with your plans, is that since we wrote the paper, we and they and many, many others have started to explore the quantum case.
That is, quantum computing. Quantum computation.
Because we now have quantum computers. We had that amazing chapter in American corporate battles between Google and IBM, which some may be unaware of, where Google laid a trap for IBM.
They got a quantum computer and declared that it had done some amazing, unprecedented things. They said it had reached quantum supremacy. And then IBM said, well, come on, you can't just make that announcement. You have to write a technical paper, you have to disclose it, and someone has to referee it. And the trap was that they actually had done that, and they announced the paper was coming out.
But people have talked about this connection between quantum mechanics, quantum effects in the world of the quantum, and consciousness forever, really. Roger Penrose, for example. A great experience: I had dinner with him years ago, when I think he was just starting to write about these things. And he said, you know, I do think there might be something in the brain at the quantum level that's relevant. And when he was using the term consciousness, he was talking about phenomenal consciousness. So these folks are pushing the envelope.
Since the release of our paper, they've kept going. We had no opportunity to write about it, and anyway it wouldn't have made sense at the time. But they're pushing the envelope, and your listeners can take a look: they have a 2024 paper.
Christof Koch is the last author, but I think the driver of it. They're exploring phenomenal consciousness in a quantum computer and whether that's possible or not.
So we've started, though we have no publications yet, to look at this from the standpoint of cognitive consciousness, which obviously I have to explain.
But their stuff in this regard, I mean, this is really scary stuff, because if they really believe it.
Well, for starters, back to the AirPods, right? If they really believe that my Apple AirPods have some degree of consciousness in the phenomenal sense, I've already parted ways. But let's just take note of the empirical fact, Pat, that they actually believe these things, okay? They're not naive. And I learned that firsthand, in person.
These are not people trying to get attention or accolades or anything like that. In fact, many people in AI looking at consciousness have accused them of playing fast and loose in various ways in their publications. There are really severe intellectual battles going on.
[00:41:25] Speaker A: Yes.
[00:41:25] Speaker B: Yeah. So they know, they believe this stuff. Well, then they really believe, because of the move to quantum computation, that they're poking around now in a realm where we might actually be seeing glimmers of phenomenal consciousness when we are using a quantum computer. Now, let's think about this. Google fired Blake Lemoine, I think, for saying that his chatbot, in his opinion, was conscious. He used the term sentience, but what he meant was that it was subjectively aware, that there was something it was like to be it. And he got worried and said, I don't want any part of this.
And they fired him. But wait a minute. The only way you would be justified in firing him is if you provided a precise, compelling rationale for why he was wrong.
Now, I believe he was wrong. They believe he was wrong. But we've crossed over into strange territory here. This is going on largely unnoticed, though I could say that there are some people who certainly notice it and are worried about it, just by thinking about the basics of Western jurisprudence. Okay. He was convinced that the chatbot had inner feelings, emotions, and that there was something it was like to be it. All right, he's wrong. We could spend an episode, or another episode, demonstrating why he's wrong.
[00:43:08] Speaker A: Yeah.
[00:43:08] Speaker B: But now let's think about what these folks are doing in the case of IIT and phi in moving to the quantum realm. The Entropy paper is open access. If people are skeptical, if they don't believe that this is a detailed investigation and that they're really pushing these limits in an amazingly careful way, physically, okay, they can go get the article. So, you know, isn't that morally questionable?
I mean, if they're 100% convinced that this is all it takes, this integrated information happening in a system, and they firmly believe that when they move to the quantum level, they really have it.
Why would they do it? From an ethical standpoint, how could it be okay to possibly create exactly what Lemoine thought, erroneously, was going on in his case? I mean, this is really weird stuff. I wasn't a referee, but I would say you at least have to say something about this issue.
[00:44:14] Speaker A: Yes, right, obviously. Yeah. It's a great point. There are obviously major potential ethical implications if they are correct about this position.
[00:44:23] Speaker B: Yeah, right. And you know, it's up to God only, I suppose, to judge people for purely mental acts; in terms of law, I get that. But we do introduce mental states as necessary components of crimes that are physically determinate. Right. That's why we have, essentially, degrees of murder, and then we go to manslaughter, et cetera. We do take account of mental states. So I'm not saying that what they're doing is definitively morally wrong. I'm saying it sure looks like, if those are the beliefs they have, that this could very well be morally wrong. Because what's your upside, especially when we can get the gains in the intelligence of a system by looking at something which, since my interactions with John McCarthy on this concept, involves no consciousness in that phenomenal sense? I'm using the term to refer to intense levels of consciousness in a structural sense that produces intelligent behavior in the system. I don't think there are any feelings going on, and there'd be no reason for them; that is what McCarthy would say. So I'm just throwing that out there. So Naveen and I are going to take lambda into the quantum realm and see what we can do.
[00:46:02] Speaker A: This is interesting.
The point I want to get out is to understand, I guess, some of the wider philosophical assumptions, the wider philosophical backdrop at play here. I mean, if you talk to good old Aristotle or an Aristotelian, the reason they would say that an animal is conscious, or that we are conscious, is that consciousness is tied to certain powers, like sensation and locomotion. And once you go down the chain of being and those powers aren't there, there's really no reason to think consciousness is there either. Right. That's the basic point. But then suppose you come from a general materialist standpoint; Aristotle is not a materialist, so I don't think he has this problem.
The issue here is how, through a series of quantitative steps, you can leap over what seems an enormous qualitative abyss. Right. That seems to be the fundamental issue that is constantly recurring. Because if you are a materialist, you think that what is at bottom is essentially the exact opposite of everything our conscious life is. It's not intentional, it's not directed, it's not about anything, it's not teleological. There's no what-it-is-likeness. Right. None of that is there.
[00:47:21] Speaker B: Right.
[00:47:22] Speaker A: So how do you take whatever that stuff is, totally devoid of any of those features, put it in various combinations or permutations, and then suddenly, to put it somewhat polemically, it poofs out this reality that is qualitatively, utterly unlike what it just apparently emerged from? I think that's the fundamental issue. And so I think a lot of naturalists have realized: okay, maybe we can get over this problem by not having the gap there in the first place. We just have to have either consciousness itself, or some type of proto-consciousness, or something like consciousness, down at the ground floor somehow. Right.
And then we don't have this sort of gap problem. And that's an interesting move. I think it's a concession that materialism and physicalism are frankly false. And I think what we're talking about now is no longer anything like that worldview. And it's debatable whether it's even a naturalism at all.
So talk to us about all that. I just threw a lot of stuff out there. But it seems difficult to have these types of conversations without trying to make these wider background philosophical assumptions explicit, and without asking when it is or isn't legitimate to bring in something like panpsychism without having to admit that the prior world picture you had is essentially refuted by that ad hoc move, if that makes sense. Yeah, talk to us about that.
[00:48:52] Speaker B: Yeah, it's an ad hoc move considered from the standpoint of pure rationality. But it's not ad hoc from the rhetorical point of view, or the subjective point of view. Pure objectivity, which frankly you're using to minimally cast some doubt on it, if not outright render it overwhelmingly suspicious, says it doesn't make any sense. Rhetorically, it makes sense.
So, Naveen, my co-author on this paper, and I both start, I think, in many ways with an honest observation, one that Christians and other historical monotheistic religions could get in on as well. I'm a Christian; I'm just about done listening to Marilynne Robinson's book Reading Genesis, for example, which is just absolutely unbelievable.
The observation is a two-part one. The first part concerns simply the behavior and accomplishments of creatures on our planet. Look, take the simplest working system, engineering-wise, that we're sort of semi-impressed with. Most people don't know how a toilet operates; this has been empirically demonstrated. A doggone toilet is actually a really smart system, in how the siphon action works. And we came up with that. Okay, I don't need to talk about Ferraris, but I could. I don't need to talk about rocket ships, but I could. I don't need to talk about space stations, but I could.
Using a hammer, or using a rock as a hammer, tool use, that's not what I'm talking about. I'm talking about the engineering of a system. Let's be honest: we're simply brilliant creatures. Yes, we're still doing things like making deepfakes and invading countries for no ethically sound reason, but intellectually we're unique, as far as we know. We haven't been to Alpha Centauri, so maybe there are aliens that are super smart beyond us. And the AI people keep saying, well, we'll make the super smart things by ramping up AI. Yeah, I've been hearing that since I was an undergraduate.
It's not going to happen anytime soon by any metric. So we start with the observation: look, we've got this big problem here, this fundamental divide. What is going on here, scientifically, formally, mathematically, engineering-wise? And I think somehow, just as you say, a lot of these folks on the IIT side and the phi side say, well, all these other things are also phenomenally conscious, even things that you might say are absurd to consider, like seemingly inanimate objects; we just have a little more integration and so forth. And, you know, that's ridiculous.
[00:52:22] Speaker A: Yeah, yeah, that's ridiculous.
[00:52:23] Speaker B: You know, this thing called math and engineering, that Leibniz and Newton were pretty good at? It's a big deal that we have the calculus.
[00:52:33] Speaker A: It is ridiculous, and it parallels, I think, other kinds of extreme positions in philosophy. Think of mereology, the study of parts and wholes. You have two extreme positions. There's universalism, which is the idea that any random things you point at form a legitimate object. My foot, Donald Trump, and my minivan form a legitimate object. And most people say, no, that's insane; that isn't a legitimate object, it isn't tightly unified enough. And then there's nihilism, which is the idea that there are no composites at all; everything is mereologically simple. And most people realize, no, there's got to be some sort of reasonable middle view there. And Aristotle offers it: a substance-based ontology, where we can differentiate between real substantial unities, accidental unities, aggregates, and so on and so forth. We don't have to get into that now. But in this debate about consciousness, it seems that people, if you're a materialist, are just swinging from one extreme to the other in a parallel way. There are materialists who are eliminativists, and they realize the same problem. They realize that if we don't have some qualitative dimension at the bottom, we're not going to get it at the top, so it can't be there. And whatever we think this consciousness thing is, it's not what it seems; it's an illusion, or whatever the eliminativists say. And most people respond: that's insane. That's just definitely not correct.
And you can argue against it and give all sorts of reductios, but I think some things can really just be dismissed with a hand wave, just being honest about it. On the other hand, it seems like the same types of people, working within a similar paradigm, realize the problem, and so they just go to the other extreme: instead of saying that nothing is conscious, everything is conscious. Right?
[00:54:12] Speaker B: Yeah.
[00:54:12] Speaker A: And that's definitely wrong too. Right. So it just seems like a very parallel situation, where what we should do is get back to that same middle position and find a world picture, a worldview and ontology, that allows us to make sense of that. And I would say, and I think you'd probably agree, that something like a broadly Aristotelian view can really help us do that, so we don't have to land in either of these, honestly, crazy positions.
And we can still do just as well with all the science, right? It's empirically equivalent and all that. So, yeah, your thoughts on all that. It just struck me that this seems like another similar case of materialists, because of their paradigm, being forced to swing between two extremes, when there's a very reasonable position that can be maintained if you're just willing to forego that materialist, physicalist starting point.
[00:55:00] Speaker B: I couldn't agree more.
No question about it. The eliminativist case has of course been instantiated by, well, maybe sometimes he got sheepish about it, but Daniel Dennett. You know, it's all an illusion and so forth and so on. I mean, you can call the thing an illusion, but then when you finish writing your book, you go get a cup of coffee, you...
[00:55:25] Speaker A: talk about how good it felt to write that book.
[00:55:29] Speaker B: Well, there you go. And maybe you celebrate after finishing it with a glass of champagne. I don't know if he engaged in such things. Oh, this is delightful, this champagne; so happy to be celebrating my achievement.
An interesting, self-refuting move you would make in registering your appreciation of the champagne. So I think that's the other problem with the offshoots of the move you're pointing to, the eliminativist move. So yeah, it's amazing that we have these groups. I respect them, I like to talk to them, some of them are my friends. But really, at the end of the day, goodness gracious.
[00:56:13] Speaker A: I mean, yeah, well, you know, Aristotle himself: a little error in the beginning can be multiplied a thousandfold by the end. I would say it's probably more than a little error at the beginning. But it is interesting to see how these rather absurd implications play out, and it is funny to see that these panpsychist moves, which of course come in different models and flavors, are becoming increasingly popular. And I think that itself is an admission, maybe not an explicit one, of the failure of the materialist paradigm to account for these things, and not just the hard problem of consciousness; consciousness is just one of the issues. I mean, James Ross didn't even focus on consciousness. He focused on rationality. And so did Aristotle. They didn't really think about the what-it-is-likeness. They were thinking about these rational powers that we have. And these are logically distinct issues, aren't they?
[00:57:11] Speaker B: They are. And we haven't talked about what might be another weakness. I don't think you can have multiple Achilles heels; I guess you could, if it's the humanoid case, you can have two. But we haven't talked about another, if you will, Achilles heel in the affliction of this camp, intellectually speaking, which is the utter absence of a precise definition of what's being talked about.
So I think it's fine in a conversation like ours to refer to phenomenal consciousness by example: that which it's like to experience a fine single-origin pour-over coffee from your high-end coffee house, or the ski turn, or whatever it is. That's fine, okay?
But that's not how science works. These people are doing science, and they're allowed, for reasons that I have yet to fathom, including in the recent paper in Entropy, to refer to consciousness over and over again and pursue it, seek it, and they haven't defined it. I mean, the lack of definability of the very thing that's at the core of human existence should be a signal to you that something unbelievably profound is going on here. Don't fake it. Don't give people the impression that it's not that big a deal, that we can handle it, we can formalize it, we can mathematize it, and we can look for it empirically. No, you can't. You can't even define it.
And then there's the lack of humility in the face of that. Again, this is a bit like criticisms made of Dennett, I think, repeatedly. Let me get this straight. You're telling me the very phenomenon that is the reason I'm staying alive and not just ending my life, the very reason why no one should monkey with my life, ceteris paribus, is an illusion? Okay. Really? Well, what is this thing that's an illusion? Can you please define it for me? He never gave a definition of phenomenal consciousness. You're not allowed to do that.
You might have been allowed to do that two or three hundred years before Christ, when Aristotle, bless his heart, had the first logics and pretty darn good mathematics. This is the 21st century.
We've got a lot of math, a lot of logic, a lot of theory of computation. If you really know what you're talking about, just define it. Write the definition down, and then we'll go investigate it. Or step back and say: oh my goodness, this is the very essence of why life is meaningful, that I have a mind that can have genuine experience, that I'm conscious, that I have feelings, that I have hopes and dreams, that I can have beliefs. You mentioned intentionality: that I can direct my mind at remote objects and abstract ones.
If you can't define it, don't fake it.
[01:00:31] Speaker A: Yeah.
[01:00:31] Speaker B: Step back and say, wow, how in the world am I going to explain this? And good luck to you trying to explain it from a materialist point of view.
[01:00:40] Speaker A: Yeah, that's a great point. And we talked about moral implications, and obviously there are a lot of practical implications here. Right. I mean, ideas have consequences.
Right. And these perspectives are often just so seriously diminished and impoverished with respect to morality and meaning. And I know not all atheists or naturalists are nihilists, or want to be nihilists, but it's hard to avoid nihilistic consequences, often outright depressing nihilistic consequences, from a lot of these perspectives, especially eliminativist ones. And I think if you are an eliminativist and you're trying to be consistent, you should be a nihilist in many respects. I mean, think of somebody like Alex Rosenberg. He's got that book, The Atheist's Guide to Reality. He's an eliminativist and he's a nihilist. I think he's deeply wrong, but I think he's at least pretty consistent. And I know a lot of atheists and naturalists don't like his project, but it's hard to see why, if you are committed to a certain world picture, a certain ontology, a certain epistemology, a sort of broad scientism, you shouldn't be on his program. It's hard to avoid. And that's sort of his mission. For anybody who's read the book, he doesn't really argue against God in it. He just sort of lazily gestures towards Hume: oh yeah, Hume took care of that problem.
This is really a guide for atheists: if you're an atheist, this is what you should believe. So it's really a book written for atheists, trying to get their beliefs straight, and I think it's pretty convincing in that respect. And I think he's sort of an unintentional ally to people like us, in the sense that he's giving us aid: the conclusions are so absurd that, if they really are tied to that starting point, it should give people really strong reason to go back and revise that starting position on all sorts of fronts. Not just with respect to morality and meaning, the things that you've been talking about, but other things that are just flatly undeniable. Like the fact that we do have phenomenal states. There really is a what-it-is-likeness to having this conversation with Dr. Bringsjord while I'm sipping my coffee; my life comes to be through this, that is my world. And to deny that fundamentally undermines everything, including any sort of practical, reliable scientific inquiry into the world. It is so core and so fundamental that I don't think you can coherently, at the end of the day, eliminate the sorts of things this paradigm is trying to eliminate. So this is just my long-winded way of saying I definitely agree with you. And I actually think pointing towards certain people like Dennett and Rosenberg can, in a sense, be helpful, because I don't think they're stupid. I think they're actually pretty good thinkers who are just trying to work out the consequences of a particular paradigm, and doing it in many ways more consistently than other people.
[01:03:43] Speaker B: Yeah, no, I couldn't agree more. Absolutely, yes.
[01:03:47] Speaker A: So, Dr. Bringsjord, before we wrap up here, is there anything else we haven't said about integrated information theory that you think is important, that other people should be aware of? Any other aspects of your paper or argument? Obviously we're going to include a link; this is available online for free as a bonus to the volume Minding the Brain, so people can read it at their leisure, and we'll encourage them to do so. But yeah, any other final thoughts on anything?
[01:04:14] Speaker B: Well, perhaps just one. We haven't talked much about it; anyway, I've sprinkled in a few references and some more specific information about the essence of the competitor here, cognitive consciousness. And that's pretty easy.
I could actually just use an example of Dennett's to perhaps crystallize the theory and the difference.
I think Dennett says at some point, well, I like dogs; obviously I'm paraphrasing. Dogs are pretty smart. Darwin thought dogs were really amazingly smart, even stacked against the human mind.
But look, I put the food in the bowl, and the dog believes there's food in the bowl, and the dog saw me do that. So the dog not only believes that I put the food in the bowl, but believes that I believe there's food in the bowl.
So that's two levels.
[01:05:21] Speaker A: Yeah.
[01:05:22] Speaker B: Okay. Does the dog believe that the person behind me, who's not the so-called master but is always around, another family member, believes that the master put the food in the bowl? If so, the dog's got a third-order belief. And Dennett said, I could go along with that. Maybe, probably.
But we, I mentioned detective stories earlier, we routinely go way beyond that. Okay. And cognitive consciousness as a theory says: look, that's the mark of serious levels of cognitive consciousness. And we believe dogs are cognitively conscious.
But now work your way down. If we don't have a single layer of cognitive verbs operative in the agent, if we don't have that the agent knows P for some meaningful proposition, sorry, there's no cognitive consciousness. When you apply our measurement scheme lambda to a pure deep neural network, a pure chatbot, it says the level of consciousness is zero. This would be my demonstration in the cognitive consciousness case for Blake Lemoine. I'd say, well, you don't know what phenomenal consciousness is anyway, but we could talk more about that. In the cognitive consciousness case, we can prove that that large language model, if it's a pure deep neural network, has zero cognitive consciousness. It literally has no beliefs. You can't find the structures in the system that correspond to an agent believing a proposition. You can only find numerical data swimming dynamically through time. And we have an inspiration here from John McCarthy. What he said at the 2006 celebration of AI at 50 years of age, echoing what he'd said around 1956, was: you folks are talking all about the statistical processing of these systems, but don't you understand? They don't actually know anything. And worse, they don't know they know anything. They don't know they know something, and they don't know they don't know something.
And beyond that, how are they intelligent? They're not. And they're certainly not cognitively conscious. So I would just throw that in there. And of course, many more details along that line are provided in the chapter in question.
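The dog example lends itself to a small worked illustration of grading agents by the order of their iterated beliefs. The sketch below is only a toy of my own devising, not Bringsjord and Govindarajulu's actual lambda measure; every name in it is hypothetical:

```python
from dataclasses import dataclass

# All names here are invented for illustration; this is not the actual
# lambda formalism, just a toy for the "orders of belief" idea.

@dataclass
class Believes:
    agent: str
    content: object  # a bare proposition (str) or another Believes

def belief_order(p):
    """Depth of nested belief operators in a formula."""
    if isinstance(p, Believes):
        return 1 + belief_order(p.content)
    return 0  # a bare proposition: no cognitive operator at all

food = "there is food in the bowl"
first = Believes("dog", food)                           # order 1
second = Believes("dog", Believes("master", food))      # order 2
third = Believes("dog",
                 Believes("family_member",
                          Believes("master", food)))    # order 3

print(belief_order(food), belief_order(first),
      belief_order(second), belief_order(third))  # 0 1 2 3
# A pure statistical system with no belief-like structures anywhere
# inside it would register order 0 on this toy scheme: no beliefs at all.
```

The point of the toy is the contrast drawn in the conversation: an agent's cognitive consciousness is graded by how deeply such operators nest, and a system in which no such structure can be found at all registers zero.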
[01:08:09] Speaker A: Yeah, well, that's great, that's super helpful. And the paper is great; it's a technical read, so people should be ready for a challenge. But we like that. We like rigor, we like technicality, we like challenge. And Dr. Bringsjord, let's finish with this. What are you currently working on next? What can people look forward to, and how can people keep up with your work?
[01:08:30] Speaker B: I don't really know how they can keep up with my work systematically.
[01:08:33] Speaker A: Not regularly on social media like the kids are these days.
[01:08:37] Speaker B: Yeah, yeah, they can Google stuff and go to my CV, which is a little behind. But in terms of stuff I'm currently working on, one of the items I alluded to in passing is extending our concept of consciousness, and our measurement scheme for it, to the quantum computing case. It's going to take a while before we have some publishable results, maybe a year, year and a half. And that's really exciting; I got excited there, I got carried away. I mean, this quantum computing thing. It's one thing that David Deutsch long ago introduced the concept of a quantum computer. But by golly, back to the point we discussed about humans being singular in the known universe and making systems of a complexity that is amazing.
An actual quantum computer ought to amaze people, and we have one at my university, RPI, thanks perhaps to a billionaire and IBM. It's an amazing thing. So I'm really excited, along with my colleagues, about the exploration of AI in general, intelligent systems in general, and also specifically cognitive consciousness, in a quantum computer. I find this entrancing. So hopefully some results are coming down the pike.
[01:10:11] Speaker A: Yeah, well, great. We'll keep up with all that, and hopefully that'll give us an opportunity to have another conversation in the future; I would certainly look forward to that. For all the listeners out there, I'd like to encourage everybody to grab a copy of Minding the Brain if you have not already. Dr. Bringsjord really has two contributions. He has the development of the Rossian argument in there, which I think is just such a good argument, a little bit challenging again, but once you grapple with it, I just think it's really powerful. And then there's this other article, which is free online. I'm sure the links will all be in the relevant places. So thank you all for tuning in. Please do like, share, and review the podcast. We will talk to you all soon. Goodbye.
This has been Mind Matters News.
Explore more at mindmatters.ai. That's mindmatters.ai.
Mind Matters News is directed and edited by Austin Egbert. The opinions expressed on this program are solely those of the speakers. Mind Matters News is produced and copyrighted by the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute.