How to Make a Bayesian Inference to the Best Explanation

Intelligent Design the Future
Episode 1991 | December 09, 2024 | 00:44:46

Show Notes

When we gain new information about beliefs we hold, it’s good practice to update our viewpoints accordingly to avoid incoherence in our thinking. On today’s ID The Future, host Jonathan McLatchie invites professor and author Dr. Tim McGrew to the show to discuss how Bayesian reasoning can help us maintain coherence across our set of beliefs. The pair also apply Bayesian logic to the debate over Darwinian evolution to show that a confidence in design arguments can be mathematically rigorous and logically sound. Bayesian logic provides a mathematical way to update prior probabilities with new information.

Episode Transcript

[00:00:05] Speaker A: ID the Future, a podcast about evolution and intelligent design. [00:00:12] Speaker B: Welcome to ID the Future. I'm your host, Jonathan McLatchie, and today we're very honored to have with us Dr. Tim McGrew. Tim McGrew is a professor of philosophy at Western Michigan University, where he's taught for the past 30 years. His research interests include formal epistemology, the history and philosophy of science, and the history and philosophy of religion. When he is not doing philosophy, he enjoys playing chess online. He's actually a national master in chess and former Michigan state champion. He coaches at his local chess club. He likes running trails and making high quality paper airplanes. He lives in southwest Michigan with his wife, Lydia McGrew, who's also an analytic philosopher and very widely published, and their daughters. So welcome, Dr. McGrew, great to have you on the program today. [00:01:00] Speaker A: Thank you so much. [00:01:01] Speaker B: So, Dr. McGrew, for those among our listeners who aren't familiar with your work, tell us a little bit about your research interests and what you teach at Western Michigan University. [00:01:10] Speaker A: Sure. So at Western Michigan University, I get to teach a little bit of everything in philosophy. I get to teach logic, some history of philosophy, a mix of graduate and undergraduate classes. Had a great time here this fall, just wrapping up teaching a course entitled Defense Against the Dark Arts, which has been a lot of fun. And as you can imagine, all the Harry Potter fans were clamoring to get into that one. I like to teach theory of knowledge, probability theory. Taught a graduate course on inference to the best explanation. I teach philosophy of religion and philosophy of science and the history of science as well. I've got an upper level epistemology course that I really love.
And I have a course entitled the Modern Worldview, which is always fun because to some extent what is taught in that course is left to the discretion of the professor. And I thoroughly enjoy all of it. I guess I would say I'm very blessed to be able to teach. I think it is the ideal profession for me. I wake up rejoicing that I get to go to work every day. And so that's something for which I'm very grateful. [00:02:16] Speaker B: That's wonderful. So today we're here to talk about Bayes Theorem and how it relates to inferences to the best explanation. Could you begin by telling us a little bit about what Bayes Theorem is and how it informs how we understand evidence? [00:02:31] Speaker A: Sure. Bayes Theorem is a piece of mathematics. I'll pause there as some people want to run for the exits. And then I'll go forward. Bayes theorem is in particular an equation in probability theory that tells us how probabilities should be put together so as to avoid a certain kind of mathematical incoherence. Maybe it's easiest to illustrate this by starting with deductive logic. You cannot rationally assert that some proposition P is true and at the same time, in the same sense, assert that that proposition is false. That's what we call contradicting yourself. Mathematically, there are analogs of this. There are things you can do wrong. If I say the probability that the New England Patriots will win the Super Bowl this following year is 5%, but the probability that they will not win the Super Bowl is 85%, I have a problem. One or the other of those must happen, but my two numbers add up to only 90%. That is what we call, not inconsistency, since we have already used that word for logic, but incoherence. Bayes's theorem helps us to maintain coherence across our sets of beliefs. And there's an extension of this from simply a snapshot of your cognitive life at a moment, making sure that you are coherent, to how to change your mind when you learn something new.
And that's called Bayesian updating. And so that is, again, just a mathematically expressible formula that tells you, when you learn certain things, and given that you had certain other probabilities in your mind, certain other relationships among probabilities, here's how your probabilities should change if that's the only thing that you're learning at this time. There are many applications of this. It has been found to be enormously useful in several fields of science. It has been used as an underlying engine for a great deal of what's done in neural networks and machine learning. So the mathematics has really fanned out into something that is intensively used and exploited in quite a few areas. And it's very intuitive. I think a lot of times we say things about what's reasonable or unreasonable, and it turns out that if we think of them in terms of probabilities and Bayes theorem, a lot of things that seemed mysterious perhaps at first will make sense. And so it's very useful for those of us who like to think about thinking to be able to maintain our coherence and to consider the relevance of some new piece of evidence in this light. There are also some theorems that show you that if you're not updating your probabilities by using Bayes theorem, then in various ways you run the risk of falling into incoherence, into probabilities that don't add up right. And incoherence, probabilistically, is just as bad a thing as inconsistency, logically. We could make a case that it's actually just sort of a special case of logical inconsistency. So I'm a big fan, and I think that it's a powerful tool in the right hands. It could be abused as well, just like logic can be abused. Right? You put nonsense in the premises, you get nonsense back out, you have no cause to complain. So we do have to use it wisely. But that's, again, a great subject for further exploration. [00:06:13] Speaker B: Excellent.
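The two ideas just described, coherence across a set of probabilities and Bayesian updating on new evidence, can be sketched in a few lines of code. This is a minimal illustration; the numbers (including the Patriots figures from the conversation) are illustrative only.

```python
def is_coherent(p_h, p_not_h, tol=1e-9):
    """Minimal coherence check: P(H) and P(not-H) must sum to 1."""
    return abs(p_h + p_not_h - 1.0) < tol

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Bayes's theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_h * prior
    denominator = numerator + likelihood_not_h * (1.0 - prior)
    return numerator / denominator

# The Patriots example: 5% to win, 85% not to win leaves 10% unaccounted for.
print(is_coherent(0.05, 0.85))   # False: incoherent
print(is_coherent(0.05, 0.95))   # True

# Updating: a 10% prior, with evidence 8x likelier if H is true than if false.
posterior = bayes_update(prior=0.10, likelihood_h=0.8, likelihood_not_h=0.1)
print(round(posterior, 3))       # 0.471: the evidence raises 10% to about 47%
```

The point of the sketch is that updating is mechanical once the prior and the two likelihoods are on the table; coherence is what the mathematics enforces.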
So a popular misconception about the nature of evidence is that evidence is binary. Either it's sufficient to justify a proposition, or it's of no value at all. In what way do you think this is a mistaken concept of evidence? And what is meant by a cumulative case in Bayesian reasoning? [00:06:30] Speaker A: Well, it certainly is a mistake, and I think that you're right that that's one that is disturbingly widespread. Many of the things that we like to hold up as examples of evidence are spectacular things. Right? Einstein predicts the results of the eclipse in 1919. They go and take pictures of the stars during totality during the eclipse. And Einstein was right. Wow. Fabulous. But in many cases, the evidence that supports our beliefs in ordinary life, in science, in forensic detection, comes in smaller pieces. And because it comes in smaller pieces, it is easier for us to ignore the fact that many pieces of evidence mounting up may present a strong case in aggregate. There's a cognitive bias studied by cognitive psychologists and philosophers known as confirmation bias. And confirmation bias is our tendency to ignore things that don't fit with what we already believe, but notice things that do fit with what we believe. It doesn't have to be conscious. It's even more insidious when it's unconscious, because we're not even aware that we're not being so perfectly rational. Confirmation bias usually won't mask the occurrence of some one big spectacular piece of evidence. But it may very well cause us to ignore every single one of a whole series of pieces of evidence that just move the needle a little, or should, with the result that we may be in possession of a body of evidence which, taken as a whole, is very powerful, and be completely unaware that there's anything to be said on that side of the issue at all. Because we dismissed each piece as it came through, we sort of reset to zero. Oh, that's not enough to convince me.
Neither is that, neither is that, neither is that. Well, you could well imagine what the prosecuting attorney would say as the defense attorney is trying to explain away each of the pieces of evidence that point in some measure toward the guilt of the defendant. The prosecutor says, it's not enough for you to say no single piece of evidence convicts; it's the body of evidence. A circumstantial case can hang a man. And this is well known in law. But people still have a tendency not to see what they're not looking for, not to see what they're not expecting. And that's everybody's failing. That's not just on one side or another of some particular issue, political, religious, scientific; everybody is subject to this worry. But some of us fall into it more spectacularly than others. Right. [00:09:17] Speaker B: And I think this is where a syllogistic expression of arguments for theism can be misleading at times. Because if one presents a syllogistic expression of, say, the Kalam cosmological argument and then the fine tuning argument and so forth, that I think is vulnerable to allowing an atheist to evaluate each argument taken individually and saying, yeah, the Kalam cosmological argument isn't sufficient to convince me, it's non-decisive, let's move on to the fine tuning argument, et cetera, rather than considering whether the body of evidence taken in aggregate would be sufficient to justify, even overwhelmingly, the conclusion. [00:09:59] Speaker A: Exactly. And this is one of the reasons that I encourage people to think about multiple ways of casting an argument, even when they're proceeding from the same body of evidence, the same set of things that we take for granted. In a deductive argument, we would call them the premises. In a probabilistic argument, we would call these things pieces of evidence or data.
And it is often the case that an argument cast one way is not wholly convincing, but looked at from a different point of view, with a different set of tools, might be very convincing. So we have to shift people into thinking about things in ways, formulating things in ways, that they might not have been taught, that they might not have instinctively gone for. It is true that it takes a little bit of getting used to to think in probabilistic terms, but I think the payoffs are really great. And I think people, if they could be fully convinced of that, would be more willing to put a little time into it. [00:11:00] Speaker B: Absolutely. So how does Bayes Theorem help us to structure the inference to design in nature? [00:11:09] Speaker A: This is a great question. The basic outline of it is that Bayes Theorem asks us to contrast our expectations. We have an expectation of some piece of evidence, or some body of evidence, some set of pieces of evidence, supposing that there were an intelligent designing mind at work, and we have some expectation of our having such evidence supposing that there were not an intelligent designing mind at work. It is, mathematically speaking, the ratio of those two expectations, measured mathematically as probabilities, that really drives the inference. Bayes theorem, like many pieces of math, can be manipulated algebraically and displayed in different ways. Something that I would encourage people to look up (maybe you can put a link in the notes to this episode of ID the Future) is what is called the odds form of Bayes Theorem. The mathematically geeky among our listeners, the people who didn't head for the doors the moment that we said math, would perhaps like to see how the ratio of prior probabilities gets multiplied by this ratio of likelihoods in order to produce the posterior ratio, the ratio of probabilities once we've taken the evidence into consideration.
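The odds form just mentioned is compact enough to show directly: posterior odds equal prior odds times the likelihood ratio. A minimal sketch, with illustrative numbers only:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes's theorem: O(H|E) = O(H) * [P(E|H) / P(E|~H)]."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds back to a probability: p = odds / (1 + odds)."""
    return odds / (1.0 + odds)

# Prior odds of 1:99 (about a 1% probability), and evidence 50x more
# expected if the hypothesis is true than if it is false:
post = posterior_odds(prior_odds=1/99, likelihood_ratio=50)
print(round(odds_to_probability(post), 3))  # 0.336: from ~1% to ~34%
```

Notice that the likelihood ratio does all the work: the same evidence applied to different priors always multiplies the odds by the same factor.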
So it works by finding a mismatch, when we can, between our expectations supposing design were true and our expectations supposing design were not true. When we have a mismatch, the bigger the mismatch, the greater that ratio in one direction or the other, and the more powerfully the evidence moves the ratio of probabilities of design versus non-design. So it's that ratio that really does what I sometimes call the epistemic heavy lifting for the purpose of changing our minds. It might not be enough to make us believe something, but suppose that you took some proposition of interest. It could be a scientific claim. It could be the claim that God exists. It could be the claim that a miracle has occurred. It could be a claim that a certain item in the cell is designed. And suppose that you started, for whatever reasons, thinking, oh, there's maybe one chance in a billion that this is designed. But then in come piece after piece of evidence, and eventually, updating your probabilities, you come to the conclusion that the probability is now one chance in 10 that this is designed. That shift from one chance in a billion to one chance in 10 is massive. We've just crossed eight orders of magnitude. It's true that one chance in 10 is still only 10%, right? It's not over 50%, it's not in the 90s. But that almost certainly changes the way that you're going to conduct inquiry. It's a live option now. Now this is something well worth looking into if you can find ways of exploring it. So even evidence that is insufficient to bring us to the point where we would say, hey, I now believe this, this is now something that I am committing to, may be evidence that brings us to the place where we take it seriously. There's an analogy here to a completely uncontroversial kind of example. We recommend that women of a certain age, particularly if they've got risk factors, perhaps in their family history, have an annual mammogram.
And it may be that there's not very much probability that they have breast cancer, but they get an anomalous result on the mammogram. That anomalous result may move them from 1 chance in 100 to 1 chance in 5 of having breast cancer. Normally we don't simply shrug and say, well, that's still 20%, that's less than 50%, never mind. We say instead, that is a much more significant risk. Cancer would be a very big deal if you were in fact found to have cancer. How about if we do a needle biopsy or some other form of investigation to get more data? Now there's something that is a serious matter, and there's a serious chance that it's true, so we change our attitude toward inquiry. I think that is a primary way in which a Bayesian approach to these things could inform rational discussion of the issues. It may be that a certain piece of evidence doesn't yet make a hypothesis more probable than not, but that doesn't mean no work has been done. And even evidence that simply raises a hypothesis into the realm of something that's on the table for live consideration is evidence we must attend to. Inquiry is not a point-in-time matter; inquiry is a process. Inquiry takes time. And so if we're going to engage in inquiry, we need to do so in a way where we're continuously updating our credences. We're thinking of things as being less live options than they were before, or something to be taken more seriously than it was before. And that's going to shape the way that we approach investing time, investing resources in looking for more data. We would do this in any other area. We certainly should do this in this area as well. [00:17:11] Speaker B: One of the objections we sometimes get to intelligent design is that intelligent design makes no scientific predictions. That is to say, a scientific theory, in order to be successful, has to strongly predict a certain set of data, and then we find that data and that is confirmatory of the theory.
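The mammogram arithmetic above can be made explicit with the odds form: given the prior and posterior from the conversation (1 in 100 moving to 1 in 5), we can back out how strong the anomalous result must be. This is illustrative arithmetic only, not medical data.

```python
def implied_likelihood_ratio(prior, posterior):
    """How strong must evidence be to move this prior to this posterior?
    From the odds form: LR = posterior odds / prior odds."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = posterior / (1.0 - posterior)
    return posterior_odds / prior_odds

# Moving from 1 chance in 100 to 1 chance in 5:
lr = implied_likelihood_ratio(prior=1/100, posterior=1/5)
print(round(lr, 2))  # 24.75: the result must be ~25x likelier given the condition
```

Even though 20% is still well below 50%, a 25-to-1 likelihood ratio is substantial evidence, which is exactly why the attitude toward further inquiry changes.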
Now, I don't agree that that's the case. I think intelligent design does make scientific predictions. But let's suppose for the sake of argument that it is correct that intelligent design doesn't make these strong predictions in that way. A theory doesn't necessarily, in principle, have to make scientific predictions, or strong predictions, in order to be successful and indeed confirmed by the scientific evidence. Because the evidence doesn't have to be highly entailed and predicted by the theory in order for that data to be confirmatory of that theory; the theory just has to render the data more probable than it would be on the falsity of that theory. [00:18:08] Speaker A: Yeah, that's a point that can sound a little bit abstract to the non-mathematically inclined. So let's just give a simple real world example without the math. I'm walking through a patch of the deep woods. There are no roads, no pathways for vehicles in sight. I come over a small rise and look down into the dell in front of me and I see a rather ramshackle cabin sitting there. And I think, wow, somebody long ago came in and made this cabin. And it looks now like it's not in very good shape. It's probably been abandoned for maybe over a century. But I'm curious, right? If you asked me to say, is the cabin inhabited? I'd be strongly inclined to say no, probably not inhabited. If you asked me, suppose that it were inhabited, what would you find when you opened the door? I'd say, I don't know, the inhabitant might be there or might be out for a walk. If the inhabitant is gone, there might be some signs of habitation in the cabin, maybe some signs of recent habitation. You say, no, be specific. How probable is it that the inhabitant has left a hot drink on the table? Not very, right? I haven't got a strong reason to think that. Right? If it were a hot drink, you know, what's the probability that it would be a cup of tea? I don't even know what to say. Tea, beer, coffee, water.
I'm, you know, I can't give you any very high probability. How about a cup of Earl Grey tea, specifically? Gee whiz, I mean, you know, if you get that specific, it's got to be pretty small, right? I have no strong reason to suspect that the cabin, if inhabited at all, would be inhabited by someone who would have just left a cup of Earl Grey tea on the table in the cabin. I go down the slope, I knock and then push gently at the door. It swings open, and there's a cup of Earl Grey tea on the table, and it's steeping still. Steam is still rising from it. The tea bag has its little tag lapped over the side. I look into it; it's just turning a nice golden brown. I must have just missed the inhabitant. You know, despite the fact that I was in no position to make a strong prediction about what kind of beverage, if any, would be inside the cabin if it were inhabited, I am now morally certain that the cabin is inhabited. I don't have any strong reason to predict that that's what the inhabitant would have done if there were one. But it wasn't the squirrels and the acorns that steeped the tea, right? It's the mismatch between those expectations. Whatever the probability is that someone would want to have a cup of tea just now, be it great or small, it's got to be many, many, many orders of magnitude greater than the probability that there would be a fresh steeping cup of tea there if the cabin were uninhabited. That is the kind of thing that we're looking at. Prediction is one way for us sometimes to get evidence for a theory: a strong probability that a certain kind of evidence would be forthcoming if the theory were true. But it is by no means an essential part of our coming away profoundly moved by the strength of the evidence. We don't even have to have an exact set of numbers, as long as we're in a strong position to say, as in the case of the cup of tea, this is vastly more to be expected if it's inhabited than if it's standing empty.
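The cabin example can be put into numbers to show why even an unlikely piece of evidence can be decisive. All figures here are made up purely for illustration: a low prior that the cabin is inhabited, a small likelihood of finding fresh tea even if it is, and a vanishingly small likelihood if it is not.

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes's theorem, from a prior and two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# Say: 2% prior that the cabin is inhabited; 1-in-1000 chance of a fresh,
# steeping cup of Earl Grey even if inhabited; 1-in-a-billion if uninhabited
# (squirrels and acorns don't steep tea).
post = bayes_posterior(prior=0.02, p_e_given_h=1e-3, p_e_given_not_h=1e-9)
print(post > 0.9999)  # True: morally certain the cabin is inhabited
```

The absolute probability of the tea on either hypothesis is low; what matters is the enormous ratio between the two, which overwhelms the low prior.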
[00:22:03] Speaker B: Absolutely. So what would you say are some of the advantages of a Bayesian approach to making arguments for intelligent design? [00:22:13] Speaker A: Wow, this is such a wide field. I almost don't even know where to start. One good thing about them is that they provide us with a very neat way of trying to keep track of the impact of different pieces of evidence. This is especially important when you can't promise ahead of time that all the pieces of evidence will point in the same direction, some positive, some negative. We want to be able to get in there and see sort of what the total vector sum is. Just like you could have some vectors pointing north and some pointing south, and in the end we want to know, okay, what preponderates. We teach our elementary linear algebra students or vector algebra students how to do the vector sums. And similarly, we may have some evidence in favor, some evidence against. We want to know, how does this cash out? So that's one thing. Another thing is that it does provide us, again, with a very efficient and clear way of trying to keep track of the force of multiple small pieces of evidence that are all pointing in the same direction, that very thing that we are most prone to overlook. And we are prone to overlook it. There is work in cognitive psychology that you can look up showing that, yes, we're very poor at estimating the aggregated force of multiple pieces of evidence unless we have, sort of pen in hand, as it were, individually allocated to each of them their separate weight. It's difficult to do unless you're trying hard to keep track of it. It's easy to ignore the accumulating weight of these things. It's a bit like compound interest in that respect. People have odd inabilities to see how compound interest works. And the accumulation of evidence is rather a similar kind of phenomenon. [00:24:09] Speaker B: It's sometimes alleged that intelligent design expresses a God of the gaps argument.
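The compound-interest analogy above can be sketched numerically: under the usual independence assumption, each piece of evidence multiplies the odds by its own likelihood ratio, so many modest pieces compound, and a piece pointing the other way simply divides. The numbers are illustrative only.

```python
import math

def accumulate(prior_odds, likelihood_ratios):
    """Multiply prior odds by each piece's likelihood ratio in turn
    (assuming the pieces are independent given each hypothesis)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Twenty pieces of evidence, each only 2x likelier on H than on not-H,
# plus one piece pointing the other way (LR = 0.5):
pieces = [2.0] * 20 + [0.5]
final_odds = accumulate(prior_odds=1/1000, likelihood_ratios=pieces)
print(round(final_odds, 1))              # 524.3: a 1-in-1000 hypothesis now heavily favored
print(round(math.log10(final_odds), 2))  # 2.72: nearly three orders of magnitude of shift
```

In log-odds the multiplications become additions, which is the precise sense in which evidence "sums" like vectors pointing north and south.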
How do you think you might respond to this concern? [00:24:18] Speaker A: I think that the chief merit of the objection is that it's alliterative. And so once you get past the fact that it rolls rather smoothly off the tongue, you arrive at a point where you have to look at it and say, what exactly is this objection? If the claim is that the argument is merely, I don't know what caused this phenomenon, so I'll say that God did it, well, that's a very poor piece of reasoning. But it's also not what any intelligent person would use as a design argument. It's rather a caricature of such reasoning. If it means, here's a place where the naturalistic picture provides no reason for any kind of expectation of this phenomenon, even at a low level, and the assumption of an intelligent designing mind would provide a somewhat higher expectation, then there's no point in calling it a fallacy or in dismissing it. That's the kind of reasoning that we use everywhere. Again, this is the kind of reasoning that we use with the cup of tea in the cabin. It is not some kind of esoteric thing. It's not something desperately patched together by earnest fundamentalists hoping to defend their faith. This is just good reasoning in general. We can argue about the specific probabilities that should be assigned to different things. Those arguments are fine. Those discussions are good. It is okay for someone who acknowledges the force of a piece of evidence to hold out some hope that in the long run there will come along some way of explaining it that we don't have in hand right now. But to do that requires that you first be able to acknowledge that, yeah, that piece of evidence, right now, in this context, does seem to be pointing in the direction of design.
And we need to think that out, even if we're skeptical about the prospects of some kind of design claim globally. Just as we have to be able to acknowledge evidence against design, we have to be able to acknowledge evidence for it. Everybody has a tendency not to see the evidence pointing in the direction that they want things, you know, not to go. So we have to approach this listening to one another. But the Bayesian approach to this can help us to sort that out better, at any rate, than we would do without the tool. [00:27:02] Speaker B: Absolutely. So an objection that's sometimes raised to Bayesian approaches to hypothesis testing is the question of the priors. Could you explain for our audience what priors are? Surely assigning a prior probability to design, or to the existence of a cosmic creator for that matter, is impossible. How would you respond to that sort of objection? [00:27:22] Speaker A: So there are multiple levels to this. The very first thing that I want to say is, yes, the problem of the priors is an interesting one. And what that means is, how do you arrive at any probabilities in the first place? Do you just make them up? There's a school of Bayesians that say, yeah, whatever you feel like this morning, those are your probabilities, and as long as they are coherent with one another, you're good to go. That has never seemed to me to be a very cogent position. Some people use the word Bayesian to refer to that kind of subjective Bayesianism, and to that alone. If they do, and if they're unhappy with it, I share that unhappiness. I don't think that that's a good way for us to get where we're going. But there are other things to be said. One thing to be said is just to acknowledge this is a difficulty for everyone who wants to use a Bayesian approach. It's a difficulty for Bayesian philosophy of science. It's a difficulty for Bayesian philosophy of religion.
It's a difficulty for theists and for atheists. It's a difficulty for people who want to defend Theory X and people who want to contest Theory X. This is something we all have to wrestle with. So it's everybody's problem; it is not some problem that is specific to people who want to leverage the Bayesian apparatus in one direction. The second thing to say is that sometimes we can actually make progress on the great big question of where probabilities come from in the first place by attending to considerations of simplicity and considerations of symmetry. The chief person arguing in favor of the use of simplicity considerations in the philosophy of religion has been Richard Swinburne. And you can consult almost any work of his from the past few decades and you'll find this coming out. Simplicity, as he says, is evidence of truth. And he makes a case for this by looking at the history of science, where we have routinely considered simpler explanations to be more promising than comparatively complex explanations. Symmetries, as with the possible subsets of a set of elements, are one source of these things; symmetries of physical objects that are used to randomize things, like a six-sided die, are another. But all of that can get pretty abstract sounding. So let's back way out from any of that and just say, suppose that there's a bunch of evidence that's come in and it all seems on balance to be pointing pretty strongly toward the truth of a certain hypothesis, and away, therefore, from the falsehood of that hypothesis. This likelihood ratio is pretty top heavy. The probability of the evidence given hypothesis H is many orders of magnitude greater than the probability of that evidence given the negation of hypothesis H. We haven't stopped and tried to make some kind of philosophical estimate of the prior probabilities. What can we say? What can we do?
Well, one move that we can make is we can point out that there's a very considerable body of evidence here, and then we can try to use that to transfer a burden of proof, in the philosopher's sense of that phrase, over to the opposing side. Here's this body of evidence; it's pointing in this direction. Suppose that we can agree on that. In order not to be moved by this, I would have to have some consideration weighing very heavily against the hypothesis to counterbalance this weight of evidence. What do you propose? Do you propose that there actually is such evidence? Do you propose that I just invent, or you invent, some degree of disbelief that would be sufficient not to be moved by this? Why would we do that? One of the things that we can do is simply say, we can tell, if the evidence is this good, how pessimistic we would have had to start out to be unmoved by it. Why should we have started out that pessimistic? It won't do simply to back-solve and say, well, okay, if your evidence is that good, I'll just say that I was even more pessimistic and I'll still resist your evidence. This is obviously special pleading. So one of the things that we can do, even in the absence of a global solution to the problem of how we get those initial or prior probabilities, is to say, there's a body of evidence here; why shouldn't we track with it? What is the case for not doing that? And I think that rational movement is one that we engage in in many cases, again, much more widely than just design reasoning. I think we do this all over. [00:32:15] Speaker B: Another popular objection to design as an inference to the best explanation is that we only possess uniform and repeated experience of human agency. We do not have experience with non-human designers, much less non-material ones. How might you respond to that? [00:32:29] Speaker A: Okay, first of all, that's just crazy false. Have none of these people ever looked at a beaver's dam? Right?
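The back-solving move discussed a moment ago can be made concrete: given a likelihood ratio for a body of evidence, we can compute the largest prior a skeptic could claim and still keep the posterior at or below 50%. This is a sketch with an illustrative likelihood ratio, not a figure from any real argument.

```python
def prior_needed_to_resist(likelihood_ratio, target_posterior=0.5):
    """Largest prior P(H) for which the posterior still stays at or below
    target_posterior, given evidence with this likelihood ratio.
    From the odds form: prior odds = target odds / LR."""
    target_odds = target_posterior / (1.0 - target_posterior)
    prior_odds = target_odds / likelihood_ratio
    return prior_odds / (1.0 + prior_odds)

# To shrug off evidence with a likelihood ratio of a million, the skeptic
# must claim a prior of just under one in a million:
p = prior_needed_to_resist(likelihood_ratio=1e6)
print(p < 1e-6)  # True
```

Putting a number on it is what exposes the special pleading: the skeptic must either defend that degree of antecedent pessimism independently, or concede that the evidence moves the hypothesis into serious consideration.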
No, that's just not true. There are clearly cases of design where there's not a human. But the other thing is, why should we restrict this to just some class? Okay, well then, mammals. Ah, but what if I point you to a carefully woven, intricate bird's nest? Okay, physical living creatures. Why are we grasping for some kind of restriction on this? Is it the physicality or the intelligence that's really the important thing here? Causally sufficient power to bring about the thing. We can't request otherwise, saying, ah, this was designed, but by someone who had no ability to bring it about. That's problematic. But causally sufficient power is the thing. With regard to non-human, non-animal designers, why should we think of things like that? There are hypotheses we can put on the table. We would put them on the table if we found artifacts circling one of the planets that go around Alpha Centauri. Right? If we went out there and we found machinery in orbit around it, we would unhesitatingly say, ah, this is designed, and it can't be designed by humans. We weren't out there. So I think that all attempts to strangle an inference like that bespeak a motivation to shut the discussion down before it can reach those dangerous places where we're entertaining hypotheses that, for reasons perhaps more political than scientific, have been ruled disfavored. And there is really no rational reason for that. There's certainly nothing in the structure of an explanatory inference, or a Bayesian reconstruction of an inference to the best explanation, that would limit us in that way from the application of it. It's fine, again, to say: I don't think that your examples have the probabilistic force that you claim they have. Here's why. I have some analysis that shows that you're misestimating the probabilities. Bring it. We should have those discussions. It is not a legitimate move to say, I hereby declare all of these other kinds of things to be out of bounds.
We have enough varied examples of intelligence producing design that it does not seem to me that there's any strong ground for saying, oh, but these have only been. Well, they've only been what? They've only been within a few parsecs of our own planet? Is that really it? Do you really want to hang your argument on that? At a certain point, the scope and the varied nature of the kinds of examples we can bring forward ought to just shut down that kind of objection. And indeed, in science fiction, we routinely see people making such inferences, and the reader accepts not only that they are, within the world of the story, plausible, but that they're just, of course, the kinds of things that a normal rational individual would do. And that includes cases that go very, very far away from analogy with the kind of causal apparatus that we are familiar with bumbling around here on our planet. [00:35:56] Speaker B: And in terms of resources on theory of knowledge and Bayes Theorem as it applies to arguments for theism and intelligent design, I would of course recommend Richard Swinburne's work, particularly his book The Existence of God. I'd also recommend your book The Foundations of Knowledge, Tim. I have a three-part series at Evolution News and Science Today on Bayes Theorem, particularly as it applies to intelligent design. We also did a previous podcast on ID the Future where Andrew McDiarmid interviewed me on Bayesian applications to arguments for design. Anything else you'd add there, Tim? [00:36:33] Speaker A: Yeah, I would say if somebody wants to get into the nuts and bolts of it, there's a little book by Jonah Schupbach, S-C-H-U-P-B-A-C-H, put out by Oxford University Press on Bayesian philosophy of science. And that's just a lovely introduction to some of the technical details.
And I think the one book that's come out recently that's been most explicit about this on the intelligent design side has been Steve Meyer's book Return of the God Hypothesis, where he is beginning explicitly to invoke and to exploit the Bayesian approach. And I think it's very fruitful there. So I think Steve's book would be a really good place to start, a little bit less forbidding for those who want to dip a toe into it without getting into the mathematics first. [00:37:20] Speaker B: Absolutely. So changing gears now a little bit, we were both sad to lose a mutual friend and colleague a few weeks ago to pancreatic cancer, our good friend Tom Gilson. Now, listeners of this podcast may not know who Tom was, but he was a longtime editor of the ID the Future podcast, and he also hosted podcasts on occasion. He actually interviewed me on irreducible complexity some time ago. Tim, you were one of Tom's closest friends. Can you tell us a little bit about who Tom was and your friendship with him? And in what context did you get to know Tom well? [00:37:55] Speaker A: Where to begin? Tom was many things, all of them good. He was an author, he was a speaker, he was a mentor to many people. He spoke passionately about a wide range of subjects. He was a committed Christian. He ran a blog called Thinking Christian for many, many years. He collected some of the essays from that into a volume called The Thinking Christian, a wonderful volume which I would recommend people dip into. Tom saw cultural trends and changes coming before many other people did. He could look into the future, and he could see the shifts over gender and ideology coming, and he got out in front of them and wrote a book called Critical Conversations before most people were alert enough to realize that this was going to be something we had to be prepared to discuss if we wanted to contend earnestly for the faith once delivered to the saints.
Tom was a strategist. He was concerned with large-scale problems, like the loneliness of people who work in Christian apologetics, and like the problem of pastors not receiving the support and the mentoring that pastors themselves need. And he was concerned that people who thought they were going to help out pastors were going about it the wrong way and probably just driving a wedge in. He was concerned with trends that he saw infiltrating Christian circles, with things that people were saying and doing that he thought were destructive of Christian witness or even of the Christian message. And he loved people; he loved his friends. He received the diagnosis of pancreatic cancer this summer, and he organized the remainder of his life around achieving a few things that were smaller-scale projects than some of the ones he had been working on, but that he felt he could finish, or at least bring to the place where others could finish them. And I loved him well. And I think that his life is one of those that we see in Hebrews, when we're told that we're surrounded by a cloud of witnesses. And those of us who knew him best are certainly aware of that very vividly. [00:40:37] Speaker B: Now, what were some of the projects that Tom was working on towards the end of his life? I know that he was writing a book. Do you want to tell us a little bit about that, or some of his other projects? [00:40:46] Speaker A: Yeah, so I'll just go for one of these things. Tom had written a book in which he pursued, both argumentatively and devotionally, a line of thought about the character of Jesus. And he called it Too Good to Be False. He wanted people to focus on the character of Jesus and to ask questions that we might fail to ask because we're too familiar with him, questions like: what sort of character traits turn up when people try, in all of literature, not just Christian, but across all of history, to depict a character of surpassing power?
And what kind of personality traits do they put into their picture of someone who is surpassingly good? And his argument was that Jesus certainly is a character with both of those, but he's not the kind of character that the apostles could have invented if they wanted to, and he's not the kind of character they would have invented if they could. And again, he approached this both argumentatively and devotionally. He published that book, and it actually sold reasonably well. But he came to the conclusion that in incorporating both an argumentative and a devotional aspect, he had really blended together two different audiences, both of whom he wanted to think about these things, to the point where it had diluted the center of the argument: that the accounts we have of Jesus in the gospels are portraits drawn from life, portraits that come at very few removes, if any, from people who actually knew him, ate with him, drank with him, spoke with him, walked with him. And so he reorganized the book, cut out some of the devotional material, rewrote some of the other material, and was in the process of putting this together as a tighter and more focused argument to answer the question, who really wrote the gospels? He was not presupposing that they were written by the traditional authors; he was rather concluding that they were written by people who knew Jesus intimately. And that project was in full flight at the time of his passing. And so Tom gave me the keys, as it were, to go and download the files, see the outlines he had written and the things he had redrafted, and to try to figure out how to take where he was going and round this out so that the book faithfully represents that central line of argument that he wanted to show even more clearly. And so we worked with the publisher, and we're talking about this now, and that is a project to be completed.
And of course, all of the sales of the book will generate royalties that go to Tom's widow, Sarah. And I'm humbled and thrilled to be able to be a part of that project. Tom was someone worthy of the investment of time to try to complete something that was so near to his heart. [00:44:20] Speaker B: Absolutely, completely agree. Well, thanks so much, Dr. McGrew, for being on the program with us today. For ID the Future, I'm Jonathan McLatchie. Thanks for listening. [00:44:32] Speaker A: Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.
