[00:00:00] Speaker A: ID the Future, a podcast about evolution and intelligent design.
[00:00:11] Speaker B: Welcome to ID the Future. I'm your host, Eric Anderson. And today I'm pleased to have back on our show Rob Stadler. Welcome, Rob.
[00:00:18] Speaker A: Hey, Eric. Good to see you again. Good to be here.
[00:00:19] Speaker B: Yeah, you as well. I feel like we ought to be talking about origin of life, since that's one of our favorite topics. I was actually out at BYU with Brian Miller several days ago. He was giving a talk on origin of life, and shout out to all the students and the physics faculty there. We had a great experience, and it was great to meet everybody. There's been a lot in the news about origin of life, even in the last couple of weeks, so we'll have to come back to that one of these days. But I know we've got another fun topic today. For those of you who aren't familiar with Rob, he's got a tremendous background. We've given it before, so I won't spend time on it now, but he's got a list of patents longer than your arm and a lot of great experience. And he's also written a book that talks about evidence: what makes for good science and what makes for poor science.
And that's what we're going to really focus on today. So I'm excited to get into it with you, Rob.
[00:01:18] Speaker A: Yeah, thanks, Eric.
[00:01:20] Speaker B: So tell us a little background. How did you come to thinking about this topic and what's your approach to this?
[00:01:29] Speaker A: Yeah, Well, I have 30 years of experience as a scientist in medical devices.
And we as a country, and pretty much the whole world now, require that anything for sale as a medical device, as a medical product, has to go through a regulatory process where regulators want to see evidence that it's safe and effective.
And because your life depends on things like this, society takes it pretty seriously. Not to say medicine is perfect by any means, but it's backed by good evidence almost all the time. Society demands that, and that put pressure on evidence: to determine what counts as good enough evidence, and actually to stratify evidence that gives confidence versus evidence that is really kind of weak.
And the world of medicine has actually established an agreed upon hierarchy of levels of evidence.
And I'll show a picture of that here.
But there are these agreed upon criteria, these different levels, levels A, B, C. This is accepted throughout medicine to say that certain kinds of evidence are much more convincing than other kinds of evidence.
[00:02:50] Speaker B: Right.
[00:02:50] Speaker A: And I think that came about because people's lives depended upon this.
And what I don't see in the world of evolution is anything like this: an agreed upon stratification of which evidence gives you more confidence and which doesn't.
[00:03:06] Speaker B: Right. So let me just make sure I'm understanding. So when you say there's an agreed upon sort of stratification, you're talking about within the industry or within the approval process. If you're going before the FDA or another regulatory body, they're going to say, all right, you've presented all these various pieces of evidence to us. Some of these we view as more valuable than others. And you'll talk about why, I guess today as to why those are more valuable. Is that appropriate?
[00:03:33] Speaker A: That's true, yes. But it goes further than the regulatory body I've been focusing on. It's also your doctor: when your doctor decides to prescribe a pill or give you some treatment or procedure, they're reading the scientific manuscripts and making these kinds of judgments on their own. Is the evidence more in favor of treating you this way, or more in favor of treating you that way?
And they consider these things, and it's pretty remarkable that all of medicine agrees upon them. I think that's really important. But in other fields of science, sadly, there is no such structure, and I'd just love to see one emerge. That's what provoked me to write this book, honestly, and to try to get other areas of science to do similar things.
[00:04:19] Speaker B: Right. And remind us the name of your book for our audience.
[00:04:22] Speaker A: Yeah, it's called the Scientific Approach to Evolution. But basically, when I, when I show people these criteria that stratify medical evidence, folks that are in the evolution world, they say, oh, come on, that applies to medicine. You know, nobody does randomized clinical trials except for medicine, so that doesn't make any sense.
And yes, it's true that nobody else does randomized clinical trials, but the concepts of how to stratify evidence and the general ideas are applicable to all of science. That's what I'm showing here in the slide: I've boiled it down to six criteria.
And the six criteria are not black and white; there are gradations here. It's kind of a spectrum of levels of confidence.
But if you can achieve in your scientific work, if you achieve these six criteria, you can be pretty confident that the result is going to hold. And conversely, if you fail all six of these, you're down on the other end of the spectrum and you don't really have something that people can put their confidence in.
[00:05:30] Speaker B: Right, okay, so talk us through the criteria that you've identified in your book.
[00:05:36] Speaker A: Yeah, I'm happy to do that. But I guess I should point out, though, that I think these are very practical in the sense that every day you get news headlines and stuff comes at you and you have to make an assessment. You know, how much do I trust what they're saying here?
And for me, over 30 years of being a scientist, I've developed a filter for this, where I can read a paper and say, hey, is this something I trust, or is this something I really don't trust very much? These six criteria are the way I've boiled it down. So I think it's very practical: tomorrow, when you read the news, you can apply these things and get a feel for how much you can trust what they're saying.
[00:06:16] Speaker B: Right. So even for those of us who aren't necessarily doing heavy research every day and, and trying to get something approved with the FDA or anything like that, for the average listener, these are the kinds of things you're saying that can help us sort of have a filter and understand how to balance these things and how to understand whether I'm dealing with high confidence evidence or low confidence evidence.
[00:06:39] Speaker A: Exactly.
[00:06:40] Speaker B: Okay, great.
[00:06:40] Speaker A: And as, as we go through these six, hopefully you, you just see them as kind of being intuitive. I mean, that's the way it should be. Um, and I don't think there's a lot of argument against these. Some people might say, oh, you forgot one, or you should modify.
That's fine. I'm not going to quibble over stuff like that. But just to put the concept into practice, I think would be very helpful. So let's go into the first one, which is simply asking, is the evidence repeatable?
And basically, I mean, this one should be obvious, right? If I perform an experiment and then you perform the experiment and we get the same results, of course your level of confidence increases. And if I do it again and again, and somebody in Japan does it, the level of confidence just naturally goes up. And if you can only study something that happened once ever and you can't repeat it, obviously your level of confidence is going to be lower. It's just a simple fact, right?
[00:07:36] Speaker B: Yep.
[00:07:38] Speaker A: And the second criterion is whether something is directly measurable or directly observable.
And to explain this, I like the example of when I was a kid and had a fever, or at least didn't feel good. My mom would try to measure my temperature to see if I had a fever. The first thing she'd do is just take her hand and put it on my forehead to determine if I had a fever or not. Well, that's, of course, pretty indirect, and it's not a very high confidence determination of whether you have a fever.
And the more direct way, of course, is to stick a thermometer in your ear or somewhere else that picks up your actual core body temperature, and then you can really tell if you have a fever or not.
So the more direct the better is a simple way of saying it. So applying it somewhere else. If I want to study the orbit of the moon, I think that can be done pretty directly because I could look at it over time. I can study exactly where it's going and all that.
Whereas if I want to study a black hole, you know, black holes don't actually emit anything. You study them kind of through the absence of seeing things. Right.
So they're much more indirect: much further out there in space, and of course you're looking at them long ago in time by the time you see them. And you can't bring them into the laboratory and directly poke and prod them and learn about them that way. And so the more indirectly you study a black hole, the less confident you are of what's going on there, as opposed to something like studying the Moon.
[00:09:15] Speaker B: Yeah, yeah. We'll ignore Hawking radiation for a minute, but yeah, I understand the point. It's very indirect for the most part. Yeah, yeah.
[00:09:24] Speaker A: Okay. Number three on my list of six is to study things prospectively.
[00:09:30] Speaker B: Okay.
[00:09:31] Speaker A: And what that means is that you plan in advance that you're going to conduct this experiment under these conditions, especially for the purpose of controlling anything that could get in the way: any kind of confounding factor that's going to confuse or confound your results.
So being prospective, you can take the time to control for all of these things.
And the more control you have over the experiment, the more confident you are in the results.
And in fact, the ultimate goal of science, I think, is to show causality, meaning this caused that, that this result followed directly from that cause. And you can't do that if there are a bunch of confounding factors in there confusing the result.
So if you do something retrospectively, which is the opposite of a prospective study, you're looking back at some data that was collected previously, and you had no control over the confounding factors. In a retrospective study, you just get what you get. The data is what it is.
And without that control, you can't really have any confidence about causality.
You can come back with association: you could claim that this seems to be associated with that. But you can't say this caused that unless you're really in control of the situation.
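The association-versus-causation point can be sketched in code. Below is a small Python simulation with entirely invented numbers (the scenario and rates are hypothetical, not from any study mentioned here): a hidden confounder drives both who gets a "treatment" and who has a good outcome, so a retrospective look at the data shows a strong association even though the treatment does nothing, while random assignment makes the association vanish.

```python
import random

random.seed(0)

def simulate(randomized, n=100_000):
    """Return (outcome rate in treated, outcome rate in untreated).

    A hidden confounder ("healthy") raises both the chance of receiving
    the treatment AND the chance of a good outcome. The treatment itself
    has no effect at all on the outcome.
    """
    treated_good = treated_n = untreated_good = untreated_n = 0
    for _ in range(n):
        healthy = random.random() < 0.5              # hidden confounder
        if randomized:
            treated = random.random() < 0.5          # prospective: random assignment
        else:
            # retrospective: healthier people were more likely to get the treatment
            treated = random.random() < (0.8 if healthy else 0.2)
        good_outcome = random.random() < (0.9 if healthy else 0.3)  # treatment ignored
        if treated:
            treated_n += 1
            treated_good += good_outcome
        else:
            untreated_n += 1
            untreated_good += good_outcome
    return treated_good / treated_n, untreated_good / untreated_n

retro = simulate(randomized=False)   # association appears: roughly 0.78 vs 0.42
prosp = simulate(randomized=True)    # association vanishes: roughly 0.60 vs 0.60
```

The retrospective comparison "finds" a large benefit purely from selection; only the randomized version reveals that the treatment does nothing, which is why control over assignment, not the calendar, is what prospective buys you.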
[00:10:59] Speaker B: Let me ask you to dive into this, just to hear more; maybe you're going to talk about it later, I don't know. But prospective almost seems like a temporal term.
What I'm hearing you talk about is really control over the factors. So the problem with retrospective isn't so much that it's retrospective, but that it gives us a situation where we may not know the factors involved; we may not be fully in control of the experimental setup and everything that went into it. In other words, I'm wondering what your thought is on the temporal aspect versus the control aspect.
[00:11:43] Speaker A: No, I think you're bringing out a good point. I think it is more about the control.
[00:11:47] Speaker B: Okay.
[00:11:48] Speaker A: And even just being aware of the circumstances.
Right. You just can't be fully aware of the circumstances, and you can't control them, unless you actually plan the experiment yourself.
[00:11:57] Speaker B: Okay. All right.
[00:11:59] Speaker A: Yeah. Very good. So that's three. We're halfway through.
And as I like to say, the first three that we've covered are more about the quality of the experiment that you conduct. These next three are more about the quality of the scientist.
[00:12:16] Speaker B: Okay, so again, the first three. Repeatable.
[00:12:20] Speaker A: Directly measurable.
[00:12:21] Speaker B: Measurable. Directly measurable. And prospective, or control. Yeah. Okay.
[00:12:26] Speaker A: So they tell you if you're doing a good quality of experiment, the next three are a little bit more focused on the quality of the scientists. So number four is about avoiding bias.
And of course, we all have bias. Nobody can ever claim that you're not biased.
But in good science, you need to take active prospective actions to try to avoid bias, try to push bias away. And it's not just saying, I'm trying to be unbiased.
[00:12:55] Speaker B: Yeah.
[00:12:56] Speaker A: But actually taking active steps to get it out, you know, so if I am making the measurements and I stand to make A million dollars for certain results. Obviously, I am biased. Right. And I want that result so to be unbiased. I would actively say, hey, Eric, you don't stand to make any money off of this. I want you to conduct the measurement. You know, you do it, you show me the results. Right. And obviously would hopefully would reduce the bias level, you know, and make it a more fair study.
[00:13:26] Speaker B: Yeah, yeah. This is interesting because, you know, I have a legal background, and so this, this reminds me of some of the evidentiary rules that we have in federal court.
And one of the ways that you get higher confidence in evidence is if something is called against your interest. So if you make a testimony against your own interest, that's sort of the opposite end of the spectrum from getting the million dollars.
[00:13:50] Speaker A: Right.
[00:13:50] Speaker B: It's like, this could actually harm you to make this statement. But if you're going to go ahead and make this statement, then we have higher confidence that you are being unbiased, that you're telling the truth in this regard. And I kind of think, you know, not necessarily in a legal setting, but a lot of times we have researchers whose findings or papers or statements that they make receive criticism. And instead of taking that criticism to heart, unfortunately, it's ignored, it's blown off. It's, you know, they attack the person who's questioning their results. You know, how dare you question my results?
And that's certainly a form of interest, too. I'm protecting my career or my prestige or my standing within the academic community rather than, hey, maybe this person has a good criticism of what I did and I need to rethink how I did this experiment.
[00:14:46] Speaker A: Yeah, yeah. So every scientist has to try to fight against that and bring up a really good point. If you can get somebody who's actually opposed to. To what you're proposing, and you have them conduct the measurement and they come up with a result that encourages what you think is the truth. And, wow, that's. That's powerful.
[00:15:04] Speaker B: Right. Or. Or a researcher who says, you know what? I've been working on this for 10 years, and it. It's wrong. It doesn't work. I've decided, you know, based on all my experience and evidence here, that I have to change my mind and walk away from what I've been proposing for the last 10 years, that that would be much higher confidence than.
[00:15:22] Speaker A: Much higher confidence.
[00:15:23] Speaker B: Than, you know, "this just goes along with the story I've been telling the whole time."
[00:15:27] Speaker A: Yeah, good point.
Yeah. So that's number four. Number five is about avoiding assumptions.
And assumptions are always present in science. You make them because you want to save time, you want to save some money, you're under pressure to get a result in the next month. So you adopt an assumption that helps save you that time. The real problem is if you adopt the assumption and kind of hide it: you don't openly disclose what you're assuming, and you've taken this shortcut. And I'll tell you, whenever in my career I've gotten burned, it's almost always because I had made an assumption up front that turned out not to be true, and I didn't take the time to validate it. And then you carry it two years through the work and, oh, man.
[00:16:13] Speaker B: Oh, boy.
[00:16:14] Speaker A: That was the thing that got me.
So you can't completely avoid assumptions, but try to minimize them, and then openly disclose them and say, here's why I think this one is justified.
[00:16:27] Speaker B: Right. And try to identify them up front. I mean, what's the saying? The first step to change is awareness.
You got to be aware that you have these assumptions and at least identify them and then be willing to disclose them.
[00:16:43] Speaker A: Right, right, right. And it certainly can change your results if you make the wrong assumption.
[00:16:48] Speaker B: Yeah.
[00:16:50] Speaker A: Okay. And the sixth criterion for high confidence science is about making reasonable claims.
And this is about claiming appropriately, according to what you've studied, and not extrapolating far away from the experimental results.
And it's also about conveying the right amount of confidence in what you actually have, not overstating your claim. So I like to use this example: there's a can of nuts you can buy where they want to claim that the nuts are good for your heart.
[00:17:29] Speaker B: Right.
[00:17:29] Speaker A: Nuts.
[00:17:30] Speaker B: Yep.
[00:17:31] Speaker A: That they're heart healthy. And I love the little disclaimer that's on the can, because it says, quote, scientific evidence suggests, but does not prove, that eating 1.5 ounces per day of most nuts, such as almonds, blah, blah, can reduce the risk of heart disease.
[00:17:50] Speaker B: Okay. Yeah.
[00:17:51] Speaker A: So I find that to be, you know, a very. It's well stated, as far as a claim goes, that they're conveying the right level of confidence because they use these hedging terms like it's science. Scientific evidence suggests, but does not prove.
[00:18:06] Speaker B: Yeah.
[00:18:07] Speaker A: That eating these nuts may reduce the risk of heart disease.
Now, I suspect they had to use those words because of the FDA; they got the lawyers involved. When you've got the lawyers involved, you can't make these strong claims, because your data is not that strong.
Okay. But I contrast that with this statement made in my son's high school biology textbook. This is the most popular biology textbook in the world, I think.
But this biology textbook says, when it's speaking about common ancestry, evolution, it says, for example, organisms as dissimilar as humans and bacteria share genes inherited from a very distant common ancestor.
So when you contrast those, you don't see any kind of hedging language in the common ancestor claim. It's not saying "we think" or "evidence suggests but does not prove." It's almost like they're slamming their fists on the table saying, we are so certain of this.
And when I see claims like that, it throws up a little red flag in my head that this is overstated confidence. It's a sign that they really aren't so confident, but they're trying to act like they are.
[00:19:22] Speaker B: There's a response that I've heard to this sort of concern from Eugenie Scott of the National Center for Science Education, which of course is a pro-evolution group.
But one of the things that she has said in the context of should we teach about the strengths and weaknesses of evolutionary theory?
Her answer is no, we shouldn't teach any of the weaknesses, because that will confuse students; it will make them confused about how great this theory is and how great the evidence is. What's your view of that, and of whether students are able to distinguish between good evidence and poor evidence?
[00:20:03] Speaker A: Yeah, well, I certainly can't support what she just said. You know, I think you have to trust that students have the ability to think these things through. And one thing you don't want to teach students, of course, is to just blindly accept everything without any critical thinking. And they're not teaching students to think if that's what they're spoon feeding everyone.
[00:20:22] Speaker B: Yeah, yeah. Yep. Good.
[00:20:24] Speaker A: That's very concerning.
So we come back to these six criteria, which I hope are easy to remember, and I hope you can apply them even tomorrow when you read some new science report, that they found water on Mars or whatever the claim is going to be. You can use these criteria to say how much you should trust it.
[00:20:46] Speaker B: Okay. And just, just to recap again then we had three criteria related to the experimentation.
Give us those three again real quick.
[00:20:55] Speaker A: Yeah. Is the evidence repeatable can the evidence be directly measured, directly observed? And was the evidence obtained through a prospective study?
That speaks to the quality of the experiment more.
And then the next three criteria: is bias minimized, are assumptions minimized, and did they make accurate, reasonable claims?
[00:21:19] Speaker B: Right.
[00:21:20] Speaker A: That's more about the quality of the scientist.
[00:21:22] Speaker B: Okay. And you were sharing with me a couple of headlines that you had seen just recently on, on, you know, pretty amazing sounding claims. And, and you're suggesting, I guess that as we read these headlines, why don't you share a couple of those with us and maybe we should apply some of these criteria in our minds as we think about these this week.
[00:21:44] Speaker A: One of them says, "Scientists uncover oxygen-loving ancestor of all complex life."
[00:21:51] Speaker B: Yeah.
[00:21:52] Speaker A: And another one says, "Scientists discover why high altitude protects against diabetes."
[00:21:57] Speaker B: Okay.
[00:21:58] Speaker A: Now, I honestly haven't read these in detail, but I have an intuition as I look at them about which gives you more confidence. I know that if you're going to study high altitude and whether it affects diabetes, you can do that prospectively, you can do it repeatedly, and you can directly measure whether a person has diabetes or not. So I have a feeling that that question can be answered with much confidence.
Now, the one about scientists uncovering an oxygen loving ancestor of all complex life.
Right. You can't actually see that ancestor. It's a hypothetical ancestor that existed a long time ago, and they're trying to explain how eukaryotic life, complex life, originated. So you're trying to understand the origin of complex life, and the evidence you have, well, I'd have to read the paper in depth, but it's not likely to be directly observable, directly measurable evidence. It's not repeatable either, because you can't go back and do it again, and it's not going to be prospective. So a little bit of a red flag goes up with that one.
[00:23:14] Speaker B: Okay.
Okay. So that seems really helpful in terms of us sort of stepping back when we hear a claim and having a little bit of good, appropriate, objective approach to these things. Is there anything in your experience over the last 30 years you've been working as a scientist where you kind of applied these criteria and they really impacted your work?
[00:23:38] Speaker A: Yeah. In kind of a humbling kind of way.
[00:23:41] Speaker B: Or failed to apply them, maybe we should say?
[00:23:44] Speaker A: I don't know what your story is, but we all learn best through hard knocks, right?
[00:23:50] Speaker B: Yeah.
[00:23:51] Speaker A: That's how you remember it. But I told you I work with medical devices, and a particular device attempts to resynchronize a failing heart, in heart failure. You can help people with heart failure if you can resynchronize their heart. So we have a pacing timing feature that is there to try to resynchronize the heart.
And we had a suspicion that this feature, which is in an implanted device, would be able to reduce the risk of developing atrial fibrillation, which is a dangerous rhythm, and it's not something you want to have.
[00:24:32] Speaker B: Yeah.
[00:24:32] Speaker A: So we came up with a database of something like 37,000 patients whose data had been recorded for years.
And here's this database, looking back into the past of these 37,000 patients over the last few years. And we said, hey, why don't we just check who has that feature turned on and who has it turned off, and see who gets atrial fibrillation.
Right. And the device can very accurately tell you if they have atrial fibrillation or not, so we had good data. We did the study of this database and found something like a 50% reduction in atrial fibrillation in those patients that had this feature turned on. So we wrote it up with three doctors, got the manuscript published, and terrific.
But I will say that in the limitations section of the manuscript, and this is something I'd like to see across all scientific publications, a specific paragraph or two that says, these are the limitations of our study, we specifically said: hey, we realize this is looking backward into a database that's already been collected, and we had no control over who had this turned on or off. So there could be a selection bias, as it's called, where doctors pre-selected who should get it turned on and who should get it turned off, and maybe that biased the results.
So we gave that little warning. But anyway, about four years later, another study came out that was actually prospective and they actually randomized patients. Some had it turned on, some had it turned off.
And that study, over seven years of collecting prospective data, clearly said that the feature did not have an impact on whether people got AF or not.
And there's no question that that study, because it meets all six of these criteria, gives more confidence. I don't think any doctor would debate that. So the study I did will quickly be pushed aside and forgotten, because even though our study was 10 times the size, doing it prospectively is always going to be better.
[00:26:57] Speaker B: So did you guys have an opportunity to go back and take a look at what had happened in your sort of data scraping study versus the prospective study?
[00:27:07] Speaker A: Well, we couldn't. That was the problem: the database didn't contain why they turned it on or why they turned it off. Oh, so you don't have that data.
So, like you and I were just discussing, when you're looking back in time, you can't control things, because you don't know why they were the way they were. You just know that's what you got.
[00:27:28] Speaker B: Interesting. Okay.
All right. So that's an example of a prospective study meeting one of those specific criteria you talked about: having control over the parameters of the study, and having high confidence because of that.
But you guys had a bunch more data.
Couldn't it be the case, somebody could argue, that if you have a bunch of smaller, lower confidence studies versus one larger high confidence study, can't you sort of add those up, Rob, and say, well, I've got lots of low confidence evidence, but there's a lot of it?
[00:28:05] Speaker A: Yeah. So does lots of low confidence evidence overcome a small amount of high confidence evidence?
[00:28:10] Speaker B: I see. Right, right. Yeah, exactly.
[00:28:14] Speaker A: No, no, I, I mean, even 10 times, as I mentioned, I would, I would much prefer the high confidence evidence over the low confidence. Now, there is kind of an exception, it's like a statistical exception that if, if your high confidence study is so small it doesn't have the statistical power, then, okay, then it, then it's actually lower confidence result. But otherwise, you know, if it's well done and you got enough data that's, that's convincing and that's the way the FDA treats this stuff too. They will grab onto the high confidence data and just, just simply ignore the lower confidence stuff. It's almost, almost useless.
[00:28:51] Speaker B: Hmm. Yeah. Interesting. Okay. All right, well, let's figure out in our next section of our conversation here. Now that we've got these criteria, you've given us a couple of examples from the medical field.
How does this apply to what you're seeing in the field of evolution and how people are approaching evidence?
And maybe we'll see if there's any discrepancy between the way the claims are stated versus the evidence that's actually underlying those statements.
[00:29:20] Speaker A: That's where the rubber really hits the road here. So I look forward to that.
[00:29:24] Speaker B: Yeah, exactly. Yeah, we'll look forward to having you back to talk about that. Well, thanks so much, Rob, for being with us, us today. Really appreciate it.
[00:29:30] Speaker A: Thank you, Eric.
Visit [email protected] and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.