Episode Transcript
[00:00:00] Speaker A: For some people, no amount of evidence could ever be enough, but I think that's a small fraction of the people out there.
I think most people are more able to follow the evidence where it leads.
ID the Future, a podcast about evolution and Intelligent Design.
[00:00:23] Speaker B: Here's a couple of questions for you. Are you ready to defend the reasons you believe there is evidence of design in nature and in the universe? And are you also ready to explain the arguments for intelligent design to your friends and associates, maybe your family members? Welcome to ID the Future. I'm your host, Andrew McDermott. Today I get to welcome back Dr. Michael Kent to continue discussing recent discoveries that have changed the debate about design in the universe.
Now, in case you don't know him or didn't catch part one of our conversation yet, Dr. Kent is a fellow with the Center for Science and Culture at Discovery Institute, and he's a recently retired bioscientist from Sandia National Laboratories in Albuquerque, a position he held for over three decades. He also had an appointment as a staff scientist for 15 years at the Joint Bioenergy Institute. He has published more than 90 scientific papers on a variety of research topics in chemistry, biophysics, and materials science. He was an active member of the American Physical Society, the Biophysical Society, and the American Chemical Society, not to mention the Society for Industrial Microbiology. Michael, welcome back to the show.
[00:01:34] Speaker A: Happy to be here. Thanks for having me.
[00:01:36] Speaker B: Absolutely. Well, I enjoyed part one of our conversation where you got to explain a little bit about your background and your career as a research scientist. Can you touch on just very briefly for those who didn't get to that yet? Just, you know, what got you into this career and some of the highlights?
[00:01:56] Speaker A: Sure.
I received my bachelor's and PhD degrees in chemical engineering and materials science from the Universities of Illinois and Minnesota, respectively, and then spent two years doing postdoctoral research at the Curie Institute and the University of Paris, studying polymer physics using neutron and X-ray scattering techniques.
I then took a research position at Sandia National Labs, and during my 32 years there, my research focused on interfaces in materials science and also proteins associated with lipid membranes, including an important HIV protein that changes conformation upon binding to lipid membranes. I also worked on a new approach to develop antibodies to neutralize viruses.
And in my appointment at the Joint Bioenergy Institute, I worked extensively to convert biomass, especially lignin, to fuels and chemicals using chemical and biological techniques.
So for me, it's really a privilege to have the opportunity to use science to try to discover new things.
[00:03:20] Speaker B: Yeah, well, you also teach now, very much so. You're doing a lot of different teaching, both virtually and in person with people. And in your teaching of the case for Intelligent Design, you use the term informational discontinuities. Can you remind us what you mean by that?
[00:03:39] Speaker A: The term comes from the recent book by John Lennox called Cosmic Chemistry.
And in our experience, objects that have an information content that is far, far beyond the reach of unintelligent natural processes, meaning chance and natural law, always come from a mind.
So there's a discontinuity in information with respect to what unintelligent natural processes can do. Simple examples include any sort of written language, including hieroglyphics, also sculptures, machines, et cetera.
With machines, it's important to distinguish between the operation of a machine and the origin.
We know that a car, for example, operates by unintelligent natural processes. But those same unintelligent natural processes can't explain where the car came from.
An input of information is required to explain the origin of a car.
And in our experience, information of this type always comes from a mind.
So likewise, in considering the enormous information content of even the simplest form of life, the simplest biological cell, we can conclude that while it operates by unintelligent natural processes, those same processes, just like the car, are unable to explain its origin.
Anyway, that's the logic.
[00:05:19] Speaker B: Yeah, yeah. It reminds me of my colleague, Dr. Stephen Meyer, who is fond of writing a message on a whiteboard when he's giving presentations.
And, you know, he'll say that the ink that is put onto the whiteboard is not the cause of the message, you know, and it's not the whiteboard itself, and it's not the force of magnetism, because he'll often use magnetic letters and make a message, but there's actually something else there, you know, governing that information and the order of it.
Now, you point out that the modern scientific method was birthed from Judeo-Christian principles. Can you elaborate on the philosophical premises, such as the belief in an orderly, contingent nature, that made the practice of modern science possible in the 16th and 17th centuries?
[00:06:07] Speaker A: Yeah, I think this is a really important point that a lot of people forget.
The first premise is that there is a consistent order or rationality throughout the natural world. Within the Judeo-Christian tradition, that comes from the notion that there's one God, rather than many gods for different parts of nature, such as the Greeks and Romans and other cultures believed.
The second premise is that our minds are actually capable of discovering that rationality.
And that notion comes from the Judeo-Christian concept that we're made in the image of God, uniquely among all the creatures, and are actually commanded to be caretakers of the natural world.
And that command can be fulfilled to the greatest extent by understanding the natural world in detail.
The third premise is that the order in nature can't be deduced by any sort of logic, but it can only be discovered by examination.
And this comes from the idea that God and his free will could have created in any way, in many different ways. And we can't just deduce that from any sort of logic. We have to go into nature and see what he actually did.
And this is summarized nicely by a statement of Descartes in his Discourse on the Method, which translated into English is God might have arranged these things in countless different ways.
Which way he chose rather than the rest, we must find by observation.
[00:08:11] Speaker B: Hence science, right? The scientific pursuit, studying nature to figure out how it works, how it's done, the mechanisms and so forth.
Now, thanks to a wave of scientific materialism over the last three centuries, and in particular the pernicious myth that science and religion are at odds or at war, many scientists feel the pressure to adopt a view known as methodological naturalism, or MN for short.
Tell us what that means and why it's bad for science.
[00:08:40] Speaker A: Methodological naturalism is the view that when we do science, we must commit absolutely to the belief that the material realm is complete, that there can be no incompleteness, no input of information from outside the universe or the natural realm.
So it's fine to conclude design if the information comes from within the universe, such as in the example of the car that I mentioned earlier. But this view forbids the possibility that information might come from outside the natural realm.
And in my view, this is bad for science because if it's adopted, then the data of science are no longer the arbiter.
In fact, the data that bear on the question of design become irrelevant. No amount of evidence could ever be enough to conclude design.
So science is then no longer a search for the truth; it becomes a tool to support materialistic philosophy.
And I think science should be an unbiased search for the truth.
[00:10:06] Speaker B: Yeah, that's a great breakdown of methodological naturalism and the dangers it poses.
Okay, well, let's jump back into reviewing the discoveries that you've gathered together. You've made a series of videos, you're actually still working on them, so we'll come back and talk about more in the future.
But the ones that you've put out review all this evidence that has been put out and is now available, but that not enough people know about.
So let's jump back into that. When we move from cosmology to biology, we encounter information in the form of proteins.
What is protein sequence space? And why did mathematicians recognize early that this space is too large to be searched?
[00:10:51] Speaker A: So proteins are chains of amino acids, small molecules called amino acids, and there are 20 natural amino acids.
When it was discovered that natural proteins can be several hundred amino acids long or even greater, it was immediately clear that the space of possibilities was far too large to be searched.
In the bacterial species E. coli, the average length for enzymes is about 300 to 400 amino acids, and each amino acid can be any of the 20.
So for a protein with 300 amino acids, there are 20 to the 300th power possibilities, and that number is just inconceivably great. To put that number in perspective, it's estimated that there are something like 10 to the 80th power protons in the observable universe, and 20 to the 300th is much, much larger than that.
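To make the scale concrete, here is a quick back-of-envelope check of these numbers (the figures used, 20 amino acids, a length of 300, and roughly 10^80 protons, are taken directly from the discussion above):

```python
import math

AMINO_ACIDS = 20   # the 20 natural amino acids
LENGTH = 300       # residues in a shorter-than-average E. coli enzyme

# Number of possible sequences of length 300 over a 20-letter alphabet.
sequences = AMINO_ACIDS ** LENGTH

# Express as a power of ten: 20^300 = 10^(300 * log10 20), about 10^390.
exponent = LENGTH * math.log10(AMINO_ACIDS)
print(f"20^300 is about 10^{exponent:.0f}")

# Estimated protons in the observable universe, for comparison.
protons = 10 ** 80
print(sequences > protons)  # True
```

So the sequence space exceeds the proton count of the observable universe by more than 300 orders of magnitude.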
So from the middle of the last century, it's been clear that the really important question is what fraction of sequences is functional?
Because if, say, 10%, 1 out of 10 random sequences is functional, then functional sequences can easily be found by chance, and it's not necessary to search a large fraction of the sequence space.
[00:12:32] Speaker B: Yeah, yeah. And I really want to make sure people kind of grasp these numbers and this mathematical problem. You know, Stephen Meyer likes to show this bike lock, and this is akin to a bike lock with, what, 70, 80 entries? And you've got to guess every one.
And, you know, it just becomes a vanishingly small probability.
And this is where we talk about the mathematical challenges to Darwinism, isn't it?
[00:13:04] Speaker A: That's true. That's true.
[00:13:06] Speaker B: Yeah. And, you know, the everyday Joe and Jane might not understand the idea of sequence space, but we're talking about, if a Darwinist approach is accurate, it means it has to search a space of possibilities to build something functional. Right? Isn't that kind of what we're saying here? And you're saying that the probability is vanishingly small that a blind process will be able to find what is necessary to build a functional protein.
[00:13:40] Speaker A: Yeah. So it's unambiguous that the sequence space is too large to be searched. That's very clear. However, it was not clear early on what fraction of random sequences would be useful.
And so that was the important question.
And so there are three approaches that scientists have used to try to answer that question.
And the first one is called the forward approach. And that probably is the first thing that comes to your mind. Just make a bunch of sequences and see if they fold or do something useful biologically.
However, the method can really only be used for very short sequences because the numbers are so large, and so it's not really appropriate for large domain proteins such as typical enzymes.
So the second approach is called the reverse approach.
And in that approach, you start with a folded sequence, a folded protein, and then a portion of that sequence is mutated systematically, and the investigator looks to see how many of the mutated sequences remain folded, and then they apply the observed statistics over the entire length of the protein.
Now, the third method that has become possible recently is to analyze existing known sequences of proteins in the Protein Data Bank, sequences that are known to fold. And so this is a bioinformatics approach.
The second approach was pioneered by a protein scientist named Robert Sauer at MIT, and he published the results of the first study of this type in 1990 and reported that for a protein of 92 amino acids, so that's a pretty short protein, only one sequence out of 10 to the 63rd power would adopt that fold.
So that's a really small fraction.
[00:16:16] Speaker B: Yeah.
[00:16:18] Speaker A: And then Doug Axe followed up on this work by making some improvements to the method and reported his results in the Journal of Molecular Biology in 2004.
And he found that for a protein of 153 amino acids, which is larger than the protein in Sauer's study, but still much shorter than a typical enzyme in E. coli, his results indicated that only one sequence out of 10 to the 77th would fold.
[00:16:57] Speaker B: Well.
[00:17:00] Speaker A: Now, interestingly, results from the third approach were reported in 2017 in a paper in the Biophysical Journal.
And this is a completely independent method.
And the group examined 10 proteins with common single domain folds and for which a large amount of variable sequence data is available.
And they reported that for proteins with between 120 and 180 amino acids, which are still shorter than the average enzyme in E. coli, the fraction that adopt each type of fold is, in all cases, much less than one in 10 to the 100th power.
So the results are very clear. These functional sequences are vanishingly rare.
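As an editorial illustration of what fractions like these mean in practice (the trial count of 10^40 below is a deliberately generous assumption chosen for illustration, not a figure from the interview), the expected yield of a blind search can be computed directly:

```python
# Hypothetical, deliberately generous number of random sequences that
# could ever be sampled (assumed for illustration only).
trials = 10 ** 40

# Axe's 2004 estimate: fraction of 153-residue sequences that fold.
fraction_functional = 10.0 ** -77

# Expected number of functional sequences found by random sampling.
expected_hits = trials * fraction_functional
print(expected_hits)  # on the order of 1e-37, effectively zero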
[00:18:00] Speaker B: Wow.
Yeah. I mean, and we're talking here, audience, about the nuts and bolts of the mathematical challenge. You know, the numbers just don't add up when you start to look at what it takes to build something functional in protein space.
So, next question then. How does this finding constitute a massive informational discontinuity?
[00:18:23] Speaker A: So even one functional enzyme of average length, 300 to 400 amino acids, is far, far beyond the reach of chance.
So the sequence space is too large to be searched, and the fraction of functional sequences is almost infinitesimally small.
Yet we have many, many of these sequences in us.
So where do they come from?
As we'll see in the next discovery, a very large number of such enzymes and folded proteins of comparable length are required for even the simplest form of life.
So it's an informational discontinuity because it's exactly analogous to imagining a sequence of letters that form a meaningful paragraph or a chapter in a book.
It requires an input of information. It's not going to happen by chance.
[00:19:35] Speaker B: Well, one thing after the, the next, you know, in terms of these discoveries, they're just, they're just mounting evidence, aren't they? Let's talk next about minimal genome complexity. Tell us about the work by Craig Venter's team to determine the minimal genome for a free living organism and what was the resulting number of genes required.
[00:19:55] Speaker A: So this is really important because most people, I think, don't know very much about the simplest form of life and they don't know whether scientists have created life in the laboratory.
The story on origin of life has not been communicated very accurately.
But following the completion of the Human Genome Project, the team of Craig Venter demonstrated that they could synthesize an entire bacterial genome and replace the natural genome of an organism with a synthetically produced copy. It's the same information, the same genome, just made synthetically.
But with that capability in hand, they then could begin to systematically delete portions of the genome to see when the organism was unable to survive.
And they published a paper on that project in the journal Science in 2016.
It's titled the Design and Synthesis of a Minimal Bacterial Genome.
The genome they started with was an organism that had 901 genes.
And after systematically deleting genes and testing for viability in rich media under very special conditions, they arrived at the estimate that 473 genes constituted the minimum bacterial genome.
In the paper, they cataloged the function of these 473 genes and found that 195 genes were involved in the expression of genetic information, 34 were involved in the preservation of genetic information.
84 genes coded for proteins in the cell membrane, such as transporters and signaling molecules, and 81 genes were involved with metabolism; those are the enzymes in the cytosol. And they were unable to assign function to 79 other genes.
438 were protein coding genes and 35 were RNA genes.
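The gene counts just listed can be checked for internal consistency; both breakdowns quoted above should total 473 (a quick sanity check using only the figures from the discussion):

```python
# Functional categories of the minimal-genome genes, as quoted above
# from the discussion of the 2016 Science paper.
categories = {
    "expression of genetic information": 195,
    "preservation of genetic information": 34,
    "cell membrane proteins (transporters, signaling)": 84,
    "metabolism (cytosolic enzymes)": 81,
    "function not assigned": 79,
}
print(sum(categories.values()))  # 473

# The same genome split another way: protein-coding vs. RNA genes.
print(438 + 35)  # 473
```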
[00:22:25] Speaker B: Hmm. And you conclude from this finding that there is no such thing as a simple form of life. How does this complexity support Michael Denton's statement that the discontinuity between a living cell and non-life is like this giant chasm, as vast and absolute as it's possible to conceive?
[00:22:42] Speaker A: He wrote that in 1987 or something like that, far, far earlier than all this work on the minimum bacterial genome. But he's certainly turned out to be absolutely correct.
We saw earlier that even a single functional enzyme of average size in E. coli is beyond the reach of chance.
This work by the Venter lab shows that even the simplest form of life requires nearly 500 of these genes.
And these critical proteins and RNAs aren't just isolated, independent entities, but rather like an orchestra.
They work together in a coordinated fashion in really amazing ways to perform the many functions that are required for life.
[00:23:44] Speaker B: Yeah, which of course brings up all sorts of chicken-and-egg problems for a Darwinian process. You know, it's not even that it can't come up with enough information to produce the elements of life. It's also that, you know, how do you connect those, how do you make them interdependent, and how do you give them coherence? I mean, there's just so many levels of complexity and design that a Darwinian approach cannot explain satisfactorily. Well, moving on to the fifth discovery, you highlight life's digital information processing system. Now, that sounds like a combination of computer hardware and software, sort of like the smartphone in your pocket or the computer we're using to communicate. Is that a fair comparison, would you say?
[00:24:30] Speaker A: In some ways it is, but in some ways, this system is beyond what anything humans could make.
In this discovery in the video, I focus on just one aspect, which is how the genetic code is implemented in translating between the two languages, between two of the languages of life, the nucleic acids and the proteins.
It's done in two steps. The first step is the transcription of DNA into RNA. And the second step is the translation of the RNA message into strands of amino acids that make functional proteins such as enzymes.
The translation step is performed by a very complex machine called the ribosome that's discussed in discovery number six.
That machine uses molecules called tRNAs in the process of translation to make a protein.
So these tRNA molecules are critical. They're at the very heart of the translation process.
[00:25:47] Speaker B: Yeah. And why is this system considered irreducibly complex? And why is it often called the chicken-and-egg problem?
[00:25:54] Speaker A: As we've been mentioning, these tRNA molecules translate between the languages of nucleic acids and proteins.
So each tRNA molecule has three nucleotides on one end, called an anticodon, from one language, and an amino acid on the other end, from the other language.
And the entire series of these molecules contains the code to translate between the two languages.
So the obvious question is, well, where did they come from? And how did it come about that they contain the genetic code?
The process of producing the tRNAs is too complicated to cover in full detail here, but it's done in several steps, and I'll only mention the final step. The final step involves a series of enzymes called tRNA synthetases.
Generally, in every organism, there are 20 of these enzymes corresponding to the 20 amino acids.
There are only 20 of these enzymes, despite the fact that there are 64 triplet codons, because the code is degenerate. That means that more than one triplet codon in mRNA will code for the same amino acid.
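The degeneracy arithmetic is easy to verify: a four-letter nucleotide alphabet read three bases at a time gives 4^3 = 64 codons for only 20 amino acids. (A small illustrative check; the stop-codon count of three is standard textbook biology, not a figure from the interview.)

```python
NUCLEOTIDES = 4    # A, C, G, U in mRNA
CODON_LENGTH = 3   # bases per codon

codons = NUCLEOTIDES ** CODON_LENGTH
print(codons)  # 64

STOP_CODONS = 3    # standard genetic code (assumed, not from the interview)
AMINO_ACIDS = 20

# On average, each amino acid is encoded by about three codons, which is
# why only 20 synthetases suffice despite 64 codons.
print((codons - STOP_CODONS) / AMINO_ACIDS)  # 3.05
```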
Each of these tRNA synthetase enzymes will only bind one amino acid.
So that means it has a cavity with the correct physical and chemical structure to selectively bind only one of the 20 amino acids. That is remarkable in itself.
Each tRNA synthetase enzyme also contains a binding pocket for a small high-energy molecule called ATP.
And the binding pocket is structured in such a way that when ATP binds, a chemical reaction is catalyzed that activates the amino acid.
But each tRNA synthetase enzyme also has another cavity that selectively binds only precursor tRNAs with the anticodons that correspond to the amino acid according to the code.
And that specificity, to me, is insane.
Yeah, it's just. It's really unbelievable.
The enzymes are structured such that when the appropriate precursor tRNA binds to its pocket, a reaction is catalyzed that fuses the activated amino acid to the end of the molecule, forming the mature tRNA.
So the genetic code is written into this absolutely amazing set of tRNA synthetase enzymes.
But you might wonder, well, where do these enzymes come from and how did it come about that they have the code written into them?
And the answer to that is no one knows.
No one knows where these come from or how they could come about.
The translation system is irreducibly complex because many enzymes are essential to produce the tRNAs, including the 20 amazing tRNA synthetase enzymes as well as many others that I didn't describe here. And that's not the full extent of it, because the tRNAs, once constructed, are used by a machine called the ribosome to synthesize proteins from the sequences of mRNA molecules. It's a chicken-and-egg problem because the tRNAs are constructed by the tRNA synthetase enzymes, but those enzymes are generated, as are all proteins, by the process of translation that uses the full set of tRNAs.
[00:30:30] Speaker B: Wow.
Well, I appreciate you giving just enough of the technical detail to leave us quite stunned as to how this all works. And I dare say it relates to the immateriality of the genome, you know, the non-physical aspects that are governing how these languages are put together, how this code comes together to build the building blocks of life, the proteins.
[00:30:56] Speaker A: And there's a lot of information there.
[00:30:58] Speaker B: Yeah, on so many levels, you know, and you cannot take that information for granted. There must be a satisfying explanation for that. Okay, well, the last discovery we'll discuss today is your number six, molecular machines and software.
In analyzing cellular machinery, you rely on the concepts of irreducible complexity and hierarchical coherence. Can you explain how these concepts point to the characteristics of engineered systems?
[00:31:27] Speaker A: So the term irreducible complexity was coined by Michael Behe in his book Darwin's Black Box.
It means that for some system, like a molecular machine or a biochemical pathway, a number of parts are absolutely essential to carry out its specific function. So all the parts are required.
The term hierarchical coherence was used by Doug Axe in his book Undeniable to describe systems that display a functional coherence at many different levels.
So a simple example that I use is that a spark plug must fit into the proper place in the cylinder head of an automobile to ignite the air fuel mixture.
The spark plug itself is irreducibly complex because several parts are needed and essential for its function.
But the spark plug must also be connected with an electrical system that sends a high voltage pulse to the spark plug when the key is turned in the ignition.
Many other subsystems must act coherently for an automobile to function properly.
So this is hierarchical coherence and it's typical of engineered systems.
In video number six, I give several examples of biological systems that include both irreducibly complex machines and also display hierarchical coherence. For example, the ribosome that we mentioned earlier is an irreducibly complex machine made up of both proteins and RNAs.
And this machine produces proteins from sequences of mRNAs using the tRNAs that we just talked about.
But the ribosome itself is part of a much larger system that's also irreducibly complex, a system of transcription and translation that includes the tRNAs, the enzyme systems for making the tRNAs, the genetic code, the mRNAs transcribed from DNA that contain the sequences for functional proteins, the RNA polymerases that do the transcription, the DNA itself, and much, much more.
This is just a summary of some of the major components.
All the parts of this very complex system must be coherent. They have to work together.
Regarding other machines that we're more familiar with, like an automobile, we know that physics, chemistry, and engineering principles can explain how they function.
But those same principles tell us nothing about where the car comes from, because it comes from the mind of a designer.
In the same way, irreducibly complex molecular machines and biological systems that display hierarchical coherence are enormous informational discontinuities with respect to unintelligent natural processes.
Those who argue against intelligent design in life believe that these systems, the system of transcription and translation, for example, evolved somehow through unintelligent natural processes.
But they believe that in the absence of any evidence and in the face of these enormous informational discontinuities. And there's no way to even test the hypothesis, because the simplest form of life has 473 genes, and these already have the full system of transcription and translation. And if you delete any parts of that, the organism isn't viable.
So their belief is really based on methodological naturalism rather than on data.
[00:35:54] Speaker B: Yeah. And I'll always remember, you know, I used to work in the same office as Dr. Axe, Douglas Axe, and one day I turned around to him and said, hey, I see a lot of comments from detractors and people opposed to ID, typically on, you know, metaphysical grounds. But I said, hey, what's the best way to falsify intelligent design? And he said, all you've got to do to falsify ID is to show that there's another source for information. And so far, that has not been done. And so when we see all the information that's needed in life, you've got to go to the source. You've got to find a mechanism that can produce this information.
Now, we know of one, but the materialist is left without one. And that's part of the reason why even mainstream evolutionary biologists are leaving behind the neo-Darwinian idea and trying to look for something else that will explain this complexity and this level of information and design.
So that's pretty telling.
[00:37:05] Speaker A: Science doesn't stop. I mean, scientists will do what they need to do to find answers that are reasonable. And so the simple mutation-selection mechanism, which is so weak, is really being left behind, and there's more of a systems biology way of thinking now.
[00:37:29] Speaker B: Yeah, and let me just linger on the engineering aspect for just a few more minutes. In addition to machines, you discuss sophisticated software, algorithms.
How can complex logic, you know, like conditional if-then rules, be built into molecular interactions?
[00:37:49] Speaker A: If you explore this, you'll find that the software is just as amazing as the hardware, actually.
So the way this happens, in the most general terms, is that the binding of one molecule, like a protein, to another molecule, such as another protein or DNA or RNA, either allows or blocks the action, usually binding, of other molecules.
So logical operations can be built in by highly specific and conditional interactions between the molecules.
A relatively simple example that I go through in the video is the regulation of the lac operon.
And that's a part of the DNA in E. coli. And it was one of the first examples of this type of logic that was discovered and unraveled.
So this is the genetic regulatory system for controlling the production of a series of enzymes that enable E. coli to break down lactose, which is a sugar in dairy products like milk.
But E. coli prefers to use glucose because it requires fewer steps and less energy to break down than lactose does.
So the system is regulated so that the series of lactose enzymes are only produced when lactose is present and when glucose is not present, so as not to waste energy and valuable resources unnecessarily.
So there's a system of negative regulation and also a system of positive regulation.
The details are discussed in the video, but the logical operations are made possible by the highly specific and conditional nature of the molecular interactions that include conformational changes of some key proteins.
For both negative and positive regulation systems, there is a key folded protein that binds a small molecule produced by the system to signal either the presence of lactose or low glucose concentration.
And upon binding the respective small molecule, these two critical proteins each undergo a conformational change that causes them to either bind to or release from a specific section of DNA that promotes or blocks the transcription machinery for this series of genes.
So there is incredible specificity in those key proteins.
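The control logic just described, expression only when lactose is present and glucose is absent, can be sketched as a tiny truth-table program. (A toy logical model only, not a biochemical simulation; the function name is invented for illustration, and the cAMP/CAP details in the comments are standard textbook biology rather than points made in the interview.)

```python
def lac_enzymes_expressed(lactose_present: bool, glucose_present: bool) -> bool:
    """Toy model of the lac operon's conditional (if-then) logic."""
    # Negative regulation: a lactose-derived signal molecule binds the
    # repressor protein, releasing it from the DNA and unblocking
    # transcription.
    repressor_released = lactose_present
    # Positive regulation: low glucose raises a signal molecule (cAMP)
    # that activates a protein (CAP), which promotes transcription.
    activator_bound = not glucose_present
    return repressor_released and activator_bound

# Truth table: enzymes are made only in the lactose-yes / glucose-no case.
for lactose in (True, False):
    for glucose in (True, False):
        print(lactose, glucose, "->", lac_enzymes_expressed(lactose, glucose))
```

Only one of the four input combinations turns the enzymes on, which is exactly the energy-saving behavior described above.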
It's clear to me, and I think to anyone, that the fraction of protein sequences that'll have the right molecular interactions and conformational changes to carry out either of these logical operations is going to be really, really low, just as it is for enzymes.
But to my knowledge, experiments analogous to those of Doug Axe for enzymes have not yet been carried out for this system.
So we don't have specific numbers for that yet.
Other examples that I mention in the video are the genetic regulatory network for the cell cycle of a simple bacterium called Caulobacter, which was worked out by Lucy Shapiro, a famous researcher at Stanford, and her colleagues, and the developmental gene regulatory network for sea urchins, worked out by Eric Davidson and his colleagues.
Before finishing, it's worth pointing out that the logic circuits in higher organisms are far, far more complex than the lac operon in E. Coli.
[00:42:18] Speaker B: Wow.
And that's just the software, right, that's applied to the hardware of life.
Again, just layers of information that demand an explanation.
Well, final question for you today, Mike, as to what to make of all this. You state that concluding design is not a God-of-the-gaps fallacy, but rather an inference to the best explanation based on all these informational discontinuities that we've been talking about, from the cosmos to the cell. Why is a mind the only known cause capable of generating specified complexity?
[00:42:56] Speaker A: I borrowed Steve Meyer's language there; the phrase inference to the best explanation comes from the philosophy of science. With these things that we've discussed, these discoveries, it's very clear now that something really, really amazing has to be true.
And there seem to be three possibilities.
The first one that's offered is perhaps there are an infinite number of universes. And attributing all these informational discontinuities, all this information, to chance in an infinite-universes framework, that's equivalent to believing that the car just appears on the beach.
It's the same thing. And so not many people will take that seriously, I think.
Although I want to mention that Eugene Koonin, who's a senior researcher at the NIH, he actually published a paper proposing this explanation for the origin of the transcription-translation system that's at the core of life.
So that tells you something right there.
The second is that many people are starting to entertain the idea that we are someone else's computer program.
And so this implies a designing intelligence, but presumably one within the natural realm, but outside of our universe somewhere. And I don't take that seriously because, again, I don't think I am a computer program.
But people can make their own decision on that. Finally, a designing intelligence from outside the natural realm is the third alternative. And it's the one that I think most people.
Well, it's overwhelmingly convincing to me, and I think it's the one that most people will also find convincing when they become well informed about this subject.
But remember, that option is strictly forbidden by methodological naturalism.
So therein lies a lot of confusion.
[00:45:20] Speaker B: Yeah.
Yeah. Well, you're doing a lot of work to help people get well informed so that they can make the decision.
And as you stated earlier, that decision is a personal one based on the amount of evidence that you would deem necessary to convince you, and that's going to be different from the next person, you know. And that's what I love about this debate. That's what I love about the approach to the evidence. You know, let it mount up, you know, either way. If you want to go materialism, well, let's stack up the evidence for it. If you want to go intelligent design, look for the evidence and be honest. Right. We've got to be honest in the end about where the evidence is pointing. We've got to make sure we don't have, you know, blinders that are going to limit what we can, you know, understand when faced with the evidence.
[00:46:13] Speaker A: So for some people, no amount of evidence could ever be enough, but I think that's a small fraction of the people out there. Yeah, I think most people are more able to follow the evidence where it leads.
[00:46:27] Speaker B: I think you're right. Well, thanks for your time again, Mike. This has been a great conversation.
[00:46:32] Speaker A: Thank you. It's been. It's been fun. Great.
[00:46:35] Speaker B: So, audience, in the show notes for the episode, we'll include a link to Dr. Kent's presentation video playlist on YouTube so that you can watch these for yourself. Go back and, you know, almost memorize some of these details so that you can then share it in conversation when you're defending your own beliefs, but also helping others to understand some of this evidence.
We need to keep telling people about this. You know, a lot of it's recent. This has only been in the last hundred years or so that we've been able to gather this information and get these discoveries. So there's a lot of work to be done, but we'll include links to Dr. Kent's presentation video playlist so that you can continue to dig into it. And while you're on YouTube, make sure you're subscribed to the ID the Future YouTube channel. That's where you can see video of our conversations. Now, you don't just have to listen to them, although many do.
You can also watch them. And we share clips as well so that they're easy to share with your friends and family. So subscribe. Subscribe to us there and you'll be able to follow the content. I'm Andrew McDermott for ID the Future. Thanks for joining us.
[00:47:50] Speaker A: Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.