Question of the week: How can we show an evolutionist that Earth suits us best because Earth was designed to be compatible with us rather than the idea that we naturally evolved to become compatible with Earth?
My answer: I get asked this question nearly every time I speak on a university or college campus. Apparently, the hypothesis that humans became compatible or naturally evolved to adapt to Earth’s present conditions has become dogma in many university course offerings, more so in the social sciences than in the physical or life sciences. Consequently, I devoted an entire book to answering this question, Improbable Planet.1 Additional evidence that our supercluster, our galaxy cluster, our galaxy group, our galaxy, the Local Bubble, the Local Fluff, the solar system, and the Sun, Moon, and Earth have been exquisitely prepared and designed for humans is presented in my next book, Designed to the Core.2
A very brief answer is that the Milky Way Galaxy, the solar system, and Earth were designed to be the ideal home for humanity and global human civilization millions and billions of years before humans showed up. More than 400 distinct features of Earth must be fine-tuned to make our existence and civilization possible.3 Even with all this exquisite fine-tuning, the time window during which humans can exist in a civilized state is extremely narrow.4 It is no random accident that we happen to be here during this narrow time window when the resources we need to sustain global civilization are uniquely available.
The evidence for purposeful design for the specific benefit of human beings is especially compelling in the context of what is needed for billions of humans to be redeemed from sin and evil. Every component and every event in the universe, Earth, and Earth’s life plays some role in making redemption possible for billions of humans.
As scientists continue to acquire knowledge of the formation and structure of the universe, their discoveries unlock mysteries that test cosmic creation models. One such mystery concerns the identity of the particles that make up dark matter, which astronomers calculate accounts for 85% of the universe’s matter. Thanks to increasingly sophisticated instrumentation, scientists may soon be able to identify the particles that make up most of the universe’s dark matter, a discovery that would help resolve several cosmic mysteries.
Searching for Sterile Neutrinos

It’s possible that a large fraction of the universe’s dark matter consists of sterile neutrinos. Thus, researchers spend considerable effort trying to detect them. In 2011, I wrote five articles about sterile neutrinos.1 Sterile neutrinos are distinguished from the active neutrinos that I described in last week’s article, Neutrino Breakthroughs: More Evidence for Cosmic Creation and Design.2 Active neutrinos interact with protons, neutrons, and electrons only very weakly, through the weak nuclear force and the gravitational force. Active neutrinos come in three “flavors” or types: electron, muon, and tau. Sterile neutrinos are hypothetical particles that are believed to interact only through the gravitational force.
The standard particle creation model requires that there be exactly three different types of active neutrinos. However, if sterile neutrinos exist, there must be at least three different types of sterile neutrinos.3
So far, the only dark matter particles that astronomers and particle physicists have detected are the three active neutrinos. As I stated in my previous article, new measurements place the sum of the individual electron, muon, and tau neutrino masses at 0.05841–0.087 electron volts (eV). This mass range implies that active neutrinos comprise just a small fraction of the universe’s dark matter.
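To see why this mass range implies only a tiny share of the dark matter, a rough back-of-the-envelope estimate helps. The sketch below assumes the standard cosmological relation Ων h² = Σmν / 93.14 eV and the Planck 2018 cold dark matter density Ωc h² ≈ 0.120; the function name is mine, chosen for illustration.

```python
# Rough estimate of the active-neutrino share of the universe's dark matter.
# Standard cosmology relates the neutrino density to the summed neutrino mass:
#   Omega_nu * h^2 = sum(m_nu) / 93.14 eV
# Planck 2018 gives a cold dark matter density of Omega_c * h^2 ~ 0.120.

OMEGA_C_H2 = 0.120  # Planck 2018 cold dark matter density parameter (assumed)

def neutrino_dark_matter_fraction(sum_masses_ev: float) -> float:
    """Fraction of the dark matter density contributed by active neutrinos."""
    omega_nu_h2 = sum_masses_ev / 93.14
    return omega_nu_h2 / OMEGA_C_H2

for total_mass in (0.05841, 0.087):  # mass range quoted above, in eV
    frac = neutrino_dark_matter_fraction(total_mass)
    print(f"sum(m_nu) = {total_mass} eV -> {frac:.2%} of dark matter")
```

With the quoted mass range, active neutrinos contribute only roughly half a percent to three-quarters of a percent of the dark matter, consistent with the statement above.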
Astronomers and physicists have proposed that either sterile neutrinos or axions (another hypothetical elementary particle) or both could make up the majority of the universe’s dark matter. These two particles hold the potential of explaining the following cosmic mysteries:
Why the first stars apparently form as early in cosmic history as they do
Why the universe produces slightly more baryons (protons and neutrons) than antibaryons
Why core-collapse supernovae produce unexpectedly high abundances of certain elements with atomic weight greater than 100
Why supernova shocks are so highly energetic
Why dark matter halos are relatively symmetrical and smooth
Why supermassive black holes form as early as they do in cosmic history
How to account for a small amount of warm dark matter to accompany the predominant cold dark matter that astronomers observe
Consequently, for the past two decades astronomers and physicists have sought to discover—both in the lab and in the sky—the existence of sterile neutrinos and/or axions. In the next few sections, I summarize the results of various lines of research. It’s technical, so skim if desired and get the overall picture as you proceed to “Philosophical Implications.”
Laboratory Sterile Neutrino Detections?

In 2018, the MiniBooNE Collaboration announced that they had discovered an excess of electron neutrino oscillations in their MiniBooNE short-baseline neutrino experiment.4 They interpreted this excess as evidence for the existence of a fourth neutrino type at a significance level of 4.7 standard deviations (equivalent to better than 99.99% certainty). An excess of neutrino oscillation events was also detected by the Liquid Scintillator Neutrino Detector (LSND) with a similar level of certainty for the existence of a fourth neutrino type.5
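For readers unfamiliar with “standard deviations” as a measure of certainty, the conversion to a confidence percentage is simple arithmetic on the Gaussian distribution. A minimal sketch (the function name is mine) shows that 4.7 standard deviations corresponds to well above 99.99% confidence:

```python
import math

def two_sided_confidence(sigma: float) -> float:
    """Probability mass lying within +/- sigma of a standard normal mean."""
    return math.erf(sigma / math.sqrt(2.0))

for s in (3.0, 4.7, 5.0):
    print(f"{s} sigma -> {two_sided_confidence(s) * 100:.5f}% confidence")
```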
Theoretical physicist Joachim Kopp, a staff scientist at CERN, the particle physics laboratory near Geneva, Switzerland, explained in a brief article why the signal detected by the MiniBooNE and LSND experiments is evidence for a sterile neutrino.6 Additional evidence for a fourth neutrino type came from an antineutrino anomaly observed in a French nuclear reactor that is best explained as an excess of electron neutrino oscillations7 and from measurements of antineutrinos in the Daya Bay Reactor Neutrino Experiment in China.8 The Daya Bay reactor produced 6% fewer antineutrinos than would be the case if only three neutrino types existed. However, combining the antineutrino flux and spectra of the Daya Bay results suggests that the antineutrinos might not be missing after all. It is possible that the predictions from nuclear theory are incomplete.
Astronomical Sterile Neutrino Detections?

In 2014, a team of astronomers led by Esra Bulbul detected a weak x-ray emission line in the stacked x-ray spectrum of 73 clusters of galaxies.9 Bulbul’s team demonstrated how the decay of sterile neutrinos with a mass of 7.1 keV best explains this spectral line. Also in 2014, a team of astronomers led by Alexey Boyarsky detected the same x-ray emission line in the core of the Andromeda Galaxy and in the Perseus Galaxy Cluster.10
Neutrinos suppress the growth of large-scale structure in the universe in proportion to the total mass of the neutrino types. Neutrinos also affect the expansion rate history of the universe. Therefore, observations of the clustering of galaxies and galaxy clusters plus maps of the cosmic microwave background radiation (the radiation remaining from the cosmic creation event) place constraints on the number of neutrino types and on the total mass of the different neutrino type particles.
The most sensitive maps of the cosmic microwave background radiation (CMBR) yield a measurement of the effective number of neutrino types. Since the three active neutrino types were not completely decoupled at the moment of electron-positron annihilation that occurred when the universe was only a few seconds old, these three types, by themselves, would give a measure for the effective number of neutrino types, Neff = 3.046.11 The best map of the CMBR, the Planck 2018 map, produced a measure of Neff = 2.99 ± 0.17.12 This measurement implies with 95% certainty that Neff must be less than 3.34. Furthermore, observational constraints on the primordial abundances of helium, deuterium, and lithium13 make a value of Neff = 4 highly unlikely.14 As the Planck Collaboration wrote in their paper, “The presence of a light thermalized sterile neutrino is in strong contradiction with cosmological data.”15 Even where the production of sterile neutrinos is suppressed by nonstandard interactions, the sterile neutrino mass cannot be any greater than 0.23 eV. Combining the Planck and Daya Bay data provides an upper limit of 0.2 eV for the sterile neutrino mass in all possible scenarios.15
Latest Constraints on Sterile Neutrinos

Three physicists in Britain, Italy, and Spain combined the latest CMBR, baryon acoustic oscillation, type Ia supernova, and cosmic structure growth rate observations to produce the tightest constraint on the total number of neutrino types. Their result was Neff = 3.05 ± 0.16, which means with 95% certainty that Neff must be less than 3.37.16 Meanwhile, the MiniBooNE Collaboration followed up their experiment with a dramatically more sensitive detector, the MicroBooNE experiment.
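The quoted 95% upper limits follow from the measurements by simple arithmetic: roughly a two-standard-deviation interval above the mean. A minimal check (the helper name is mine):

```python
def upper_bound(mean: float, sigma: float, n_sigma: float = 2.0) -> float:
    """Approximate 95% upper limit as mean + ~2 standard deviations."""
    return mean + n_sigma * sigma

# Planck 2018: Neff = 2.99 +/- 0.17 -> upper limit ~3.33 (quoted above as 3.34)
print(round(upper_bound(2.99, 0.17), 2))
# Combined analysis: Neff = 3.05 +/- 0.16 -> upper limit 3.37 (as quoted)
print(round(upper_bound(3.05, 0.16), 2))
```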
In a preprint posted on October 29, 2021, the MicroBooNE Collaboration presented results from their initial observations of electron neutrino interactions from the Fermilab Booster Neutrino Beam using the MicroBooNE liquid argon time projection chamber.17 They achieved greater sensitivity than MiniBooNE did, yet found no excess of electron neutrino oscillation events. That is, they found no hint of the existence of sterile neutrinos.
Undeterred, researchers will redeploy MicroBooNE, which is set to deliver even more sensitive results. Another laboratory experiment, the STEREO experiment, is primed to deliver high-sensitivity results.18 Meanwhile, the x-ray sky is about to be probed by the eROSITA and Athena missions,19 and the KM3NeT/ORCA neutrino telescope will join the search.20 If sterile neutrinos are lurking somewhere in the universe, they cannot remain hidden for long.
Constraints on Axions

As I explained in previous articles, the existence of substantial numbers of axions would cause white dwarf stars to cool at more rapid rates.21 As far back as 1992, observations of white dwarf cooling had established that axions, if they exist, could not have a particle mass greater than 0.01 eV.22 About a decade ago, two different teams of astronomers demonstrated that the excess cooling of white dwarfs is well explained by axion emission where the axion particle mass is just a few milli-eV.23 While this excess cooling yielded the first positive indication that axions exist, it implied that axions provide only a small fraction of the universe’s dark matter.
The case for axions was strengthened by the analysis of additional observations made by one of the two teams. The team led by Jordi Isern noted that the observed excess cooling of white dwarf stars could be an artifact introduced by the star formation rate. However, white dwarf populations in our galaxy’s thin disk, thick disk, and halo each have different star formation rates. The fact that astronomers observe the same excess cooling in all three white dwarf populations means that the excess cooling cannot be an artifact of the star formation rate. It is likely due to axion emission. Isern’s team derived an axion particle mass in the range of 4–10 milli-eV.24
The future of axion astronomy looks promising. More extensive observations of white dwarf cooling curves are underway and an axion telescope, the solar axioscope IAXO, is under development.25 If axions are part of the universe’s undetected dark matter, astronomers will likely know soon.
Philosophical Implications

The constraints on the possible existence of sterile neutrinos have reached a point where, even if they do exist, they cannot make up a significant fraction of dark matter in the universe. Likewise, it is becoming increasingly evident that axions do not comprise a substantial fraction of the universe’s dark matter.
The universe’s dark matter is predominantly cold dark matter, composed of particles traveling at much less than light’s velocity. However, a tiny fraction of the universe’s dark matter is warm dark matter, composed of particles moving at a significant fraction of light’s velocity. Sterile neutrinos, if they exist, would be warm dark matter. It is possible, given current detection limits, that sterile neutrinos make up all, or most, of the universe’s warm dark matter. Axions, on the other hand, are cold dark matter particles.
That sterile neutrinos and/or axions do not comprise a substantial fraction of the universe’s dark matter does not mean that dark matter theories are in trouble. Astronomers and physicists have over thirty other candidate particles that could comprise the universe’s dark matter. However, sterile neutrinos and/or axions, if they do make up most of the universe’s dark matter, hold the greatest prospect for detection. The search for other dark matter candidate particles will be more challenging technologically. This is how science advances. It often takes many small steps to achieve breakthroughs. That’s why scientists test and retest.
As for the biblically predicted big bang creation model,26 all these new dark matter particle findings and the prospects for future dark matter particle discoveries are consistent with and anticipated by big bang creation models. Big bang models that permit the possible existence of physical life predict a specified quantity of dark matter composed of particles, a quantity that is consistent with astronomers’ best measurements.27 These findings provide further scientific demonstration that the more we learn about the universe, the more evidence we discover for the intentional, supernatural handiwork of the Being beyond the universe who created and designed it.
A. A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), “Significant Excess of Electronlike Events in the MiniBooNE Short-Baseline Neutrino Experiment,” Physical Review Letters 121, no. 22 (November 30, 2018): id. 221801, doi:10.1103/PhysRevLett.121.221801.
C. Athanassopoulos et al., “Candidate Events in a Search for νμ → νe Oscillations,” Physical Review Letters 75, no. 14 (October 2, 1995): id. 2650, doi:10.1103/PhysRevLett.75.2650; A. Aguilar et al. (LSND Collaboration), “Evidence for Neutrino Oscillations from the Observation of νe Appearance in a νμ Beam,” Physical Review D 64, no. 11 (December 1, 2001): id. 112007, doi:10.1103/PhysRevD.64.112007.
F. P. An et al. (Daya Bay Collaboration), “Measurement of the Reactor Antineutrino Flux and Spectrum at Daya Bay,” Physical Review Letters 116, no. 6 (February 12, 2016): id. 061801, doi:10.1103/PhysRevLett.116.061801.
Esra Bulbul et al., “Detection of an Unidentified Emission Line in the Stacked X-Ray Spectrum of Galaxy Clusters,” Astrophysical Journal 789, no. 1 (June 2014): id. 13, doi:10.1088/0004-637X/789/1/13.
A. Boyarsky et al., “Unidentified Line in X-Ray Spectra of the Andromeda Galaxy and Perseus Galaxy Cluster,” Physical Review Letters 113, no. 25 (December 19, 2014): id. 251301, doi:10.1103/PhysRevLett.113.251301; Kevork N. Abazajian, “X-Ray Line May Have Dark Matter Origin,” Physics 7 (December 15, 2014): id. 128, doi:10.1103/Physics.7.128.
Gianpiero Mangano et al., “Relic Neutrino Decoupling including Flavour Oscillations,” Nuclear Physics B 729, nos. 1–2 (November 21, 2005): 221–234, doi:10.1016/j.nuclphysb.2005.09.041.
N. Aghanim et al. (Planck Collaboration), “Planck 2018 Results VI. Cosmological Parameters,” Astronomy & Astrophysics 641 (September 2020): id. A6, doi:10.1051/0004-6361/201833910.
Aghanim et al. (Planck Collaboration), “Planck 2018 Results.”
Matthew Adams et al., “Direct Comparison of Sterile Neutrino Constraints from Cosmological Data, νe Disappearance Data and νμ → νe Appearance Data in a 3 + 1 Model,” European Physical Journal C 80, no. 8 (August 19, 2020): id. 758, doi:10.1140/epjc/s10052-020-8197-y.
Eleonora Di Valentino, Stefano Gariazzo, and Olga Mena, “Most Constraining Cosmological Neutrino Mass Bounds,” Physical Review D 104, no. 8 (October 15, 2021): id. 083504, doi:10.1103/PhysRevD.104.083504.
P. Abratenko et al. (MicroBooNE Collaboration), “Search for an Excess of Electron Neutrino Interactions in MicroBooNE Using Multiple Final State Topologies,” (October 29, 2021), arXiv:2110.14054.
H. Almazán et al. (STEREO Collaboration), “Improved Sterile Neutrino Constraints from the STEREO Experiment with 179 Days of Reactor-On Data,” Physical Review D 102, no. 5 (September 1, 2020): id. 052002, doi:10.1103/PhysRevD.102.052002.
Andrea Caputo, Marco Regis, and Marco Taoso, “Searching for Sterile Neutrino with X-Ray Intensity Mapping,” Journal of Cosmology and Astroparticle Physics 2020, no. 03 (March 2, 2020): id. 002, doi:10.1088/1475-7516/2020/03/001.
S. Aiello et al. (KM3NeT Collaboration), “Sensitivity to Light Sterile Neutrino Mixing Parameters with KM3NeT/ORCA,” Journal of High Energy Physics 2021, no. 10 (October 21, 2021): id. 180, doi:10.1007/JHEP10(2021)180.
Ross, “Candidates Compete for Top Billing.”
Jin Wang, “Constraints of Axions from White Dwarf Cooling,” Modern Physics Letters A 7, no. 17 (June 7, 1992): 1497–1502, doi:10.1142/S0217732392001166.
J. Isern et al., “Axions and the White Dwarf Luminosity Function,” Journal of Physics: Conference Series 172 (June 2009): id. 012005, doi:10.1088/1742-6596/172/1/012005; Georg G. Raffelt, Javier Redondo, and Nicolas Viaux Maira, “The meV Mass Frontier of Axion Physics,” Physical Review D 84, no. 10 (November 15, 2011): id. 103008, doi:10.1103/PhysRevD.84.103008.
J. Isern et al., “Axions and the Luminosity Function of White Dwarfs: The Thin and Thick Discs, and the Halo,” Monthly Notices of the Royal Astronomical Society 478, no. 2 (August 2018): 2569–2575, doi:10.1093/mnras/sty1162.
Sebastian Hoof, Joerg Jaeckel, and Lennert J. Thormaehlen, “Quantifying Uncertainties in the Solar Axion Flux and Their Impact on Determining Axion Model Parameters,” Journal of Cosmology and Astroparticle Physics 2021, no. 9 (September 6, 2021): id. 006, doi:10.1088/1475-7516/2021/09/006.
“I’ve been dealing with viral outbreaks for the last 40 years. I’ve never seen a single virus—that is, one pathogen—have a range where 20% to 40% of the people have no symptoms.”
—Dr. Anthony Fauci, National Institute of Allergy and Infectious Diseases
Many of the disease characteristics of COVID-19 continue to baffle life scientists and biomedical practitioners. The confusion is not limited to the high rate of asymptomatic infections—other aspects of COVID-19 are puzzling as well. For example, some patients who have recovered from an initial COVID-19 infection will return positive PCR tests for SARS-CoV-2 weeks, even months, after their recovery. (The PCR test detects the presence and levels of SARS-CoV-2 genetic material in the patient.) These patients show no indication that they were reinfected, and they pose no threat of spreading the virus to others.
Recently, a team of life scientists from MIT offered an explanation for these unexpected observations.1 These investigators discovered that once the SARS-CoV-2 virus gains entrance into human cells, its genetic material (which is made up of a single plus strand of RNA) can be converted into DNA by cellular enzymes called reverse transcriptases. In turn, the SARS-CoV-2 DNA can then become integrated into the infected cell’s genome. Once in the genome, the SARS-CoV-2 DNA sequences can be transcribed, producing viral RNA that is detected by PCR tests.
This important insight adds to our understanding of the biology and replication cycle of SARS-CoV-2. It also holds significance for assessing the performance of antiviral therapies.
This discovery also has unexpected significance for the RTB creation model. It suggests a possible explanation for the presence of sequence elements called endogenous retroviruses (ERVs) in the human genome (and the genomes of other organisms). Many people regard the ERV sequences in the human genome as the most compelling evidence for human evolution. But, instead of baffling those of us who embrace a creation model for biology, the insights from the MIT team now make it possible to view ERVs as intentional features of genomes designed to serve a variety of purposes—including functioning as part of the innate immune system, offering protection from retrovirus invaders.
Before unpacking the impact of this study on the RTB creation model, a discussion of the work of the MIT scientists (which in and of itself is fascinating and important) is in order. And to do this well, we first need to briefly review the biology and replication of SARS-CoV-2.
SARS-CoV-2 Biology and Replication2

Viral invasion begins when SARS-CoV-2 virions attach to the surface of the host cell. Mediating this binding event is the interaction between the spike proteins that decorate the surface of the SARS-CoV-2 viral particles and the ACE-2 proteins which reside on the host cell’s surface. Once binding takes place, a protease in the host cell membrane cleaves the spike proteins. (Proteases are proteins that break apart protein chains.) This cleavage causes the host cell to engulf the attached SARS-CoV-2 virion, forming an endosome (which is a membrane-bound vesicle that arises from an invagination of the cell membrane). Once formed, the endosome migrates to the cell’s cytoplasm. Within the endosome, an unsheathing process takes place releasing the viral RNA from the protein capsid that surrounds it. As a result of the unsheathing process, the viral RNA finds its way into the host cell’s cytoplasm.
Here, the viral RNA makes its way to a ribosome where it is translated into two large polyproteins. Each of these large polyproteins consists of several individual protein sequences catenated together. Two of the individual proteins contained within the polyproteins are proteases. These internal proteases self-catalyze the cleavage of the two polyproteins. In doing so, they cause the other protein molecules harbored in the two polyproteins to be released as free-standing proteins.
Once released from the two polyproteins, some of the newly released proteins interact with one another to form a complex called the viral replication and transcription complex. This complex produces more copies of the viral genetic material through the activity of several enzymes, including one called an RNA-dependent RNA polymerase. The resulting viral RNA molecules are, in turn, translated to form more copies of the viral replication and transcription complex. They are also translated to produce the structural proteins needed to assemble more viral particles.
The viral replication and transcription complex associates with the host cell’s endoplasmic reticulum. This association leads to the formation of convoluted areas in the membrane of the endoplasmic reticulum. These convoluted areas form protected spaces that allow the viral RNA (produced by the viral replication and transcription complex) to be packaged with capsid proteins. Other structural proteins found in the lipid envelope surrounding the viral RNA-protein capsid complex, such as the spike protein, incorporate into the convoluted membranes of the endoplasmic reticulum. From here the encapsulated viral RNA and viral membrane proteins can be processed through the Golgi apparatus and secreted into the extracellular space through the process of exocytosis.
A Bold Hypothesis

Though life scientists possess detailed insight into much of the biology and the replication of SARS-CoV-2, nothing in their understanding immediately sheds light on the positive PCR tests that persist after COVID-19 patients recover from their initial infection. The MIT investigators think that we may have overlooked a significant aspect of the molecular biology of SARS-CoV-2. Along these lines, they speculate that the SARS-CoV-2 genetic material becomes incorporated into the host cell’s genome and then becomes expressed, generating an ongoing source of viral genetic material.
Their hypothesis is a bit daring, because based on our current understanding of the replication of SARS-CoV-2—which relies on an RNA-dependent RNA polymerase to replicate its genetic material—researchers are hard-pressed to identify a mechanism for the SARS-CoV-2 material to make its way into the host genome.
However, retroviruses (which are also RNA viruses) have a well-characterized mechanism for incorporating their genetic material into the genome of the host cell. Before replication takes place, the retroviral RNA becomes converted into DNA. An enzyme called reverse transcriptase carries out this conversion. The genetic instructions needed to make reverse transcriptase are encoded in the retroviral genome. This protein is packaged in the retroviral capsid along with the retroviral RNA.
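The complementary-copying step that reverse transcriptase performs can be illustrated with a few lines of code. This is a toy sketch of the base-pairing logic, not the enzymology: each RNA base is paired with its DNA complement, and the strand is reversed because synthesis runs antiparallel to the template (the function name is mine).

```python
# Toy illustration of reverse transcription: an RNA template is copied
# into complementary DNA (cDNA), the step a reverse transcriptase performs.
RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the cDNA strand (written 5'->3') for an RNA template."""
    # Pair each RNA base with its DNA complement, then reverse so the
    # result reads 5'->3' rather than 3'->5'.
    return "".join(RNA_TO_DNA_COMPLEMENT[base] for base in reversed(rna))

print(reverse_transcribe("AUGGCU"))  # -> AGCCAT
```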
The newly made retroviral DNA can then use the invaded cell’s biosynthetic pathways to direct the production of new retroviral particles. The DNA copy of the retroviral genetic material can also become incorporated into the host cell’s genome. When this insertion takes place, the retroviral DNA becomes part of the host cell’s genome. This process is called endogenization.
Even though SARS-CoV-2 doesn’t encode a reverse transcriptase in its genome, the MIT scientists postulated that the SARS-CoV-2 RNA may still be reverse transcribed at some point during its replication cycle by endogenous copies of reverse transcriptase, encoded in the host genome. Biologists know that DNA sequences associated with retrotransposons (such as LINE sequences) found in the host cell’s genome encode for proteins with reverse transcriptase activity.
LINE sequences comprise about 20 percent of the human genome. These mobile DNA elements can make copies of themselves, with the copies becoming randomly inserted throughout the genome. Some LINE DNA sequences lie dormant in the human genome. Others can be transcribed into mRNA that, in turn, can be translated into a protein with reverse transcriptase activity. The LINE reverse transcriptase can make a copy of the LINE mRNA, converting it into DNA that can be integrated into the genome at a new location.
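The “copy and paste” behavior described above can be sketched as a toy simulation: each round inserts a new copy of a LINE element at a random genome position, so copies accumulate over time. The genome string, the LINE marker, and the function name are hypothetical placeholders, not real sequences.

```python
import random

# Toy sketch of LINE-style retrotransposition: a LINE element is copied
# (transcribed, reverse transcribed) and reinserted at a random position,
# so copies accumulate in the genome over successive rounds.

def retrotranspose(genome: str, line_element: str, rng: random.Random) -> str:
    """Insert one new copy of the LINE element at a random genome position."""
    position = rng.randrange(len(genome) + 1)
    return genome[:position] + line_element + genome[position:]

rng = random.Random(0)  # seeded for reproducibility
genome = "AAAATTTTCCCCGGGG"
line = "LINE"  # placeholder marker standing in for a real LINE sequence
for _ in range(3):
    genome = retrotranspose(genome, line, rng)
print(len(genome))  # -> 28 (original 16 bases plus three 4-character copies)
```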
The research team speculated that once it has been synthesized by endogenous reverse transcriptase activity, the SARS-CoV-2 DNA can become integrated into the host cell’s genome. Here, the SARS-CoV-2 DNA sequences can be transcribed, producing viral RNAs that are detected by PCR tests.
An Unexpected Discovery

In support of their hypothesis, the MIT team discovered that:
1) Cultured human cells exposed to SARS-CoV-2 in the laboratory and cells taken from patients infected with SARS-CoV-2 both produce mRNA molecules that include those with hybrid sequences made up of host cell genes and viral genes. This observation suggests that viral genetic material has been incorporated in the host cell’s genome.

2) The predominant viral gene sequences that occur in the hybrid RNA molecules encode the capsid proteins. mRNA molecules that encode the capsid proteins are the most abundant of the viral mRNAs in the cell. These mRNA molecules would be readily available to be reverse transcribed by endogenously sourced reverse transcriptase.

3) In the laboratory, when cells belonging to the HEK293 line are forced to overexpress LINE DNA sequences, SARS-CoV-2 RNA becomes reverse transcribed upon exposure of the cells to virions. The resulting SARS-CoV-2 DNA becomes incorporated into the genome of the HEK293 cells.

4) Exposure of cells to SARS-CoV-2 virions induces LINE expression.
Collectively, these findings provide compelling circumstantial evidence that endogenous reverse transcriptase converts SARS-CoV-2 RNA into DNA and this DNA, in turn, becomes integrated into the host cell’s genome. The evidence also indicates that only portions of the SARS-CoV-2 genome become incorporated into the host cell’s genome. As a result, when the host cell’s machinery transcribes these sequences to produce RNA, it can’t produce virions that can be transmitted to other people.
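The first line of evidence, hybrid host-viral mRNAs, rests on detecting chimeric sequences. A deliberately simplified sketch of that idea follows; real pipelines align sequencing reads against both the human and viral genomes, and the marker fragments and function name here are hypothetical.

```python
# Toy sketch of flagging chimeric (host-viral hybrid) transcripts: a read
# counts as chimeric when it contains both a host-derived fragment and a
# viral-derived fragment. Exact string matching is a deliberate simplification.

HOST_MARKERS = {"ACTGGA"}   # hypothetical host exon fragment
VIRAL_MARKERS = {"GGCTTA"}  # hypothetical viral genome fragment

def is_chimeric(read: str) -> bool:
    """True if the read carries both host and viral sequence."""
    has_host = any(m in read for m in HOST_MARKERS)
    has_viral = any(m in read for m in VIRAL_MARKERS)
    return has_host and has_viral

reads = [
    "TTACTGGATT",      # host only
    "AAGGCTTAAA",      # viral only
    "ACTGGACCGGCTTA",  # hybrid: host fragment fused to viral fragment
]
print([is_chimeric(r) for r in reads])  # -> [False, False, True]
```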
This discovery carries several important scientific and biomedical implications.
• It explains the unusual PCR results returned for patients who have recovered from COVID-19.
• It will help guide the development of antiviral therapies if the presence of SARS-CoV-2 genetic material in the patient is used to monitor the effectiveness of the antiviral treatment.
• It indicates that the reverse transcription of viral genetic material and its incorporation into the host cell genome may not be limited to SARS-CoV-2. It might be a general feature of other RNA viruses.
This discovery also has implications for the RTB creation model. It gives insight as to why ERV sequences are found in genomes. By doing so, it becomes increasingly reasonable to view ERV sequence elements as the intentional handiwork of a Creator, instead of a reflection of our evolutionary history.
Endogenous Retroviruses and the Case for Human Evolution

On becoming incorporated into an organism’s genome, retroviral DNA is called an ERV. If the retrovirus infects a germline cell (a sperm cell or an egg cell), the ERV can be inherited, transmitted from generation to generation as a permanent feature of the genome.
If the ERV DNA suffers severe mutations, it becomes disabled, remaining in the genome as nonfunctional, junk DNA. As it turns out, about 8 percent of the human genome consists of ERVs.
Evolutionary biologists consider the endogenous retroviral populations found in the human genome as evidence that humans have an evolutionary history shared with the great apes. Many human ERVs are also found in the genomes of chimpanzees, bonobos, gorillas, and orangutans. Not only do these ERVs share many of the same sequence patterns, but they also appear in corresponding locations in the genomes.
Evolutionary biologists explain this data by assuming that the shared ancestor of humans and chimpanzees, for example, became infected by these specific retroviruses. Later, the endogenized retroviruses experienced mutations that disabled them. These ERV sequences were retained in the genomes of humans and chimpanzees as their separate evolutionary lineages diverged from the common ancestor. According to the model, the ERVs shared by humans and chimpanzees represent the molecular artifacts of infections that occurred millions of years ago and left their imprint on contemporary genomes via this (presumed) shared ancestor.
Many people consider the presence of ERVs in the human genome (and the genomes of other organisms) to be baffling for the RTB creation model.
• Why would the Creator introduce the same nonfunctional sequence elements in the same locations within the genomes of organisms that naturally group together (based on other biological features)?
• And why would he create these shared sequence elements to bear such strong similarity to retroviruses?
Yet, the RTB creation model predicts that the Creator would have intentionally designed and incorporated ERVs into genomes to serve vital roles. The unexpected discovery that SARS-CoV-2 genetic material becomes incorporated into the host cell’s genome bears on this key prediction. Once incorporated into the genome of host cells, the activity and impact of the SARS-CoV-2 sequences suggest at least one reason why a Creator would intentionally incorporate ERV sequences into the human genome (and the genomes of other creatures) and why these sequences share so much similarity to retroviral sequences.

Impact of SARS-CoV-2 Sequences in the Host Cell Genome

The MIT researchers believe that the SARS-CoV-2 genetic material only becomes incorporated into the genomes of a limited number of cells. And of those cells, only a limited number express the viral genes. Still, they argue that this expression could have important consequences. They speculate that viral proteins that result from the expression of the incorporated viral genes could continuously stimulate the immune system. In doing so, the viral proteins provide the patient with ongoing immunity against SARS-CoV-2, serving as part of the repertoire of proteins and cells that imparts immune memory to the infected individual long after they clear the infection. In this way, the incorporated SARS-CoV-2 genetic material housed in the genomes of host cells acts as a type of endogenous DNA vaccine.
As it turns out, the MIT team’s discovery is not the first time that life scientists have detected genetic material from nonretroviral RNA viruses becoming incorporated into the genome of host cells during a viral infection. In 1997, a Swiss research team reported that the genetic material from lymphocytic choriomeningitis virus (LCMV)—an RNA virus—also becomes reverse transcribed into DNA by endogenous reverse transcriptase activity. In turn, the viral DNA then becomes incorporated in the genomes of mouse cells. This DNA serves as an ongoing source of viral proteins that also appear to contribute to immune memory in mice.3
A Proposed Function for ERVs

Based on this insight, I propose a similar role for ERVs in the human genome (and genomes of other animals). That is, I predict that one of the roles of ERVs is to function as a type of DNA vaccine designed into the genomes of organisms, with the proteins produced from the expressed ERV sequences stimulating the immune system, helping it ward off retroviral infections.
In 2019, a research team from China discovered that ERVs in the mouse genome produced RNA transcripts at high expression levels during exposure to RNA viruses.4 These ERV transcripts played a role in activating genes that led to interferon production, contributing to innate immunity in mice. While key aspects of this mechanism differ from the one I propose, it does demonstrate that RNA viral infections upregulate the expression of ERV sequences and establishes the link between expression of ERV DNA sequences and innate immunity.
Considering my proposal, it is also worth noting the work of researchers from the United States, Germany, and Australia. These investigators demonstrated that ERVs in the koala genome serve an antiretroviral role by disrupting the endogenization process of the koala endogenous retrovirus (KoRV). (See “Koala Endogenous Retroviruses (ERVs) Protect Against Retroviral Infections.”) This mechanism is also distinct from the one I propose. Yet it highlights the fact that ERV sequences may play an antiviral role through a variety of distinct mechanisms.
ERVs: Common Descent or Common Design?

Many life scientists regard the shared biological features possessed by organisms (that naturally cluster together) as evidence for their shared evolutionary ancestry. Yet, it is possible to advance an alternative explanation for biological similarities. Instead of evincing common descent, they could be interpreted as shared biological designs. In fact, prior to Charles Darwin, Sir Richard Owen produced a theoretical framework to interpret anatomical and physiological similarities shared among organisms. Owen saw these mutual features as manifestations of a common blueprint—an archetype that arose out of the Mind of the One True Cause.
The RTB creation model employs Owen’s insight by interpreting the shared features in the genomes of organisms as manifestations of genomic archetypes. In other words, the genetic similarities in the genomes of humans and the great apes were intentionally introduced by the Creator. To justify this interpretation, the shared genomic features must serve a function. And, indeed, this is the case for ERVs. These sequence elements appear to function as part of the innate immune system, helping to ward off retroviral infections through a variety of mechanisms that life scientists are just beginning to understand.
The antiviral role played by ERVs is largely possible because of the similarity between these DNA sequences and the genetic material of retroviruses. This requirement explains why a Creator would introduce genetic elements into the human genome (and the genome of other creatures) that share sequence elements with retroviruses.
If the last decade or so has taught us anything, it is this: Science is in its infancy when it comes to understanding the human genome. Increasingly, features that we originally thought were junk sequences turn out to make critical contributions. Such is the case for ERVs. The more we learn about these abundant sequence elements in the human genome, the more sense these sequence elements make for those of us who view biology through the lens of a creation model.
1. Liguo Zhang et al., “Reverse-Transcribed SARS-CoV-2 RNA Can Integrate into the Genome of Cultured Human Cells and Can Be Expressed in Patient-Derived Tissues,” Proceedings of the National Academy of Sciences, USA 118, no. 21 (May 25, 2021): e2105968118, doi:10.1073/pnas.2105968118.
2. Philip V’kovski et al., “Coronavirus Biology and Replication: Implications for SARS-CoV-2,” Nature Reviews Microbiology 19 (March 2021): 155–170, doi:10.1038/s41579-020-00468-6.
3. Paul Klenerman, Hans Hengartner, and Rolf M. Zinkernagel, “A Non-Retroviral RNA Virus Persists in DNA Form,” Nature 390 (November 20, 1997): 298–301, doi:10.1038/36876.
4. Bin Zhou et al., “Endogenous Retrovirus-Derived Long Noncoding RNA Enhances Innate Immune Responses via Derepressing RELA Expression,” mBio 10, no. 4 (July/August 2019): e00937-19, doi:10.1128/mBio.00937-19.
Where is everybody? This common query represents the bewilderment of decades of futility in the search for extraterrestrial intelligent life (ETI). Part of the more recent disappointment stems from what scientists call the red sky paradox.1 It refers to at least five enigmatic facts.
The first is that red dwarf stars are by far the most numerous stars in the universe, accounting for 78% of all nuclear-burning stars (stars that are actively fusing lighter elements into heavier elements) in the Sun’s vicinity.2 Second, red dwarf stars sustain nuclear burning much longer than other stars. Compared to the Sun, red dwarf stars maintain nuclear burning 20 times longer on average. Third, red dwarf stars, during their nuclear-burning history, brighten at a much slower pace than all other nuclear-burning stars. This slower brightening pace means more stable temperatures on the surfaces of planets orbiting red dwarf stars.
The fourth fact is that astronomers have observed an abundance of rocky planets orbiting red dwarf stars. About 59% of all rocky planets discovered so far are hosted by red dwarf stars.3 Fifth, about a third of these rocky planets are temperate, meaning they orbit their host stars at distances where it is possible for liquid water to exist on at least small parts of the planets’ surfaces for at least some fraction of their host star’s nuclear-burning history.
Assume that all temperate rocky planets host microbial life and that, given sufficient time (namely, a few billion years), microbial life will evolve into advanced intelligent life. On these assumptions, we should have discovered ETI by now on a large number of planets orbiting red dwarf stars. The fact that we have not constitutes the paradox. In other words, given that red dwarf stars are so predominant, long-lived, luminosity stable (average brightness changes gradually), and prolific in producing rocky planets, why do we not find ourselves orbiting a red dwarf star?
On the other hand, if the evolution of intelligent life from microbial life is not an ensured and universally rapid process (one occurring within a few billion years), then the advantage of red dwarf stars, owing to their much greater longevity, becomes even greater. The red sky paradox becomes all the more perplexing.
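The weight of numbers behind the paradox can be sketched with a back-of-envelope calculation using the article’s rounded figures. This is an illustration, not a population model, and applying the same temperate fraction to non-red-dwarf stars is my simplifying assumption:

```python
# Back-of-envelope weighting behind the red sky paradox, using the
# article's rounded figures. Expected long-lived temperate worlds are
# taken to scale as (share of rocky planets) x (temperate fraction)
# x (nuclear-burning lifetime factor). Illustrative only.

def expected_hosts(rocky_share, temperate_fraction, lifetime_factor):
    """Relative expectation weight for a class of host stars."""
    return rocky_share * temperate_fraction * lifetime_factor

# Red dwarfs: ~59% of known rocky planets, ~1/3 of them temperate,
# and nuclear burning that lasts ~20x longer than the Sun's.
red = expected_hosts(0.59, 1 / 3, 20.0)

# All other stars: the remaining 41% of rocky planets. The matching
# temperate fraction is my simplifying assumption; lifetime factor 1.
other = expected_hosts(0.41, 1 / 3, 1.0)

print(f"Red dwarfs are favored by a factor of ~{red / other:.0f}")
```

On these numbers alone, long-lived temperate worlds should be roughly 29 times more common around red dwarfs than around all other stars combined, which is why the absence of any ETI detection there, and our own yellow-star address, reads as a paradox.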
Proposed Resolutions to the Red Sky Paradox

In a paper published in the Proceedings of the National Academy of Sciences USA, computational astrophysicist David Kipping proposed four non-mutually exclusive resolutions to the red sky paradox.4
Proposal #1: The first of his proposed resolutions is that we humans find ourselves orbiting a yellow dwarf star rather than a red dwarf star by an extraordinary stroke of random chance. That is, our star, planet, and species are extreme statistical outliers.
Kipping did not find his first proposal at all satisfying. In his words, this proposal has the effect of “softening the red sky paradox, but exacerbating the classic Fermi paradox.”5 The Fermi paradox, named after physicist Enrico Fermi, is the apparent contradiction between the calculated high probabilities for the existence of ETI and the complete lack of substantiated evidence for ETI.
Proposal #2: Kipping’s second proposal is that life on a planet with a red sky (sky illuminated by a red dwarf star) may be inhibited. That is, it could be that life requires significant, enduring exposure to 300–450 nanometer wavelengths (long ultraviolet, visible violet, and blue wavelengths) and not just the 600–800 nanometer wavelengths (visible orange, visible red, and infrared wavelengths) that radiate from red dwarf stars.
Proposal #3: Kipping’s third proposal is that the time window for advanced life on a planet with a red sky (sky illuminated by a red dwarf star) may be brief. Red dwarf stars spend much more time than other stars in the pre-main sequence phase. The pre-main sequence phase is the time between star formation and the ignition of nuclear fusion. Red dwarf stars spend 0.1–1.1 billion years in the pre-main sequence phase. During this phase, red dwarf stars’ luminosities are much higher than they are during the main sequence phase of nuclear fusion burning.6 These greater luminosities imply high probabilities of generating moist greenhouse threshold events on liquid-water-bearing planets hosted by these stars.7
A moist greenhouse threshold event occurs when the brightening of a planet’s host star causes a greater portion of the planet’s liquid water to be transformed into water vapor. Water vapor is a greenhouse gas. More water vapor in the planet’s atmosphere leads to a higher surface temperature, which causes yet more liquid water to be transformed into water vapor, producing even higher surface temperatures, and so on.
Eventually, so much water vapor builds up in the planet’s troposphere that this water vapor escapes into the planet’s stratosphere. The host star’s radiation dissociates this stratospheric water vapor, splitting water molecules into hydrogen and oxygen atoms. The resultant hydrogen escapes to interplanetary space. With sufficient time, this hydrogen escape results in the complete desiccation of the planet.8 Given how long red dwarf stars spend in the pre-main sequence phase, most liquid-water-bearing planets orbiting red dwarf stars end up bone-dry long before their host stars enter the main sequence phase, where the stars’ luminosities become stable enough to make life possible.
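The runaway character of this feedback can be sketched as a toy positive-feedback loop. In this minimal model (all numbers invented for illustration; this is not a climate model), each unit of warming evaporates vapor that produces a further fraction g of warming, so the total is the geometric series 1 + g + g^2 + ..., which stays finite for g < 1 and grows without bound once g >= 1:

```python
# Toy moist greenhouse feedback: warming evaporates vapor, vapor traps
# heat, which causes more warming. With feedback gain g per step, the
# cumulative warming is the series 1 + g + g^2 + ... (illustrative only).

def total_warming(initial_forcing, gain, steps=1000):
    """Sum the feedback series for a fixed number of steps."""
    warming = 0.0
    increment = initial_forcing
    for _ in range(steps):
        warming += increment
        increment *= gain  # vapor from this step drives the next step
    return warming

stable = total_warming(1.0, 0.5)   # converges toward 1 / (1 - 0.5) = 2
runaway = total_warming(1.0, 1.1)  # diverges: the moist greenhouse case

print(f"gain 0.5 -> total warming {stable:.2f} (stable)")
print(f"gain 1.1 -> total warming {runaway:.2e} (runaway)")
```

The point of the sketch is qualitative: once a brightening star pushes the vapor feedback gain past the threshold, no equilibrium exists and the planet dries out.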
Proposal #4: Kipping’s fourth proposal is that the occurrence rate of Earth-sized planets hosted by red dwarf stars where the planets are both temperate and moist may be very low. Currently, the technology available to astronomers, with rare exceptions, is only able to detect planets orbiting the brighter red dwarf stars. It is not known if the number of planets orbiting the brighter red dwarf stars represents the total number orbiting all red dwarf stars. It seems unlikely that planets as large as Earth orbit the smaller and much more numerous red dwarf stars as abundantly as they orbit the largest red dwarf stars.
Astronomers typically assume that planets with surface temperatures that permit the existence of liquid water will indeed possess surface liquid water. For red dwarf stars, this assumption is likely incorrect. The detected population of Earth-sized planets orbiting red dwarf stars could be dominated by photo-evaporated cores of super-Earths—planets that were larger than Earth that have had their atmospheres and oceans stripped away by their host stars’ radiation.
Additional Red Sky Paradox Resolutions

In last week’s article, “Eyes, Sun, and Earth Designed to Prevent Myopia,” I explained the necessity of regular exposure to visible violet light to maintain the functionality of eyes. In 2016 (“Overlap of Habitable Zones Gets Much Smaller”), I explained why all life, not just organisms with eyes, requires a certain intensity and wavelength range of ultraviolet light to exist. This requirement means that, for life to possibly exist on a planet, the planet must orbit its star within the ultraviolet habitable zone. For planets orbiting red dwarf stars, the ultraviolet habitable zone never overlaps the liquid water habitable zone.
Since every conceivable physical life-form must reside on a planet or moon that orbits its host star within both the liquid water and ultraviolet habitable zones, all red dwarf stars are eliminated as candidates to host life-bearing planets. For this reason alone, the red sky paradox is resolved.
Red dwarf stars are much cooler than the Sun. Therefore, for planets orbiting red dwarf stars to be temperate enough for life, they must orbit at distances much closer than Earth orbits the Sun. This requirement poses a problem. Any planet with an atmosphere thicker than 1% of Earth’s (a requirement for life) that is closer to its host star than about 90% of Earth’s distance from the Sun will very likely possess an atmospheric electric field strong enough to completely dry out the planet.9 Venus, for example, has an atmospheric electric potential of about 10 volts.10
Additionally, all red dwarf stars emit powerful ultraviolet and X-ray flares. Astronomers used the MUSCLES (Measurements of the Ultraviolet Spectral Characteristics of Low-Mass Exoplanetary Systems) Treasury Survey to establish that, even for the quietest red dwarf stars, the flux of ultraviolet and soft X-rays is more than sufficient to erode away the atmospheres and hydrospheres of planets orbiting these stars that reside inside the liquid water habitable zone.11
Plus, three-dimensional magnetohydrodynamic models of the impact of the stellar winds of red dwarf stars show that, for planets orbiting these stars in the liquid water habitable zone, the stellar wind pressure on the planets is 100–100,000 times greater than the solar wind pressure on Earth.12 This greater stellar wind pressure will compress any possibly existing magnetospheres around the planets to a degree that erosion of the planetary atmospheres cannot be prevented. That is, the greater stellar wind pressure will speed up the erosion of planetary atmospheres and hydrospheres generated by the stellar flux of ultraviolet and soft X-rays.
Lastly, the tidal force a star exerts on one of its planets increases with the inverse cube of the distance between the star and the planet. Therefore, a planet need only be slightly closer to its host star than Earth is to the Sun before tidal forces cause the planet’s rotation period to become equal, or nearly equal, to its period of revolution. For such a tidally locked planet, one hemisphere will be blistering hot while the other will be freezing cold. All planets orbiting red dwarf stars within the liquid water habitable zone will be tidally locked.13 While liquid water conceivably could exist in the twilight zone on a planet’s surface (a transition zone at the edge of stellar illumination), atmospheric transport would move water to the coldest parts of the planet, where it would freeze.14
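The distance scaling can be made concrete with a two-line calculation, assuming the standard inverse-cube dependence of tidal force on separation; the 0.1 AU figure for a red dwarf’s liquid water habitable zone is a rough illustrative value of mine, not taken from the article:

```python
# Ratio of tidal forces at two orbital distances, assuming tidal force
# scales as 1/r^3 (the standard inverse-cube law for tides).

def tidal_ratio(r_close, r_far, exponent=3):
    """How much stronger tides are at r_close than at r_far (same star)."""
    return (r_far / r_close) ** exponent

# Slightly inside Earth's orbit, tides are already noticeably stronger...
print(f"0.9 AU vs 1.0 AU: {tidal_ratio(0.9, 1.0):.2f}x")

# ...and at ~0.1 AU (a rough red dwarf habitable-zone distance, my
# illustrative value) tides are about a thousand times stronger,
# which is why tidal locking is unavoidable there.
print(f"0.1 AU vs 1.0 AU: {tidal_ratio(0.1, 1.0):.0f}x")
```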
Philosophical Implications

The resolution of the red sky paradox means that red dwarf stars cannot possibly host any habitable planets. The combination of no atmosphere, no surface water, deadly persistent X-rays and far-ultraviolet radiation, and frequent major flare events on their host stars renders these planets unfit to support any kind of physical life. There is no escaping the fact that 78% of all stars are noncandidates for hosting planets on which physical life is possible.
For a planet to be truly habitable, it must reside not only in the temperate and liquid water habitable zones but also in all the other known planetary habitable zones. Thirteen such habitable zones are now known to exist.15 Of the 4,801 planets discovered to date,16 only one simultaneously resides in all 13 known planetary habitable zones. This planet also has all the design features needed to make advanced life and a global, high-technology civilization possible. It is Earth—the one planet that testifies of the exquisite handiwork, intelligence, and power of the Creator God of the Bible.
Jacob Haqq-Misra, Ravi Kumar Kopparapu, and Eric T. Wolf, “Why Do We Find Ourselves Around a Yellow Star Instead of a Red Star?” International Journal of Astrobiology 17, no. 1 (January 2018): 77–86, doi:10.1017/S1473550417000118; David Kipping, “Formulation and Resolutions of the Red Sky Paradox,” Proceedings of the National Academy of Sciences USA 118, no. 26 (June 2021): e2026808118, doi:10.1073/pnas.2026808118.
Glyn Collinson et al., “The Electric Wind of Venus: A Global and Persistent ‘Polar Wind’-Like Ambipolar Electric Field Sufficient for the Direct Escape of Heavy Ionospheric Ions: Venus Has Potential,” Geophysical Research Letters 43 (June 2016): 5926–5934, doi:10.1002/2016GL068327.
Collinson et al., “The Electric Wind.”
Allison Youngblood et al., “The MUSCLES Treasury Survey. IV. Scaling Relations for Ultraviolet, Ca II K, and Energetic Particle Fluxes from M Dwarfs,” Astrophysical Journal 843 (June 2017): 31, doi:10.3847/1538-4357/aa76dd.
Cecilia Garraffo et al., “The Threatening Magnetic and Plasma Environment of the TRAPPIST-1 Planets,” Astrophysical Journal Letters 843 (July 2017): L33, doi:10.3847/2041-8213/aa79ed.
Rory Barnes, “Tidal Locking of Habitable Exoplanets,” Celestial Mechanics and Dynamical Astronomy 129 (December 2017): 509–536, doi:10.1007/s10569-017-9783-7.
Biochemist Fuz Rana provides a compelling case for the use of Turing machines as a possible theoretical framework that could help biologists gain greater insight into life’s operation at its most basic level (see “Biochemical Turing Machines ‘Reboot’ the Watchmaker Argument”). His discussion is a modern formulation of a long-standing apologetics tradition called the teleological argument, more commonly known as the argument from design.
The Judeo-Christian worldview describes the world as displaying divine design (Ps. 19:1, Rom. 1:19–21). While these verses don’t spell out that specific features of creation are evidence of God’s intelligent nature, they do presuppose that the universe exhibits features of detectable divine design.
Medieval scholar St. Thomas Aquinas (1225–1274) long ago laid the foundation for the modern conception of the teleological argument:
We see that things which lack knowledge, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that they achieve their end, not fortuitously, but designedly… Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God (Aquinas, Summa Theologica, Question 2, Article 3).
Aquinas focused his version of the design argument on an object’s existence as having a purpose or end. Aquinas and those who followed him used analogies to argue that the presence of an end-directed system can most reasonably be explained by the existence of an intelligent Deity who put the system in place and directs it toward a goal.
The next step in the maturation of the teleological argument built on Aquinas’ thought. Seventeenth- and eighteenth-century scholars such as William Paley (1743–1805) developed a more sophisticated version of the design argument by incorporating the idea of analogy as a reliable indicator of intelligent design. Paley’s Watchmaker argument is the quintessential example of this progression. The watch’s ability to keep time depends on a precise arrangement of its parts, suggesting that the watch was designed to meet these specifications. We can then draw an analogy between the watch and the universe, observing that both exhibit the same kind of functional complexity. Since various aspects of nature possess functional complexity, which is a reliable indicator of an intelligent Agent, we can reasonably conclude that an intelligent Agent created these features with this property.
Proponents of the contemporary Intelligent Design movement have developed increasingly complex arguments founded on Aquinas and Paley’s initial ideas. Scholars such as William Dembski and Michael Behe describe various biological systems and their complexity and then use this data to make a case for divine design. Others such as Stephen Meyer look at information systems and probability arguments. Fuz Rana’s exploration of Turing machines stands in this grand stream of Christian apologetics and charts new territory in its proposal to develop fresh aspects of the teleological argument in the biological realm.
The days my wife Amy and I spent in graduate school studying biochemistry were some of the best of our lives. But it wasn’t all fun and games. For the most part, we spent long days and nights working in the lab.
But we weren’t alone. Most of the graduate students in the chemistry department at Ohio University kept the same hours we did, with all-nighters broken up around midnight by “Dew n’ Donut” runs to the local 7-Eleven. Even though everybody worked hard, some people were just more productive than others. I soon came to realize that activity and productivity were two entirely different things. Some of the busiest people I knew in graduate school rarely accomplished anything.
This same dichotomy lies at the heart of an important scientific debate taking place about the meaning of the ENCODE project results. This controversy centers around the question: Is the biochemical activity measured for the human genome merely biochemical noise or is it productive for the cell? Or to phrase the question the way a biochemist would: Is biochemical activity associated with the human genome the same thing as biochemical function?
The answer to this question doesn’t just have scientific implications. It impacts questions surrounding humanity’s origin. Did we arise through evolutionary processes or are we the product of a Creator’s handiwork?
If valid, the ENCODE results force a radical revision of the way scientists view the human genome. Instead of a wasteland littered with junk DNA sequences (as the evolutionary paradigm predicts), the human genome (and the genomes of other organisms) is packed with functional elements (as expected if a Creator brought human beings into existence).
Is Biochemical Activity the Same Thing As Function?
One of the technical complaints relates to how the ENCODE consortium determined biochemical function. Critics argue that ENCODE scientists conflated biochemical activity with function. For example, the ENCODE Project determined that about 60% of the human genome is transcribed to produce RNA. ENCODE skeptics argue that most of these transcripts lack function. Evolutionary biologist Dan Graur has asserted that “some studies even indicate that 90% of transcripts generated by RNA polymerase II may represent transcriptional noise.”1 In other words, the biochemical activity measured by the ENCODE project can be likened to busy but nonproductive graduate students who hustle and bustle about the lab but fail to get anything done.
Transcription is an energy- and resource-intensive process. Therefore, it would be untenable to believe that most transcripts are mere biochemical noise. Such a view ignores cellular energetics. Transcribing 60% of the genome when most of the transcripts serve no useful function would routinely waste a significant amount of the organism’s energy and material stores. If such an inefficient practice existed, surely natural selection would eliminate it and streamline transcription to produce transcripts that contribute to the organism’s fitness.
Most RNA Transcripts Are Functional
Recent work supports my intuition as a biochemist. Genomics scientists are quickly realizing that most of the RNA molecules transcribed from the human genome serve critical functional roles.
For example, a recently published report from the Second Aegean International Conference on the Long and the Short of Non-Coding RNAs (held in Greece, June 9–14, 2017) highlights this growing consensus. Based on the papers presented at the conference, the authors of the report conclude, “Non-coding RNAs . . . are not simply transcriptional by-products, or splicing artefacts, but comprise a diverse population of actively synthesized and regulated RNA transcripts. These transcripts can—and do—function within the contexts of cellular homeostasis and human pathogenesis.”2
Shortly before this conference was held, a consortium of scientists from the RIKEN Center for Life Science Technologies in Japan published an atlas of long non-coding RNAs transcribed from the human genome. (Long non-coding RNAs are a subset of RNA transcripts produced from the human genome.) They identified nearly 28,000 distinct long non-coding RNA transcripts and determined that nearly 19,200 of these play some functional role, with the possibility that this number may increase as they and other scientific teams continue to study long non-coding RNAs.3 One of the researchers involved in this project acknowledges that “There is strong debate in the scientific community on whether the thousands of long non-coding RNAs generated from our genomes are functional or simply byproducts of a noisy transcriptional machinery . . . we find compelling evidence that the majority of these long non-coding RNAs appear to be functional.”4
Copied by Design
Based on these results, it becomes increasingly difficult for ENCODE skeptics to dismiss the findings of the ENCODE project. Independent studies affirm the findings of the ENCODE consortium—namely, that a vast proportion of the human genome is functional.
We have come a long way from the early days of the Human Genome Project. When the project was completed in 2003, many scientists estimated that around 95% of the human genome consisted of junk DNA. In doing so, they seemingly provided compelling evidence that humans must be the product of an evolutionary history.
But, here we are, nearly 15 years later. And the more we learn about the structure and function of genomes, the more elegant and sophisticated they appear to be. And the more reasons we have to think that the human genome is the handiwork of our Creator.
1. Dan Graur et al., “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE,” Genome Biology and Evolution 5 (March 1, 2013): 578–90, doi:10.1093/gbe/evt028.
2. Jun-An Chen and Simon Conn, “Canonical mRNA is the Exception, Rather than the Rule,” Genome Biology 18 (July 7, 2017): 133, doi:10.1186/s13059-017-1268-1.
3. Chung-Chau Hon et al., “An Atlas of Human Long Non-Coding RNAs with Accurate 5′ Ends,” Nature 543 (March 9, 2017): 199–204, doi:10.1038/nature21374.
Having worked at the science-faith apologetics organization Reasons to Believe for more than 20 years, I’ve observed that scientists and philosophers often think differently about the world. Given the specialized training in their academic backgrounds, scientists and philosophers tend to ask different kinds of questions about reality and truth. Unfortunately, they also have a tendency to talk past one another. Recently, I had a social media interaction with a scientist about whether the findings of quantum mechanics invalidate the logical law of noncontradiction.
Here, in part 1 of 3 in this series, I’ll provide a little background on the laws of logic and the theory of physics known as quantum mechanics. Then I’ll share some of my interaction with the scientist about the relationship between the two topics.
Three Foundational Laws of Logic
The study of logic recognizes three laws of thought as bedrock principles: the law of noncontradiction, the law of excluded middle, and the law of identity. Their importance to human thought and discourse cannot be overstated. These logical anchors, so to speak, can be stated to reflect a metaphysical perspective (what is or is not—being) or an epistemological perspective (what can be true or not true—truth).1
Here are the three logical laws stated and explained:
1. The law of noncontradiction: A thing, A, cannot at once be and not be (A cannot equal A and equal non-A at the same time and in the same way); A and non-A are mutually exclusive (not both). A dog cannot be both a dog and a non-dog.
2. The law of excluded middle: A thing, A, either is or is not, but not both and not neither (either A or non-A); A and non-A are jointly exhaustive—one of them must be true. There is no middle ground between a dog and a non-dog.
3. The law of identity: A thing, A, is what it is (A is A). A dog is a dog.
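In their propositional form, the three laws can be illustrated with a short truth-table check. This is only a finite sketch of Boolean instances of the laws, not a substitute for their full metaphysical statement:

```python
# Truth-table check of propositional instances of the three laws.

def law_of_noncontradiction(a: bool) -> bool:
    # Not both A and non-A: not (A and not A)
    return not (a and (not a))

def law_of_excluded_middle(a: bool) -> bool:
    # Either A or non-A, with no third option: A or not A
    return a or (not a)

def law_of_identity(a: bool) -> bool:
    # A is A
    return a == a

# Exhaust the truth table: each law holds for every truth value of A.
for a in (True, False):
    assert law_of_noncontradiction(a)
    assert law_of_excluded_middle(a)
    assert law_of_identity(a)

print("All three laws hold as tautologies.")
```

Because the checks pass for every possible truth value of A, these propositional forms are tautologies: no assignment of truth values can falsify them.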
Law of Noncontradiction (LNC)
To help explain further, here is an example of a logical contradiction from the claims of two world religions:
A. Jesus Christ is God incarnate (Christianity).
B. Jesus Christ is not God incarnate (Islam).
According to the LNC, these two statements (represented as A and B) negate or deny one another. In other words, if statement A is true, then statement B is false, and conversely. Thus, logically, both of these statements cannot be true. So contradictory relationships reflect a “not both true” status.
For a basic understanding of quantum mechanics (QM), Live Science defines it this way:
Quantum mechanics is the branch of physics relating to the very small.
It results in what may appear to be some very strange conclusions about the physical world. At the scale of atoms and electrons, many of the equations of classical mechanics, which describe how things move at everyday sizes and speeds, cease to be useful. In classical mechanics, objects exist in a specific place at a specific time. However, in quantum mechanics, objects instead exist in a haze of probability; they have a certain chance of being at point A, another chance of being at point B and so on.2
The challenge of QM in the context of the LNC is that light (a subatomic object) seems to be both a wave and a particle simultaneously, thus A and non-A.
Here is what a scientist said to me on social media:
The law of noncontradiction is violated by solid empirical science. At the quantum level, a subatomic particle can be in multiple locations at the same time. A particle can be both a wave and a particle. At the quantum level, cause may occur after effect. If this is true at the molecular base of our reality, how strongly can we hold on to the law of noncontradiction?
I responded by thanking the scientist and saying that philosophers and scientists need to dialogue with each other more on these kinds of topics. I then offered my brief take on the issue.
The LNC cast metaphysically (in terms of being) states the following: “Nothing can both be and not be at the same time and in the same respect.” I don’t think quantum mechanics actually denies the law of noncontradiction. What we can say is that under certain experimental conditions, light (a subatomic object) appears as a wave. But under other experimental conditions, light appears as a particle. So subatomic objects are not particles that are also nonparticles or waves that are also nonwaves; they are objects that behave sometimes like particles and sometimes like waves. Light behaves as a wave and a particle in different experimental conditions and, thus, in different logical respects. Hence, the experimental results of QM do not invalidate the LNC (A cannot equal A and equal non-A at the same time and in the same relationship).
The fundamental problem with any denial of the LNC is that the laws of logic make rational thought possible. In this very case, both a scientist and a philosopher exchanged ideas under the assumption of existing laws of logic. Thus, philosophers need input from scientists just as scientists need input from philosophers. And Christians would do well to populate both critical disciplines.
If I were to summarize the issue so you can use it on social media, I would say that quantum mechanics is counterintuitive to our ordinary notion of how larger objects behave, but it is not a genuine violation of the law of noncontradiction. The laws of logic are considered necessary and inescapable because all thought, correspondence, and action presuppose their truth and application.
Reflections: Your Turn
Can you concisely state and explain the three laws of logic? Have you used them in your interactions? Visit Reflections on WordPress to comment with your response.
My answer: Based on Genesis 8:22 alone, the answer is no: “As long as the earth endures, seedtime and harvest, cold and heat, summer and winter, and day and night will not cease.” Scientifically, any stoppage in Earth’s rotation or dramatic alteration of the positions of the Sun and Moon would have wiped out all life on Earth. The miracle described in Joshua 10 is likely limited to the geography of the Valley of Aijalon, a relatively small valley.
Joshua 10 is not an easy text to translate into English. The original biblical Hebrew text here can mean either that God brought about an extended period of light, roughly 24 hours in duration, in the Valley of Aijalon or that he brought about an extended period of darkness lasting about 24 hours. Which option is preferred depends on whether one thinks Joshua’s troops needed light to see their enemies or whether they needed the coolness of night to continue the battle. Since the Valley of Aijalon is located in the Negev Desert, which is noted for its heat, most Old Testament scholars prefer the latter interpretation.
One possible explanation for the miracle of Joshua’s long day would be for God to alter the local weather so as to bring very heavy, low-lying clouds over the Valley of Aijalon, sufficient to make the valley as dark, or nearly as dark, as night even at noon. While such a meteorological event occurs on rare occasions in mid- and high-latitude rain forests, it is unheard of in the Negev Desert. Joshua and his troops would immediately have recognized the event as a divine miracle.
Another possible explanation for the miracle of Joshua’s long day would be for God to do something akin to the miracles he performed during the exodus from Egypt. There God either put a “pillar of fire” or a “pillar of cloud” adjacent to the Israelite encampment. In the Valley of Aijalon God could have illuminated the valley with “fire” or darkened the valley with “cloud.”
God is light both spiritually and physically. Yet another option for Joshua’s long day would be for God to shine his Shekinah glory into the valley. God also has the power to cause a dark shadow to enshroud the valley.
Joshua 10 gives no specific details on the nature of the extended darkness or extended light in the Valley of Aijalon or exactly how God performed the miracle. Therefore, the list of options for the miracle of Joshua’s long day I have just described is not complete. I provide a more thorough explanation of the miracle of Joshua’s long day in a DVD entitled Mysteries Examined, a video recording of a television episode in which I describe the miracles of Joshua’s long day, Hezekiah’s sundial, and Jonah and the whale.
Question of the week: You have said that the Bible is unique in that it possesses predictive power. Can you give a couple of examples?
My answer: The Bible contains hundreds of predictions, both of future events in human affairs and civilization and of future scientific discoveries. Most of these predictions have since been fulfilled. The Bible has never been proven wrong in its predictions.
The Book of Daniel gives many specific predictions about the rise and fall of future empires. For example, Daniel 8 records a vision Daniel received while in Babylon during the third year of King Belshazzar’s reign. The recorded interpretation of the vision that Daniel receives states that an alliance of Media (base of the Medes) and Persia will establish an empire that will replace the Babylonian empire and become much larger. Later, a powerful king from the west, in Greece, will conquer the Median-Persian empire. However, at the height of his power his empire will be divided into four kingdoms. History reveals that these events were fulfilled exactly as predicted in the Book of Daniel. Skeptics of the Bible do not deny any of this but try to claim that Daniel was written in the second century BC and not during the sixth century BC reign of the Babylonian king Belshazzar. However, the words and style of writing in Daniel 8 are the same as those of sixth century BC Babylonian literature and radically different from second century BC literature. Furthermore, portions of the Book of Daniel appear in the Dead Sea Scrolls and all of Daniel is present in the Septuagint, affirming that the Book of Daniel must predate the second century BC by several centuries.
An example of the Bible correctly predicting future scientific discoveries is its repeated statements about the fundamental features of big bang cosmology. Different Bible authors state that the universe has a beginning, that it expands from its cosmic beginning, that the physical laws governing the universe have not changed, and that one of those laws is a pervasive law of ongoing decay. Old Testament scholar John Rea and I cite the Bible passages and provide an exposition of them in the context of the word definitions and grammar of the original languages in this article.1
Another example of the Bible correctly predicting future scientific discoveries is the stated order and description of creation events in Genesis 1. I have written an entire book documenting the Bible’s predictive power in describing future scientific discoveries in the first few chapters of Genesis. That book is Navigating Genesis.2 Anyone can get a free chapter of the book at reasons.org/ross. I also have since written two blogs, here and here, describing new scientific discoveries that provide additional evidence for the Bible’s predictive success in describing future scientific discoveries.3
A being so powerful and so full of knowledge as a God who could create the universe, is to our finite minds omnipotent and omniscient, and it revolts our understanding to suppose that his benevolence is not unbounded, for what advantage can there be in the sufferings of millions of lower animals throughout almost endless time? This very old argument from the existence of suffering against the existence of an intelligent first cause seems to me a strong one; whereas, as just remarked, the presence of much suffering agrees well with the view that all organic beings have been developed through variation and natural selection.1
—Charles Darwin, The Autobiography of Charles Darwin
If God exists and if he is all-powerful, all-knowing, and all-good, why is there so much pain and suffering in the world? This conundrum keeps many skeptics and seekers from the Christian faith and even troubles some Christians.
Perhaps nothing epitomizes the problem of pain and suffering more than the cruelty observed in nature. Indeed, what advantage can there be in the suffering of millions of animals?
Often, the pain and suffering animals experience is accompanied by unimaginable and seemingly unnecessary cruelty.
Take nematodes (roundworms) as an example. There are over 10,000 species of nematodes. Some are free-living. Others are parasitic. Nematode parasites infect humans, animals, plants, and insects, causing untold pain and suffering. But their typical life cycle in insects seems especially cruel.
Nematodes that parasitize insects usually are free-living in their adult form but infest their host in the juvenile stage. The infection begins when the juvenile form of the parasite enters the insect host, usually through a body opening such as the mouth or anus. Sometimes the juveniles drill through the insect’s cuticle.
Once inside the host, the juveniles release bacteria that infect and kill the host, liquefying its internal tissues. As long as the supply of host tissue holds out, the juveniles will live within the insect’s body, even reproducing. When the food supply runs out, the nematodes exit the insect and seek out another host.
Figure 1: An Entomopathogenic Nematode Juvenile. Image credit: Shutterstock
Why would God create a world with parasitism? Could God really be responsible for a world like the one we inhabit? Many skeptics would answer “no” and conclude that God must not exist.
A Christian Response to the Problem of Evil
One way to defend God’s existence and goodness in the face of animal pain and suffering is to posit that there just might be good reasons for God to create the world the way it is. Perhaps what we are quick to label as evil may actually serve a necessary function.
This perspective gains support based on some recent insights into the benefits that insect parasites impart to ecosystems. A research team from the University of Georgia (UGA) recently unearthed one example of the important role played by these parasites.2 These researchers demonstrated that nematode-infected horned passalus beetles (bess beetles) are more effective at breaking down dead logs in the forest than their parasite-free counterparts—and this difference benefits the ecosystem. Here’s how.
The Benefit Parasites Provide to the Ecosystem
The horned passalus lives in decaying logs. The beetles consume wood through a multistep process. After ingesting the wood, these insects excrete it in a partially digested form. The wood excrement becomes colonized by bacteria and fungi and then is later re-consumed by the beetle.
These insects can become infected by a nematode parasite (Chondronema passali). The parasite inhabits the abdominal cavity of the beetle (though not its gastrointestinal tract). When infected, the horned passalus can harbor thousands of individual nematodes.
To study the effect of this parasite on the horned passalus and the forest ecosystem inhabited by the insect, researchers collected 113 individuals from the woods near the UGA campus. They also collected pieces of wood from the logs bearing the beetles.
In the laboratory, they placed each of the beetles in separate containers that also contained pieces of wood. After three months, they discovered that the beetles infected with the nematode parasite processed 15 percent more wood than beetles that were parasite-free. Apparently, the beetles compensate for the nematode infection by consuming more food. One possible reason for the increased wood consumption is that the parasites draw essential nutrients away from the beetle host, requiring the insect to consume more food.
While it isn’t clear how much the parasite infestation harms the beetle (infected beetles do show reduced mobility and loss of motor function), it is clear that the infestation benefits the ecosystem. These beetles play a key role in breaking down dead logs and returning nutrients to the forest soil. By increasing the beetles’ wood consumption, the nematodes accelerate this process, benefiting the ecosystem’s overall health.
Cody Prouty, one of the project’s researchers, points out “that although the beetle and the nematode have a parasitic relationship, the ecosystem benefits from not only the beetle performing its function, but the parasite increasing the efficiency of the beetle. Over the course of a few years, the parasitized beetles could process many more logs than unparasitized beetles, and lead to an increase of organic matter in soils.”3
This study is not the first to discover benefits parasites impart to ecosystems. Parasites play a role in shaping ecosystem biodiversity and are intertwined with the food web. The researchers close their article this way: “Countering long-standing unpopular views of parasites is certainly challenging, but perhaps evidence like that presented here will be of use in this effort.”4
Such evidence does not “revolt our understanding,” as Darwin might suggest, but instead enhances our insights into the creation and helps counter the challenge of the problem of evil. Even creatures as gruesome as parasites can serve a beneficial purpose in creation and maybe could rightfully be understood as good.
Charles Darwin, The Autobiography of Charles Darwin: 1809–1882 (New York: W. W. Norton, 1969), 90.
Andrew K. Davis and Cody Prouty, “The Sicker the Better: Nematode-Infected Passalus Beetles Provide Enhanced Ecosystem Services,” Biology Letters 15, no. 5 (2019): 20180842, doi:10.1098/rsbl.2018.0842.
I am an "Intelligent Design" writer and a Christian. Part of my background is a degree in physics, and I have been inducted into Sigma Pi Sigma, the National Physics Honor Society, for life. My interest has led me into metaphysics and deeper into Christianity. Optimum metaphysics becomes religion.