The Big Bang: A Simple Explanation

Big bang cosmology is an explosive topic.

Heated reactions—and bitter resistance—have arisen from opposite directions in the last century but, ironically, for the same type of reasons: religious reasons. One group of big bang opponents includes those who understand the theory’s implications, and the other, those who misunderstand them.

People in the first group understand that the big bang denies the notion of an uncreated or self-existent universe. The big bang theory, based on the accumulated data of centuries, points to a supernatural beginning and a purposeful (hence personal), transcendent (beyond the boundaries of space, time, matter, and energy) Beginner. Those who reject the reality of God or the knowability of God would, of course, find such an idea repugnant, an affront to their philosophical worldview. Similarly, it would offend those who want to spell universe with a capital U, who have been trained to view the universe itself as ultimate reality and as the totality of all that is real. Again, their response is religious.

People in the second group hate the big bang because they mistakenly think it argues for rather than against a godless theory of origins. They associate “big bang” with blind chance. They see it as a random, chaotic, uncaused explosion when it actually represents exactly the opposite. They reject the date it gives for the beginning of the universe, thinking that to acknowledge a few billion years is to discredit the authority of their holy books, whether the Koran, the Book of Mormon, or the Bible.1,2 Understandably, these people either predict the theory’s ultimate overthrow or choose to live with a contradiction at the core of their belief system.

Despite opposition from outspoken enemies, the fundamentals of the big bang model, which is actually a cluster of slightly differing models, stand secure. In fact, they stand more firmly than ever with the aid of the theory’s most potent and important allies: the facts of nature and the technological marvels that bring them to light, as well as the men and women who pursue and report those facts.3 The following pages offer a summary of the accumulated data supporting the big bang, giving special attention to eight of the most recent and significant confirmations.

A problematic term

The big bang is NOT a big “bang” as most lay people would comprehend the term. This expression conjures up images of bomb blasts or exploding dynamite. Such a “bang” would yield disorder and destruction. In truth, this “bang” represents an immensely powerful yet carefully planned and controlled release of matter, energy, space, and time within the strict confines of exquisitely fine-tuned physical constants and laws that govern their behavior and interactions.4 The power and care this explosion reveals exceeds human potential for design by multiple orders of magnitude.

Why, then, would astronomers retain the term? The simplest answer is that nicknames, for better or for worse, tend to stick. In this case the term came not from proponents of the theory but rather, as one might guess, from a hostile opponent. British astronomer Sir Fred Hoyle coined the expression in the 1950s as an attempt to ridicule the big bang, the up-and-coming challenger to his “steady state” hypothesis. He objected to any theory that would place the origin, or Cause, of the universe outside the universe itself, hence, to his thinking, outside the realm of scientific inquiry.5

For whatever reasons, perhaps because of its simplicity and its catchy alliteration, the term stuck. No one found a more memorable, short-hand label for the “precisely controlled cosmic expansion from an infinitely or near infinitely compact, hot cosmic ‘seed,’ brought into existence by a Creator who lives beyond the cosmos.” The accurate but unwieldy gave way to the wieldy but misleading.

A multiplicity of models

The first attempts to describe the big bang universe, as many as a dozen, proved solid in the broad simple strokes but weak in the complex details. So, they have been replaced by more refined models. Scientists are used to this process of proposing and refining theoretical models. News reporters—even textbook writers—sometimes misunderstand, though, and inadvertently misrepresent what is happening.

Reports of the overthrow of the “standard big bang model” illustrate the point. That model, developed in the 1960s, identified matter as the one factor determining the rate at which the universe expands from its starting point. It also assumed that all matter in the universe is ordinary matter, the kind that interacts in familiar ways with gravity and radiation. Subsequent discoveries showed that the situation is much more complex. Matter is just one of the determiners of the expansion rate, and an extraordinary kind of matter (called “exotic” matter) not only exists but more strongly influences the development of the universe than does ordinary matter.

The reported demise of the “standard big bang” model was interpreted by some readers as the end of the big bang. On the contrary, the discoveries that contradicted the standard model gave rise to a more robust model, actually a set of models attempting to answer new questions. More than once, as one of these models has been replaced with a more refined variant, news articles heralded the overthrow of the big bang theory when they should have specified a big bang model.

Currently, cosmologists (those who study the origin and characteristics of the universe) are investigating at least three or four dozen newer variations on the big bang theme. Scientists expect still more to arise as technological advances make new data accessible. This proliferation of slightly variant big bang models actually speaks of the vitality and viability of the theory.

It makes sense that the first models proposed were simple and sketchy. The observations at that time, while adequate to support the fundamental principles of the big bang, were insufficient to explore and account for the details. As the evidences have become more numerous and more precise, astronomers have discovered additional details and subtleties, features previously beyond their capability to discern.

New details, of course, mean more accurate “reconstructions” of what actually occurred “in the beginning.” Each generation of newer, more detailed big bang models permits researchers to make more accurate predictions of what should be discovered with the help of new instruments and techniques.

As each wave of predictions proves true, researchers gain more certainty that they are on the right track, and they gain new material with which to construct more accurate and more intricate models. The testing of these models, in turn, gives rise to a new level of certainty and a new generation of predictions and advances. This process has been ongoing for many decades now, and its successes are documented not only in the technical journals but in newspaper headlines worldwide.

Overview of big bang evidences

Most textbooks currently in use at middle schools, high schools, and colleges describe only three or four evidences supporting big bang cosmology. The short list makes sense to a scientist, who sees no need to reiterate evidences for a roundish earth or for protons and electrons. But scientists who write textbooks may lack an appreciation for the clouds of doubt and confusion still hovering in the minds of non-scientists.

One purpose of this article is to help bridge the gap between the frontiers of science and popular awareness. This purpose, however, can be only partially realized in the scope of a magazine. Space does not permit an explanation or even an adequate description of each discovery supporting the big bang. It does permit two things, however. First, it allows a simple listing of thirty evidences (with one or two primary sources cited and a secondary source that gives an extensive list of other primary sources) demonstrating the breadth and depth of that evidence. Second, it allows for a more detailed description of the most powerful new findings that support a big bang creation event.

Summary List of Evidences for a Big Bang Creation Event

  1. Existence and temperature of the cosmic background radiation
    Ralph Alpher and Robert Herman calculated in 1948 that cooling from a big bang creation event would yield a faint cosmic background radiation with a current temperature of roughly 5 K (about -451°F).7 In 1965 Arno Penzias and Robert Wilson detected a cosmic background radiation and determined that its temperature was about 3 K (about -454°F).8
  2. Black body character of the cosmic background radiation
    Differences between the spectrum of the cosmic background radiation and the spectrum expected from a perfect radiator have been measured to be less than 0.03 percent over the entire range of observed wavelengths.10 The only possible explanation for such an extremely close fit is that the entire universe must have expanded from an infinitely or near infinitely hot and compact beginning.
  3. Cooling rate of the cosmic background radiation11 
    According to the big bang, the older and more expanded the universe becomes, the cooler its cosmic background radiation. Measurements of the cosmic background radiation at distances so great that we are looking back to when the universe was just a half, a quarter, or an eighth of its present age show temperatures hotter than the present 2.726 K by exactly the amount the big bang theory predicts.12 That is, astronomers actually witness the universe growing cooler and cooler through time.
  4. Temperature uniformity of the cosmic background radiation13
    The temperature of the cosmic background radiation varies by no more than one part in ten thousand everywhere astronomers look from one direction in the heavens to another.14 Such high uniformity can be explained only if the background radiation arises from one extremely hot primordial creation event.
  5. Ratio of photons to baryons15 
    The ratio of photons to baryons (protons and neutrons) in the universe exceeds 100,000,000 to 1.16 This ratio means that the universe is so extremely entropic (efficient in radiating heat and light) that it can only be explained as a rapid explosion from an infinitely or nearly infinitely hot, dense state.
  6. Temperature fluctuations in the cosmic background radiation17 
    For galaxies and galaxy clusters to form out of a big bang creation event, temperature fluctuations in maps of the cosmic background radiation should measure at a level of about one part in a hundred thousand. The predicted fluctuations were detected at the expected level.18
  7. Power spectrum of the temperature fluctuations in the cosmic background radiation19 
    For a big bang universe with a geometry suitable for the formation of stars and life-supporting planets, the temperature fluctuations in the cosmic background radiation must peak at an angular resolution close to one degree with a few much smaller spikes at other resolutions. In other words, the power spectrum graph will look like a bell curve with a few sub-peaks to the side of the main peak. The Boomerang balloon experiment this past April confirmed this big bang prediction.20 (See section in this article on deuterium and lithium abundances for another confirmation of this discovery.)
  8. Cosmic expansion rate21 
    A big bang creation event implies a universal expansion of the universe from a beginning several billion years ago. The most careful measurements of the velocities of galaxies establish that such a cosmic expansion has been proceeding for the past 14.9 billion years,22 a cosmic age measure that is consistent with measurements made by other means.23 (Some of the other measurements are described in the paragraphs to follow.)
  9. Stable orbits of stars and planets24 
    Our universe allows stable orbits of planets about stars and of stars about the nuclei of galaxies. Such stable orbits are physically impossible unless the universe is comprised of three very large and rapidly expanding dimensions of space. (An explanation of this proof follows.)
  10. Existence of life and humans25 
    Life and humans require a stable star like our sun. However, if the universe cools down too slowly, galaxies trap radiation so effectively as to prevent any fragmentation into stars. If the universe cools too rapidly, no galaxies or stars can condense out of the cosmic gas. If the universe expands too slowly, the universe collapses before solar-type stars reach their stable burning phase. If it expands too rapidly, no galaxies or stars can condense from the general expansion.
  11. Abundance of helium in the universe26 
    (explained in the following paragraphs.)
  12. Abundance of deuterium (heavy hydrogen) in the universe27 
    (explained in the following paragraphs.)
  13. Abundance of lithium in the universe27 
    (explained in the following paragraphs.)
  14. Evidences for general relativity28 
    Recent measurements of the theory of general relativity affirm it as the most exhaustively tested and best proven principle in all of physics.29 Solutions to the equations of general relativity demonstrate that the universe must be expanding from a beginning in the finite past.
  15. Space-time theorem of general relativity30 
    A mathematical theorem developed by Stephen Hawking and Roger Penrose in 1970 establishes that if the universe contains mass, and if its dynamics are governed by general relativity, then time itself must be finite and must have been created when the universe was created.31 It proves there must exist a CAUSE responsible for bringing the universe into existence, a cause that exists and operates “transcendently,” outside and independent of matter, energy, and all cosmic space-time dimensions.
  16. Space energy density measurements32 
    Albert Einstein and Arthur Eddington sought to escape the big bang by altering the theory of relativity to include a cosmic space energy density term (a.k.a. the cosmological constant) and by assigning a particular value to that term. Recently, astronomers determined that indeed a cosmic space energy density term does exist.33 Its value, however, proves that Einstein’s and Eddington’s alternative models are incorrect. The measured value actually increases the evidence for the big bang, establishing that the universe will continue to expand at an ever-increasing rate.
  17. Ten-dimensional creation calculation34 
    In 1995, a team of scholars led by Andrew Strominger demonstrated that only in a universe framed in ten space-time dimensions, six of which stopped expanding when the universe was a ten millionth of a trillionth of a trillionth of a trillionth of a second old, is it possible for gravity and quantum mechanics to coexist.35-37 Their demonstration also successfully confirmed both special and general relativity and solved a number of outstanding problems in both particle physics and black hole physics. This finding implies that the big bang and the laws of physics are valid all the way back to the creation event itself.
  18. Stellar ages38 
    According to the big bang theory, different types of stars form at different epochs. The colors and surface temperatures of stars tell astronomers how long the stars have been burning. These measured burning times are consistent with the big bang. They also are consistent with all other methods for measuring the time back to the cosmic creation event. (See this article for the latest measurements.)
  19. Galaxy ages39 
    According to the big bang theory, nearly all the galaxies in the universe formed early in its history, within about a four billion year window of time. Indeed, astronomers measure the galaxies to be as old as the model predicts.40
  20. Decrease in galaxy crowding41 
    The big bang predicts that galaxies spread farther and farther apart from one another as the universe expands. Hubble Space Telescope images show that the farther away in the cosmos one looks (and, thus, because of light’s finite velocity, the farther back in time) the more closely packed the galaxies are.42 In fact, looking back to when the universe was but a third of its present age, the Space Telescope images reveal galaxies so tightly packed together that they literally are ripping spiral arms away from one another.
  21. Photo album history of the universe43 
    Since the big bang predicts that nearly all the galaxies form at about the same time (see #19), and since galaxies change their appearance significantly as they age, images of portions of the universe at progressively greater and greater distances (and, because of light’s finite velocity, farther and farther back in time) can be expected to show dramatic changes in the appearance of the galaxies. Hubble Space Telescope images verify the predicted changes.44 (For more details see paragraphs to follow.)
  22. Ratio of ordinary matter to exotic matter45 
    In a big bang universe, galaxies and stars can develop as suitable life-support sites only if the cosmos exhibits a certain ratio of exotic matter (matter that does not interact well with radiation) to ordinary matter (matter that strongly interacts with radiation). That crucial ratio is roughly five or six to one. Recent measurements reveal such a ratio for the universe.46
  23. Abundance of beryllium and boron in elderly stars47 
    Long before the first stars form, during the first few minutes after it bursts into existence, the big bang fireball generates tiny amounts of boron and beryllium, but only if the universe contains a significant amount of exotic matter. Astronomers have confirmed that primordial boron and beryllium exist in the amounts predicted by the big bang theory and by the measured amount of exotic matter.48
  24. Numbers of Population I, II, and III stars 
    (See paragraphs to follow.)
  25. Population, locations, and types of black holes and neutron stars.49 
    After many billions of years of star burning, a big bang universe with the right characteristics for life support produces a relatively small population of stellar mass black holes and a larger population of neutron stars. Large galaxies produce supermassive (exceeding a million solar masses) black holes in their central cores. Astronomers, in fact, observe the predicted populations, locations, and types of black holes and neutron stars.50
  26. Dispersion of star clusters and galaxy clusters51 
    The big bang predicts that as the universe expands, different types of star clusters and galaxy clusters will disperse at specific (and increasing) rates. It also predicts that the densest star clusters hold together, but the stars’ orbital velocities about the cluster’s center “evolve” toward a predictable randomized condition known as virialization. The virial times depend on the cluster mass and size and on the individual masses of the stars. Astronomers observe the dispersal rates and virial times predicted by the big bang.
  27. Number and type of space-time dimensions52 
    A big bang universe capable of providing a site suitable for the support of physical life must begin with ten rapidly expanding space-time dimensions. At about 10^-43 second (about a ten millionth of a trillionth of a trillionth of a trillionth of a second) after the creation event, six of the ten dimensions must cease expanding while the other four continue to expand at a rapid rate. Several experiments and calculations confirm that we live in such a universe.
  28. Masses and flavors of neutrinos53 
    All currently viable big bang models require that the dominant form of matter in the universe be a form of exotic matter called “cold dark matter.” Astronomers and physicists already know that neutrinos are very plentiful in the universe and that they are “cold” and “dark.” Recent experiments establish that neutrinos oscillate (that is, transform) from one flavor or type to another (the three neutrino flavors are electron, muon, and tau).54 This oscillation implies that a neutrino particle must have a mass between a few billionths and a millionth of an electron mass. Such a range of masses for the neutrino satisfies the requirement for the viable big bang models.
  29. Populations and types of fundamental particles.55, 56 
    In the big bang, the rapid cooling of the universe from a nearly infinitely hot, nearly infinitely dense state generates a zoo of different fundamental particles with predictable properties and predictable populations. Particle accelerator experiments that duplicate the temperature and density conditions of the early universe have verified all the predicted particle types and populations within the accelerators’ energy limits.
  30. Cosmic density of protons and neutrons 
    (See paragraphs to follow.)
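The cooling relation behind evidence #3 above can be sketched in a few lines. This is the standard scaling T(z) = T0 × (1 + z), where T0 is today's measured background temperature (2.726 K, quoted in the list) and z is the redshift of the distant gas being observed; the specific redshift values below are illustrative, not taken from the article.

```python
# Evidence #3: the cosmic background radiation temperature scales with
# redshift as T(z) = T0 * (1 + z) in a big bang universe.
T0 = 2.726  # kelvins, present-day background temperature (from the article)

def cmb_temperature(z):
    """Predicted background radiation temperature at redshift z."""
    return T0 * (1 + z)

# Looking back to higher redshift means seeing a hotter background:
for z in (0, 1, 2, 3):
    print(f"z = {z}: {cmb_temperature(z):.3f} K")
```

Measurements of distant gas clouds at known redshifts test exactly this line: if the observed temperature departed from T0 × (1 + z), the big bang cooling history would be falsified.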

A big bang picture album

The simplest-to-grasp evidence in support of the big bang comes from pictures. With the help of various imaging devices, one can actually enjoy a kind of time-lapse photo of the big bang. The images show the universe in its various “growing up” stages, much as a time-lapse camera captures the opening of a flower, or as a photo album documents the development of a person from birth onward.

Such an album is made possible by light (or radiation) travel time. Observing a galaxy some 5 billion light-years away, for example, is equivalent to seeing that galaxy as it was 5 billion years ago, when the light now entering an earth-based telescope began its journey through space. In one sense, astronomers can only capture glimpses of the past, not of the present, as they peer out into space.

Thanks to the Keck telescopes and the Hubble Space Telescope, astronomers now have a photo history of the universe that covers nearly 14 billion years. It begins when the universe was only about half a billion years old and follows it to “middle age,” where it yet remains. The sequence of images [images not available online] presents highlights from this cosmic photo album. Photo (a) shows the universe at the equivalent of infancy, before galaxies exist; (b) depicts the “toddler” stage, when newly-formed galaxies are so tightly packed as to rip the spiral arms off one another; (c) shows the youthful universe, a time when most of the galaxies are still actively generating new stars and galaxy collisions are frequent; and (d) captures the universe’s entrance to middle age, a time when nearly all galaxies have ceased forming new stars and galaxy collisions are rare.

Figure X deserves special attention. It captures that moment in cosmic history when light first separated from darkness, before any stars or galaxies existed. It shows us the universe at just 300,000 years of age, only 0.002 percent of its current age.

These images testify that the universe is anything but static. It expanded from a tiny volume and changed according to a predictable pattern as it grew, a big bang pattern. A picture is still worth a thousand words, perhaps more.57

Helium abundance matches big bang prediction

The big bang theory says that most of the helium in the universe formed very soon after the creation event. According to the big bang, the universe was infinitely or nearly infinitely hot at the creation moment. As the cosmos expanded, it cooled, much like the combustion chamber in a piston engine.

By the time the universe was one millisecond old, it had settled into a sea of protons and neutrons. The only element in existence at that time was simple hydrogen, whose nucleus is a single proton. For a window of about 20 seconds, beginning when the universe was a little less than four minutes old, the temperature was right for nuclear fusion to occur. During that window, protons and neutrons fused together to form elements heavier than simple hydrogen.

According to the theory, almost exactly one-fourth of the universe’s hydrogen, by mass, was converted into helium during that 20-second period. Except for tiny amounts of lithium, beryllium, boron, and deuterium (which is hydrogen with both a proton and a neutron in its nucleus), all other elements that exist in the universe were produced much later, along with a little extra helium, in the nuclear furnaces at the cores of stars.
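The one-fourth-by-mass figure translates into a simple atom count. A quick sketch, taking helium-4 as 4 atomic mass units and ordinary hydrogen as 1 (a rounded approximation, not the article's own calculation):

```python
# If a fraction Yp of the ordinary matter's mass is helium-4 and essentially
# all the rest is hydrogen, the helium-to-hydrogen number ratio follows from
# the atomic masses (helium-4 ~ 4 units, hydrogen ~ 1 unit).
Yp = 0.25  # primordial helium mass fraction quoted in the article

helium_atoms_per_hydrogen = (Yp / 4) / ((1 - Yp) / 1)
print(round(helium_atoms_per_hydrogen, 3))  # about 0.083: one helium atom per twelve hydrogens
```

So "one-fourth of the mass" corresponds to only about one atom in thirteen being helium, because each helium atom weighs four times as much as a hydrogen atom.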

One of the ways astronomers can test the big bang theory is to measure the amount of helium in objects that are so far away (and, hence, are being viewed so far back in time) that they predate significant stellar burning. A second way is to examine objects in which little stellar burning has ever occurred. That is, astronomers can find and make measurements on relatively nearby objects in which star formation shut down quickly, too quickly to contribute significantly to the total helium abundance.

In 1994 astronomers measured for the first time the abundance of helium in very distant intergalactic gas clouds.58 These measurements, recently confirmed by additional measurements,59 revealed the presence of helium in the quantity predicted by the big bang model.

In the last 1999 issue of the Astrophysical Journal, a team of American and Ukrainian astronomers published yet another proof for the hot big bang creation event.60 The six researchers used the Multiple Mirror and Keck telescopes to check the quantity of helium in two of the most heavy-element-deficient galaxies known (blue compact galaxies I Zwicky 18 and SBS 0335-052). They determined that helium comprised 0.2462 ± 0.0015 of the total mass of those galaxies. After subtracting the tiny amount of star-produced helium in the two galaxies, they derived a primordial helium abundance of 0.2452 ± 0.0015, consistent with the findings in distant, ancient objects. This value is so close to the big bang prediction that the team concluded it “strongly supports the standard big bang nucleosynthesis theory.”61

During the months since that publication was released, Canadian astronomers have refined the data of the American-Ukrainian team.62 Their correction (based on the elimination of data from hot-star-excited nebulae within the galaxies) yielded a primordial helium abundance 1.5 percent higher and 20 percent more accurate than the first set of figures. The new value is so very close to the theoretically expected value as to be indistinguishable.63

Deuterium and lithium abundances match big bang prediction

Whatever quantity of deuterium (heavy hydrogen) and lithium exists today was produced during the first four minutes of creation, the big bang theory tells us. Not all that deuterium and lithium remains, however, for stellar burning gobbles up those elements, rather than producing more. In seeking to measure the abundance of deuterium and lithium and to compare that amount with the amount predicted by the big bang model, astronomers focused again on extremely distant systems, also on nearer systems in which little stellar burning has occurred. With significant help from the Keck telescopes 64-66 and from the “Hubble Deep Field” image (a “picture” assembled from layers upon layers of Hubble Space Telescope exposures to the same part of the sky),67 five different teams produced measurements.68, 69 In their words, the deuterium and lithium abundances fit the big bang predictions “extremely well.” 70

Density of protons and neutrons

The big bang theory fails to produce the stars and planets necessary for life and the elements necessary for life unless the cosmic density of baryons (protons and neutrons) takes on a specific value. This value is about four or five percent of the mass density that would be necessary, by itself, to bring the expansion of the universe to an eventual halt, what astronomers refer to as the critical density. Therefore, an obvious test of the big bang would be to see if the baryon density is close to this 4-5 percent of the critical density.

Until recently, the determination of primordial helium, deuterium, or lithium abundances was the only reliable way to get a measure of the density of baryons in the universe. The best results came from the five teams mentioned in the section above. They determined that the cosmic baryon density is equal to 0.04 to 0.05 of the critical density.

During the last year astronomers have developed three new and independent methods for measuring the cosmic baryon density. The most spectacular and accurate of these three new methods comes from the Boomerang maps of the temperature fluctuations in the cosmic background radiation (see the last issue of Facts for Faith for details). From the North American test flight of the Boomerang high altitude balloon, the cosmic baryon density was measured at 0.05 of the critical density.71 The other two methods gave an average value of roughly 0.03.72-74 These independent confirmations of the cosmic baryon density deduced from primordial helium, deuterium, and lithium abundances give yet more evidence for a big bang creation event.
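The critical density itself follows from the Hubble constant via the standard formula rho_c = 3H0²/(8πG). A sketch of the arithmetic, using standard physical constants and the article's average Hubble constant of 64 km/sec/megaparsec (the 4.5 percent baryon fraction is the midpoint of the article's 4-5 percent range):

```python
import math

# Critical density rho_c = 3 * H0^2 / (8 * pi * G), worked in SI units.
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22      # one megaparsec in meters
PROTON_MASS = 1.6726e-27  # kg

def critical_density(h0_km_s_mpc):
    """Critical density in kg per cubic meter for a given Hubble constant."""
    h0 = h0_km_s_mpc * 1000 / MPC_IN_M  # convert km/s/Mpc to 1/s
    return 3 * h0**2 / (8 * math.pi * G)

rho_c = critical_density(64)       # the article's average Hubble constant
baryon_density = 0.045 * rho_c     # 4-5 percent of critical, per the article
protons_per_m3 = baryon_density / PROTON_MASS
print(rho_c, protons_per_m3)
```

The result, roughly 8 × 10⁻²⁷ kg/m³ for the critical density, means the baryon budget amounts to only about one proton for every five cubic meters of space, a vivid way to see how empty a life-permitting universe must be.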

Cosmic expansion velocity matches big bang prediction

An obvious way to test the big bang is to affirm that the universe is indeed expanding from an infinitesimal volume and to measure the rate of its expansion from the beginning up to the present moment. While this task may seem simple in principle, in practice it is not. Measurements of adequate precision are enormously difficult to make. Only in the last few years have measurements as accurate (or nearly so) as the other big bang proofs become possible.

Five methods (some independent, some slightly dependent) for measuring the cosmic expansion rate have now been developed and applied (see Table 1). The average of the five yields a rate of 64 kilometers per second per megaparsec (a megaparsec = the distance light travels in 3.26 million years). Running the expansion backward at this rate implies that the universe is approximately 14.6 billion years old.

The newly discovered “energy density term” adds another half billion years, suggesting that the universe is about 15.1 billion years old.75, 76 This figure serves as a confirmation of the model because of its consistency with other age indicators, including the cosmic background radiation, the abundance of various radiometric elements,77 and the measured ages of the oldest stars (see below).

Table 1: Latest Measurements of the Cosmic Expansion Rate

Astronomers have developed and refined five measuring tools for determining the rate of expansion for the universe, or what they call the “Hubble constant.” A megaparsec = the distance light travels in 3.26 million years.

Method Hubble Constant Value
gravitational lensing 66 km/sec/megaparsec78-82
Tully-Fisher 61 km/sec/megaparsec83-86
cepheid distances to galaxies 62 km/sec/megaparsec87-90
type Ia supernovae 61 km/sec/megaparsec91-94
geometric distance measures 71 km/sec/megaparsec95-98
average of measured values 64 km/sec/megaparsec
age calculation based on average of values 14.6 billion years
correction for energy density term +0.5 billion years
corrected age calculation 15.1 billion years
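The rough scale of the table's age figure can be checked with the naive "Hubble time," 1/H0: the age the universe would have if it had always expanded at today's rate. This sketch uses only unit conversion; the article's 14.6-billion-year figure differs slightly because real big bang models account for the expansion rate changing over cosmic history.

```python
# Naive Hubble time 1/H0, converted to billions of years.
MPC_IN_KM = 3.0857e19   # one megaparsec in kilometers
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """1/H0 expressed in billions of years."""
    seconds = MPC_IN_KM / h0_km_s_mpc   # (km/Mpc) / (km/s/Mpc) = seconds
    return seconds / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(64), 1))  # about 15.3 for the table's average value
```

That 1/H0 lands within a billion years of the table's corrected 15.1-billion-year figure shows the five measurement methods and the model corrections are all telling a consistent story.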

Star populations fit big bang scenario

Big bang theory proposes that three distinct generations of stars formed at certain intervals after the creation event. Astronomers creatively refer to these generations as Population III, Population II, and Population I stars. The numbering seems reversed, since Population III stars are the oldest, but they were the last to be discovered and studied; hence the confusing labels.

According to the big bang, Population III stars formed when the universe was barely a half billion years old. By that time, matter had condensed adequately for stars to begin coalescing. However, since the universe had expanded so little as yet, the average density of gases was much higher than today’s observed density. Thus, the earliest stars were mostly supergiant stars. Such stars burn up very quickly (astronomically speaking), in less than ten million years. They end with catastrophic explosions, dispersing their ashes throughout the cosmos.

Given the brief burning time and early formation of such stars, big bang theorists conclude that few, if any, Population III stars should still be observable. However, their remains should be. Population III stars leave a distinctive signature of elements in their scattered ashes. This signature is found in all the distant gas clouds of the universe.

Evidence has recently emerged that some of the rare low-mass Population III stars may have been found.99, 100 Their low mass means that they burn long enough for astronomers to be able to find them today. They have been difficult to detect, though, because they absorb the ashes of the giant Population III stars, thus taking on a disguise. Stellar physicists, however, have developed tools for distinguishing Population III survivors from the younger Population II stars that form from the ashes of Population III supergiants.101, 102

The big bang theory makes three major predictions about Population II stars: 1) this group should be the largest of the star populations, given that it formed when galaxies were young and at their peak star-forming efficiency; 2) these stars should be more numerous in certain locations, such as globular clusters, where early star formation proceeds most efficiently; and 3) they should come in all sizes, all mass categories from low to high, not favoring one category over another. All three predictions have been borne out by astronomers’ observations over the last few decades.

The third generation of stars, the Population I stars (including Earth’s sun), formed from the scattered ashes of the largest Population II stars. These ashes are easy to distinguish from Population III ashes because they are at least 50 percent richer in heavy elements (those heavier than helium). The gaseous nebulae (or gas clouds) scattered throughout the spiral arms of the Milky Way and gas streams the Milky Way galaxy steals from nearby dwarf galaxies are actually “ash heaps” of giant Population II stars.

The big bang theory says that star formation shut down for the most part shortly after the formation of Population II stars. Thus, most galaxies are devoid, or nearly devoid, of Population I stars. The big bang also says that in the few galaxies where Population I stars do form, the most intense period of star formation was the past few billion years, and the most intense regions of star formation are the densest areas, such as the nuclei and spiral arms. (Some also would have formed in what astronomers call “irregular” galaxies.) All these characteristics have proved true, confirmed by observations.

Does the big bang allow for Population IV stars to form in the future? Yes, it does. But, it predicts that this population should be tiny compared to the other three. Everywhere astronomers look in the universe, they see signs that star formation will soon shut down totally, even in those galaxies still active in forming stars. (“Soon” to an astronomer is not tomorrow or next year but a few billion years hence.) Astronomers anticipate, for example, that the Milky Way galaxy will experience a “brief” burst of star formation when it pulls the Large Magellanic Cloud (its companion galaxy) into its core region some four or five billion years from now. Already the universe is old enough to make such incidents rare.

Oldest stars tell their story

Since the big bang theory indicates when the Population II stars formed—the era when galaxies began to take shape, roughly 0.5 billion to 1.5 billion years after the creation event—astronomers can test the theory by determining the age of the oldest visible stars. By adding 0.5 to 1.5 billion years to that age, they can compare the sum with the creation dates suggested by other independent measures.

One difficulty of this seemingly simple test is that stars, like some people, sometimes hide their age well. Stars in dense clusters, however, can be more easily dated than others, and globular clusters appear to comprise the oldest of the Population II stars. Table 2 lists the most accurate dating of globular cluster stars in five different galaxies. It also includes the limit researchers recently placed on the oldest white dwarf stars in Earth’s galaxy.

Table 2: Latest Measurements of the Oldest Population II Stars

Star Group Measured Ages (billions of years)
average of all globular clusters in our galaxy 12.9 ± 1.5 [103]
47 Tucanae (oldest globular cluster in our galaxy) 14.1 ± 1.0 [104]
Large Magellanic Cloud globulars same as for Milky Way [105]
globular cluster in WLM dwarf galaxy 14.8 ± 0.6 [106]
globular clusters in Fornax dwarf galaxy same as for Milky Way [107]
average of all globulars in our galaxy less than 14.0 [108]
oldest white dwarfs in our galaxy more than 12.6 [109]
average of all globular clusters in M87 (a supergiant galaxy) 13.0 [110]

* average of all results = 13.5 billion years

The numbers indicate that globular clusters formed within a two- to three-billion-year time window, roughly consistent from galaxy to galaxy.111 If one adds to their ages the years prior to Population II star formation (1 billion ± 0.5 billion years), the derived age fits remarkably well all other methods for determining how long the universe has been expanding from the creation event.

Stability of stars and orbits fits big bang picture

Stable orbits and stable stars are possible only in a big bang universe. Their existence ranks among the most clear-cut proofs for the big bang. (Incidentally, life would be impossible unless planets orbit with stability, stars burn with stability, and stars orbit galaxy cores with stability.112, 113)

Such stability demands gravity, not just any force of gravity, but gravity operating according to the inverse square law. Gravity operating at that level demands three dimensions of space—the big bang universe.

In two dimensions of space, gravity would obey a different law: objects with mass would attract one another in proportion to the inverse of the distance separating them. In four space dimensions, massive bodies would attract one another in proportion to the inverse of the cube of the distance separating them.
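The dimensional dependence described above is a standard result of generalizing Gauss's law to a space of D dimensions. In the notation below (mine, not the article's), the force between two masses separated by distance r falls off as the inverse (D - 1) power:

```latex
% Gravitational attraction in a space of D dimensions (standard result;
% the notation is mine, not the article's):
F(r) \propto \frac{m_1 m_2}{r^{\,D-1}}
\qquad
\begin{cases}
D = 2: & F \propto 1/r \\
D = 3: & F \propto 1/r^{2} \quad \text{(the inverse square law)} \\
D = 4: & F \propto 1/r^{3}
\end{cases}
```

Only the D = 3 case admits stable, closed planetary orbits (a consequence of Bertrand's theorem), which is the point the paragraph above is making.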

Stability under the influence of gravity in turn demands that the three space dimensions be large (significantly unwound from their original tight curl). Otherwise galaxies would be so close together as to wreak havoc on stellar orbits, and stars would be so close together as to wreak havoc on planets’ orbits. When galaxies are too close together, galaxy collisions and close encounters catastrophically disturb stars’ orbits. Likewise, when stars are too close together, their mutual gravitational tugs catastrophically disturb the orbits of their planets.

The three dimensions of space must be expanding at a particular rate, as well. A universe that expands too slowly will produce only neutron stars and black holes. A universe that expands too rapidly will produce no stars at all and thus no planets and, of course, no stable orbits.

The simple fact is this: humans do observe that galaxies, stars, and planets exist, and that they exist with adequate stability to allow humans to exist and observe them. This fact, in itself, argues for the big bang. In fact, it argues for a specific subset of big bang models. Even this narrowing and refining of the original theory serves as evidence that the theory is correct.

Apologetics impact of big bang cosmology

Though the case for the big bang, i.e., creation event, rests on compelling, some might say overwhelming, evidence, the theory still has its critics. Some skepticism may be attributable to the communication gap between scientists and the rest of the world. Some of the evidences are so new that most people have yet to hear of them. Some of the evidences, including the older ones, are so technical that few people understand their significance. The need for better education and clearer communication remains. In fact, it motivates the publication of this article.

Communication and education gaps explain only some of the skepticism, however. Spiritual issues are also involved. The few astronomers who still oppose the big bang openly object not on scientific grounds but on personal, theological grounds.

The Fingerprint of God tells the story of astronomers’ early reaction to findings that affirmed a cosmic beginning, hence Beginner. Some openly stated their view of the big bang as “philosophically repugnant.” For decades they invented one cosmic hypothesis after another in a futile attempt to get around the glaring facts. When all their hypotheses failed the tests of observational checks, many of those astronomers conceded, perhaps reluctantly, the cosmic prize to the big bang.

Today, only a handful of astronomers still hold out against the big bang. Their resistance, however, is based not on what observations and experiments can test but rather on what observations and experiments can never test. Though their articles appear in science journals, they engage in metaphysics rather than physics, in theology (more accurately, anti-theology) rather than science. The evidences supporting the big bang clearly point beyond the “superior reasoning power” Einstein acknowledged and beyond the ill-defined “intelligent Designer” gaining popularity today. The physical evidence points clearly and consistently to the God of the Bible.

General relativity theory, which gave rise to the big bang, stipulates that the universe had a beginning and specifically a “transcendent” beginning. The space-time theorem of general relativity states that matter, energy, and all the space-time dimensions associated with the universe began in finite time, and that the Cause of the universe brought all the matter, energy, and space-time dimensions of the universe into existence from a reality beyond matter, energy, space, and time. The extreme fine-tuning of the big bang parameters necessary for physical life to be possible in the universe exceeds by many orders of magnitude the design capabilities of human beings. The worldview significance of these conclusions cannot be avoided. No philosophical system or religious doctrine in the world fits them as the Bible does. It not only fits them, it anticipates them by several thousand years. (HR, RTB)

*** Will Myers

Please “Donate a penny” or any amount to support the ministry’s research and development. Just click the link below:

https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=DKCQUR7YG7W5U

Share this:

Posted in Uncategorized | Leave a comment

Multiverse Musings: Is It Testable?

People regularly question whether the multiverse belongs in the arena of scientific investigation. The answers often center on a key query: Is the multiverse testable? Those within the scientific community respond to this concern with different answers. Some say yes, some say no. Certain aspects of the multiverse remain beyond our ability to test currently and maybe indefinitely. In fact, the speculative nature of these models provides insight into how scientists approach difficult problems.

Any scientist will tell you that testing forms the basis for legitimate scientific inquiry. So, how can we classify a theory based on alternate universes that scientists can never detect as science? The fact that multiverse models stand on the scientific frontier, where we have sparse hard data, makes answering this question even more difficult. Though many scientists disagree about the usefulness of models containing some form of a multiverse, there are some reasons for including these models in the realm of scientific investigation, even though they are speculative (in the theoretical, not pejorative, sense) in nature.

First, multiverse models are not new to the cosmology scene. For example, shortly after Einstein developed the equations for the theory of general relativity, he realized that the solutions to those equations indicated that we live in an expanding universe. This meant the universe began to exist in the relatively recent past (a few billion years ago) and was not eternal. Motivated by philosophical opposition to a genuine beginning, Einstein proposed a multiverse model known as the oscillating universe, where the universe alternated between expanding and contracting phases. It begins with a big bang, expands until gravity halts and reverses the expansion, then contracts until it ends in a big crunch. And then the cycle starts again, leading to an infinite number of universes, in one of which we reside. Eventually, calculations and measurements of our universe ruled out the oscillating universe as a viable cosmological model (although scientists have since proposed an updated version known as the cyclic universe).

Second, the current batch of multiverse models gained popularity primarily because they arose from investigations of other phenomena. Scientists did not simply invent a multiverse in order to explain away the beginning of the universe or to account for its life-friendly fine-tuning. The most popular multiverse model (a level II bubble multiverse filled with level I universes) arises from efforts to find an explanation for how inflation works. Granted, the multiverse scenario arises only after huge extrapolations of well-tested physical models, but most versions of inflation that produce a universe that looks like ours also produce a multiverse.

Third, some current multiverse models do make testable predictions. Stated another way, they have consequences in our universe that future measurements could validate or falsify. For example, some models predict that another universe might have collided with ours during its earliest phases. Such a collision would produce measurable signatures in the cosmic microwave background (CMB). Similarly, a multiverse would naturally cause asymmetries in the CMB that some scientists claim to have found.

Fourth, scientists recognize the need to find a way to test multiverse models, although they disagree about whether we will ultimately have the ability to conduct such tests. Distinguished cosmologist George Ellis rightfully argues that multiverse models require huge extrapolations from known physics and may undermine core scientific principles. Equally distinguished cosmologists Alexander Vilenkin and Max Tegmark agree about the large extrapolations but argue that multiverse models provide important explanations about the ultimate origin and character of our universe. Theoretical physicist Sean Carroll argues that evidence has driven scientists to accept the idea of a multiverse and that, as a scientific model, it is here to stay. In response, theoretical physicist and mathematician Peter Woit contends that the multiverse evidence rests on circular reasoning.

It seems that, as a whole, the scientific community remains agnostic about the existence of a multiverse. We may find evidence for or against it as scientists continue to investigate—or we may not. This uncertainty means that any attempt to declare the multiverse out-of-bounds scientifically is premature. Yet we must also approach the topic with appropriate caution lest we undermine the foundations of the scientific enterprise.

*** Will Myers


Love Instilled Prompts One to Break Loose from the Material Bond

We are bound to Earth by gravity. We depend on materials for nourishment. We learn from the physical world ways to better our lives.  At present, science is king and savior of the world. We are moving away from God for the sake of material rewards. Faith in science and technology is replacing faith in God.

It doesn’t take much imagination to realize that the journey from our early state to a pure spiritual state is enormously long. We must also consider the fact that man might self-destruct. This is highly probable, and believed to be an eventuality among many scientists.

We don’t have to be bound to the material; anyone can be born again in the name of Jesus, the Son of God; the Giver of Life. Love that is in the heart of a person makes him a leading candidate for receiving God’s Spirit; thereby, one receives a taste of eternal life now and forever. God’s presence has all things and spirit that has ever existed and ever will exist, locked into one eternal moment (logos).

Now, the question of hell should be addressed. Does God burn sinners in an eternal flame forever as punishment for rejecting Him? “God is Love.” God created all souls, and since God is all-knowing, then God has created souls who He knew would burn forever in hell. This is not a representation of an all-loving God. Jesus said that God shall throw some souls into a lake of fire, an eternal flame, but did not state for how long. Preserving “God is love” to the optimum, we have God destroying a soul and creating a new soul for the person’s conscience, then returning this soul into “heaviness” for further disciplinary actions. This varies from the present “reincarnation” belief, in which a chosen entity, such as a cow, is what a soul enters when it returns into this world. The return into heaviness for further disciplinary actions can be other than this universe, and not any form known to mankind. It is activity known only to God; where love stops, our theology stops.

*** Will Myers


Inner Sanctuary Of The Mind As Related To A Free Democracy

One of our best-known laws is that anything agreed to under coercion or duress perpetrated upon the consenting person shall not be recognized by the courts. The most substantive application of this law is the most commonly known one: contracts involving substantial material interests. The spirit of this law also has a much broader application; for example, voters must not be threatened near the polls.

Elements of our society have always used methods of intimidation to persuade the masses to consent to their desires. For example, an owner may announce that his business shall close if a certain bill becomes law; even if the bill would benefit the employees, they vote against their best interest.

There is another, more far-reaching method of intimidating a person to compel the individual to act in a way which would benefit the perpetrators. This far-reaching method begins innocently with the science of psychology, which acts legally under the freedom of speech and should never be policed, because policing it would cause a serious breach of the constitutional right to free speech; serious and devastating effects would occur, along with mass paranoia. The far-reaching method is when social machinery, special interest groups, begins targeting private citizens. In a sense, the government becomes our parents.

During the 1950s and into the ’60s there existed a governmental institution called The Political And Warfare College (the name may not be exact, but the credentials are) which gathered the names and histories of all humans on Earth as accurately as possible. It was told that leaders such as Erik the Red were studied to the extent of reconstructing their dialogue during their reigns. The study was contemporary in nature as well, developing dossiers on every citizen on Earth. The institute is no longer a part of our government, but the information was given to many private think tanks for the purpose of forming social policy and subtly implementing those policies into our society.

Many citizens believe that the superpowers have parental capabilities of invading the inner sanctuary of each citizen’s mind. The social machinery needs only to be fully implemented. This implementation began years ago and was accelerated by the events of 9/11, which engendered the Patriot Act, which almost eliminated all privacy rights. George Orwell’s novel “1984” may be an exaggeration, in a carnal and substantive sense, of what a police state would be like, but the spirit of the novel gives a fair depiction.

The appeal of this devilish entity is the giving of an advantage to one person over another in many ways: the giving of personal information; establishing an environment that focuses scrutiny upon a certain person; projecting and sustaining this scrutiny; and using the most advanced information-gathering and disseminating methods to anticipate the person’s every move and thought. This is not science fiction at this time.

The popularity of such undermining of private citizens will cause a hollowing out of our democracy; furthermore, the eventual fall of our free democratic system will occur when our system is hollowed to a weak, thin shell of democracy. The destruction shall come from within and without. Corporate powers have always had psychological evaluations of their employees as a group, but now they are building very personal dossiers (psychological profiles) on their employees. This shall lead toward fascism, and police powers shall move government away from a free democracy and toward a communist state, because government will own the intellectual property of the state: the minds of the people.

For the continual progress of a high-quality free democracy with individual liberties and freedoms, we must be free to have original thoughts and privacy within the inner sanctuary of the mind, without the hairs standing up on the backs of our necks and our insecurities skyrocketing if we unknowingly act contrary to the desires of our employer or government. There is only one effective antidote to this increasingly poisonous state: “do unto others as you would have them do unto you,” and strong faith in God. Let the truth be told: had we all obeyed this golden rule, there would not have been all of the mass shootings and killings by lone gunmen in our nation.

This Golden Rule will save American free democracy. We all must become resisters, not conduits who aid in the invasion of one’s inner mind, which must stay private to keep up a high-quality free democracy enjoyed by every citizen, rather than perpetuating mental enslavement. We are our brother’s keeper, or else our brother shall not keep us but shall destroy us.

Ephesians 6:12

For we wrestle not against flesh and blood, but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places.
Leviticus 19:18

You shall not take vengeance, nor bear any grudge against the sons of your people, but you shall love your neighbor as yourself; I am the Lord.
*** Will Myers


Deception Is Becoming King

We live in a world of misrepresentations and distractions. Tons of cocaine come into America daily, but now we have a movement to stop those bad prescription drugs while making it difficult for people who are suffering severe pain to get what is needed to stop their pain. Listen to yours truly. Violent crime has skyrocketed, but we are having a movement to eliminate guns from our society. In the name of patriotism we honor our free society while we are losing more freedoms than ever. The very element needed to preserve a high-quality free democracy, privacy for each citizen, is being taken rapidly, especially after 9/11. There is virtually no privacy. While more people are below the poverty line than at any time since the Great Depression, politicians are screaming socialism when the government attempts to help the poor. Now I am hearing in the media the press for the government to restrict calories in foods because people are becoming too fat.

I suggest a movement called “telling the flat-out, unadulterated truth and concentrating on people’s real needs.” Our USA brain trust (think tanks) acts in its own best interest while behaving as if its best interest is my best interest. Not so! A happy and satisfying life depends on the citizen’s private inner sanctuary of mind, with only God allowed in this private space. “In God We Trust” only.

We are living in the last days of this age. The world shall notably change soon.

  1. Acts 2:17
    And it shall come to pass in the last days, saith God, I will pour out of my Spirit upon all flesh: and your sons and your daughters shall prophesy, and your young men shall see visions, and your old men shall dream dreams:


  2. 2 Timothy 3:1
    This know also, that in the last days perilous times shall come.


  3. Hebrews 1:2
    Hath in these last days spoken unto us by his Son, whom he hath appointed heir of all things, by whom also he made the worlds;


  4. James 5:3
    Your gold and silver is cankered; and the rust of them shall be a witness against you, and shall eat your flesh as it were fire. Ye have heaped treasure together for the last days.


  5. 2 Peter 3:3
    Knowing this first, that there shall come in the last days scoffers, walking after their own lusts,
*** Will Myers


GOD’S AGREEMENTS WITH MAN THAT BRING US INTO GOD’S KINGDOM

Most people reject God, Who is thought to be bad because of the crusades in the 12th century, along with a series of killings throughout the Bible. The problem is that people confuse the covenants that state a relationship with God, an agreement. Firstly, no one or group can challenge God as to how to make an eternal being. So, God has chosen to allow man to grow into knowledge of the Eternal God. As man grows into more knowledge, God gives another, more advanced covenant, precept upon precept. The last covenant is the Blood Covenant, or the Son of God Covenant of Lord Jesus Christ. This Covenant is also called the LOVE Covenant. In this covenant God has promised not to condemn man if man accepts His Son. This is the final and full covenant, or agreement, between a man and God. We could call this covenant the universal covenant. Covenants with God are similar to federal rights and states’ rights. Whatever state law or policy does not break a federal law is OK. So it goes with prior covenants with God: if a prior covenant does not break a spiritual law of the last covenant, then it continues to be good. So, before knocking God as being evil due to the killings in the past, check your covenants. Get the knowledge. God has never been separated from man; rather, man has been continually separated from God.
Our New Testament Covenant, an agreement with God, is our final and full covenant. All other covenants were more about a collection of specific things, not an agreement which totally encompassed all possible experiences of man (the Abrahamic Covenant: “I will make you the father of all nations”). The Son of God Covenant encompasses all experiences and possible experiences of human beings. In the name of Jesus we can taste eternal life; we can take the straight and narrow path to the Kingdom of God.
  1. Isaiah 8:14
    And he shall be for a sanctuary; but for a stone of stumbling and for a rock of offence to both the houses of Israel, for a gin and for a snare to the inhabitants of Jerusalem.


  2. 1 Peter 2:8
    And a stone of stumbling, and a rock of offence, even to them which stumble at the word, being disobedient: whereunto also they were appointed.

God does roll heads and kick rear ends. The reason is to remove any and all obstacles which stand in the way of God’s children coming into the presence of God, our Heavenly Father.

*** Will Myers


It Is Truly Good News To Tell Others Of The Son Of God

Man developed to become very sophisticated compared to the beginning. Cave men began burying their dead and putting articles in the grave for the loved one to use in the afterlife. These gestures indicate the beginning of funeral celebrations and a belief in the afterlife, which led to a belief in God, or should I say a belief in many gods associated with the afterlife.

As man continued to progress as master of his environment, many problems in life abounded. The problem of homicide is unique compared to all other animals and is the epitome of evil solely associated with the human. It is clear that man has many more problems than all other life forms. As men collectively work to solve as many problems as possible, more problems crop up.

We now have a strong atheistic movement in the USA. I am assuming that they think that man’s intellect shall eventually solve all problems. I say no, because as long as there are two men they shall oppose each other, thereby generating even more problems.

I happily proclaim that the Living Word of God, the Son of God, Jesus, with the B.I.B.L.E., has the answers for any human and any culture. So, let each person seek God and believe that their true self is defined by this relationship with God, instead of by seeking a consensus with people. People are to be respected and viewed as the reality of the present in this fallen world.

One doesn’t have to be a rocket scientist to understand the Word of God; it takes only patience, prayer, and meditation with the Lord Jesus. But you must remember that one must want a “Jesus world” for this wisdom to be received from God.

Faith without doubt is very powerful, now and forever. I am working with God in the name of Jesus to achieve the power to raise the dead. “THE POWER REALLY EXISTS NOW IN THIS WORLD,” says the Son of God.

  1. Matthew 6:13
    And lead us not into temptation, but deliver us from evil: For thine is the kingdom, and the power, and the glory, for ever. Amen.


  2. Matthew 9:6
    But that ye may know that the Son of man hath power on earth to forgive sins, (then saith he to the sick of the palsy,) Arise, take up thy bed, and go unto thine house.


  3. Matthew 9:8
    But when the multitudes saw it, they marvelled, and glorified God, which had given such power unto men.


  4. Matthew 10:1
    And when he had called unto him his twelve disciples, he gave them power against unclean spirits, to cast them out, and to heal all manner of sickness and all manner of disease.


  5. Matthew 22:29
    Jesus answered and said unto them, Ye do err, not knowing the scriptures, nor the power of God.
  6. Matthew 28:18
    And Jesus came and spake unto them, saying, All power is given unto me in heaven and in earth.

*** Will Myers


Modern Mathematics – Its Relation to Physical Science and Theology

CONTENTS

Introduction: This is just that, an introduction. No explicit definition for the discipline of Mathematics is given. However, a practical definition is implied. The least element associated with mathematics, written communication, is defined.

Mathematical Activities: There exist three areas of mathematical activity. The activity of computation is briefly discussed.

Mathematical Truth: Mathematical truth is explicitly defined and shown to correspond completely to the operational process of writing an acceptable mathematical proof.

Aspects of Proving Statements Mathematically: This second of the mathematical activities is discussed in some detail. The mysterious and mythical aspects are removed. How a mathematician knows that a statement is a theorem, and how one goes about "proving" that the statement is a theorem, is discussed experientially.

Mathematical Modeling: This process, the most important aspect for application to physical science and theology, is discussed in some detail. How one constructs a mathematical model for another discipline is illustrated. The notions of the interpretation, the domain of discourse, and the like are defined and illustrated. The important fact that the interpretation must be a fixed correspondence is discussed. It is shown how the interpretation of a mathematical theory within another discipline transfers mathematical truth to assumed predictive or observed behavior, or how it simply represents a rational description within the other discipline, where the notions within the other discipline must satisfy the same logical patterns as those exhibited by the process of producing mathematical truth.

Mathematical Truth and Its Correspondence to Discipline Truth: In this short section, the exact correspondence between these two often distinct concepts is discussed. In particular, the vague correspondence often associated with these notions is eliminated and the actual strict correspondence is described.

Errors in Mathematical Modeling: This section shows how certain areas of modern science have incorrectly used the methods of mathematical modeling. Specific illustrations are given, with respect to scientific measures defined in terms of a restrictive language, that show how these methods generate logical error. Realism with respect to infinitesimals is briefly mentioned.

Theology and Theorem Proving: The one and only way in which this mathematical activity corresponds to theology is discussed. This mathematical activity can be associated with theology only under the assumption that God created humankind so that humankind could comprehend God's creation, and that He is the source from which all our reasoning power comes.

Theology and Modeling: This is the most important section in this article. If you are conversant with the information contained within the previous sections, then this section is the only one that you need to read.

Introduction

[In this article, only the most significant aspects will be discussed. My intent is to present enough basic material so that the actual relation between modern mathematics as a discipline, the physical sciences, and certain theological concepts can be discussed properly.]

It is a great event in the life of a graduate student when he or she is awarded a Ph.D. in Mathematics. Does such an exalted professional degree make one a mathematician? This depends upon your definition of mathematics but, in actuality, a Ph.D. is not a necessary element in being a mathematician, where mathematics includes all the areas that are defined as such. What should be, but often is not, the most significant aspect of a Ph.D. is its research aspect. If one does not do mathematical research, such a degree is often not necessary. It is estimated that only a small percentage of those who earn a Ph.D. in mathematics actually contribute significantly to mathematical research in its purest sense. In this regard, the Ph.D. only indicates a minimal research capability. [The same holds for such academic awards given in all areas of physical science.] The educational requirements necessary to become a mathematician are, however, somewhat nebulous.

In order for one to understand my comments, as well as other significant factors relative to the subject area termed Mathematics, it's necessary to give an explanation of the basic mathematical areas, what a mathematician does, and how mathematics affects certain important philosophical questions.

First, however, a definition for the discipline called "Mathematics" seems appropriate. The strangest and most worthless definition I've heard is when one is informed that "Mathematics is what mathematicians do." Of course, this is a circular definition that says nothing, for what does the word "do" signify? Actually, I'm not going to define the discipline called Mathematics except to say that almost all the material housed in the mathematics section of the local library is mathematics. But the discipline of Mathematics includes numerous volumes hidden in other library sections. You can find them in the philosophy area, the science area, and even the theology area. Librarians often don't know where to house a volume that appears to contain what they perceive is mathematics. This means, however, that at the least mathematics is something disseminated in written form or its equivalent. Yes, notwithstanding the subjective processes used to obtain the written forms, it's totally objective in nature. This discipline requires that, whatever mathematics is, it must be disseminated to others. To say, "I've a great mathematical result, but can't describe it to you" has no meaning within the discipline of mathematics. Notice that "I've a great result, but can't describe it to you" may have meaning for other disciplines.

Mathematical Activities

The serious investigation of the foundational aspects of mathematics as a discipline only began about one hundred and fifty years ago. Thus all of the material presented here is relative to what would be termed "Modern Mathematics." It is probably not possible to determine all of the intuitive notions that led to mathematical concepts in the ancient past, since many such ideas actually lead to contradictions. Modern mathematics attempts to avoid as many of the recognized contradictions as possible. Yet the most fundamental aspects of modern mathematics depend heavily upon human experiences, especially with written languages. Among these experiences are concepts such as writing symbols in succession next to one another [Bourbaki, 1968, p. 15], writing symbols to the right or left of another symbol [Bourbaki, 1968, p. 17], being able to follow the directions of "replacing" a symbol x, wherever it occurs in a collection of symbols A, by another symbol B, recognizing intuitively when strings of symbols from a language are different or similar [Bourbaki, 1968, p. 17], knowing what it means to write symbols in columns or rows, and many other common and not mathematically defined notions such as orientation. These intuitive notions and, usually, being able to apply a few procedures from classical logic are fundamental and required to obtain definitions and mathematical proofs.

It is entirely false to say that mathematics is something absolute or pure that exists without considerable human invention and personal experience.

There exist three broad areas of "mathematical activity" (i.e., areas relative to the contents of those library books), where teaching is excluded from these areas. Teaching should be considered as a special mathematical activity distinct from these three.

The mathematical activity used the most is the activity called calculation. This is what the general public considers as mathematics. The "schoolmath," as it is being called, is the basis for this type of mathematics. But also included under this heading is any activity where an individual applies known mathematical procedures to calculate numerical quantities or to obtain any rational conclusion based upon numerical quantities. This area also includes basic geometry, not as an application of logical discourse but as it approximates the physical world. The known procedures also include the meanings – the interpretations – given to the conclusions. Calculation often requires great knowledge and a strong intuitive understanding of a particular mathematical theory and its relation to the entities being calculated. I've not stated what is being calculated. Calculation can be used to obtain mathematical entities like large prime numbers or, more usually, nonmathematical entities, where a nonmathematical entity has a name taken from another discipline. Of course, certain mathematical terms often take on multiple meanings: a mathematical meaning within a mathematical discipline as well as distinct meanings in other disciplines. The concept of "volume" can be associated with the pure mathematical subject called "measure theory," refer to a concept from the mathematical discipline of geometry, or refer to bricklaying. The words that surround the term in a written expression – the context – determine its meaning.

Prior to categorizing the two other areas of mathematics, it's necessary that the concept termed mathematical "truth," as related to "fact," be discussed.

Mathematical Truth

Dealing as I do with many philosophical questions, I dislike using the term "truth" within the discipline of mathematics. Within mathematics, mathematical "truth" is not a significant "truth." On the other hand, it's a concept which, I sincerely believe, is as close as humankind can come to a perceived physical fact when properly applied. It is customary for mathematicians to use the terms "true," "holds" or "established," but what is meant by these terms is an "explicit" fact. It means that it is an explicit fact that a certain fixed finite set of words – a set of statements, the hypotheses – has been written down. [The mathematician often uses certain terms when the hypotheses are written down, such as "Suppose that these hold."] It means that it is an explicit fact that other statements using words from the first fixed set of statements have been written down. It means that it is an explicit fact that a small group of individuals within the mathematical community have concluded that the last statement written down was obtained from the first statements by the methods they allow. In other words, "truth" for this form of mathematics requires something to be acceptably and explicitly written down – a "proof."

It’s only after such a proof, that the term “true” might be assigned to the statement that has been “proved.” To say that the statement “every positive integer not equal to one can be expressed in a unique way as the product of one or more prime numbers” is (mathematically) “true” simply means that there exists an acceptable informal (or sometimes formal) proof for this statement. There is nothing mysterious or hidden or deep being communicated. It doesn’t correspond to some philosophic notion of “truth” in a broader sense. For those that agree that the “proof” is correct, this ends any further consideration as to its “truth.” The case is closed. Thus, in this context, mathematical truth is an explicitly demonstrated fact plus expert witnesses. [In abstract model theory, there is also a notion of “truth” relative to an explicit set-theoretic definition.]

In mathematics, the term "satisfies" is employed when specific methods of substitution are applied. These methods allow numerical or other terms to be substituted into expressions that contain "variable" type expressions. The term "satisfies," "holds" or "true" applies to the cases when these substitutions do not lead to a contradiction. For example, one can write x + y = z. For x = 1, y = 3, z = 4, one has 1 + 3 = 4. The 4 on the right-hand side does not contradict the definition for these symbols and their addition. This is not so for y = 6. Hence, x = 1, y = 3, z = 4 "satisfies" this equation, or the equation "holds for" x = 1, y = 3, z = 4.
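This substitution idea can be sketched in a few lines of code (an illustration of my own in Python; the article itself contains no code):

```python
# Illustrative sketch: an assignment of values to the variables of
# x + y = z either "satisfies" the equation or it does not.
def satisfies(x, y, z):
    """Return True when the substitution makes x + y = z hold."""
    return x + y == z

print(satisfies(1, 3, 4))  # True: 1 + 3 = 4, so the equation "holds for" these values
print(satisfies(1, 6, 4))  # False: substituting y = 6 leads to a contradiction
```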

In mathematical logic, the term "satisfies" is used when a special substitution process is used so that a formal set of formulas, such as {A&B, (A&B)&C}, yields a "T" for each of these. In abstract model theory, one takes "language constants" a, b, c, etc. from a formal language and assigns each to specific members a', b', c', etc. of a set taken from set-theory. Say a' = 1, b' = 2, c' = 3, etc. Consider the predicate "x is going to y." For this, the x and y are ordered, so we let them be represented by ordered pairs. [These are the same objects one usually sees in basic Cartesian graphing.] For this predicate, in the set theory, the a', b', c', . . . are assigned to a set of ordered pairs {(a',b'), (b',c'), etc.} = P. But P does not contain (a', c'). The a', b', c', etc. can be specific mathematical objects like the natural numbers.

Now suppose that "a" denotes John, "b" denotes Bob, and "c" denotes Pete. Then, upon substituting (a,b) via a change of notation, if (a',b') is a member of P, then one states that P "satisfies" (a,b). Notice that this is for the particular predicate "x is going to y." Also notice that (a,c) does not satisfy P. So this is a rather specific substitution process. Then P is said to (formally) model (a,b) and not to (formally) model (a,c).
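For the concrete-minded reader, this toy interpretation can be written out directly (a hypothetical sketch of my own, not part of the article):

```python
# "a" denotes John, "b" denotes Bob, "c" denotes Pete; the interpretation
# assigns the language constants a, b, c to the set members a' = 1, b' = 2, c' = 3.
interpretation = {"a": 1, "b": 2, "c": 3}

# The predicate "x is going to y" is assigned the set of ordered pairs P,
# which contains (a', b') and (b', c') but not (a', c').
P = {(1, 2), (2, 3)}

def formally_models(pair):
    """P formally models (x, y) exactly when the interpreted pair is a member of P."""
    x, y = pair
    return (interpretation[x], interpretation[y]) in P

print(formally_models(("a", "b")))  # True
print(formally_models(("a", "c")))  # False
```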

I have used the word “informal.” This means that the words used to relate the beginning and ending statements in a proof include a natural native language such as English, French, Russian and the like that has content. [Content means all of the impressions that a native language evokes within the mind of the reader.] Within many native languages you have words or their equivalent such as and, or, not and a few others. The meaning of these words is intuitive and comes from their use in everyday conversation. A description in words evokes within the mind images or impressions that, in many cases, take the place of a perceived event that can occur exterior to the mind – in objective reality. In this case, you can associate the term “truth” with the occurrence of a perceived physical event. Using this operational definition, a majority of individuals would agree, I think, that the following is linguistically correct. In a laboratory experiment, you describe a set of initial conditions H (the hypotheses). You then make two observations. A physical event described by D_1 occurs. A physical event described by D_2 occurs. As you prepare your notes you write, “Based upon the conditions H, physical event D_1 occurs and physical event D_2 occurs” to convey the results of the experiment.

Explicitly, using different modes of expression, this idea is translated to mathematical "truth." From a given finite set of hypotheses H, a statement "E is proved" is explicitly demonstrated. From the same set of hypotheses, a statement "S is proved" is explicitly demonstrated. Putting the two proofs together and applying the accepted rules of informal classical logic, the statement "E and S is proved" is fact via an acceptable collection of written symbols, where "E and S" is the last step of the "proof." "Thus it has been demonstrated."

One can then apply other accepted rules of classical logic and obtain the informal word-forms E or S; S or E; if E, then S; if S, then E; E if and only if S; S if and only if E explicitly as the last step in a proof using H. I point out that, technically, each member of H can be considered as "proved" since, even if not used for any further demonstration, you may write each member of H as a step in the proof. Then each of the statements "E is proved," "S is proved," "E and S is proved," "S and E is proved," "E or S is proved," "S or E is proved," "if E, then S is proved," "if S, then E is proved," "E if and only if S is proved," "S if and only if E is proved" is a demonstrated fact. These ten statements are not actually part of the informal mathematical proof itself but are external observations. Using classical logic external to the proofs themselves, these results can be stated as follows: (L) if H is given, then (substitute any one of these 10 statements).

No matter what the word “true” might entail, there is a pattern of how this word applies to classical logic. If one substitutes for the word “proved” in (L) the word “true,” then the classical “truth-table” pattern is produced. Thus classical truth is being modeled by explicit procedures involving allowed linguistic processes. Personally, I almost never use the word “true” in constructing an informal proof. I do use the word “holds,” meaning that you are in fact “given” or have explicitly “proved” some statement and, hence, the statement can be used in a proof.

The important word here is “modeled” where this has the informal meaning “behaves in a similar manner or has a similar pattern.” At this point in this discussion, this translation of classical truth to mathematical truth has nothing else to say about the application of this classical truth concept to any other discipline.

There is one very important aspect to this modeling procedure. Assuming classical consistency – and I stress that consistency is often assumed from empirical evidence only – if a statement "E is proved" holds for a consistent set of hypotheses, then the statement "not E cannot be proved" or, for our purposes, "not E is not proved" holds for the same set of hypotheses. Under the same consistency requirement, if "not E is proved," then "E is not proved." Assuming classical logic, a pencil-and-paper activity can yield a proof or cannot yield a proof, and not both can occur. That is, either "E is proved" or "not E is proved," and not both can be so proved. Notice that if you substitute, in the case of consistency, for "proved" the word "true" and for the words "not proved" the word "false," you get the classical truth-table pattern for "true" and "false." [You can also substitute the symbols "1" and "0," respectively, and this yields the binary tables.] [It is possible to show mathematically that, with respect to certain conditions, neither a statement E nor its negation can be established by the methods allowed. For example, the famous axiom of choice cannot be established from the other usual axioms of set-theory with respect to abstract model theory.] Thus classical mathematics is said to be two-valued in terms of proving a theorem. Only two things are possible, one or the other but not both. For this aspect of "mathematics," the aspect that yields the Fields Medal if you're young enough, "truth" must be explicitly displayed by an explicit proof.
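The substitution pattern just described, "proved"/"not proved" for "true"/"false" for 1/0, can be made concrete with a small table (my own Python sketch):

```python
# Build the classical table for "E and S" and read it three ways:
# "proved"/"not proved", True/False, and the binary 1/0.
def as_words(v):
    return "proved" if v else "not proved"

def as_bit(v):
    return 1 if v else 0

table = [(E, S, E and S) for E in (True, False) for S in (True, False)]
for E, S, conj in table:
    print(f"E {as_words(E):<10} S {as_words(S):<10} "
          f"-> 'E and S' {as_words(conj):<10} (binary {as_bit(conj)})")
```

Whichever substitution is made, the same two-valued pattern appears, which is the sense in which classical truth is "modeled" by these explicit procedures.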

For basic two-valued mathematical truth, in the sense of proof, the concept of what is “mathematically true (holds)” or “mathematically false (does not hold)” is not a vague physical or philosophic concept but, rather, it is very explicit and absolute in character.

Aspects of Proving Statements Mathematically

The most mysterious and myth-prone area of mathematics is the area of acceptably "proving" a theorem. I don't mean by this category re-proving a statement, although this is done a great deal of the time. I mean proving new things. This means "writing down" a statement that has never appeared before at the end of an informal proof, and then attempting to write an acceptable proof which has this statement as its conclusion. What is most misunderstood about this process is how it begins.

There are two seemingly distinct types of beginnings to this proof procedure. You have a beginning that is very concrete in nature and another beginning which is vague and highly experiential. We are sometimes told that the true mathematics is “the pure mathematics, the abstract mathematics” where the written statements are empty of content or nonsense symbols. This is one of the great myths. For the very definition of an abstraction says that you start with concrete processes, physical processes or human processes of writing down pretty patterns of symbols.

In the beginning, when one learns to abstract, one seeks a feature that is common to more than one of these processes. For example, there are many different patterns for the symbols we use to represent a positive integer. Consider the strings of symbols 2 = 2^1, 3 = 3^1, 4 = 2^2, 5 = 5^1, 6 = 2^1 x 3^1, 7 = 7^1, 8 = 2^3. Each displayed symbol 2, 3, 4, 5, 6, 7, 8 has what appear to be common characteristics. The numbers 2, 3, 5 and 7 are prime numbers. Each of these 7 positive integers is written in a form where the prime numbers, if there is more than one, are written in increasing order as one proceeds from left to right. (Notice you must know your left from your right.) Using these forms, I cannot find a different way to express the number that involves a different set of exponents. In the minds of some individuals, the idea is that maybe such characteristics persist for all the symbols one might use for a positive integer. So one "abstracts" the characteristics by definition using, at the least, the informal axioms for the positive integers.

Using classical logic one “proves” that an arbitrary positive integer satisfies the characteristics. Finally, using the logical notion of “generalization” either explicitly stated or implicitly, the “Fundamental Theorem for Arithmetic” is established. However, to establish the “uniqueness” of this form, since it is a mathematical form, one needs intuitive knowledge as to what it means to “write symbols next to each other from left-to-right” so as to reproduce the symbolic forms. In general, one uses various definitions that yield common features. This, along with a set of axioms, yields the basic hypotheses from which a mathematical theory is created.
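The common characteristic being abstracted, that each integer greater than one has exactly one increasing prime-power form, can be computed directly (an illustrative Python sketch of my own):

```python
# Compute the prime-power form of n, with primes listed in increasing order,
# e.g. 6 -> [(2, 1), (3, 1)], meaning 6 = 2^1 x 3^1.
def prime_power_form(n):
    assert n > 1
    factors, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))  # the remaining n is itself prime
    return factors

print(prime_power_form(6))  # [(2, 1), (3, 1)]
print(prime_power_form(8))  # [(2, 3)]
```

Trial division always produces the same list, which mirrors the uniqueness the theorem asserts, though the code only illustrates cases and does not, of course, prove the theorem.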

For the other beginning, while building a theory or just for the fun of it, an individual might postulate, from vast experience, that other patterns of behavior or relations between the defined entities may present themselves, where the behavior or the relation is the important concept. These other patterns may not have been the originally conceived common feature, but from "intuition" a theory builder simply might "feel" that "interesting" mathematical results could be produced. Among these interesting theorems are different ways of expressing equivalent notions, ways that help mathematicians comprehend mathematical ideas.

The ability to give an acceptable "proof" is the great art of mathematics. It's learned by first reading thousands of proofs produced by others within the mathematical community. Then one practices on thousands of proofs for theorems that, it's claimed, can be proved since they are problems in textbooks. Usually, some other individual checks your proofs, states why they are NOT acceptable, and sends you back to start over again. Finally, your proofs are accepted, at least on the student level. All of this work has somehow or other altered your patterns of thought. It's this that leads to the "intuition." But it's usually impossible to describe in words a fixed set of rules that would always lead to the "feeling" that pursuing a certain avenue of thought will produce an "interesting" mathematical result. Thousands of new mathematical truths are produced every day; they are probably interesting to one or two individuals, but only a few are judged publishable and, hence, interesting to the mathematics community.

Three facts are important. First, there are infinitely many different and new statements that might be a provable theorem or might not be a provable theorem. Once again, I can't give a fixed set of rules that says that one statement is likely to be a theorem and another is not. No mathematician wants to attempt a proof if it's not likely that the statement is a theorem; that is, that it can be acceptably proved. Then, even if it's likely to be a theorem, you must convince yourself that you have the ability to construct an acceptable proof. Otherwise, the statement is simply a conjecture. The second fact is that there are within each mathematical discipline "tricks" or procedures that allow one to construct a proof by drawing little pictures, by taking finite examples, by examining the patterns presented by finite strings of symbols, by adding hypotheses, and by all sorts of concrete visual processes, which are then translated into words and statements that don't reveal the original "tricks."

The last fact is related to exactly how much one writes in a proof. A proof is written in a special style and only in enough detail to be convincing to those individuals you expect to read the proof. I’ve been privileged to write a few thousand interesting and acceptable proofs for new statements. I’ve learned to write them in extreme detail. Then I cut down on the detail to a great degree, but always have available in my notes the detail removed just in case someone needs to be more deeply convinced. One can consider such pure mathematics as an art. But an art that has an extreme absolute character and that’s controlled by explicit logical rules. The research (abstract) mathematician finds the discovery of new mathematical truths (results) a very exciting and rewarding endeavor. [For a recent and interesting perspective on this area of mathematical activity, see Thurston, 1994.] Unfortunately, the Nobel committee is forbidden to award a prize for this type of pure non-applied abstract “science.” Later I’ll show that in one and only one case does this theorem proving aspect, this absolute mathematical truth aspect, have any relation to theological questions.

Mathematical Modeling

For areas that are not considered as sub-disciplines of Mathematics, mathematical modeling is the most important of all categories. This can be a difficult mathematical activity depending upon the material with which one works.

The phrase abstract modeling usually means that you model a particular mathematical theory in terms of general set-theory or some other mathematical theory (also called a structure). This is the least important to the non-mathematics community. In general, the most significant aspect is when a mathematical theory is used to model terms and descriptions that depict entities or processes described by a language taken from a scientific discipline distinct from pure mathematics.

This is often confusing since a particular mathematical theory might use some of the same terms that appear in the other discipline’s dictionary. Many textbooks and research papers do NOT distinguish between these different uses of the same term or expression.

In a physics paper, the reader is to simply "know" that the term "velocity vector" refers to an object that measures two quantities associated with the physical concept of motion, and not the pure mathematical entity associated with the mathematical vector as used in the subject of linear algebra. Do basic physics books ever mention the differences between these two concepts?

There are two types of processes termed mathematical modeling. The first type requires that there be at hand an actual mathematical theory and, at least, a discipline dictionary. Giving a general description for the modeling procedure is very easy, but applying the vague operations of the procedure can be very difficult even in this first case. For informally presented mathematical theories, the most straightforward approach to this first type of mathematical modeling is to define a domain or universe of discourse for a mathematical theory. This is a collection of mathematical objects, usually from the informal mathematical theory, that comprise the entities from which mathematical functions and relations are built. After this is done, the next step is to construct a fixed correspondence between entities that are named in the discipline dictionary and elements in the domain of the mathematical theory. Then you take relations that are assumed to exist between the discipline entities and correspond them to mathematical relations between elements of the domain. The correspondence constructed is called an interpretation. (These ideas can all be related to a weak set-theory structure, and the objects take on names from formal logic such as predicates, variables, and constants. A formal notion of interpretation is relative to such a weak set-theory.)
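A toy version of such a fixed correspondence can be sketched as follows (all names here are my own hypothetical choices, not taken from the article):

```python
# Domain of discourse: a collection of mathematical objects (here, real numbers).
domain = {0.0, 9.8, 19.6}

# The interpretation: a fixed, never-varying correspondence between named
# discipline entities and elements of the domain.
interpretation = {
    "speed of the dropped ball at t = 0 s": 0.0,
    "speed of the dropped ball at t = 1 s": 9.8,
    "speed of the dropped ball at t = 2 s": 19.6,
}

# A discipline relation ("is slower than") corresponds to a mathematical
# relation (<) between elements of the domain.
def is_slower_than(entity1, entity2):
    return interpretation[entity1] < interpretation[entity2]

print(is_slower_than("speed of the dropped ball at t = 0 s",
                     "speed of the dropped ball at t = 1 s"))  # True
```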

It is not the purpose of this article to discuss, in any detail, how an interpretation is constructed. It takes many, many years of technical training within various disciplines before the interpretation methods are understood intuitively. Even after such training, the actual construction process requires a certain amount of talent and creativity that cannot be described properly. One must intuitively know, somehow or other, that certain paths will not lead to such a construction, while other paths are the most likely to pursue. And one must also know the "acceptable" construction methods, since each construction will be examined by others in order to see that it conforms to specific rules.

Within each physical science discipline certain basic statements are accepted.

It is very important to realize that these statements include information about entities that can be observed by the human senses, and entities that cannot be so observed. The entities that cannot be so observed are often originally conceived of as "real" or "imaginary" during the development of a mathematical model. Then, as the model develops, assumed real entities come to be accepted as imaginary, or assumed imaginary entities come to be accepted as real.

Infinitesimal quantities and instantaneous velocities and accelerations were apparently considered by Newton as real measures that correspond to real physical behavior. The theory of indivisibles requires one to believe that certain measures, dx, for real physical behavior are of such a "small" nature that they cannot be further subdivided into yet "smaller" measures. The basic criticism of all this postulated behavior is that it does not correspond to observable real-world objects. However, the extreme success achieved in modeling and predicting the behavior of gross matter that can be observed using these postulated measures is what has produced almost all of our technical advances. New mathematical processes have shown that the behavior of accepted physical entities actually yields, through rational argument, the necessity that infinitesimal measures exist for other distinct "physical" entities. It is but a matter of choice as to whether or not these new entities are accepted as real objects in some form of objective reality that requires infinitesimal quantities in order to characterize their behavior.

When photons (apparently not the original term used) were first postulated, Einstein stated that they were imaginary representations for energy. But since Einstein won a Nobel Prize based upon the postulate that these Planck-defined energy elements are "instantaneously absorbed" by an electron, with the electron acquiring the total energy of the photon, almost all members of the scientific community insist that they are real physical objects. The same imaginary character was first assigned to other famous virtual particles and processes, and especially to "strings." In all of these areas where new entities are postulated that cannot be directly observed by human senses or machines, it is not their ability to predict, often only approximately, the behavior of gross matter that has led to their acceptance as objectively real objects. The reasons for any such acceptance are largely philosophical, political and economic rather than scientific. Notwithstanding this fact, the methods of mathematical modeling are not dependent upon such acceptance and apply whether or not the postulated objects, along with their postulated behavior, exist in objective reality.

In order to construct a mathematical model, physical statements must be expressed in the same manner as the mathematical theory (usually the first-order predicate calculus) so that they can be interpreted. When these statements are interpreted, they must also be consistent with respect to the axioms that generate the mathematical theory. When this is the case, then the discipline statements can be adjoined to the mathematical theory as additional hypotheses. There are two general methods, with combinations of these methods, used to construct an interpretation relative to the physical sciences. To examine the behavior of a particular physical system, measuring devices are constructed from a strict set of rules. Although human senses are not considered as infallible devices, it is sometimes necessary to include such senses among the measuring devices. These devices are then placed in a strictly defined specific manner to record specific information. The next step is to correspond this information, in a strict and never varying manner, to entities within an existing mathematical theory. The measurements are used as a representation for physical behavioral characteristics. Often, when a physical law is stated, it’s stated in a manner where it is understood that the law states a relation between those measurements that characterize the behavior.

There is a second approach to this first type that is often necessary and that depends upon human comprehension of simple descriptive terms. In quantum physics, one has such a statement as “. . . all interactions between any two particles take place through the emission of a ‘field particle’ from the first particle and its subsequent absorption by the second particle.” [Duff, 1986, p. 26] The descriptive terms “emission” and “absorption” can be strictly related to mathematical “operators” that, when interpreted, yield the exact same description as appears in this quotation [Herrmann, 1983].

Then there is the concept of “symmetry” within quantum physics and how this concept is modeled by abstract mathematical group theory. There is no measuring device, as such, other than human mental imagery that yields a “picture” for symmetry or non-symmetry at the microscopic level of quantum physics. Group theoretic symmetry corresponds in a strict and absolute manner to a description for the behavior of fundamental particles. Duff indicates, relative to quantum electrodynamics for example, how the concept of symmetry corresponds to a description of behavior. When one uses this theory to describe the probabilistic behavior of a particle such as an electron, this description has associated with it a wave-function with an arbitrary “phase-factor.”

A physical alteration in the behavior of the electron is often modeled by a “mathematical transformation” of this wave-function or various aspects of this function such as the phase factor. In this case, “Successive transformations, each of which changes the phase, have an end result which does not depend upon the order of the sequence in which they are applied.” [Duff, 1986, p. 58] This is further related to the behavior of emission and absorption of virtual photons. Such transformations are modeled by the general notions associated with the mathematical theory of Abelian groups. Hence, such physical concepts correspond not to the numerical measurements recorded on a device, but to behavior patterns as they can be comprehended by the human mind – patterns that are interpreted as being the same patterns exhibited by an abstract mathematical structure.
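The order-independence Duff describes can be sketched in a few lines of Python. This is only an illustration, not part of the original text: a one-component “wave function” value and two arbitrary phase angles are hypothetical choices, and the check simply confirms that phase transformations commute, which is what makes the transformation group Abelian.

```python
import cmath

# Hypothetical one-component "wave function" value (unit magnitude)
# and two arbitrary phase shifts, in radians.
psi = complex(0.6, 0.8)
theta1, theta2 = 0.7, 1.9

def shift_phase(psi, theta):
    """Multiply the wave function by the phase factor e^{i*theta}."""
    return psi * cmath.exp(1j * theta)

# Applying the two transformations in either order gives the same result,
# since e^{i*a} * e^{i*b} = e^{i*(a+b)}: the group of phase shifts is Abelian.
order_12 = shift_phase(shift_phase(psi, theta1), theta2)
order_21 = shift_phase(shift_phase(psi, theta2), theta1)
assert abs(order_12 - order_21) < 1e-12

# A phase shift also leaves the magnitude (probability amplitude) unchanged.
assert abs(abs(order_21) - abs(psi)) < 1e-9
```

The assertions pass for any choice of angles, which mirrors the group-theoretic statement that the end result “does not depend upon the order of the sequence.”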

The previous example of physical modeling rests upon a strict correspondence between specific terms taken from a physics dictionary, terms that describe entities which appear to yield the described behavior. There is a very important example where we model only actual observed behavior by means of a strict correspondence and are not concerned with hidden entities that yield such behavior.

Humans apply various logical processes to collections of words, sentences and other written forms. All of these forms can be considered as a collection of finitely long strings of symbols. In the discipline of linguistics, the entire collection of symbol strings formed from a finite alphabet forms an intuitive set. Denote this set by the symbol  W  where  W  is the name for the set within the mathematical theory called standard ZF set-theory.

The logical processes used are applied to members of  W. This is interpreted by saying that we apply a logical process to “subsets” of  W  or, in the standard set theory notation, the logical processes are applied to any  A  subset of  W. Thus a logical process can be interpreted by a special function that exists within standard set theory and is often denoted by  C. When this logical process is applied to subsets of  W, subsets of the same set  W  are obtained. This is interpreted by the informal statement “for each  A  subset of  W, one has  C(A)  subset of  W.” One of the basic processes that differentiates certain types of logical deduction from other games we play with strings of symbols is that, when we apply such deduction to a set  A, we at the very least get the set  A  back again. Although it’s hoped we get more than this, this minimal property is modeled by saying “If  A  subset of  W, then  A  subset of  C(A).” If I list one or two more very obvious and basic properties about such logical processes and if they are modeled within standard set theory, then what is obtained is the basic axiom system for things called finite consequence operators.

Now using properties of standard ZF set theory and these consequence operator axioms one can, by mathematical reasoning, obtain new properties about finite consequence operators. These properties can be easily verified in the “laboratory” for they predict perfectly what occurs when the human being takes a collection of statements, logically combines them together and writes down a set of conclusions. Finite consequence operator theory is probably the most empirically verified mathematical theory that has ever existed. At no time is one concerned with any aspect of the physical brain activity that may yield the written statements. Note that finite consequence operator properties do follow from the rules of classical logic and they can represent a collection of physical processes.

All aspects of physical mathematical modeling assume one underlying condition, the one condition assumed by science. It states that

***the physical-world behavior we perceive, comprehend or even perform, at the very least, is associated with the logical processes used to produce the mimicking mathematical theory, a theory that is strictly associated with the terms taken from a discipline dictionary and that predicts such behavior.

The second type of mathematical modeling occurs when one first observes or imagines within a nonmathematical discipline certain recurring and simple patterns that can be described in written form. Further, logical deduction from these patterns seems to follow the deductive processes used within mathematical reasoning. In a few cases, an individual may not know of a particular mathematical theory that would serve as a basis for an appropriate interpretation. When this happens, one may be forced to abstract the properties and create a new mathematical theory prior to the interpretation process. Apparently, this is what Newton did when he created infinitesimal calculus as it applies to mechanics. Newton considered perceived knowledge of mechanical behavior as the basis for his geometry. His geometry used concepts for observed physical motion, concepts that Newton claimed involved infinitesimally small measures. However, once the mathematical theory is created, the exact same requirements for a strict interpretation must be maintained.

Mathematical Truth and Its Correspondence to Discipline Truth

The classical two-valued mathematical “truth” relative to “proving things” transfers to the discipline being modeled in one and only one way. Certain statements using the discipline dictionary, when expressed in a very simple manner (an informal first-order language), will under the strict interpretation correspond to statements within a mathematical theory. Under the above condition ***, if certain statements from the discipline correspond to the non-logical axioms or hypotheses of a mathematical theory, then any other interpreted discipline statement E is said to be a rational statement, where “rational” means that statement E can be obtained through application of the exact same logical processes that produce the mathematical theory. This type of mathematical “truth” relates only to rational comprehension with respect to specific logical processes and nothing more. If the discipline language claims to describe a physical event, then mathematical “truth” (proofs) under this basic transfer is not related to whether or not an event will or will not occur. It is not related to the concept of “truth” as a measure of “fact.”

The concepts of “what is a physical fact,” “what is not physical fact,” “what is an observable” or “what is not an observable” are not aspects internal to mathematics but are exterior notions. The assumptions that must be made cannot be established with mathematical certainty. Only the concept of rationality is certain with respect to mathematical modeling. [Note: This does not mean consistency is certain, since the consistency of much mathematics is only empirical in character and, hence, not certain.] I re-state a previous contention.

If the discipline statements are “accepted” as behavioral “facts” about physical events, then under the basic assumption that what we can comprehend or observe about nature satisfies the same logical patterns as those exhibited by a mathematical proof, then the conclusions of the mathematical theorems of the theory, when strictly interpreted, will yield “facts” about how nature behaves.

Thus, under these conditions, if a set of discipline statements is listed in a category called “accepted fact,” then any statement deduced from the mathematical model would also need to be listed as “accepted fact.” [There are certain aspects of quantum theory that have been postulated to possibly follow other, non-classical patterns. But the logic used to demonstrate this is classical logic.]

A significant prototype for the concept of “accepted fact” within physical science is the notion of “occurrence.” Under the above conditions, if H contains statements that describe the occurrence of physical events and the deduced conclusions contain statements K that describe other distinct occurrences, then when the H events occur, the K events should occur. All of this, however, requires that each such discipline statement be constructed in the special form of classical logic and that statements be combined together in the same special way as they are within the mathematical theory. This means you use a simple intuitive first-order predicate language, or other specific types, for those portions of the discipline language that have been mathematically modeled. In summary, you have replaced mathematical “truth” with a new interpretation that follows the same pattern as mathematical truth. Because of other concerns, the statements so modeled may be “assumed to be fact.” The concept called “fact” that you are modeling by, say, classical two-valued mathematical modeling must satisfy properties such as: if the statement D_1 is fact and the statement D_2 is fact, then the statement “D_1 and D_2” is fact.

However, the most misunderstood aspect of such scientific discourse is that what has been accepted as a set H of “factual hypotheses,” relative to statements that cannot be directly verified by scientific means, need not be the correct set of hypotheses that yields the verified behavior of a physical system. There are many different sets of hypotheses that postulate different entities or different processes and that, after strict mathematical modeling, yield the exact same verified statements that are deduced from such an H set of hypotheses [Herrmann, 1983, 1993b, 1994b].

Thus, it is impossible to know which set of hypotheses is the correct set if only indirect verification is possible. This is one of the strongest arguments that, in general, the acceptance of certain sets of hypotheses is not based upon science but rather is often based upon philosophical, political or economic concerns.

Yet another important aspect of these modeling procedures is also often misunderstood or even purposely omitted. All of the above depends upon the interpretation. If the interpretation is altered, even in the slightest, then usually the above correspondence to discipline “fact” or “rationality” no longer exists. Although physical system behavior can still be observed and described, it is no longer being modeled by the specific mathematical theory, simply because the interpretation has been altered or, indeed, for some mathematical theories, no known physical interpretation exists. The relation between physical system behavior and whether or not biological creatures can comprehend such behavior completely, since such comprehension is based upon the use of strict logical patterns, leads to some interesting philosophic concerns. I point out that Marx advocated that science abandon its reliance upon certain logical patterns associated with mathematics and rely, instead, upon his dialectic. Of course, this has not happened.

But what if classical logic is not the actual logic used by a science-community to predict physical behavior? Is there a mathematical modeling procedure for this? Yes. It has been shown that the notion of the finite consequence operator can be used to represent all forms of known deduction. This deduction need not correspond in any manner to the “truth-table” model. It corresponds to the “occurrence” notion only. That is, if the (consistent) hypotheses occur and the physical processes satisfy the logic-system used by the science-community, then the “proved” conclusions will occur. Since we really don’t know what future scientific endeavors will require, including, as some suggest, changes in the logical processes used, finite consequence operators and their equivalents, the “general logic-systems,” would be the proper general method to employ when modeling physical behavior (Herrmann, 2006).

Errors in Mathematical Modeling

When Newton began to mathematically model physical behavior, he needed general physical concepts that were absolute in that no physical process could alter the concept. He picked Galileo’s idea that time could be considered as such an independent variable. He had other absolutes as well, such as measures of length. He allowed time and length to take on real-number and infinitesimal values. From the time of Newton until 1824, the language of infinitesimals was the major numerical approach to the mathematical modeling of physical system behavior. In 1824, Abel discovered that the then-known language of infinitesimals was mathematically inconsistent [Abel, 1824]. This led to an alteration in mathematical theories to exclude the terms and previously understood properties of infinitesimals. BUT the scientific community has continued, until this very day, to model physical behavior by use of the language of infinitesimals, hoping that it does not use the inconsistent part. This is one area where error is possible in modeling, but the major error occurred in 1905 and still persists.

A. Einstein, who had poor mathematical understanding at the time, developed what he claimed was a mathematical model for the strange behavior of light. He claimed that he was able to predict laboratory experiences from a more fundamental character of physical reality. His and almost all modern approaches to “relativity” questions make the same fundamental logical error [Herrmann, 1993b].

Einstein used a special physical language and an operational definition for how time is to be measured. This definition is then modeled mathematically via measures, and the measures correspond to real numbers. He then makes certain “obvious” statements about these measures, retaining the language of light propagation. He continues by constructing what is claimed to be a fixed correspondence into a mathematical theory. However, throughout his assumed mathematical arguments, he often makes the error called the model-theoretic error of generalization.

This error means that he changes the interpretation and claims, without the possibility of proof, that what is established with respect to the language of light propagation and the first interpretation of measured time also holds true for the concept of time, in general, and, hence, any measurement of time. He and most modern scientists generalize these results, without possible proof, to another domain. Indeed, they generalize from a light propagation language for time measurement, and what they claim is not absolute time, to absolute time as measured by infinitesimals. These procedures can produce logical contradictions. [Reconsideration of the actual foundations for “relativity” and the correct theory of infinitesimals has led to a new theory that not only predicts as accurately as the Einstein theory but shows explicitly why certain measurements for “time” are, indeed, altered [Herrmann, 1993b].]

Suppose that you use a digital clock to measure time. The measures of time you use can be corresponded to the nonnegative integers. You use the theory of nonnegative integers as your mathematical theory for model building. One of the theorems of this theory states that if you take any nonempty set of nonnegative integers, every member of which is less than 100, then there exists in this set a number m such that every member of this set is less than or equal to m. Under this correspondence, your measures of digital time also have this property. Suppose, on the other hand, you used as a measure Einstein’s definition of the time concept. Einstein “time,” prior to my alteration in the concept, must correspond to the real numbers, since the calculus is used to derive relativistic conclusions. But the above theorem about nonnegative integers is not and cannot be a theorem about the ordering of the real numbers. A worse error, in the use of the calculus for physical modeling, is the continued use of infinitesimal arguments without applying the correct theory or considering the realism question.
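The ordering contrast in this paragraph can be made concrete with a small sketch (Python here; the particular clock readings are invented for illustration). A bounded, nonempty set of integer clock readings always contains a greatest element, while the open real interval (0, 1) contains no greatest element: for any candidate below 1, a strictly larger member can always be produced.

```python
# A hypothetical nonempty set of digital-clock readings, each less than 100.
# The theorem for nonnegative integers guarantees a greatest member m.
readings = {3, 17, 42, 99}
m = max(readings)
assert m in readings and all(t <= m for t in readings)

# For real-valued "time" the analogous statement fails on (0, 1):
# given any candidate maximum c < 1, the midpoint (c + 1) / 2 is a
# member of the interval that is strictly larger.
def larger_in_unit_interval(c):
    """Return a member of (0, 1) strictly greater than c, for 0 < c < 1."""
    return (c + 1) / 2

c = 0.999
bigger = larger_in_unit_interval(c)
assert 0 < bigger < 1 and bigger > c
```

The integer case succeeds because the nonnegative integers are well-ordered; no such maximum theorem transfers to the ordering of the reals, which is the point of the model-theoretic error described above.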

The theory of infinitesimals was corrected by Abraham Robinson in 1961. It turns out that infinitesimals do not behave like the real numbers. We must not use them as a direct basic model for certain types of real-world behavior. [They can have an indirect effect upon physical system behavior.] For example, any nonempty set of real numbers between 0 and 1 has an associated real number called a “least upper bound.” However, the set of infinitesimals between 0 and 1 does not have a least upper bound. This makes the infinitesimals difficult to conceive of geometrically. Further, we have the important question of realism. It’s claimed that we can indirectly observe the effects of the “top” quarks as predicted by a mathematical model and that, therefore, top quarks must exist in objective reality although we cannot perceive them directly. Under the same reasoning, it must be the case that entities with infinitesimal charge exist, that there can be a collection of infinitesimally many neutrons, and that infinitesimal time exists. Why? Because we use an infinitesimal mathematical model to model charge on surfaces and to predict the critical mass of plutonium.

Most particle physicists ignore the fact that the theories used in subatomic physics to predict, indirectly, evidence for the existence of the top quark also imply the existence of subparticles as the “true” constituents of matter and fields. These facts indicate clearly that scientists choose certain entities from a mathematical model and call them real while not applying the same procedure to other possible constituents, for what can be shown to be philosophic reasons.

For physical system behavior, infinitesimal models predict very accurately. They are the basic model for almost all macroscopic and large-scale physical system behavior. The fact that they are, at the least, accurate in various domains requires, in certain cases, frequent updating of information.

Theology and Theorem Proving

Now that the necessary mathematical ideas have been discussed, it’s possible to relate these concepts to theological notions. Indeed, I am the first to do this for significant theological notions. Any possible hidden relation between the Scriptures and numerology will not be discussed, where numerology refers to patterns of numbers associated with the Greek or Hebrew alphabets.

Except in one respect, it’s conceded that the pure mathematical exercise of theorem proving is theologically neutral. If one assumes that there is no God, that humankind evolved by chance, and that humankind is related only to physical processes, then the exercise of “proving things” is just that and can have no relation to the concept of a Divine being.

Suppose that you hypothesize only that there is a God who created the universe and, in His own image and somehow or other, created humankind. In this case, “proving things” has a very specific character. [Since portions of this article may be read separately, I repeat a statement made previously.] For humankind to apply mathematical models and obtain accurate predictions for the behavior of perceived physical systems certainly means that, as far as perception is concerned, our simplest two-valued, “absolute in character” reasoning processes, the processes used in mathematical reasoning, mirror similar processes in the physical world. There may be many other processes that cause a physical system to develop over time and of which we can have no comprehension, but an exceptional amount of evidence shows that those processes we can comprehend have the same logic-like properties as those we use to “prove things.” There is considerable testimony, besides physical evidence, that indirectly verifies this basic belief.

As Nobel laureate de Broglie stated: “[T]he structure of the material universe has something in common with the laws that govern the workings of the human mind.” [March, 1963] But, from the perspective of Divine creation, our mental processes were created by God and, if Paul is correct [Romans 1:19-20], we can “clearly” understand what is to be known about God’s attributes. As C. S. Lewis states it: “What appears to be my thinking is only God’s thinking through me.” [Lewis, 1978, p. 29] “[E]vents in the remotest parts of space appear to obey the laws of rational thought . . . . There is in our human minds something that bears a faint resemblance to it.” [Lewis, 1978, p. 32] “According to it what is behind the universe is more like a mind than it is anything else we know.” [Lewis, 1960, p. 32] “He is the source from which all your reasoning power comes: . . . .” and “He lends us a little of His reasoning powers and that is how we think: . . . .” [Lewis, 1960, pp. 52, 60]

Thus, with respect to the Divine attributes described within the Bible, such as what He has created, the mathematical process of theorem proving gives evidence for an absolute Divine mental process as well as evidence for the hypothesized statements. No matter what one may say about any non-two-valued logic, such as the dialectic arguments used by liberal theologians and philosophers, two-valued classical logic is the most absolute in character.

Even though some may doubt the existence of a supernatural God that satisfies the hypotheses, mathematical modeling and theology can be technically related in a specific way that expands upon the theorem-proving aspect. This relation is discussed in the next and last section of this article. It actually enhances the evidence for the above hypotheses.

Theology and Modeling

[The methods used to mathematically model theological notions are the same as those used to model physical behavior via a mathematical structure and interpretations. However, the great difference is that the structure used is called a nonstandard model, and the objects being consistently interpreted were not originally considered to have theological significance. A nonstandard model appears necessary, for a significant modeling method that describes God’s attributes is the method of “comparison.” God’s Biblically stated attributes are often compared with those of His created.]

I mention again that there are certain logical processes, called “dialectic” processes, that are not two-valued in the above sense. Liberal theologians and philosophers seem always to use these procedures to argue for their more controversial notions. There’s a good reason for this. By clever selection of a set of theses, a set of antitheses and a specific synthesizing process, one can give a dialectic argument for the acceptance of almost any pre-selected statement. Further, there are many dialectic arguments that have no possible dialectic consequences. These facts can be established by using the absolute nature of two-valued logic, for most of the basic dialectic concepts can be modeled mathematically. [Gagnon, 1980; Herrmann, 1992]

Of considerable significance is the fact that in 1981, at the Annual Meeting of the American Scientific Affiliation held at St. David, PA, a paper was given entitled “Mathematical Philosophy – Status Report I” that, by means of pure logical analysis, specifically predicted certain consequences with respect to Marxism and other such dialectic-controlled philosophies. “[C]ertain political and philosophical systems that have infiltrated numerous modern human cultures are based upon logically inconsistent and, thus, contradictory foundations. . . . [T]he paramount inconsistency which pervades these systems is closely related to human rights considerations and, in particular, how these irrational systems view human behavior, wants, and aspirations. It is the close proximity of these demonstrable contradictions to these highly emotional human factors which will tend to generate certain significant consequences. . . . Unless these fundamental errors are eradicated from these systems, then these cultures appear to be doomed to collapse from inherent logical inconsistencies. . . More importantly, it is a highly significant fact that the internal logical collapse of these inconsistent systems can take any diverse form. This collapse could easily involve an irrational action which would envelop all of mankind in an unprecedented holocaust.”

Dialectic-controlled systems contradict a two-valued mathematical model for the behavior of the Divine mind [Herrmann, 1994a]. Two-valued logic is the basic logic utilized by humankind since the beginning of language. It’s especially common throughout cultures that believe that such concepts as freedom, life, and even forms of slavery, among others, are absolutes. The significance of the above quoted statements is that the prediction made in 1981 has, in many respects, come true. This is a clear and exact example of the dangers of using dialectic arguments for any purpose when two-valued logic is available. For this reason, I must reject the dialectic arguments put forth by any theologian or philosopher when a two-valued absolute logic will suffice.

Mathematical modeling of a nonmathematical discipline often depends upon absolutes. Absolutes are the simplest possible terms, descriptions or concepts that are fixed and do not vary in meaning. If they are rules or instructions, then their simplicity is such that the vast majority of individuals would arrive at the exact same results after individually following the instructions. The use of discipline-language absolutes is not necessary in order to construct a specific model. However, it is self-evident that if absolutes are not modeled, then as discipline terms or concepts alter their meanings, the model’s interpretation may fail or, indeed, there may no longer exist an interpretation. Science is mostly interested in fixed, immutable physical laws and processes and, hence, in the strict modeling of absolutes.

For a Christian, there are three biblical proof-texts that are so simple in character that their meanings could hardly be distorted by an alteration in their literalism. Malachi 3:6 should be considered an absolute statement. Certainly, whatever God states or promises, and His attributes, should be considered absolute in character. Descriptions can be conditionally absolute; that is, they are absolute except under a specific condition. Some of God’s statements and promises are conditional relative to time, but descriptions of Divine attributes can also be absolute “in time” from the viewpoint of the language used and the logic employed. Paul implies in Romans 1:19 that what can be known about God by all of humankind is “plain” [NIV, Today’s English, Jerusalem, RSV, New English], “apparent” [Greek Literal], “manifest” [KJ], “known instinctively” [Living], for God has made it “plain” [NIV, Today’s English, Jerusalem], “shown it” [KJ, RSV], “disclosed it” [New English], “manifests it” [Greek Literal]. For if this were not so, then human beings would have an “excuse.”

The only reasonable way that the absolute statements made by God can be “plain” to all humankind and absolute in their meanings (i.e., they do not change) is that they retain the same meaning from the moment they were written throughout historical time. The concept of “plain” to all humankind requires that the meanings not be hidden from common human comprehension and not be “plain” only to a selected few philosophers or scientists. Further, 1 John 2:27 tells exactly how the Bible scribes obtained the word-forms. Whether under Old Testament or New Testament anointing, the word-forms are not those that might be selected by the scribes, but appear to be word-forms selected by the Holy Spirit of God. They are word-forms that, at the time written, have absolute meaning, whether literal, figurative or another special linguistic construction, and that are understandable in their original meanings.

It is “plain” to me that any other interpretation of the method used by the scribes to obtain the absolutes of the Bible, any other interpretation of the word-forms but the common meaning at the time the word-forms were presented, and any other method for comprehending these absolutes except a two-valued logic, would be contrary to instructions given specifically within the Scriptures. Special processes that influenced the selections of words that appear in the Bible have now been mathematically modeled [see influences]. Of course, the basic fixed contextual meanings of such words lead to concepts, where concepts may be modeled by the describing-set technique. This technique allows for the illumination or refining of concepts to improve comprehension.

To further justify these previous conclusions, consider the G-model [Herrmann, 1982] and the D-world mathematical model [Herrmann, 1994a] that, using a fixed interpretation, model explicitly the Bible’s absolute description for God’s general behavioral attributes [see attributes] and how God’s mind functions when compared with human mental processes, respectively. [Note: The terms used in these older articles have been altered; see changes. This model is now termed the GD-model.] Then there is the General Grand Unification model (GGU-model) that, besides describing a cosmogony and solving the general grand unification problem [Herrmann, 1988, 1994b], contains, using a strict re-interpretation, a mathematical description for various Divine creation scenarios, one of which is the MA-model. The MA-model establishes that a Biblically literal interpretation in terms of the explicit linguistic statements given in Genesis 1 leads to a scientifically rational creation scenario. (For examples, see the last sections of my book “Science Declares Our Universe IS Intelligently Designed,” Xulon Press, 2002, and my personal belief statements starting with beliefdvd.)

For specific examples of absolutes, consider Job 33:12: “. . . I will answer thee, that God is greater than man.” Then seek out the attributes that, when compared with man, allow one to say that God is “greater than man.” Psalm 95:3 states “. . . how profound your thoughts.” Using the Scriptures, one should assume that God’s thoughts are more profound than those of any biological life-form, that God is also more righteous than any biological life-form, that God is more intelligent than any biological life-form, etc. Using the first part of Herrmann [1993a] and adjective reasoning, one obtains a mathematical model for such God-like attributes. Consider statements such as Isaiah 55:8-9: “For my thoughts are not your thoughts, . . . . As the heavens are higher than the earth, so are . . . . my thoughts than your thoughts.” These absolute concepts relative to thought patterns, and many more Scriptural absolutes, are modeled by the behavior of nonstandard mathematical objects.

It’s self-evident from this discussion that the most significant theological aspect of (two-valued) mathematics is in the activity of mathematical modeling. Correct mathematical modeling of theological concepts demonstrates explicitly that the modeled Divine attributes and their relation to His created entities follow the most common form of human logical discourse.

Applying the same philosophy of science used to accept, on indirect evidence, entities that cannot be perceived by humankind’s basic five senses, the correctness of the predictions made when such theologically interpreted models are applied to science and to human behavior demonstrates, beyond any doubt, that such a Scripturally described entity, God, must exist.

However, if one rejects indirect evidence as a physically sound notion, then mathematical modeling of theological notions has only a minimal effect, since, apart from proving theorems, mathematics itself can never determine what is or is not fact unless certain statements are actual facts and behavior follows certain explicit logical patterns.

It has been established, beyond all doubt, that with respect to a specific interpretation of Scriptural terms, the basic statements made in the Scriptures about Divine attributes and behavior are rational statements that follow the most common and explicit form of logical discourse: modern scientific logic. (RH)

*** Will Myers

Please “Donate a penny” or any amount to support the ministry’s research and development. Just click the link below:

https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=DKCQUR7YG7W5U


Biblical Genealogies Revisited: Further Evidence of Gaps


Calculations based on genealogies recorded in Genesis 5, 10, and 11 help form the foundation for the belief in a young earth. However, guest writers Dan Dyke and Hugh Henry argue that the relationships listed in these genealogies may indicate more general ancestor-descendant relationships (rather than parent-child relationships), thus implying that there are gaps in the genealogies that render them unreliable for determining a creation date.

****

In the age of the earth controversy, some who support a young-earth view assert that the Hebrew word yālad—which is translated as “beget” in the King James Version—always refers to a direct father-son relationship in Genesis 5, 10, and 11 and other genealogical passages. In an earlier series of articles we discussed that yālad implies only an ancestral relationship, not necessarily a parent-child relationship. This allows for gaps in the genealogical records in the Pentateuch.

Our research has been questioned based on the form (similar to the English concept of verb tense) of yālad. In Genesis 5, 10, and 11, the form of yālad used is called a hiphil, which means the subject is either directly or indirectly causing the result of the verbal action. In this article we confirm that yālad in the hiphil describes an ancestor-descendant relationship that is not necessarily a direct father-son relationship. This conclusion is derived from applying to yālad the same principle of interpretation employed with biblical Hebrew words in general: that the surrounding narrative is key to proper interpretation. In the case of the genealogies, the narrative reveals the nature of the relationship between the one begetting and the one being begotten. Taking Noah as an example, the use of yālad in the hiphil form indicates direct causation when he begets Ham, Shem, and Japheth. However, yālad in the hiphil by itself shows only that Ham, Shem, and Japheth are related to him in a direct line of descent; the surrounding narrative is what reveals that these are father-son relationships.

To further demonstrate the validity of our argument, we’ll examine two narratives that provide examples of yālad in the hiphil used to describe an ancestor-descendant relationship. In both of these cases, a direct father-son relationship is without question impossible. The texts will be examined in reverse historical order.

Hezekiah’s “Sons”

The setting of the first narrative—recorded in Isaiah 38–39 and 2 Kings 20—is the time Sennacherib of Assyria invaded Judah, which took place around 701 BC during the latter part of King Hezekiah’s reign. (This includes the famous story of when God sent an angel to kill 185,000 Assyrian soldiers in the night.) The biblical record of this period also describes Hezekiah’s illness and the visit of emissaries from Merodach-Baladan (aka Marduk-apla-iddina II), king of Babylon. The historical order of these events is a matter of debate among scholars, but it seems certain that they center around the year 701.

The third event is of importance to this discussion. The Babylonian king appears to have sent emissaries to Hezekiah to enlist his aid in fighting against Sennacherib. Isaiah describes it as follows:

Hezekiah was pleased, and showed them all his treasure house, the silver and the gold and the spices and the precious oil and his whole armory and all that was found in his treasuries. There was nothing in his house nor in all his dominion that Hezekiah did not show them. (Isaiah 39:2, NASB, emphasis original)

Isaiah confronted the king and pronounced judgment upon him for his foolish actions. Note the last part of this prophetic message.

“Behold, the days are coming when all that is in your house and all that your fathers have laid up in store to this day will be carried to Babylon; nothing will be left,” says the Lord. “And some of your sons who will issue from you, whom you will beget [yālad in the hiphil], will be taken away, and they will become officials [sārîsîm] in the palace of the king of Babylon.” (Isaiah 39:6–7, NASB, emphasis original)

A casual and uncritical reading of the text would suggest that these “sons” were Hezekiah’s immediate biological offspring because the phrase “who will issue from you” is juxtaposed with yālad in the hiphil. Thus, the crucial question is the identity of the “sons” who will become officials in Babylon.

Isaiah’s prophecy was not fulfilled until a century later, when Nebuchadnezzar of Babylon made three raids against Judah, dated 605–604 BC (cf. Daniel 1:1–4), 598–597 BC (cf. 2 Kings 24:10–16), and 588–586 BC (cf. 2 Kings 25:1–21), respectively. Historical records describe the first raid in such a way that it appears to begin the fulfillment. According to Babylonian texts, specifically the Chronicle Concerning the Early Years of Nebuchadnezzar II, Nebuchadnezzar “marched unopposed through the Hatti-land; in the month of Šabatu he took the heavy tribute of the Hatti-territory to Babylon.” To a Babylonian, the Hatti-land (land of the Hittites) included Judah. This is probably the raid mentioned in the book of Daniel:

In the third year of the reign of Jehoiakim king of Judah, Nebuchadnezzar king of Babylon came to Jerusalem and besieged it. (Daniel 1:1, NASB)

When the Babylonians returned home from such raids, they carried back treasures and prisoners. In the ancient world these might be dedicated to the pagan temples and/or to service in the king’s palace (temple and palace were closely related in the minds of these people). The book of Daniel continues:

The Lord gave Jehoiakim king of Judah into his hand, along with some of the vessels of the house of God; and he brought them to the land of Shinar, to the house of his god, and he brought the vessels into the treasury of his god. Then the king ordered Ashpenaz, the chief of his officials [sārîsîm], to bring in some of the sons of Israel, including some of the royal family and of the nobles, youths in whom was no defect, who were good-looking (Daniel 1:2–4a, NASB, emphasis added)

This is the first known instance of the Jewish royal family being taken to Babylon and made into officials (sārîsîm) in the Babylonian court. In the two subsequent raids, all of Jerusalem’s treasures were removed and the city utterly obliterated. Together, these three raids fulfilled every aspect of Isaiah’s prophecy.

Hezekiah died around 687 BC; Jehoiakim was his great-great-grandson. Therefore, the royal “sons” taken captive as youths were probably great-great-great-grandsons of Hezekiah. Yet the text clearly says they were begotten (yālad in the hiphil) by Hezekiah. We emphasize that, in this case, yālad in the hiphil is in the same form as yālad in Genesis 5 and 11.
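
The generation count behind this claim can be made explicit. The royal line Hezekiah → Manasseh → Amon → Josiah → Jehoiakim is the standard biblical succession (cf. 1 Chronicles 3:13–15); placing the captive “sons” one generation below Jehoiakim is the article’s inference, not an independent datum. A minimal sketch:

```python
# Counting generations from Hezekiah down the royal line of Judah.
# The succession Hezekiah -> Manasseh -> Amon -> Josiah -> Jehoiakim is
# the standard biblical king list; the final entry (the captive youths
# of 605 BC) is placed one generation below Jehoiakim per the article.
line = ["Hezekiah", "Manasseh", "Amon", "Josiah", "Jehoiakim", "captive 'sons'"]

def relationship(steps):
    """English label for a descendant `steps` generations below an ancestor."""
    if steps == 1:
        return "son"
    # 2 steps -> grandson, 3 -> great-grandson, and so on
    return "great-" * (steps - 2) + "grandson"

for steps, name in enumerate(line[1:], start=1):
    print(f"{name}: {relationship(steps)} of Hezekiah")
```

Running the sketch labels Jehoiakim the great-great-grandson and the captives great-great-great-grandsons of Hezekiah, matching the distances stated above, even though all are “begotten” (yālad in the hiphil) by Hezekiah.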

It is absolutely clear from this example that yālad in the hiphil does not necessarily imply a parent-child relationship, but rather it represents a general ancestral relationship.

Moses’s Farewell Address

Let us now move to a Mosaic text from Deuteronomy to verify that Moses—who we believe was the author of the Pentateuch—held the same understanding of yālad in the hiphil. An example is found in Moses’s farewell address:

When you become the father of [yālad in the hiphil] children and children’s children and have remained long in the land… (Deuteronomy 4:25a, NASB)

In this verse, yālad in the hiphil obviously refers, at least, to grandchildren as well as children. Again, yālad is in the same form as in Genesis 5 and 11, verifying that in Mosaic literature yālad in the hiphil does not necessarily imply a parent-child relationship.

From these two examples, together with our earlier articles, it can be concluded that large gaps are possible in the genealogies in Genesis 5, 10, and 11. We welcome thoughtful response to this article.

*** Will Myers



High Levels of Pseudogene Expression Help Silence the Case for Common Descent

 

Based on a study of cells derived from 13 different tissue types, researchers have demonstrated that pseudogenes are expressed at high levels. These high levels of expression indicate the central importance of pseudogenes in cell differentiation and the progression of cancers. As researchers continue to uncover function for pseudogenes, their interpretation as molecular fossils in the genomes becomes less tenable.

****

Pop psychologists identify four personality types: driver, analytical, friendly, and expressive. The latter group includes outgoing individuals who talk a lot. They are energetic and often quite charismatic, drawing people to themselves. While people with an expressive personality can make those they interact with feel valued, their talkative nature can be overwhelming.

Psychologists aren’t the only ones who study levels of expression. Biochemists do so, also—but instead of focusing their attention on people, they are interested in genes. Just like people, genes differ in their level of expression.

The cell’s machinery uses processes, known collectively as gene expression, to convert information stored in DNA into functional products like proteins and RNA molecules. Similar to human personalities, some genes are highly expressed and others are barely expressed at all.

Gene expression is one of the most important processes in biology. The patterns of gene expression define an organism by converting the genetic information stored in the genome into biological traits. For example, humans and chimpanzees have highly similar genomes (and in effect the same gene set), but have different gene expression patterns. These differences in gene expression account for the biological and behavioral uniqueness of these two primates.

It turns out that gene expression level also plays an important role in the common descent debate taking place among evangelical Christians. Of specific interest is the expression level of pseudogenes in the human genome. Scientists and Christians who favor the proposal that God used the evolutionary process to create (evolutionary creationism) maintain that shared pseudogenes in the genomes of humans and the great apes provide incontrovertible evidence for common descent.

I have written extensively about pseudogenes in the past, specifically addressing what they are, the different types, and why evolutionary biologists view these shared sequence elements as evidence for common descent. I have also articulated why I think pseudogenes can be interpreted as evidence for a Creator’s handiwork. For background information on pseudogenes, I invite you to check out any of the articles linked below.

Evolutionary biologists consider pseudogenes to be the dead, useless remains of once functional genes. According to this view, extensive mutations destroyed the capacity of the cell’s machinery to “read” and process the information contained in these genes. Still, pseudogenes possess the tell-tale signatures that allow molecular biologists to recognize them as genes, albeit nonfunctional ones.

For evolutionary biologists, shared nonfunctional DNA sequences (or so-called junk DNA) clearly indicate that these organisms shared a common ancestor. According to their interpretation, the junk DNA segments arose prior to the time the organisms diverged from their mutual evolutionary ancestor.

When evolutionary biologists interpret shared genetic features as evidence for common descent, they make a number of assumptions. One of the most important is that the shared features lack function: it would make no sense for a Creator to introduce identical (or nearly identical) nonfunctional features, at corresponding locations, into the genomes of two or more organisms. Yet, if these organisms share an evolutionary history, then shared nonfunctional genetic features find a ready explanation.

However, over the course of the last decade or so, researchers have uncovered evidence that the three classes of pseudogenes all display function. The links below will take you to articles I’ve written that progressively document discoveries of pseudogene function and its central role in regulating gene expression.

As it turns out, the pseudogene RNA transcripts (which are produced by the cell’s machinery from the DNA sequences of pseudogenes) take part in an elaborate network of RNA molecules. The relative levels of the RNA transcripts within the network influence the amount of protein products generated at the ribosomes, thereby impacting gene expression. Researchers have proposed the competitive endogenous RNA (ceRNA) hypothesis to describe and account for these interactions and their role in gene regulation.
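
The competitive dynamic the ceRNA hypothesis describes can be sketched with a toy calculation. All numbers and the function name below are made up for illustration; the point is only the qualitative effect: pseudogene transcripts that share microRNA binding sites with their parent gene’s mRNA dilute the microRNA pool, reducing repression of the real mRNA.

```python
# Toy illustration of the ceRNA (competitive endogenous RNA) hypothesis:
# pseudogene transcripts act as decoys that titrate microRNAs away from
# the parent gene's mRNA. All quantities are invented for illustration.

def repressed_fraction(mirna_pool, mrna_count, pseudogene_count):
    """Fraction of parent mRNAs bound (repressed) by the microRNA pool.

    MicroRNAs are assumed to distribute over all competing transcripts
    in proportion to abundance; only those landing on real mRNAs repress.
    """
    total_targets = mrna_count + pseudogene_count
    # microRNAs hitting real mRNAs, capped at one per transcript
    bound_mrna = min(mrna_count, mirna_pool * mrna_count / total_targets)
    return bound_mrna / mrna_count

# Without pseudogene expression: every microRNA targets the real mRNA.
low = repressed_fraction(mirna_pool=500, mrna_count=1000, pseudogene_count=0)
# With abundant pseudogene transcripts: the microRNA pool is diluted.
high = repressed_fraction(mirna_pool=500, mrna_count=1000, pseudogene_count=4000)

print(f"repressed without pseudogenes: {low:.2f}")   # 0.50
print(f"repressed with pseudogenes:    {high:.2f}")  # 0.10
```

Under these toy assumptions, expressing pseudogene transcripts drops mRNA repression fivefold, which is the sense in which pseudogene expression levels can tune the output of the corresponding “intact” gene.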

In other words, biochemists and molecular biologists have discovered a key functional role for pseudogenes that depends in part on their sequence similarity to the corresponding “intact” genes. That being the case, the shared presence of pseudogenes in genomes could just as easily reflect the work of a Creator—common design, if you will—instead of common descent.

When I’ve shared the implications of the ceRNA hypothesis privately and publicly with scientists who are friendly to the evolutionary paradigm, their response is to acknowledge function for pseudogenes, but they rightly point out that this idea is only relevant for pseudogenes that are expressed. What about the pseudogenes in the human genome that aren’t expressed? In other words, the ceRNA hypothesis only supports common design if most of the pseudogenes in the genome are expressed.

Work published in the summer of 2012 addresses this concern.1 A large team of collaborators from America and India systematically measured the expression of pseudogenes in cells taken from 248 cancerous and 45 benign cell lines representing 13 tissue types. They noted that pseudogene expression was “surprisingly prevalent.”2 Pseudogene expression fell into two categories: ubiquitous (expressed in all cell lines, at all times) and lineage-specific (expressed only in certain cell lines, at certain points in the life cycle or during development).

The researchers concluded that “transcribed pseudogenes are a significant contributor to the transcriptional landscape of cells and are positioned to play significant roles in cellular differentiation and cancer progression.”3

*** Will Myers

