Book Review: A Fortunate Universe: Life in a Finely Tuned Cosmos, Part 1

October, 2019
Dan Reynolds, PhD

Figure 1

The book A Fortunate Universe: Life in a Finely Tuned Cosmos (Cambridge University Press, 2016, Fig. 1) by Geraint F. Lewis and Luke A. Barnes is one of the most recent and comprehensive books on the fine tuning of the physical laws of our universe. The “fine tuning of physics” refers to the discovery that the laws of nature, as currently understood, are just what they must be for life forms such as us to exist. Change any of these laws even slightly, and life as we know it would be impossible. The authors explain the various ways the universe is fine-tuned and then explore what fine tuning may imply. Lewis and Barnes examine particle physics, the fundamental forces, cosmology, inflation theory, the flatness and horizon problems, the idea of a multiverse, string theory, the nature of the constants in the equations of the laws of physics, quantum mechanics, relativity, dark matter, dark energy, the double slit experiment, and much more. Both authors accept the standard cosmological and biological evolutionary stories including deep time. Nevertheless, even within the Big Bang/inflationary paradigm, fine tuning is seen everywhere from quarks to galaxy clusters. As we shall see, the fine tuning of physics renders naturalistic explanations untenable. Much of the discussion is useful to biblical creationists as the fingerprints of God are clearly seen throughout the created order.

Lewis and Barnes are both graduates of the University of Cambridge (Fig. 2). Lewis holds a PhD in astrophysics while Barnes has a PhD in astronomy. Both men work at the Sydney Institute for Astronomy in Australia. Both freely acknowledge fine tuning but differ in their interpretation of it. Lewis looks to the multiverse to explain fine tuning. Barnes, a Christian, argues for theism as the best explanation.

The book is accessible to the layman yet informative to the specialist. The concepts are laid out logically and clearly with an occasional dash of humor.

This review will discuss some of the highlights of the book, chapter by chapter.

Chapter 1: A Conversation on Fine Tuning

The authors want to know why the universe is just right for complex, intelligent beings. They approach fine tuning by doing thought experiments. They explore hypothetical universes that could have resulted from changes in natural law. They then ask if any of these universes could support life as we know it. They want to understand why the universe is the way it is and if it must be this way to support life. Could it have been different?

There are numerical constants in the equations of physics that describe our universe. The numerical values of these constants can’t be derived from any known theory; they appear as brute facts of nature. In other words, these constants can’t be predicted but must be measured. There appears to be an inflexibility in the values of the constants insofar as life as we know it is concerned. Change the constants a little and a universe hostile to life is the result. In fact, most of the hypothetical universes are sterile.

The authors accept the Big Bang, abiogenesis, and deep time. Much of their discussion examines fine tuning assuming these things. They assume that most of the naturally occurring elements in the periodic table (up to uranium) were generated either in the early moments of the universe’s existence, in stars, or in supernovae. They openly admit that no one understands abiogenesis and assume it must be very rare even under favorable conditions. Life, they say, could only have arisen with the right chemicals, energy, and location. Earth just happens to have formed after three generations of element-forming stars that provided the necessary chemicals. We live in a privileged place and era.

Figure 2

Science consists of theories and experiments. Theoretical physicists such as the authors make mathematical models of some aspect of the universe and then test them with observations. There are usually four pieces to their models: (1) mathematical representation of the object of study, (2) the form of the equations (often the equations show how a system changes with time), (3) constants in equations that must be measured, and (4) the associated context of application such as initial conditions. The four pieces are used to make predictions which are then compared with observations. Theories that make predictions that match observations are kept, and those that don’t are rejected. Theories are kept until they fail to match observations and then they are replaced.
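This four-piece recipe can be sketched in a few lines of code. The model below is a deliberately simple hypothetical (a dropped object with made-up observations), not an example from the book:

```python
# (1) representation: a falling body described by its height over time
# (2) form of the equations: h(t) = h0 - (1/2) * g * t^2
# (3) a constant that must be measured rather than derived: g
# (4) context of application: the initial height h0 and the observed times

def predict_height(h0, g, t):
    """Predicted height (m) of a dropped object after t seconds, ignoring air resistance."""
    return h0 - 0.5 * g * t**2

# Hypothetical observations: (time in seconds, measured height in metres)
observations = [(0.0, 100.0), (1.0, 95.1), (2.0, 80.4), (3.0, 55.9)]

h0, g = 100.0, 9.8   # "measured" constants for this toy model
tolerance = 0.5      # how far a prediction may miss before the theory is rejected

# Keep the theory only if every prediction matches its observation.
theory_survives = all(
    abs(predict_height(h0, g, t) - h_obs) <= tolerance
    for t, h_obs in observations
)
print(theory_survives)
```

Had the observations disagreed beyond the tolerance, the model (or its measured constant) would be revised or replaced, which is exactly the keep-or-reject workflow the authors describe.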

Chapter 2: I’m Only Human

The authors briefly describe the biochemistry of life as we know it. DNA is transcribed into RNA, which is translated according to the genetic code into proteins in the ribosome. A protein’s three-dimensional structure determines its function. The three-dimensional structure is determined by how the protein folds, which depends on the sequence of amino acids. The formation of the molecules of life depends on the nature of the atoms in the periodic table, especially carbon. It is the chemistry of the elements that facilitates the biochemistry of life as we know it. The chemistry of each element is determined by how it can share its electrons with other elements to make chemical bonds.

The chemistry of atoms depends on the properties of fundamental particles and forces. There are 92 naturally occurring elements. Each element is made from protons, which have a positive electric charge, neutrons, which have no charge, and electrons, which have a negative charge. An atom consists of protons and neutrons in a nucleus surrounded by electrons located in discrete energy shells. An element is defined by its number of protons. Electrically neutral atoms have an equal number of electrons and protons. It is the number of electrons in the outermost shell of an atom, and the number and type of orbitals associated with that shell, that determines an atom’s chemistry. Atoms are mostly empty space.

There are 12 known fundamental particles. As far as we know, these particles are not derived from a combination of yet more fundamental particles. Each fundamental particle has a specific mass that theory can’t predict but that has been measured. The most familiar and important of the fundamental particles are the electron, quarks, and neutrinos. There are six quarks, but only two, the up and down quarks, are found under most conditions. Protons consist of two up quarks and one down quark while neutrons have two down quarks and one up quark. Up quarks have an electric charge of +2/3 while down quarks have an electric charge of -1/3. Neutrinos are almost massless, have no electric charge, and can penetrate most matter without interaction. For most particles there is a corresponding antimatter particle with the opposite charge. While antimatter has been made in the laboratory, our universe consists primarily of matter particles. How the universe ended up with matter only is a long-standing mystery.
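The quoted quark charges can be checked against the proton’s charge of +1 and the neutron’s charge of 0 with simple fraction arithmetic:

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e
UP = Fraction(2, 3)      # up quark: +2/3
DOWN = Fraction(-1, 3)   # down quark: -1/3

proton_charge = 2 * UP + DOWN    # proton  = up + up + down
neutron_charge = UP + 2 * DOWN   # neutron = up + down + down

print(proton_charge, neutron_charge)  # 1 0
```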

Neutrons outside of an atomic nucleus have a half-life of about 15 minutes. A free neutron decays into a proton, an electron, and an antineutrino. However, neutrons bound in an atomic nucleus are generally quite stable.
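The quoted half-life means free neutrons vanish quickly; a one-line exponential-decay sketch:

```python
# Exponential decay of free neutrons, using the ~15-minute half-life quoted above.
HALF_LIFE_MIN = 15.0

def fraction_remaining(t_min):
    """Fraction of an initial population of free neutrons surviving after t_min minutes."""
    return 0.5 ** (t_min / HALF_LIFE_MIN)

for t in (15, 30, 60):
    print(t, fraction_remaining(t))  # 0.5 after 15 min, 0.25 after 30, 0.0625 after 60
```

Within an hour, under 7% of any free neutrons remain, which is why neutrons must be bound into nuclei quickly in the nucleosynthesis story told later in the book.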

Changing the masses of the fundamental particles would radically change chemistry. If the mass of the down quark were 70 times greater, all down quarks would convert to up quarks; there would then be no protons or neutrons and hence no elements for building life. 1 If the mass of the up quark were 130 times greater, all up quarks would convert into down quarks and our familiar 92 elements would not exist. If the mass of the down quark were three times larger, neutrons would become unstable and would convert to protons, resulting in a universe consisting of only hydrogen. If the mass of the up quark were increased by a factor of 6, or the mass of the down quark decreased by 8%, then all protons would convert to neutrons, creating a very boring universe. If the mass of the electron were increased by a factor of 2.5, a neutron-only universe would result.

Particles have a property called spin. The spins are quantized. Fermions such as electrons and protons have half-integral spins. For example, the electron is a fermion with a spin of ±1⁄2. Due to the Pauli Exclusion Principle, no more than two electrons can occupy an atomic orbital. The paired electrons in an orbital must have opposite spins. If the spin of the electron were ±1, the Pauli Exclusion Principle would no longer apply. This would result in all the electrons of an element populating the lowest energy orbital about the nucleus (where they are held tightly), making them unavailable for sharing with other atoms. The result would be no chemistry.

It is speculated that every particle has a much more massive supersymmetric counterpart called a sparticle. Supersymmetry is a theory devised to help solve some theoretical problems involving the Higgs boson (see below). So far, there is no evidence for sparticles. If they exist, they must be very massive and beyond the reach of the energies obtainable by the Large Hadron Collider (LHC) in Europe. Some believe supersymmetry is dead.

As we have seen, slight changes in the masses of some of the fundamental particles or their spins would create a universe with chemistry very different from ours.

Chapter 3: Can You Feel the Force?

There are four fundamental forces, each mediated by a force particle: the strong force, mediated by gluons; the weak force, mediated by three force particles; electromagnetism, mediated by photons; and gravity, speculated to be mediated by the graviton, which has yet to be detected. There is also the Higgs boson and the associated Higgs field that imparts mass to the other particles. There are quantum theories for the strong, weak, and electromagnetic forces, but not for gravity. The strong force is what holds quarks together in protons and neutrons. It also holds protons and neutrons together in the atomic nucleus against the repulsive electromagnetic force (like-charged particles repel each other). The weak force regulates radiometric decay; it is able to convert an up quark into a down quark and vice versa. Hence in some types of radiometric decay, protons are converted into neutrons or vice versa. The strong and weak forces operate over very short distances (a fraction of the diameter of a proton) while electromagnetism and gravity can work over very large distances (light-years).

In Newtonian (classical) mechanics, gravity was seen as a force of attraction between objects with mass. In relativity, gravity works by curving space-time.

Each force has an associated coupling constant. The coupling constant is a measure of the probability that a force particle will be exchanged with other particles. In a proton, the coupling constant for the strong force between two up quarks is 1 while the corresponding coupling constant for the electromagnetic force is 1/137. The square of the ratio of the coupling constants for the strong and electromagnetic forces (~20,000) reveals which force will dominate. Hence the strong force dominates and the proton is stable despite the repulsion between like-charged quarks. The coupling constant for the electromagnetic force is known as the fine structure constant. No theory predicts the coupling constants; they must be measured. If the coupling constants of the strong or electromagnetic forces were changed only slightly, very different universes would result. Among the problems that could emerge are unstable protons, unstable carbon, similar energies for nuclear and chemical reactions (which implies atoms would be unstable at the same energies required for chemical reactions), and unstable deuterons (one of the important intermediates in the nuclear chemistry in stars).
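The ~20,000 figure is just the squared ratio of the two coupling constants:

```python
strong = 1.0        # strong-force coupling between two up quarks in a proton
em = 1.0 / 137.0    # electromagnetic coupling (the fine structure constant)

dominance = (strong / em) ** 2  # squared ratio of the couplings
print(round(dominance))         # 18769, i.e. roughly 20,000
```

Because this number is so large, the strong force wins decisively inside the proton despite the electromagnetic repulsion between the like-charged quarks.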

In the alleged Big Bang nucleosynthesis, the identity and quantities of the elements formed depend on the four forces and the temperature. Gravity controls the temperature, which determines which reactions occur. The weak force allows protons and electrons to combine into neutrons. Neutrons are needed to form the elements in the periodic table. The proton to neutron ratio seen in the universe is ~7. The temperature must be high enough to make neutrons and then make other elements from the neutrons before they decompose (15-minute half-life). If the strong force had been greater by a factor of 2, 90% of the hydrogen would have been converted to helium in the early universe; the conversion of hydrogen to helium by nuclear fusion is the main energy-producing reaction in stars. If the weak force had been weaker, there would have been more neutrons leading to more helium. If gravity were stronger, there would have been a higher temperature, more neutrons, and less hydrogen. The ratio of the masses of the up quark to the down quark makes protons more stable than neutrons and thus helped determine the hydrogen-to-helium ratio in the early universe.

Some of the elements are unstable and decay by radiometric decay. Radiometric decay usually occurs by alpha or beta decay. Alpha decay involves ejection of a helium nucleus (two protons and two neutrons). Beta decay can occur through three mechanisms: beta particle emission (electron loss from the nucleus resulting in the formation of a proton from a neutron), positron emission (the positron is the electron’s antiparticle; this process converts a proton into a neutron), and electron capture (an orbiting electron falls into the nucleus and combines with a proton to form a neutron). Alpha decay is controlled by the strong and electromagnetic forces. Beta decay is controlled by the weak force. The heat generated by the decay of radioactive elements is assumed to help keep the earth’s core in a liquid state, thereby facilitating earth’s magnetic field and the protection it provides from the solar wind. Changing the strong or weak forces could radically change radiometric decay with potentially disastrous consequences.

There are 20,000 known isotopes. Only 300 are stable. The stability of the elements depends on the balance between the strong, weak, and electromagnetic forces. The elements that are stable are in the “valley of stability.” If the strong force were increased by a factor of 4, no element larger than carbon would exist. The same result is obtained if the electromagnetic force were increased by a factor of 12. The elements essential to life such as carbon, nitrogen, oxygen, phosphorus, sulfur, and a few others have isotopes in the valley of stability. These stable isotopes are what DNA and proteins are made from. Changing the strong, weak, or electromagnetic forces could render some of these isotopes unstable, making life as we know it much less likely or even impossible.

Chapter 4: Energy and Entropy

The authors ask why the universe was born with so much useful energy. They review the laws of thermodynamics. The so-called “Zeroth” Law deals with thermal equilibrium: bodies at different temperatures that come into physical contact will eventually come to the same temperature. The First Law of Thermodynamics states that matter/energy can be neither created nor destroyed but only converted from one form into another. The Second Law of Thermodynamics states that the amount of useful energy in the universe must decrease with time. Useful energy is energy that can be used to power some physical process; an example would be sunlight, which can drive photosynthesis or melt ice. The second law is also known as the law of entropy. There are other ways to state the second law: there is no such thing as a completely efficient process; there is no such thing as a perpetual motion machine; isolated physical systems change from less probable states to more probable states.

One of the strong evidences that the universe is not infinite in age comes from the second law. An infinitely old universe would no longer have any useful energy. Since our universe does have useful energy, it must be of a finite age (it had a beginning). The only way to keep an isolated system from increasing in entropy over time would be to reach absolute zero (no kinetic energy in the atoms and molecules). But the Third Law of Thermodynamics basically states it is impossible to reach absolute zero!

On a cosmic scale, the universe is still in a low entropy state because there is still much useful energy, as seen in the burning of stars. Running the universe backwards in time would take it to times when there was even more useful energy and less entropy than now. Hence, since the universe had a beginning, it must have started out in a very low entropy (improbable) state. Why should that be the case?
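The Zeroth Law’s march toward a common temperature is just a weighted average; a minimal sketch with hypothetical masses and heat capacities:

```python
# Zeroth Law sketch: two bodies in thermal contact settle at the weighted
# average of their temperatures. All numbers here are hypothetical.
m1, c1, T1 = 2.0, 4186.0, 90.0   # body 1: mass (kg), specific heat (J/kg/K), temp (deg C)
m2, c2, T2 = 1.0, 4186.0, 30.0   # body 2

T_final = (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)
print(T_final)  # 70.0 -- somewhere between the two starting temperatures
```

No heat is lost (First Law), and the flow is one-way, from hot to cold, never spontaneously back again (Second Law).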

The reactions that occur in stars are determined by the interplay of all the forces. Change any of the forces and different stars would result. Nuclear fusion in stars is a balance between gravity, determined by a star’s mass, and heat. If the force of gravity were weaker, the temperature of the star would be lower, and the energy levels reached by the atoms in the star would be lower. Lower energies of the atoms would mean the wavelengths of light emitted by the sun would be different, which would impact photosynthesis on earth. For Lewis and Barnes, a decrease in gravity would result in fewer supernovae and hence less availability of the elements required for life as we know it. On the other hand, if gravity were stronger, stars would be hotter, with their atoms excited to higher energy states, resulting in the emission of photons with higher energy (light with shorter wavelengths), which could be lethal to life as we know it.

An 8% decrease in the strong force would make deuterium, an important intermediate in stellar fusion reactions, unstable. A 12% increase in the strong force would favor the formation of the diproton (a helium nucleus with no neutrons) and cause stars to burn too quickly. Our stars (the ones with the right mass) are just right for life: they produce the right elements, give off the right wavelengths of light, have long lifetimes, etc.

For Lewis and Barnes, all the elements that make up the earth ultimately were formed in stars by nuclear fusion and spread by supernovae. Eventually, the gas clouds formed by the supernovae recollapsed to form a new generation of stars, some with planetary systems like ours. It turns out that the nucleus of carbon has an excited state (also called a “resonance”) that can be stabilized by emission of a high energy gamma ray. If this were not the case, the energy of the excited state would cause the nucleus to disintegrate and carbon would not be formed in stars. For Lewis and Barnes, this would mean no organic, carbon-based life forms could have ever evolved. If the energy of the carbon resonance varied by as much as 3%, stars would not produce any carbon. In a similar vein, if the strong force were increased by 0.4%, oxygen would not be formed in stars, but carbon would be. On the other hand, if the strong force were decreased by 0.4%, no carbon would form in stars, but oxygen would be. If the quark masses were changed by a few percent, then neither carbon nor oxygen would form in stars.

For the Big Bang theory to work, the universe had to start off with much free energy and low entropy—an improbable condition. There had to be a smooth matter distribution, as presumably evidenced by the cosmic microwave background radiation (CMB). A smooth distribution of matter has great gravitational potential energy. But somehow, as the Big Bang story goes, the smooth distribution of matter had slight imperfections in the homogeneity which were just right to allow the action of gravity to eventually form stars and galaxies.

Chapter 5: The Universe is Expanding

This is where things get really interesting. The authors start off by describing the universe as we understand it. There are billions of galaxies of various shapes and sizes. Our own Milky Way galaxy is a spiral type with roughly 400 billion stars. Galaxies form groups and larger clusters held together by gravity. The Milky Way belongs to the Local Group. There are cosmic voids where little matter exists. There are web filaments of groups of galaxy clusters. Where did all this structure come from?

Our best theory of gravity, relativity, says mass bends space-time. To a first approximation, the universe is assumed to be homogeneous and isotropic. Homogeneity is the idea that the matter-energy distribution in the universe is very even and smooth if viewed on large scales. Presumably, there are no special places—no edges or center. You can picture this by imagining that the surface of a globe represents three-dimensional space. No matter what direction you travel or where you start on the globe’s surface, you would never encounter a barrier or any place that looked unique. Isotropy means regardless of location in the universe and in which direction one looks, everything appears more or less the same in terms of the amount and type of matter. These two assumptions, homogeneity and isotropy, taken together are referred to as the cosmological principle or the principle of mediocrity. The cosmological principle is assumed, but has not been proven, to be true.

The geometry of space-time in our universe is flat. It could have been curved positively or negatively, but it is flat. The universe is expanding. In a flat universe, parallel lines never converge. In a positively curved space-time, parallel lines will eventually converge. In a flat universe, the sum of the interior angles of a triangle is 180 degrees. In a positively curved space-time, the sum of the interior angles of a triangle is greater than 180 degrees. In a negatively curved space-time, the sum of the interior angles of a triangle is less than 180 degrees.
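The positively curved case is easy to check on a sphere (the globe picture used above): a triangle with one corner at the pole and two on the equator, 90 degrees of longitude apart, has three right angles. Its angle sum follows from the standard spherical-excess formula:

```python
import math

# On a sphere, a triangle with one vertex at the north pole and two on the
# equator, 90 degrees of longitude apart, covers one octant of the surface
# and has three 90-degree corners.
R = 1.0
octant_area = (4.0 * math.pi * R**2) / 8.0

# Spherical excess: the angle sum exceeds 180 degrees by area / R^2 (in radians).
angle_sum = 180.0 + math.degrees(octant_area / R**2)
print(round(angle_sum))  # 270 -- greater than 180, as expected for positive curvature
```

On a flat sheet the excess is zero and the sum is exactly 180 degrees; on a saddle-shaped (negatively curved) surface the “excess” is negative.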

The flow of momentum shapes space-time. Energy is the flow of momentum through time. All forms of energy and momentum gravitate. Energy in the form of matter or radiation slows the expansion of space-time.

There are problems with current theory. The mass of visible matter in the universe is insufficient to account for the formation of stars and galaxies. It is also insufficient to account for the rotation curves of spiral galaxies; the outer arms are rotating faster than would be predicted from the observed visible matter. For these and other reasons cosmologists have inferred the existence of something called dark matter. Dark matter interacts with space-time and other forms of matter through gravity, but it does not interact with the electromagnetic force; hence it neither absorbs, reflects, nor emits radiation. We can’t see it, hence the name. Dark matter is also invoked to explain how the primordial hydrogen cloud formed from the Big Bang (see below), with its microscopic imperfections in homogeneity (also known as anisotropies), could have condensed to eventually form stars and galaxies. The quest to understand what dark matter is has so far been unfruitful. 2

All the matter, dark matter, and radiation in the universe still does not stop the expansion of space. Astronomical measurements have shown that the universe is expanding and that the expansion is accelerating. The reasons for the expansion and its acceleration are poorly understood. Cosmologists infer something called dark energy to explain the expansion. Dark energy presumably acts like an antigravity force.

The standard cosmological model says the universe consists of 69% dark energy, 26% dark matter, and 5% ordinary matter. Only 0.3% of the mass-energy of the universe is made of stars. The effects of dark matter and dark energy on space-time are similar but opposite.

The authors discuss the CMB. They claim it is the Rosetta Stone of cosmology, showing the distribution of matter in the universe 378,000 years after the Big Bang, when the universe was 1000 times smaller than now. As the story goes, the early universe was a hot plasma consisting mainly of protons, electrons, and photons. Unbound, free electrons scatter photons. Once the universe had expanded and cooled enough, electrons began to combine with protons to form elemental hydrogen gas. When this happened, the photons, which had been constantly scattered by interactions with the free electrons, were scattered no more. Presumably, those photons have been moving through space ever since with little interference. The wavelength of this relic light has increased since that time due to the expansion of space. These photons are now in the microwave frequencies. We have detected and studied these photons with ground-based and satellite-based observatories. The differences in matter densities reflected in the CMB are on the order of one part in 100,000. These differences, they say, are the “seeds” that gravity operated on to eventually form stars and galaxies. These are the tiny imperfections in the homogeneity of the matter distribution alluded to earlier. If the differences in matter density had been one part in a million, then matter would never have coalesced into stars. If the matter density had been one part in 10,000, then planets with stable orbits would never have formed because the stars would be too close to one another.
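The stretching of the relic light can be checked with Wien’s displacement law, using the commonly quoted temperatures of roughly 3,000 K at recombination and 2.725 K today (standard values, not taken from the book):

```python
# Wien's displacement law: peak emission wavelength of a blackbody at temperature T.
WIEN_B = 2.898e-3   # Wien's constant, metre-kelvins (standard value)

def peak_wavelength_m(T_kelvin):
    return WIEN_B / T_kelvin

lam_then = peak_wavelength_m(3000.0)  # hot plasma at recombination: ~1 micrometre
lam_now = peak_wavelength_m(2.725)    # CMB today: ~1 millimetre (microwave)

print(f"then {lam_then*1e9:.0f} nm, now {lam_now*1e3:.2f} mm, stretched {lam_now/lam_then:.0f}x")
```

The roughly 1,100-fold stretch from near-infrared to microwave matches the statement that the universe has expanded about a thousandfold since the CMB was released.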

It is important to point out that the Big Bang theory does not actually explain the beginning of the universe. In the standard model, the universe began as a point with infinite temperature, infinite density, and no volume. This is when time itself began. This is when the laws of nature first took effect. A starting point with these properties is known as a singularity. We currently have no theories to explain singularities.

If the amount of dark energy in the universe had been greater, the expansion rate of space-time would have been faster, and gravity would not have had time to pull the primordial hydrogen into stars. On the other hand, if the amount of dark energy had been smaller, then gravity could have caused the universe to collapse into a black hole. Some speculate that dark energy is the energy of space, which is also called vacuum energy. Empty space has quantum fields even in the absence of matter. The calculated and observed magnitudes of the vacuum energy are very different; this discrepancy is a long-standing problem. Some believe dark energy may be the cosmological constant, which Einstein introduced into his equations and later called his greatest blunder.

The expansion rate of the universe is fine-tuned for the formation of stars. Our universe just happened to have the right density, distribution of matter, and expansion rate. For our universe to have the flatness observed today, the flatness of the early universe a few minutes after the Big Bang would have to have been fine-tuned to one part in 10¹⁵. Explaining why the universe has such a finely tuned geometry is called the flatness problem.

The initial density of the universe one nanosecond after the Big Bang is calculated to have been 10²⁴ kg/m³. If the density had been greater by 1 kg/m³, then the universe would have already collapsed. If the initial density had been less by 1 kg/m³, then stars would have never formed. Now that’s fine tuning!
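Spelled out, the claimed tolerance amounts to a relative precision of one part in a trillion trillion:

```python
density = 1e24    # inferred density one nanosecond after the Big Bang, kg/m^3
tolerance = 1.0   # claimed allowable deviation, kg/m^3

relative_precision = tolerance / density
print(relative_precision)  # one part in 10^24
```

For comparison, no laboratory measurement of anything approaches 24 significant digits of precision.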

It turns out that the universe has not had time to reach thermal equilibrium and yet the CMB suggests the temperature of the universe is uniform to one part in 100,000. How can this be? This is called the horizon problem. The exchange of photons is how thermal equilibrium could be reached. However, the universe is so large that light from stars on one side of the sky has not had time to reach parts of the universe on the other side, even traveling at the speed of light (186,282 miles/second) over 13.4 billion years (allegedly, the first stars formed 400 million years after the Big Bang, 13.8 billion years ago).

Inflation theory was devised in part to explain the flatness and horizon problems. Presumably, 10⁻³⁵ seconds after the Big Bang, the universe expanded by a factor of 2⁸⁰ before it settled into an expansion rate similar to what we observe now. The magnitude of this expansion is similar to starting with a grain of sand and ending with something the size of our galaxy, about 100,000 light-years across! Inflation presumably smoothed all the matter-energy out into the uniform distribution seen in the CMB. Inflation is purely speculative and not based on known physics. The cause of inflation, what started it, and what stopped it are all unknown. Note that the duration and rate of inflation would have to be fine-tuned. Expected evidence for inflation in the CMB has not been found. One of the primary developers of inflation theory, Paul Steinhardt, now doubts inflation is true.
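The grain-of-sand analogy can be checked with rough numbers; here the sand-grain size (~0.5 mm) is an assumption, and 2⁸⁰ (about 10²⁴) is used for the inflationary expansion factor:

```python
GRAIN_M = 5e-4             # assumed sand-grain size: ~0.5 mm, in metres
LIGHT_YEAR_M = 9.461e15    # metres per light-year
EXPANSION = 2.0 ** 80      # assumed inflationary expansion factor, about 1.2e24

final_size_ly = GRAIN_M * EXPANSION / LIGHT_YEAR_M
print(f"{final_size_ly:,.0f} light-years")  # tens of thousands of light-years: galactic scale
```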

As mentioned before, neutrinos are electrically neutral particles with a vanishingly small mass (less than 10⁻⁶ the mass of an electron). They can pass through the earth as though it were not there. However, they can be detected by very specialized detectors. Neutrinos are generated by nuclear reactions in stars. Because of their small mass and infrequent interactions with matter, they are distributed throughout space. The average density of neutrinos in the universe is about 340 million per cubic meter, all moving at close to the speed of light! The mass of the neutrino is very important to the cosmic evolutionary story. If the mass of neutrinos were only two times greater, stars and galaxies would not have formed. The gravitational effect of the neutrinos, distributed evenly through space, would inhibit the condensation of hydrogen by gravity into stars.
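A rough sketch of why even a tiny neutrino mass matters cosmologically: multiplying the quoted number density by the quoted mass bound gives a noticeable fraction of the universe’s critical density. The critical density value used here is the standard ~9×10⁻²⁷ kg/m³ for a Hubble constant near 70 km/s/Mpc, an assumption not stated in the text:

```python
ELECTRON_MASS_KG = 9.109e-31
NEUTRINO_MASS_KG = 1e-6 * ELECTRON_MASS_KG   # the upper bound quoted in the text
NEUTRINOS_PER_M3 = 3.4e8                     # ~340 million per cubic metre
CRITICAL_DENSITY = 9.2e-27                   # kg/m^3; assumed standard value (H0 ~ 70 km/s/Mpc)

neutrino_density = NEUTRINO_MASS_KG * NEUTRINOS_PER_M3
share = neutrino_density / CRITICAL_DENSITY
print(f"{share:.1%} of the critical density")  # a few percent; doubling the mass doubles this
```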

Chapter 6: All Bets Are Off!

Lewis and Barnes continue to explore the impact of changing the laws of physics. They start off with a discussion of classical and quantum mechanics. In classical mechanics, we can accurately and precisely know and predict the position, velocity, and momentum of an object. Classical mechanics describes the world of our everyday experience. However, in the subatomic world of quantum mechanics, we are not able to know accurately both the position and momentum of a particle simultaneously. This inability is known as the Heisenberg Uncertainty Principle. A particle’s energy, position, and momentum are described by a mathematical expression known as a wave function. The more accurately we know a particle’s position, the less accurately we can know its momentum and vice versa. The classical world is deterministic while the quantum world is probabilistic. Some refer to the probabilistic nature of quantum mechanics as quantum fuzziness. Classical mechanics can tell us where an object is; quantum mechanics tells us where a particle is likely to be. Subatomic particles exhibit wave-particle duality; that is, they sometimes behave as solid particles and other times as waves.

In the classical world, energies can have any value along a broad continuum (think analog). In quantum mechanics, energy states are quantized, discrete, and discontinuous (think digital). For example, the energies of atomic orbitals, where electrons reside about a nucleus, are very limited, defined, and different for each element. Hence the energies of the electrons in these orbitals can only have specific values with nothing possible in between. This fact determines the wavelengths (and hence energies) of light that can be absorbed or emitted by an atom when its electrons move between orbitals. In general, “large” objects follow classical mechanics but particles on the atomic scale obey quantum mechanics. Exactly where quantum mechanics ends and classical mechanics begins is still an open question. Some say that because large objects consist of innumerable atoms and molecules, the averaged probabilities of positions emerge as a single outcome.
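The quantized orbital energies described above are what fix the exact wavelengths an atom can emit. Hydrogen is the classic worked example; the 13.6 eV Rydberg energy and the 1240 eV·nm conversion factor are standard textbook values:

```python
RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, eV (standard value)
HC_EV_NM = 1240.0    # h*c expressed in eV*nm (standard shortcut)

def emitted_wavelength_nm(n_hi, n_lo):
    """Wavelength of the photon emitted when hydrogen's electron drops from level n_hi to n_lo."""
    energy_gap_ev = RYDBERG_EV * (1.0 / n_lo**2 - 1.0 / n_hi**2)
    return HC_EV_NM / energy_gap_ev

# Only these discrete energy gaps exist, so only these wavelengths can appear.
print(round(emitted_wavelength_nm(3, 2)))  # 656 -- the famous red H-alpha line
```

Nothing between the allowed levels is possible, which is why each element has its own sharp-line spectrum rather than a smooth continuum.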

The world of quantum mechanics is often discussed in light of a famous thought experiment known as Schrödinger’s Cat. In this scenario, a cat is housed in an airtight cage hidden from sight. The cage also contains a bottle of a toxic gas that would kill the cat if the bottle were broken. There is a radiation detection device equipped with an electronic arm holding a hammer that will break the bottle and release the poisonous gas if radiation is detected. Lastly, there is a radiation source containing radioactive elements near the detector. At any given moment, there is a 50:50 chance that radiation will be emitted and detected, the hammer will swing, the bottle will break, and the cat will die. According to quantum mechanics in this example, there is a 50:50 chance that some form of radiation coming from the source will be located at the detector and result in a dead cat. Since in quantum mechanics we can only know the probability of a particle’s location and not its exact location at any moment, our hypothetical cat is neither dead nor alive at any given moment. Only when we look in the cage do we see one of the possible outcomes (a living or a dead cat). Having neither a living nor a dead cat at a given moment is analogous to having two possible quantum states, each with equal probability. Having a particle defined by multiple possible quantum states simultaneously is called superposition. In the quantum world, the two states are superimposed, but in the classical world we observe only one state or the other.

A real experiment that exemplifies quantum strangeness is the famous double slit experiment. It is carried out with a source of particles (electrons, atoms, even some molecules) aimed at a detector, with a barrier placed between them. The barrier can have one or two slits through which the particles can travel to reach the detector. When particles are fired at a barrier with one slit, the pattern seen at the detector is a projection of the image of the slit: a narrow band of impacts. One might then expect that with two parallel slits in the barrier we would see two narrow bands of impacts, but this is not the case. Instead what is seen is an interference pattern of many alternating dark and light regions (dark being where many impacts have occurred). The result is similar to the familiar constructive and destructive interference patterns of water waves. This pattern is observed even if particles are fired one at a time! It is as if a single particle goes through both slits! Here the wave-particle duality is clearly seen. However, when we try to observe which slit a given particle passes through, it always goes through just one, and the interference pattern disappears.

Here the interference pattern is exemplifying superposition, but classical behavior is seen when we attempt to follow a particle’s path. Quantum strangeness is observed when an entire system is tracked, but classical behavior is obtained when we look at a subsystem.
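The interference pattern can be sketched with the standard two-slit intensity formula, I(θ) ∝ cos²(π d sin θ / λ). The slit separation and wavelength below are arbitrary illustrative values, not taken from the book:

```python
import math

# Idealized two-slit interference: the probability of a particle arriving at
# angle theta is proportional to cos^2(pi * d * sin(theta) / wavelength),
# where d is the separation between the slits.
def two_slit_intensity(theta, d, wavelength):
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

d = 1e-6           # slit separation: 1 micron (assumed value)
wavelength = 5e-7  # 500 nm (assumed value)

# Bright fringe straight ahead, where the two paths are equal...
print(two_slit_intensity(0.0, d, wavelength))  # 1.0
# ...and a dark fringe where the paths differ by half a wavelength.
theta_dark = math.asin(wavelength / (2 * d))
print(round(two_slit_intensity(theta_dark, d, wavelength), 6))  # 0.0
```

The alternating bright and dark bands at the detector trace out exactly this cos² curve, even when particles arrive one at a time.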

There are many interpretations of this strange behavior, but I'll mention two. The most widely accepted view is called the Copenhagen Interpretation, associated with Niels Bohr and Werner Heisenberg. In this view, the various possible quantum states are superimposed until we make an observation, which causes the wave function to collapse into one of the possible states. (Einstein, for his part, was famously uncomfortable with this indeterminism.) The other view I'll mention is called the Many Worlds Interpretation. In this view, all possible outcomes exist in parallel worlds. In the case of Schrödinger's cat, as soon as we look inside the box we see either a living or a dead cat; the alternative outcome now exists in another universe. We see a living cat, but there also exists a dead cat in another universe. In this view, alternate timelines and universes are forming constantly. You may have seen a science fiction show where this view was worked into the plot: very strange indeed. But as Nobel laureate Richard Feynman, a great contributor to the theory of quantum mechanics, once said, "I think I can safely say that nobody understands quantum mechanics."

There is a basic equation for the energy of a photon of a given frequency: E = hν, where E is the energy, h is Planck's constant, and ν is the frequency. Planck's constant sets the spacing between allowed energy states; it defines the scale of quantum mechanics, i.e., which quantized energy states are permitted. So what would happen if h were different? If h were zero, quantum effects would vanish, electrons would spiral into their nuclei, and chemistry would be destroyed. If h were much larger, the boundary between quantum and classical mechanics would shift toward larger objects, possibly including some we encounter in everyday experience. This would bring quantum strangeness into our everyday lives with unpredictable results.
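As a worked example of E = hν (my numbers, not the book's): a photon of green light, with a wavelength of about 500 nm, carries roughly 4 × 10^-19 joules.

```python
H = 6.62607015e-34  # Planck's constant, J*s (the exact SI value)
C = 2.99792458e8    # speed of light, m/s

def photon_energy(frequency_hz):
    """E = h * nu: the energy of a single photon, in joules."""
    return H * frequency_hz

# Green light with a 500 nm wavelength has frequency nu = c / wavelength.
nu = C / 500e-9           # ~6e14 Hz
print(photon_energy(nu))  # ~3.97e-19 J per photon
```

Tiny as that number is, it is the indivisible quantum: a light source cannot deliver half of it.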

Lewis and Barnes discuss the idea of symmetry in physics. A symmetry exists when a property of a system remains unchanged under some transformation, such as a shift in position or in time. There are symmetries associated with location, time, electric charge, energy, and so on. Symmetries are the regularities of behavior in the natural order. Without these regularities, we could not do science and might not even exist.

Space-time does not possess time symmetry because it is getting larger. The authors claim that the first law of thermodynamics is not obeyed for the universe as a whole: the universe is expanding while the density of vacuum energy remains constant, so the total vacuum energy grows with the volume. For the record, I disagree with this view; only God can create ex nihilo. If their equations say the amount of energy of the universe is increasing without a source, then I believe the theory is incorrect in this instance.

The net electrical neutrality of the universe remains constant over time. If the balance of positive and negative charge were off by as little as one part in 10^36, electrical repulsion would have overwhelmed gravity and precluded the formation of stars; the universe would remain a diffuse gas cloud.
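The figure of one part in 10^36 is no accident: it is roughly the factor by which the electric repulsion between two protons exceeds their gravitational attraction. A back-of-envelope check (my illustration, not from the book):

```python
# The electric repulsion between two protons exceeds their gravitational
# attraction by a factor of about 10^36, independent of the distance between
# them (both forces fall off as 1/r^2, so r cancels in the ratio).
E_CHARGE = 1.602176634e-19  # proton charge, C
K_COULOMB = 8.9875517873e9  # Coulomb constant, N*m^2/C^2
G = 6.67430e-11             # gravitational constant, N*m^2/kg^2
M_PROTON = 1.67262192e-27   # proton mass, kg

ratio = K_COULOMB * E_CHARGE**2 / (G * M_PROTON**2)
print(f"{ratio:.2e}")  # ~1.24e+36
```

So even a minuscule surplus of like charges would hand the electromagnetic force an overwhelming advantage over gravity.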

It turns out that the strong, electromagnetic, and gravitational forces behave identically in a hypothetical mirror-image universe. However, this is not the case for the weak force. One of the products of beta nuclear decay is a neutrino, and every neutrino ever observed is "left-handed": its spin is oriented opposite to its direction of motion. Its mirror image would be a right-handed neutrino, which has never been seen, so the mirror symmetry is broken. Lewis and Barnes say that this asymmetry may somehow be connected to why we see only matter in our universe and why life is based exclusively on left-handed amino acids and right-handed sugars.

Physicists ponder the direction of the arrow of time. The equations that describe our universe work both forwards and backwards in time, so why do we only observe processes occurring in one time direction? The second law of thermodynamics is one possible explanation since it requires the universe to change from less probable to more probable states. This is the thermodynamic arrow of time. Processes such as a broken coffee cup spontaneously reassembling would violate this principle.

We have three spatial dimensions and one time dimension in our universe. What if the number of dimensions had been different? Adding another spatial dimension would make the force of gravity fall off as 1/r^3 instead of 1/r^2. In such a universe, stable planetary orbits would be impossible. Atoms would also no longer have ground states (a lowest-energy orbital for electrons), so electrons would crash into nuclei and there would be no chemistry! On the other hand, universes with fewer than three spatial dimensions would be too simple to support life. Having two or more time dimensions would produce a chaotic and unpredictable universe. As far as we can tell, only a universe with three spatial dimensions and one time dimension can support life as we know it.
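The instability of orbits under 1/r^3 gravity can be seen in a toy simulation (my illustration; the force constant, starting radius, and 0.1% velocity nudge are arbitrary choices). Under the familiar inverse-square law a nudged circular orbit stays nearly circular; under an inverse-cube law it runs away:

```python
import math

# Toy orbit integration under a central force F = -k / r^p.
# p = 2 is Newtonian gravity (three spatial dimensions); p = 3 corresponds to
# the gravity of a universe with four spatial dimensions.
def final_radius(p, steps=20000, dt=0.001):
    k = 1.0
    x, y = 1.0, 0.0      # start at radius 1
    vx, vy = 0.0, 1.001  # 0.1% faster than an exactly circular orbit
    for _ in range(steps):
        r = math.hypot(x, y)
        ax = -k * x / r**(p + 1)  # components of F = -k/r^p along -r_hat
        ay = -k * y / r**(p + 1)
        vx += ax * dt             # semi-implicit Euler: update v, then x
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return math.hypot(x, y)

print(final_radius(2))  # stays near 1: the nudged orbit remains bound
print(final_radius(3))  # drifts steadily outward: no stable orbit exists
```

The same tiny nudge that merely makes an inverse-square orbit slightly elliptical sends the inverse-cube orbit off to infinity (or, nudged the other way, spiraling into the star).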

String theory, an attempt to formulate a quantum theory of gravity, requires that our universe have at least eleven dimensions. Since we experience only three spatial dimensions and one time dimension, the remaining seven are said to be compactified, curled up at scales too small for us to detect. However, there is as yet no evidence for the existence of these additional dimensions.

Lewis and Barnes discuss a computer simulation called the Game of Life. Here cells on a grid live, die, or are born according to a fixed set of rules. Only finely tuned rules produce stable, complex structures; all other rule sets yield patterns that are simple, uniform, unstable, or chaotic and could never carry complex information.
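Conway's Game of Life, presumably the simulation the authors have in mind, can be written in a few lines. This sketch (my illustration, not from the book) uses its famously well-tuned rules: a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three.

```python
from collections import Counter

# Conway's Game of Life. A live cell survives with 2 or 3 live neighbours;
# a dead cell is born with exactly 3. Tweak these numbers and almost every
# starting pattern dies out or explodes into chaos.
def step(live):
    """Advance one generation. `live` is a set of (x, y) live-cell coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" is one of the simplest stable structures: it oscillates
# between a horizontal and a vertical line with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(sorted(step(blinker)))           # [(1, -1), (1, 0), (1, 1)]
print(step(step(blinker)) == blinker)  # True
```

The point of the analogy is that stable, information-bearing structures like this blinker (or the more famous "glider") exist only for a narrow, finely tuned choice of rules.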

Lewis and Barnes summarize their discussion up to this point in the book thusly:

Our Universe’s laws reflect the order and stability that allow life to exist. Since theoretical physics starts with the postulation of such laws, science cannot tell us why the Universe is orderly at all. ...Throughout our journey, we have encountered a common message. Be it small changes to what we know, or larger changes that shake the foundations of space and time, it seems the Universe could have been so different, so very dead and sterile. We find ourselves questioning our existence in a Universe with a nice set of physical laws, with the right masses and forces, the right kind of beginning, played out against a convivial canvas of three dimensions of space and one of time. With so many potential ways the Universe could have been, we cannot ignore the apparent specialness of our existence.

Well, that’s it for part 1. In part 2, I’ll discuss the various responses Lewis and Barnes have encountered to fine-tuning as well as their own opinions. Please look for part 2 in November’s newsletter.

  • 1. If you take into account all the possible masses that quarks could have had, 70× is a very small increase. The mass of the down quark is 8.379 × 10^-27 g. In theory, the maximum mass an elementary particle can have is the Planck mass (2 × 10^-5 g). Thus there is a range of possible masses spanning at least 22 orders of magnitude. A single particle with the Planck mass or greater is predicted to become its own black hole.
  • 2. Explanations other than dark matter have been advanced, including Modified Newtonian Dynamics (MOND), space-velocity cosmology (https://creation.com/images/pdfs/tj/j21_1/j21_1_69-74.pdf), plasma cosmology (Peratt, A.L. (1986), Evolution of the plasma universe: II. The formation of systems of galaxies, IEEE Trans. Plasma Sci. 14(6): 763–778), etc. A discussion of these ideas is beyond the scope of this article.