Of Superposition and Solipsism: A Survey of Quantum-Mechanical Approaches in Addressing “The Hard Problem”

Daniel Swain


Writer’s comment: “Of Superposition and Solipsism: A Survey of Quantum-Mechanical Approaches in Solving the ‘Hard Problem’” is the product of an open-ended assignment in Dr. Evan Fletcher’s Scientific Study of Consciousness. In his class, various approaches to solving the physiological and existential conundrum surrounding the nature of human consciousness—which has been termed “The Hard Problem” by philosopher David Chalmers—were introduced to students. This paper attempts to summarize and compare several of the current theories about the nature of consciousness that have at their core interactions and effects on the quantum scale. Any discussion about consciousness in a scientific context is bound to be met with at least some controversy, but all of the hypotheses presented in this paper were posed originally by leaders in the field of the study of consciousness. Many thanks are due to Dr. Evan Fletcher, who provided the background and inspiration necessary to delve into a topic currently in as much flux as the quantum-mechanical aspects of consciousness.  

—Daniel Swain

 

Instructor's comment: Daniel Swain tackles a difficult problem in explaining issues of quantum mechanics and consciousness.  The “Hard Problem” is philosopher David Chalmers’ succinct designation for one of the most famous problems in philosophy, the mind-body problem, and it is of current relevance due to the striking successes of contemporary neuroscience. If the physical processes of the material brain give rise to consciousness (as most neuroscientists believe), then it should be feasible—though not trivial—to map out these processes in the brain. But this is easy in comparison with explaining how such physical processes actually generate subjective conscious awareness. What kind of conceptual framework could embrace these different realms? This is the Hard Problem.  Quantum mechanics may provide such a framework. Central is the concept of superposition, wherein a quantum state remains in simultaneous combination of potentialities until an observation is made. But how can an observation provoke a random and discontinuous collapse of the superposition into one of its potential alternatives? This is the measurement problem. To some it suggests a mysterious connection between consciousness (of the observer) and the actualities of the observed world.  Daniel has done a wonderful job—one of the best I’ve ever seen—of explaining these issues and outlining some possible quantum approaches to the Hard Problem.

—Evan Fletcher, Integrated Studies

 

 

The scientific study of consciousness has provoked discussion among experts in disparate contemporary fields. Philosophers and physicists, psychologists and programmers—all have shown interest in elucidating the enigmatic nature of the relationship between the brain and the consciousness that (presumably) arises from it. The quest to reconcile these seemingly distinct entities has been dubbed “The Hard Problem” by philosopher David Chalmers. The various solutions that have been proposed to this mind-body problem are often in conflict with one another and in many cases can neither be confirmed nor invalidated by empirical evidence. Several of the more intriguing theories of consciousness now under serious consideration involve potential interactions among the brain, the mind, and physical processes at the quantum level. The quantum-mechanical explanations of the origins of consciousness posed by Penrose, Hameroff, Stapp, Tegmark, and others are by no means complete or cohesive; indeed, there is a significant amount of disagreement even amongst proponents of a quantum-based explanation over the nature of any such connection. What does differentiate these quantum-physical hypotheses from other abstract and “mysterian” viewpoints, however, is that the quantum explanations rely on the existence of an ultimate truth rooted in actual if not immediately tangible quantum physical processes. Any discussion of these proposed explanations should begin at a fundamental level: the submicroscopic structure of the brain. Before delving into the specific consequences of these microscale interactions, however, a brief overview of the relevant aspects of modern quantum theory is in order.

One of the key elements of contemporary quantum mechanics is the behavior of the probabilistic wave function. According to Heisenberg, a particle unobserved by a conscious entity—one which does not experience any type of “interference,” including interaction with a photon, electromagnetic field, or even another atom—has a well-behaved wave function which evolves deterministically and can be modeled by Schrödinger’s Equation. The “wave function” that solves Schrödinger’s Equation encodes, through the square of its magnitude, the probability distribution of a particle in space. From the perspective of an individual observer, any particle will always appear to be at a particular discrete point within this “probability cloud,” most often where the “probability density” is greatest. The assumption that a given particle must reside at a particular and singular location, however, does not give a complete picture. Schrödinger’s Equation also implies that a particle will exist in all of the possible states dictated by its wave function simultaneously—that is, the particle will be “superposed” in multiple states.
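
In standard textbook notation (not used explicitly in the sources surveyed here, but it makes the relationship concrete), the wave function Ψ evolves according to the time-dependent Schrödinger Equation, and the Born rule converts its amplitude into the probability density just described:
\[
i\hbar\,\frac{\partial \Psi(\mathbf{x},t)}{\partial t} \;=\; \hat{H}\,\Psi(\mathbf{x},t),
\qquad
\rho(\mathbf{x},t) \;=\; \lvert \Psi(\mathbf{x},t)\rvert^{2},
\]
where \(\hat{H}\) is the Hamiltonian (total energy) operator and \(\rho\) gives the probability of finding the particle near position \(\mathbf{x}\) if a measurement is made at time \(t\).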

Here lies the great departure between classical and quantum physics. That an observer will perceive only one of multiple possible positions for a given particle is a conundrum that remains unresolved even after a century of scientific debate. For many years, the proposition that an act of “observation” on such a superposed particle would cause a “collapse” of the wave function, forcing the particle into a single, well-defined classical state while the rest of the wave function ceases to exist—called the Copenhagen interpretation—was widely accepted by the scientific community. Although quantum-level effects have macroscopic consequences, according to the Copenhagen interpretation, the micro and macro worlds remain separated, residing in two entirely different domains and subject to two sets of fundamentally contradictory assumptions.

These contradictions have spurred the development of other interpretations. In the past few decades, for example, Hugh Everett’s Many Worlds theory, first posited in the 1950s, has been favored by many as an alternative to the collapse theory. Instead of literally “collapsing” upon observation, as the Copenhagen interpretation holds, the wave function stays intact—that is, the particle actually does exist in all possible locations described by the wave function. These various superposed states simply aren’t visible to our macroscopic selves because they have been “decohered” through interference by the time we are able to perceive them. In other words, a single outcome has become fixed in a particular observer’s frame of reference through the quantum system’s interaction with the surrounding environment, even though the complete superposition of outcomes still exists. “Macroscopic objects,” according to Tegmark, “are almost impossible to keep isolated to the extent needed to prevent decoherence” (Tegmark, 2001), and this is why events in our everyday lives appear to evolve deterministically. It follows, however, that a system sufficiently isolated from environmental interaction would be able to preserve pristine superpositions, a possibility that becomes important when we consider, in a moment, certain structures in the brain. In any case, quantum decoherence would preserve our perceived reality and allow for effective (if not actual) “collapses” of the wave function to occur. We are capable of experiencing only those aspects of the world remaining after decoherence—and from a practical standpoint we can ignore the alternatives in our day-to-day macroscopic lives. The “measurement problem,” as it has been termed in recent years, stems from our inability to separate the observer from the observed in quantum frames of reference, because the act of observation seems to play a crucial role in the outcome of any experiment.
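
A minimal sketch of how decoherence hides superpositions, using the standard density-matrix language rather than anything specific to Tegmark’s treatment: for a two-state superposition \(\alpha\lvert 0\rangle + \beta\lvert 1\rangle\) coupled to its environment, the interference (off-diagonal) terms are suppressed on some characteristic decoherence timescale \(\tau_d\) (modeled here, for simplicity, as an exponential decay), leaving what is for all practical purposes a classical mixture of the two outcomes:
\[
\rho(t) \;=\;
\begin{pmatrix}
\lvert\alpha\rvert^{2} & \alpha\beta^{*}e^{-t/\tau_d}\\
\alpha^{*}\beta\,e^{-t/\tau_d} & \lvert\beta\rvert^{2}
\end{pmatrix}
\;\;\xrightarrow{\;t\,\gg\,\tau_d\;}\;\;
\begin{pmatrix}
\lvert\alpha\rvert^{2} & 0\\
0 & \lvert\beta\rvert^{2}
\end{pmatrix}.
\]
The superposition is still “there” in the global state; it has simply become unobservable to anyone embedded in the environment that carried the interference away.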

Also important to mention here is the idea of quantum non-locality, a corollary of entanglement: correlations between entangled particles are realized instantaneously, regardless of the distance between the particles involved. Heisenberg’s Uncertainty Principle, which states that there is an inverse relationship between the precision with which a particle’s position and its momentum can simultaneously be known, is likewise integral to understanding the prevailing theories of modern quantum mechanics.
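
In its usual quantitative form (again, standard notation rather than anything drawn verbatim from the sources above), the Uncertainty Principle bounds the product of the position and momentum uncertainties:
\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]
so that sharpening one’s knowledge of a particle’s position necessarily broadens the spread of its possible momenta, and vice versa. This trade-off reappears below in the Beck and Eccles treatment of calcium ions at the synapse.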

Although the preceding simplified overview of the major relevant concepts in quantum mechanics relies on still-contested assumptions, these concepts can be viewed as relatively mainstream when compared to the hotly debated and vehemently defended theories and philosophies regarding the relationship between quantum mechanics and consciousness. These quantum processes may be linked to consciousness in one or both of the following ways: microscale interactions in the brain governed by quantum superposition could lead to our perception of individual “consciousness,” or the act of conscious observation itself may actually have a causal influence on the reality of the world around us.

The first approach, involving microscale interactions in the brain, draws largely from the ideas of Roger Penrose. Penrose uses a multi-pronged argument to establish that the inner workings of the brain as they relate to conscious perception must involve quantum-level interactions. According to Penrose there are certain aspects of our conscious existence that we simply could not possess if our thought processes functioned in accordance with classical mechanics. Penrose believes there exists “something in the physical action of the brain which evokes awareness . . . but cannot be simulated computationally” (Penrose, 1997). Because classical physics is entirely deterministic, the behavior of any system governed solely by Newtonian mechanics can hypothetically be modeled computationally. If the initial conditions of a system are known and the mathematical model for the evolution of that system is perfect, a computation performed using an algorithmic process can determine the exact state of the system at a future point in time. Penrose certainly does believe that some computations made in the brain rely on strictly algorithmic processes. There is a strong argument for the existence of such “algorithms for general understanding” as a product of natural selection (Penrose, 1997). At the same time, however, Penrose argues that “conscious understanding”—from which “hunches” and “flashes of insight” arise—is a product of quantum processes (Penrose, 1997). Our conscious perception allows us to transcend the limitations of algorithmic determinism, and these non-computational aspects of consciousness cannot be the result of classical physical processes. That we humans can recognize the truth of statements which cannot be proved by any finite, algorithmic procedure (a consequence of Gödel’s Incompleteness Theorem) is, according to Penrose, evidence that at least certain aspects of brain function, and thus consciousness, must be a result of quantum processes.

Penrose also argues that regardless of one’s preconceptions on the subject, quantum theory must be considered when examining consciousness because most of the chemical and physical processes in the brain occur on a scale small enough to be subject to the stipulations of quantum mechanics. In collaboration with anesthesiologist Stuart Hameroff, Penrose has studied extensively the microtubular substructure of neurons. Each of the approximately 100 billion neurons in the human brain contains microtubules composed of “tubulin” molecules and spans a total distance of several millimeters (Hagan, 2002). Each tubulin molecule retains a lone electron that is found most often in one of two stable orbitals. Depending on the orbital in which its electron resides at a given time, each tubulin molecule will take up one of two slightly different spatial configurations (Penrose, 1997). Hameroff noticed that anesthetic drugs tend to prevent or greatly inhibit the ability of this lone electron to shift between orbital positions and therefore limit the frequency of change in the orientation of tubulin molecules throughout the brain. Because such drugs lead to the dramatic reduction, or even temporary suspension, of “consciousness,” a causal relationship is implied between the degree of freedom of the tubulin molecule to change shape and the level of consciousness experienced.

This conclusion is interesting in light of Libet’s neuronal adequacy experiments. An irreducible “quantum” of energy is necessary for the lone electron to shift to a higher-level orbital state, and so the very large number of shifts between these energy states may be directly and mathematically related, according to Hameroff and Penrose, to the 500-millisecond threshold Libet measured for a stimulus to generate conscious awareness (Penrose, 1997). This amount of energy corresponds to a configurational shift in about 1% of all the tubulin molecules in the brain (Penrose, 1997; Hagan, 2002). Penrose points out that the mere fact that this number makes logical sense and that these calculations do not result in an impossible answer (which, given the many orders of magnitude spanned by the numbers involved, could easily happen) lends credence to his theory.

Hameroff also maintains that there is another important aspect to the duality of possible molecular orientations. According to quantum mechanics, these microtubules would at times exist in superposed states, taking on both possible orientations at once. The possibility that information could be encoded by these dual-state structures (reminiscent of the binary code used in conventional computing) cannot be ignored. Hameroff believes that this binary system could form the basis of a quantum mechanical system of computation in the brain. Microtubules, according to Hameroff, are “perhaps ideally designed” as quantum computers (Hagan, 2002). This massive network of tubulin molecules within the brain’s neurons could lead to an enormously complex series of quantum computers working in parallel and connected together by an “internet” of hollow microtubules which would serve as “guides for quantum waves.” Hameroff argues that various aspects of the physical structure of tubulin molecules, especially the “ordered water” lining of the tubes, might sufficiently isolate the interior of these microtubules from outside influences and thus sustain “coherence conditions” capable of preserving superposed states long enough (more than 10⁻¹³ seconds) to be utilized in the form of a “quantum computer” (Penrose, 1997). These superpositions within our neural nets, however, would remain invisible to the conscious subject because inevitable “observations” would cause them to decohere faster than the brain is able to process information (about 10⁻³ seconds) (Penrose, 1997; Hagan, 2002). Quantum oscillatory movements in these microtubules, then, could lead to a global connectivity via quantum non-locality—what Penrose and others believe may be the impetus for consciousness.
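
Hameroff’s feasibility claim can be compressed into a rough sketch (a paraphrase of the argument above, not a formula appearing in the cited papers): each tubulin behaves as a two-state system that can be superposed, and the scheme works only if the decoherence time of such superpositions exceeds the roughly 10⁻¹³-second scale needed for useful quantum processing, while decoherence is still completed well before the roughly 10⁻³-second scale of conscious neural processing, which is why the superpositions are never directly experienced:
\[
\lvert\text{tubulin}\rangle \;=\; \alpha\,\lvert A\rangle \;+\; \beta\,\lvert B\rangle,
\qquad
10^{-13}\ \text{s} \;\lesssim\; \tau_{\text{decoherence}} \;\ll\; \tau_{\text{conscious}} \sim 10^{-3}\ \text{s},
\]
where \(\lvert A\rangle\) and \(\lvert B\rangle\) label the two tubulin conformations.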

In light of the importance of these possible quantum-scale effects in the brain, it is germane to also briefly mention the work of Sir John Eccles and Friedrich Beck. Their research primarily concerns cellular exocytosis in the brain, which occurs when a vesicle on a presynaptic nerve terminal releases chemical neurotransmitters. In order for information transmission to occur, the neurotransmitters must travel through the synaptic cleft and reach a postsynaptic nerve terminal. To trigger the release of neurotransmitters, however, depolarization of the axon must first occur via the flow of sodium ions down the length of the axon. This is the propagation of an action potential. Such propagations induce calcium ions to flow into the presynaptic axon terminal, and the successful transmission of an impulse requires that a sufficient fraction of these ions actually arrive at the trigger site for vesicle release. Heisenberg’s Uncertainty Principle produces a probability distribution for the position of each ion. The spatial uncertainty for something the size of a calcium ion is sufficiently large that a substantial possibility exists that a given ion will “miss” the trigger site entirely. If this happens, exocytosis will never occur; therefore, a superposition of “exocytosis and no exocytosis” exists each time a neuron fires in the brain (Stapp, 2006). The conclusions drawn from their research are somewhat analogous to Penrose’s and Hameroff’s theories about the quantum function of microtubules in the brain, and both theories become relevant when we consider, shortly, the Stapp/von Neumann approach to explaining consciousness via quantum concepts.
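
The scale of the effect can be illustrated with a back-of-the-envelope estimate in the spirit of this argument; the specific figures below (a calcium ion of mass roughly 6.6 × 10⁻²⁶ kg localized to the roughly one-nanometer width of an ion channel) are illustrative assumptions rather than numbers taken from the sources:
\[
\Delta v \;\ge\; \frac{\hbar}{2\,m_{\mathrm{Ca}}\,\Delta x}
\;\approx\;
\frac{1.05\times10^{-34}\ \mathrm{J\,s}}
     {2\,(6.6\times10^{-26}\ \mathrm{kg})(10^{-9}\ \mathrm{m})}
\;\approx\; 0.8\ \mathrm{m/s}.
\]
Over even a microsecond of travel, a velocity uncertainty of this size spreads the ion’s wave packet across hundreds of nanometers, far larger than a nanometer-scale release site, so a superposition of “exocytosis and no exocytosis” becomes quantum-mechanically unavoidable.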

Penrose states that his hypotheses about the quantum origins of consciousness in microtubules will ultimately depend upon a “new” physics—an updated, refined, or even fundamentally different version of the prevailing theory. This will be necessary to fully understand not only the quantum-level processes in the brain but also the ways in which these relate to consciousness. A quantum computer is limited to answering only those questions that can be answered algorithmically, albeit with much greater speed and efficiency than traditional “machines” (Khrenikov, 2007). Yet even if the brain does not function as an actual quantum computer and simply takes advantage of quantum-like processing techniques, the computational aspects of brain function are not the most daunting obstacles when discussing consciousness. Modern quantum mechanics still cannot “explain the way in which we ‘think’” (Penrose, 1997). The types of innate human logic mentioned earlier must be explained in terms of non-algorithmic processes, so a resolution to this problem must involve something as yet undiscovered.

Most promising, according to Penrose, is the work in the area of quantum gravity. A theory of quantum gravity, Penrose argues, could resolve the abrupt and discontinuous changes that affect an object’s wave function when an “observation” is made. These so-called quantum jumps present a problem for the evolution of space-time: the deformation of space-time geometry by gravity would not proceed in a well-behaved manner, because the centers of mass of a superposed object would not be distinct. Nature, according to Penrose, tends to reduce “excessive ambiguity” in the structure of space-time, and many ongoing quantum processes in the brain (including tubulin configurational activity) are, in his view, almost certainly a result of this trend towards stability (Penrose, 1997; Penrose, 1989). Because the nature of quantum gravity is still unknown, Penrose admits that he does not know whether or not its existence will help resolve uncertainties about consciousness and quantum mechanics.
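
Penrose has attached a quantitative criterion to this idea of gravitationally induced reduction; it is worth sketching here as his proposal rather than as an established result. A superposition of two appreciably different mass distributions should spontaneously reduce on a timescale set by the gravitational self-energy \(E_G\) of the difference between them,
\[
\tau \;\approx\; \frac{\hbar}{E_{G}},
\]
so that large, “ambiguous” superpositions of space-time geometry decay almost instantly, while microscopic superpositions, such as those proposed for tubulin, can persist long enough to matter.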

Henry Stapp, for his part, believes that David Chalmers’ “Hard Problem” is a non-issue, a product of over-reliance on classical physics. According to Stapp, the trouble with a dualistic approach such as Chalmers’—in which the mind and the brain are viewed as entities having different properties—is its inherent determinism. Brain processes and the conscious awareness that seems to arise from them do not evolve deterministically. Moreover, classical physics does not even accommodate the possibility of consciousness, which we as human observers are rather certain does exist. Quantum mechanics dictates that an observer becomes a dynamic part of a system, irrevocably and causally tied to the outcome of that system. One cannot examine a quantum mechanical system, therefore, without considering the influence of the act of examination upon the system itself. Stapp argues for the existence of three processes by which consciousness and quantum mechanics interact. A modified version of a proposal first set forth by John von Neumann in the 1930s, Stapp’s theory implies “causal dynamical connections” between the following: (1) an observer’s conscious choice of how to act, which determines the nature of the observation being made, (2) the observer’s consciously experienced “increments of knowledge,” which result from the local and deterministic evolution of the quantum state, and (3) the “physical actualizations” of these experiences as neural correlates of consciousness in the brain (taken literally, the “collapse” of the wave function into a physically observable classical mechanical outcome) (Stapp, 2006).

Process 1 describes an act of volition—exactly why a human observer might make a particular observation in a particular manner is left unaddressed by both von Neumann and Stapp (Rosenblum and Kuttner, 2006). Stapp argues that willful choices are not subject to the known laws of physics (Stapp, 2006). Regardless of the origin of the choice, the act of making an observation limits the range of possible outcomes to a query. After the quantum wave function evolves deterministically according to Schrödinger’s Equation (Process 2), “nature” returns a definite answer, effectively a yes or a no, to the question posed in Process 1 (Process 3). These three processes allow for the inclusion of the brain in the base quantum state, which establishes a frame of reference that is not split apart by the conundrum of the “measurement problem.”
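
For readers who want the formal skeleton, von Neumann’s Process 1 is conventionally written as the action of a projection operator \(P\), representing the yes-or-no question the observer has chosen to pose, on the density matrix \(\rho\) of the system; the sketch below follows that standard form, which may differ in notation from Stapp’s own presentation:
\[
\rho \;\longrightarrow\; P\,\rho\,P \;+\; (1-P)\,\rho\,(1-P),
\]
after which Process 3 selects the “yes” outcome \(P\rho P/\operatorname{Tr}(P\rho)\) with probability \(\operatorname{Tr}(P\rho)\), or the “no” outcome with the complementary probability. Process 2 is simply the Schrödinger evolution that carries \(\rho\) between such events.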

Defining the initial state of the physical system such that it includes the brain state also helps resolve the issue of so-called quantum jumps, which would occur in a purely physical sense as a result of Process 1 events and in a conscious or experiential sense during Process 3 events. These “jumps” would represent the transition from “possible” to “actual,” both in terms of physical certainty and conscious perception of physical certainty. This integration also leads to a continuous or “singular” model of behavior. Whereas Heisenberg believed in a verifiable delineation between quantum and classical physical systems, the Stapp/von Neumann approach makes no such distinction (Stapp, 2006). It is worth noting here, however, that including the brain in the initial state creates a situation in which a conscious observer is using his own conscious mind to examine a process that arises from the physical brain, an apparent paradox that Stapp argues is resolved by making a strict delineation between the observing mind and the physical system under observation.

As a result of the causative power of conscious decision described by Stapp and others, the brain relies on clues from its “environment” to generate neural maps for particular events. Stapp calls these maps “Templates for Action”: particular patterns of brain activation that correspond to, or lead to, physical actions or cognitive functions. Quantum effects, according to Stapp, “inject” experiences into one’s consciousness when lower-level, classical mechanical processes cannot come into agreement with regard to a particular decision, possibly because the time scale on which this type of choice would need to be made is far shorter than other conscious processes can account for (Stapp, 2006). Interference leading to decoherence causes superpositions of neural correlates of consciousness to “collapse” into a particular observable and actionable state. In order to hold these Templates for Action in place against quantum mechanical processes that would tend to disrupt them, sustained brain activity induces rapid repetitions of Process 1 events. The brain repeatedly asks the same question in the same manner, continually renewing the “response” from nature. This process—known as the Quantum Zeno Effect—holds particular patterns of brain activity in place for long enough time intervals to allow the conscious entity to perform tasks (motor activity, for example) (Stapp, 2006).
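
The Quantum Zeno Effect itself has a compact textbook statement, independent of Stapp’s application of it to the brain: if a state whose survival probability initially falls off quadratically in time is re-measured N times over an interval t, the probability of finding it unchanged approaches one as the measurements become sufficiently rapid,
\[
P_{\text{survive}}(t) \;\approx\; \left[\,1 - \Bigl(\frac{\Delta E\;t}{N\hbar}\Bigr)^{2}\right]^{N}
\;\xrightarrow{\;N\to\infty\;}\; 1,
\]
where \(\Delta E\) is the energy uncertainty of the state. In Stapp’s picture, the rapid repetition of the same Process 1 question plays the role of the repeated measurement, holding the Template for Action in place.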

This effect has some interesting implications in the context of Benjamin Libet’s experimentation in the area of volitional acts. Subjects in these experiments were asked to raise a finger at some point of their own choosing during a given one-minute interval. Libet observed that a subject’s “readiness potential,” a spike in the level of neuronal activity in the motor cortex, precedes the conscious awareness of his decision to raise his finger (Stapp, 2006). Some have interpreted the antecedent nature of the readiness potential to be an indication that “free will” is illusory; Stapp, however, suggests that the subject generates a series of readiness potentials that occur in fairly rapid succession. At each readiness potential, the conscious subject is really posing a Process 1-type question to which there is a yes-or-no answer (“to raise the finger or not to raise the finger?”). “Nature” provides an answer to that question (a Process 3-type response): if the answer is “no,” then the Template for Action in that circumstance will not be actualized and the finger will not rise. When the answer to that question is “yes,” however, the Process 1-type question must be repeated frequently and rapidly to sustain an activation of the finger-raising Template for Action for a period of time long enough for the finger to actually rise. The subject’s free will comes into play when his or her brain repeatedly asks these Process 1-type questions.  Here again, Stapp emphasizes that one has a conscious and causal influence on the outcome of the situation by determining the nature of the Process 1-type question posed and consequently defining a range of possible responses. In this way, Stapp argues, the subject’s free will is preserved through quantum processes.

One alternative to the strict version of the Copenhagen interpretation as a solution to the measurement problem, as noted earlier, is Everett’s “Many Worlds” theory, which denies the objective reality of wave function collapse by assuming that everything in the universe exists in a state of superposition. A single wave function exists for the entire universe, and, like all unobserved systems, it evolves deterministically according to Schrödinger’s Equation. Everett’s view is, essentially, a literal interpretation of quantum theory. From a hypothetical—though by definition unattainable—omniscient and consequently non-causative perspective, an observer completely removed from the system would be able to see all possible states of the superposed system (the universe) simultaneously.

Here the physicist Max Tegmark’s views on the Many Worlds interpretation of Everett intersect the study of consciousness. Tegmark has described several types of parallel universes: some result from the fact that only a finite number of distinct states is possible within any finite region, others are realms in which the known (or unknown) laws of physics do not apply, and still others result from the infinitely branching superpositions that stem from the relative-state interpretation. Tegmark has called the last of these a “Level III multiverse,” which he argues may be relevant in understanding conscious existence. An observer located in any one of the infinitely many Level III universes would be unable to perceive any of the other Level III universes within his own Level III multiverse. After any such observer asks a Process 1–type question, and the system evolves deterministically (Process 2), that observer will experience only one possible response to his original query. Tegmark believes that instead of leading to a collapse of the quantum state into a single outcome as described by Stapp and von Neumann, conscious observation of a superposed system causes the observer literally to “split” into multiple copies of himself, one for each possible state described by the probabilistic wave function. Each of these copies continues to exist in his own parallel universe, completely unbeknownst to his counterparts elsewhere. Each branch evolves independently of all others, and new branchings occur each time a superposition develops in any of them. This assumption leads to the decidedly bizarre conclusion that all possible permutations of every event that has ever occurred in human (or physical) history exist somewhere—all things mathematically possible ultimately come to pass.
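
The “splitting” Tegmark describes can be sketched in the relative-state notation Everett introduced (a schematic illustration, not a quotation from either author): a measurement interaction unitarily entangles the superposed system with the observer, and each term of the resulting superposition is a branch containing a copy of the observer who sees exactly one outcome,
\[
\bigl(\alpha\,\lvert\uparrow\rangle + \beta\,\lvert\downarrow\rangle\bigr)\otimes\lvert\text{observer ready}\rangle
\;\longrightarrow\;
\alpha\,\lvert\uparrow\rangle\,\lvert\text{sees }\uparrow\rangle
\;+\;
\beta\,\lvert\downarrow\rangle\,\lvert\text{sees }\downarrow\rangle.
\]
No collapse ever occurs; the apparent randomness is simply each copy’s inability to detect the branch containing the other.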

Such branching manifests itself as an apparent randomness, experienced in our reality as definite probabilities for the possible outcomes of a query. Because “all possible states exist at every instant,” argues Tegmark, “the passage of time may be in the eye of the beholder” (2007). These quantum branches would persist in infinite-dimensional Hilbert space, and some involved in this field of research, including Tegmark, believe that the brain resides in the same domain (Tegmark, 2007). Tegmark suggests that our perception of conscious experience, the essence of consciousness itself, may be a direct result of this branching, or even of our particular interpretation of the branching process. Unlike Penrose, Eccles, and others, Tegmark believes that consciousness is not a result of quantum processes within the brain (with regard to microtubules or ion receptors) but is instead a product of translational shifts through space, and not necessarily of linear movement through time.

Tegmark states that the development of true quantum computers would demonstrate at least the plausibility of the many-worlds interpretation, since such machines would exploit the parallelism of the Level III multiverse (Tegmark, 2007). Because we are constitutionally unable to perceive these parallel Level III universes, however, the inescapable implication of the many-worlds argument is that we may never be able to know whether or not this theory has any basis in reality. It will remain, probably for quite some time, a tantalizing possible explanation for the nature of our physical existence and perceived conscious experience.

These various theories that attempt to link quantum mechanics with consciousness are invariably controversial; some have a basis in long-standing logical assumptions and prevailing worldviews, but many espouse concepts that might appear to the layperson to be more than a little fantastical. Even among the three major viewpoints of Penrose, Stapp, and Tegmark there is substantial and often fundamental disagreement as to the nature of these interconnections. Penrose believes that dramatic advancements in theoretical physics are needed before we will be able to ascertain the origins of consciousness; Stapp asserts that consciousness is itself an irreducible causal power in the universe; Tegmark argues that consciousness is largely illusory and a product of a physical reality that can ultimately be described mathematically. He believes that our perception of all that surrounds us is limited to the domain we currently inhabit: from an infinite number of possibly observable realities, ours is only one, isolated from all the others existing in parallel. Proponents of these theories do tend to agree, however, that resistance to these often dramatic and enormously counterintuitive ideas stems not so much from genuine scientific criticism of the respective theories as from aesthetic concerns generated by a macroscopic worldview in which classical mechanics and staunch determinism govern physical reality. “The principal argument against quantum mechanical models of the universe,” according to Tegmark, “is that they are . . . weird” (2007).

There is certainly no guarantee that any of the currently proposed theories of consciousness will turn out to be accurate.  Indeed, an answer to the mind-body problem might come from an area we have yet to consider. When we ask a profound question about the nature of human consciousness, it is only reasonable to “expect an answer that sounds strange” (Tegmark, 2007). That which we perceive to be conscious experience—regardless of what mysteries may be revealed—is an inescapable aspect of human existence, and the elucidation of its subtleties will provide insight into the nature of the construction of reality.

 

Sources

Blackmore, S. (2006) Conversations on Consciousness. Oxford UP, New York.

Byrne, P. (Nov. 2007) The many worlds of Hugh Everett. Scientific American.

Hagan, S., Hameroff, S., Tuszynski, J. (2002) Quantum computation in brain microtubules? Decoherence and biological feasibility. Physical Review E, Vol. 65, 061901.

Khrenikov, A. (2007) Can quantum information be processed by macroscopic systems? J Quantum Information Processing, 6:6.

Penrose, R. (1989) The Emperor’s New Mind. Oxford UP, New York.

———. (1997) The Large, the Small, and the Human Mind. Cambridge UP, London.

———. (1994) Shadows of the Mind. Oxford UP, New York.

Rosenblum, B. & Kuttner, F. (2006) Quantum Enigma. Oxford UP, New York.

Stapp, H. (2006) Quantum approaches to consciousness. In Zelazo, P.D., Moscovitch, M., & Thompson, E. (Eds.), The Cambridge Handbook of Consciousness. Cambridge UP, Cambridge.

Tegmark, M. (Jan. 2001) 100 years of the quantum. Scientific American.

———. (Oct. 2007) Parallel universes. Supplement to Scientific American.