Chris Fields Research
Ideas, drafts, recent publications ...

Return to Homepage


What is quantum theory about?

Scientific theories are generally regarded as being about objects or phenomena that exist either independently of human beings altogether, or at least independently of whether human researchers choose to investigate them. Geology, for example, investigates structures and forces within the Earth that long predate the arrival of human beings; astrophysics investigates the structure and evolution of stars that are far away. Even psychology investigates mental states that people experience whether or not a psychologist happens to be talking with them. As the fundamental science of the structure and behavior of matter and energy, physics seems a paradigm case of this independence: the physical world is simply there, independently of our efforts to extract information from it by making measurements.

Regarding quantum theory as a theory about objects that straightforwardly exist independently of our observations, however, leads to paradoxes that have been recognized since the theory was first developed in the 1920s. Quantum theory predicts, for example, that objects can be in two places at once, that objects can move with two different speeds at the same time, that cats - as in Erwin Schrödinger's famous thought experiment - can be both dead and alive, and in general, that any quantum system can simultaneously have any combination of values of any variables, parameters or properties that one might choose to describe it. Quantum theory also predicts that objects can become "entangled" - related to each other in such a way that manipulations of one of the objects result instantaneously in alterations of the other object. Experimental investigations of microscopic objects - objects up to the size of large molecules - consistently confirm these predictions. In fact, some predictions of quantum theory have been tested and confirmed to accuracies of one part in 10 billion, an experimental uncertainty of 0.00000001%. Ordinary objects like tables and chairs, or cats or other people, however, never appear to be in two places at once, or both moving and still, or dead and alive. This disconnect between theoretical predictions and ordinary experience is obviously a problem. Scientific theories are not supposed to be overwhelmingly well confirmed experimentally and yet at the same time apparently false. Physicists from Niels Bohr to Richard Feynman have remarked that if quantum theory does not appear to be utterly mysterious, one has not understood it.
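The two predictions described above - superposition and entanglement - can be made concrete in a minimal numerical sketch (my illustration, not part of the original argument), using the standard state-vector formalism and the Born rule:

```python
import numpy as np

# A qubit in an equal superposition of two position-like states |0> and |1>:
# until a measurement is made, the theory assigns both values at once.
psi = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(psi) ** 2                 # Born rule: outcome probabilities
print(probs)                             # [0.5 0.5]

# An entangled Bell state of two qubits, (|00> + |11>)/sqrt(2):
# measuring one qubit instantly fixes the correlated outcome of the other.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs_bell = np.abs(bell) ** 2
print(probs_bell)                        # only |00> and |11> ever occur
```

The mixed outcomes |01> and |10> have probability zero: the two qubits' results are perfectly correlated, however far apart they may be.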

An increasingly popular response to the paradoxes generated by quantum theory is to claim that the theory is not about objects at all, but rather about the information about objects that can be obtained from experimental investigations. On this view, the prediction that an object can be in two places at once simply reflects the truism that if one has not looked for it, one does not know where the object is. If quantum theory is viewed in this way, it is not a typical scientific theory; it is not a theory about things that exist independently of whether we look at them, but rather a theory of how we look at things, a theory of measurement and inference. Christopher Fuchs, a prominent theorist of quantum information, goes so far as to claim that quantum theory is part of probability theory, the mathematical theory of reasoning under conditions of uncertainty.

I increasingly believe that quantum theory is in fact two theories - an ordinary theory about objects and a theory of observation - that have been combined into a single mathematical formalism. The mathematics of quantum theory assigns well-defined mathematical representations to objects; hence it implicitly assumes that objects themselves are well-defined. I suspect that this assumption is wrong, and that this assumption being wrong is the source of the paradoxes of quantum theory. My goal, therefore, is to develop a formulation of quantum theory that does not make this assumption, and see how that new formalism deals with open problems in quantum theory, including the problems of how quantum theory relates to the theory of algorithms, i.e. the theory of computation, and to general relativity.

The problem of boundaries

My work in this area started when I noticed a logical problem with quantum Darwinism. Quantum Darwinism is part of the general project of decoherence theory, a body of theory developed from the early 1970s onward that attempts to explain "the emergence of classicality" - the "classical" appearance of the ordinary world of tables, chairs and other people - from a theoretical perspective that views quantum theory as a theory of independently-existing objects. In quantum theory, objects take on single, definite values of observable properties such as position or momentum (or aliveness or deadness) only when they are observed in a way that looks for the particular observable. For example, a quantum-theoretic object - a "quantum system" - only has a definite spatial position if one measures its spatial position. Decoherence theory starts from the idea that the physical interactions between an object and its surrounding environment - for example, the interactions between a table and the surrounding air - can be viewed as measurements, in particular, as measurements of the object by the environment. If this is the case, then the properties that the environment measures take on definite values; hence one would predict that the "classical" properties of objects that humans perceive in the ordinary world are those properties that our shared environment measures for us.
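The core decoherence mechanism - the environment "measuring" an object and thereby suppressing its superpositions - can be illustrated with a toy dephasing model (my sketch; the random-phase-kick channel here is a standard textbook simplification, not the specific model of any paper discussed on this page):

```python
import numpy as np

# Density matrix of a qubit in an equal superposition: the off-diagonal
# "coherence" terms encode its two-places-at-once character.
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)

# Model each environment interaction as a random phase kick, averaged over
# whether the kick occurred. The diagonal (the "classical" probabilities)
# is untouched; the coherences shrink with every interaction.
rng = np.random.default_rng(0)
for _ in range(1000):
    phi = rng.uniform(0, 2 * np.pi)
    U = np.diag([1, np.exp(1j * phi)])
    rho = 0.5 * (rho + U @ rho @ U.conj().T)

print(np.round(np.abs(rho), 3))   # off-diagonals ~0: the state now looks classical
```

After many such interactions the coherences are effectively zero, which is why the properties the environment "measures" are the ones that appear definite to us.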

The mathematics of decoherence theory is a straightforward application of the mathematics of quantum theory itself, so decoherence theory is well accepted as a set of tools for calculating the consequences of object-environment interactions. The interpretation of decoherence theory in terms of "emergence," however, is controversial. Quantum Darwinism is an attempt to show that decoherence explains the emergence of classicality in an objective, observer-independent way by showing that object-environment interactions encode information about an object into its environment in a way that allows observers who interact only with the environment to learn about the object. For example, human observers can learn about a table by interacting with light that has bounced off the table: that is how human vision works, by collecting light that has bounced off objects. This environment-encoded information is objective, according to quantum Darwinism, because it is encoded redundantly in many parts of the environment. Redundant encoding allows multiple observers to interact with different parts of the environment and agree that they have all discovered encodings of the same information; hence they can all agree that they have observed the same object.
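The redundancy claim can be caricatured classically (my illustration only - quantum Darwinism's actual argument concerns quantum mutual information between the object and environment fragments): a pointer value is copied into many environment fragments, and observers who each sample a different, disjoint fragment nevertheless recover the same value, even when some copies are corrupted:

```python
import numpy as np

rng = np.random.default_rng(1)

# The object's decohered "pointer" value, copied redundantly into 10,000
# environment fragments (think: many photons bouncing off a table),
# with 10% of the copies corrupted by noise.
pointer_bit = 1
corrupted = rng.random(10_000) < 0.1
environment = np.where(corrupted, 1 - pointer_bit, pointer_bit)

# Five observers each sample a different, non-overlapping 100-fragment
# slice of the environment and take a majority vote over their slice.
fragments = [environment[i * 100:(i + 1) * 100] for i in range(5)]
readings = [int(round(frag.mean())) for frag in fragments]
print(readings)   # all observers agree on the pointer value
```

Agreement among observers who never interact with the object directly, or with each other's fragments, is what quantum Darwinism offers as the hallmark of objectivity.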

I show in "Quantum Darwinism requires an extra-theoretical assumption of encoding redundancy" that the sense of redundancy required by quantum Darwinism is not demonstrated from within the theory, but is rather assumed from outside the theory. This is a logical problem: the problem of circularity, of assuming X to prove X. I then show in "Classical system boundaries cannot be determined within quantum Darwinism" that this circularity cannot be repaired: observers restricted to the observational procedures allowed by quantum Darwinism cannot determine the boundary that separates an object from its environment. If observers cannot say what the boundary of an object is, they cannot say what the object is; they cannot distinguish the object from its environment. This is a serious problem for a theory that is "about" objects, and it appears to be a problem that decoherence theory cannot solve. The theory must, therefore, be making a wrong assumption somewhere. Because the formalism produces calculations that are consistently shown by experiments to be correct, the problem must be in the formal semantics of the theory, the relationship between the mathematical formalism that expresses the theory and the physical objects and actions that the formalism is supposed to be about.

Quantum theory without boundaries

In order to understand what was going wrong in quantum Darwinism, I switched to the perspective that quantum theory is not about objects but about what can be inferred from observations, and asked what an observer has to assume about an object in order to carry out observations of that object. The standard answer to this question is "nothing" - quantum theory standardly models an "observer" as simply a point of view, a location in space from which observations could be made. In "If physics is an information science, what is an observer?" I show that this way of thinking about observers cannot work: an observer must know how to identify an object in order to conduct observations. This "know how" can be represented mathematically using the same formalism that quantum theory uses to represent measurement operations. If this "know how" is built into the mathematical model of the observer, many of the key features of quantum theory fall out automatically. "A model-theoretic interpretation of environment-induced superselection" and "Bell's theorem from Moore's theorem" investigate in greater detail the assumptions that must be made about the world in order to derive quantum theory from a mathematical model of how observers identify the objects that they are to observe. One assumption stands out as crucial: the assumption that the physical dynamics do not depend on what observers choose to regard as objects of investigation. This assumption of "decompositional equivalence" sounds innocent, but in fact it is powerful: it is the assumption that any way of cutting the world up into "objects" is as good, from the point of view of fundamental physics, as any other. If this assumption is true, physics cannot explain the "emergence" of our ordinary classical world; indeed it cannot explain the emergence of any particular world - any particular cutting up of "everything" into a particular set of objects - at all.

One way of testing the assumption of decompositional equivalence is to examine the logical structure of a theory that assumes that it is false. One such theory is quantum Bayesianism, a formulation of quantum theory that takes the objects in the "ordinary world" for granted and treats quantum theory as a "users' manual" for making inferences based on limited observations conducted under conditions of uncertainty. In "Autonomy all the way down: Systems and dynamics in quantum Bayesianism" I show that quantum Bayesianism becomes inconsistent if observers are regarded as knowing with certainty what "ordinary world" objects they are observing, and that consistency can be restored by assuming decompositional equivalence, i.e. by assuming that the "ordinary world" is not ordinary after all.

If decompositional equivalence is true, nothing has a real boundary. In particular, observers do not have real boundaries. You and I do not have real boundaries. As far as I can tell, this is what quantum theory is telling us about the structure of the world: that it is boundaryless. If the world is boundaryless, our "ordinary" experience of bounded objects is purely virtual - it is "just semantics" that we somehow overlay on what is actually going on around and within us. If we can understand how we overlay this semantics onto the world, perhaps we will be able to understand what it means to be an observer, to be a bit of physics that has experiences.






Copyright © 2012 Chris Fields