Chapter 3

Anthropic Principles

The Motley Family

We have seen how observation selection effects are relevant in assessing the implications of cosmological fine-tuning, and we have outlined a model for how they modulate the conditional probability of us making certain observations given certain hypotheses about the large-scale structure of the cosmos. The general idea that observation selection effects need to be taken into account in cosmological theorizing has been recognized by several authors and there have been many attempts to express this idea in the form of an “anthropic principle”. None of these attempts quite hits the mark, however, and some seem not even to know what they are aiming at.

The first section of this chapter reviews some of the more helpful formulations of the anthropic principle found in the literature and considers how far these can take us. Section two briefly discusses a set of very different “anthropic principles” and explains why they are misguided or at least irrelevant for our present purposes. A thicket of confusion surrounds the anthropic principle and its epistemological status. We shall need to clear that up. Since a main thrust of this book is that anthropic reasoning merits serious attention, we shall want to explicitly disown some associated ideas that are misguided. The third section continues where the first section left off. It argues that the formulations found in the literature are inadequate. A fourth section proposes a new methodological principle to replace them. This principle forms the core of the theory of observation selection effects that we will develop in subsequent chapters.

The anthropic principle as expressing an observation selection effect

The term “anthropic principle” was coined by Brandon Carter in a paper of 1974, wherein he defined it thus:

. . . what we can expect to observe must be restricted by the conditions necessary for our presence as observers. (Carter 1974), p. 126

Carter’s notion of the anthropic principle, as evidenced by the uses to which he put it, is appropriate and productive. Yet his definitions and explanations of it are rather vague. While Carter himself was never in doubt about how to understand and apply the principle, he did not explain it in a philosophically transparent enough manner to enable all his readers to do the same.

The trouble starts with the name. Anthropic reasoning has nothing in particular to do with Homo sapiens. Calling the principle “anthropic” is therefore misleading and has indeed misled some authors (e.g. (Gale 1981; Gould 1985; Worrall 1996)). Carter has expressed regrets about not using a different name (Carter 1983), suggesting that maybe “the psychocentric principle”, “the cognizability principle” or “the observer self-selection principle” would have been better. The time for terminological reform has probably passed, but emphasizing that the anthropic principle concerns intelligent observers in general and not specifically human observers should help to prevent misunderstandings.

Carter introduced two versions of the anthropic principle, one strong (SAP) and one weak (WAP). WAP states that:

. . . we must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers. (p. 127)

And SAP:

. . . the Universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. (p. 129)

Carter’s formulations have been attacked alternately for being mere tautologies (and therefore incapable of doing any interesting explanatory work whatever) and for being wildly speculative (and lacking any empirical support). Often WAP is accused of the former and SAP of the latter. I think we have to admit that both these readings are possible, since the definitions of WAP and SAP are very vague. WAP says that we have to “be prepared to take account of” the fact that our location is privileged, but it does not say how we are to take account of that fact. SAP says that the universe “must” admit the creation of observers, but we get very different meanings depending on how we interpret the “must”. Does it serve merely to underscore an implication of available data (“the universe must be life-admitting—present evidence about our existence implies that!”)? Or is the “must” instead to be understood in some stronger sense, for example as alleging some kind of prior metaphysical or theological necessity? On the former alternative, the principle is indisputably true; but then the difficulty is to explain how this trivial statement can be useful or important. On the second alternative, we can see how it could be contentful (provided we can make sense of the intended notion of necessity), the difficulty now being to provide some reason for why we should believe it.

John Leslie (Leslie 1989) argues that AP, WAP and SAP can all be understood as tautologies and that the difference between them is often purely verbal. In Leslie’s explication, AP simply says that:

Any intelligent living beings that there are can find themselves only where intelligent life is possible. (Leslie 1989), p. 128

WAP then says that, within a universe, observers find themselves only at spatiotemporal locations where observers are possible. SAP states that observers find themselves only in universes that allow observers to exist. “Universes” means roughly: huge spacetime regions that might be more or less causally disconnected from other spacetime regions. Since the definition of a universe is not sharp, neither is the distinction between WAP and SAP. WAP talks about where within a life-permitting universe we should expect to find ourselves, while SAP talks about in what kind of universe in an ensemble of universes we should expect to find ourselves. On this interpretation the two principles are fundamentally similar, differing in scope only.

For completeness, we may also mention Leslie’s (Leslie 1989) “Superweak Anthropic Principle”, which states that:

If intelligent life’s emergence, NO MATTER HOW HOSPITABLE THE ENVIRONMENT, always involves very improbable happenings, then any intelligent living beings that there are evolved where such improbable happenings happened. (Leslie 1989), p. 132; emphasis and capitals as in the original.

The implication, as Michael Hart (Hart 1982) has stressed, is that we shouldn’t assume that the evolution of life on an Earth-like planet couldn’t be extremely improbable. Provided there are enough Earth-like planets, as there almost certainly are in an infinite universe, then even a chance lower than 1 in 10^3,000 would be enough to ensure (i.e. give an arbitrarily great probability to the proposition) that life would evolve somewhere.1 Naturally, what we would observe would be one of the rare planets where such an improbable chance-event had occurred. The Superweak AP can be seen as a special case of WAP. It doesn’t add anything to what is already contained in Carter’s principles.

1 The figure 1 in 10^3,000 is Hart’s most optimistic estimate of how likely it is that the right molecules would just happen to bump into each other to form a short DNA string capable of self-replication. As Hart himself recognizes, it is possible that there exists some as yet unknown abiotic process bridging the gap between amino acids (which we know can form spontaneously in suitable environments) and DNA-based self-replicating organisms. Such a bridge could dramatically improve the odds of life evolving. Some suggestions have been given for what it could be: self-replicating clay structures, perhaps, or maybe some simpler chemicals isomorphic to Stuart Kauffman’s autocatalytic sets (such as thioesters?). But we are still very much in the dark about how life got started on Earth or what the odds are of it happening on a random Earth-like planet.

The question that immediately arises is: Has not Leslie trivialized anthropic reasoning with this definition of AP?—Not necessarily. Whereas the principles he defines are tautologies, the invocation of them to do explanatory work is dependent on nontrivial assumptions about the world. Rather than the truth of AP being problematic, its applicability is problematic. That is, it is problematic whether the world is such that AP can play a role in interesting explanations and predictions. For example, the anthropic explanation of fine-tuning requires the existence of an ensemble of universes differing in a wide range of parameters and boundary conditions. Without the assumption that such an ensemble actually exists, the explanation doesn’t get off the ground. SAP, as Leslie defines it, would be true even if there were no other universe than our own, but it would then be unable to help explain the fine-tuning. Writes Leslie:

It is often complained that the anthropic principle is a tautology, so can explain nothing. The answer to this is that while tautologies cannot by themselves explain anything, they can enter into explanations. The tautology that three fours make twelve can help explain why it is risky to visit the wood when three sets of four lions entered it and only eleven exited. (Leslie 1996), pp. 170–1

I would add that there is a lot more to anthropic reasoning than the anthropic principle. We discussed some of the non-trivial issues in anthropic reasoning in chapter 2, and in later chapters we shall encounter even greater mysteries. Anyhow, as we shall see shortly, the above anthropic principles are too weak to do the job they are supposed to do. They are best viewed as special cases of a more general principle, the Self-Sampling Assumption, which itself seems to have the status of a methodological and epistemological prescription rather than that of a tautology pure and simple.

Anthropic hodgepodge

The “anthropic principles” are multitudinous—I have counted over thirty in the literature. They can be divided into three categories: those that express a purported observation selection effect, those that state some speculative empirical hypothesis, and those that are too muddled or ambiguous to make any clear sense at all. The principles discussed in the previous section are in the first category. Here we will briefly review some members of the other two categories.

Among the better-known definitions are those of physicists John Barrow and Frank Tipler, whose influential 700-page monograph of 1986 has served to introduce anthropic reasoning to a wide audience. Their formulation of WAP is as follows:

(WAPB&T) The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the Universe be old enough for it to have already done so. (Barrow and Tipler 1986), p. 16.2

2 A similar definition was given by Barrow in 1983:

[The] observed values of physical variables are not arbitrary but take values V(x,t) restricted by the spatial requirement that x ∈ L, where L is the set of sites able to sustain life; and by the temporal constraint that t is bound by time scales for biological and cosmological evolution of living organisms and life-supporting environments. (Barrow 1983), p. 147

The reference to “carbon-based life” does not appear in Carter’s original definition. Indeed, Carter has explicitly stated that he intended the principle to be applicable “not only by our human civilization, but also by any extraterrestrial (or non-human future-terrestrial) civilization that may exist” (Carter 1989, p. 18). It is infelicitous to introduce a restriction to carbon-based life, and misleading to give the resulting formulation the same name as Carter’s.

Restricting the principle to carbon-based life forms is a particularly bad idea for Barrow and Tipler, because it robs the principle of its tautological status, thereby rendering their position inconsistent, since they claim that WAP is a tautology. To see that WAP as defined by Barrow and Tipler is not a tautology, it suffices to note that it is not a tautology that all observers are carbon-based. It is no contradiction to suppose that there are observers who are built of other elements, and thus that there may be observed values of physical and cosmological constants that are not restricted by the requirement that carbon-based life evolves.3 Realizing that the anthropic principle must not be restricted to carbon-based creatures is not a mere logical nicety. It is paramount if we want to apply anthropic reasoning to hypotheses about other possible life forms that may exist or come to exist in the cosmos. For example, when we discuss the Doomsday argument in chapter 6, this becomes crucial.

3 There is also no contradiction involved in supposing that we might discover that we are not carbon-based.

Limiting the principle to carbon-based life also has the side effect of encouraging a common type of misunderstanding of what anthropic reasoning is all about. It makes it look as if it were part of a project to restitute Homo sapiens into the glorious role of Pivot of Creation. For example, Stephen Jay Gould’s criticism (Gould 1985) of the anthropic principle is based on this misconception. It’s ironic that anthropic reasoning should have been attacked from this angle. If anything, anthropic reasoning could rather be said to be anti-theological and anti-teleological, since it holds up the prospect of an alternative explanation for the appearance of finetuning—the puzzlement that forms the basis for the modern version of the teleological argument for the existence of a creator.

Barrow and Tipler also provide a new formulation of SAP:

(SAPB&T) The Universe must have those properties which allow life to develop within it at some stage in its history. (Barrow and Tipler 1986), p. 21

On the face of it, this is rather similar to Carter’s SAP. The two definitions differ in one obvious but minor respect. Barrow and Tipler’s formulation refers to the development of life. Leslie’s version improves this to intelligent life. But Carter’s definition speaks of observers. “Observers” and “intelligent life” are not the same concept. It seems possible that there could be (and might come to be in the future) intelligent, conscious observers who are not part of what we call life—for example because they lack such properties as self-replication or metabolism. For reasons that will become clear later, Carter’s formulation is superior in this respect. Not being alive, but being an (intelligent) observer, is what matters for the purposes of anthropic reasoning.

Barrow and Tipler have each provided their own personal formulations of SAP. These definitions turn out to be quite different from SAPB&T:

Tipler: . . . intelligent life must evolve somewhere in any physically realistic universe. (Tipler 1982), p. 37

Barrow: The Universe must contain life. (Barrow 1983), p. 149

These definitions state that life must exist, which implies that life exists. The other formulations of SAP we looked at, by Carter, Barrow & Tipler, and Leslie, all stated that the universe must allow or admit the creation of life (or observers). This is most naturally read as saying only that the laws and parameters of the universe must be compatible with life—which does not imply that life exists. The propositions are not equivalent.

We are also faced with the problem of how to understand the “must”. What is its modal force? Is it logical, metaphysical, epistemological or nomological? Or even theological or ethical? The definitions remain highly ambiguous until this is specified.

Barrow and Tipler list three possible interpretations of SAPB&T in their monograph:

(A) There exists one possible Universe ‘designed’ with the goal of generating and sustaining ‘observers’.

(B) Observers are necessary to bring the Universe into being.

(C) An ensemble of other different universes is necessary for the existence of our Universe.

Since none of these is directly related to the idea of observation selection effects, I shall not discuss them further (except for some brief remarks relegated to a footnote4).

4 (A) points to the teleological idea that the universe was designed with the goal of generating observers (spiced up with the added requirement that the “designed” universe be the only possible one). Yet, anthropic reasoning is counter-teleological in the sense described above; taking it into account diminishes the probability that a teleological explanation of the nature of the universe is correct. And it is hard to know what to make of the requirement that the universe be the only possible one. This is definitely not part of anything that follows from Carter’s original exposition.

(B) is identical to what John Wheeler had earlier branded the Participatory Anthropic Principle (PAP) (Wheeler 1975, 1977). It echoes Berkelian idealism, but Barrow and Tipler want to invest it with physical significance by considering it in the context of quantum mechanics. Operating within the framework of quantum cosmology and the many-worlds interpretation of quantum physics, they state that, at least in its version (B), SAP imposes a boundary condition on the universal wave function. For example, all branches of the universal wave function have zero amplitude if they represent closed universes that suffer a big crunch before life has had a chance to evolve, from which they conclude that such short-lived universes do not exist. “SAP requires a universe branch which does not contain intelligent life to be non-existent; that is, branches without intelligent life cannot appear in the Universal wave function.” ((Barrow and Tipler 1986), p. 503). As far as I can see, this speculation is totally unrelated to anything Carter had in mind when he introduced the anthropic principle, and PAP is irrelevant to the issues we discuss in this book. (For a critical discussion of PAP, see e.g. (Earman 1987).)

Barrow and Tipler think that statement (C) receives support from the many-worlds interpretation and the sum-over-histories approach to quantum gravity “because they must unavoidably recognize the existence of a whole class of real ‘other worlds’ from which ours is selected by an optimizing principle.” ((Barrow and Tipler 1986), p. 22). (Notice, by the way, that what Barrow and Tipler say about (B) and (C) indicates that the necessity to which these formulations refer should be understood as nomological: physical necessity.) Again, this seems to have little to do with observation selection effects. It is true that there is a connection between SAP and the existence of multiple worlds. From the standpoint of Leslie’s explication, this connection can be stated as follows: SAP is applicable (non-vacuously) only if there is a suitable world ensemble; only then can SAP be involved in doing explanatory work. But in no way does anthropic reasoning presuppose that our universe could not have existed in the absence of whatever other universes there might be.

A “Final Anthropic Principle” (FAP) has been defined by Tipler (Tipler 1982), Barrow (Barrow 1983) and Barrow & Tipler (Barrow and Tipler 1986) as follows:

Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.

Martin Gardner charges that FAP is more accurately named CRAP, the Completely Ridiculous Anthropic Principle (Gardner 1986). The spirit of FAP is antithetic to Carter’s anthropic principle (Leslie 1985; Carter 1989). FAP has no claim on any special methodological status; it is pure speculation. The appearance to the contrary, created by affording it the honorary title of a Principle, is what prompts Gardner’s mockery.

It may be possible to interpret FAP simply as a scientific hypothesis, and that is indeed what Barrow and Tipler set out to do. In a later book (Tipler 1994), Tipler considers the implications of FAP in more detail. He proposes what he calls the “Omega Point Theory”. This theory assumes that our universe is closed, so that at some point in the future it will recollapse in a big crunch. Tipler tries to show that it is physically possible to perform an infinite number of computations during this big crunch by using the shear energy of the collapsing universe, and that the speed of a computer in the final moments can be made to diverge to infinity. Thus there could be an infinity of subjective time for beings that were running as simulations on such a computer. This idea can be empirically tested, and if present data suggesting that our universe is open or flat are confirmed, then the Omega Point Theory will indeed have been falsified (as Tipler himself acknowledges).5 The point to emphasize here is that FAP is not in any way an application or a consequence of anthropic reasoning (although, of course, anthropic reasoning may have a bearing on how hypotheses such as FAP should be evaluated).

5 For further critique of Tipler’s theory, see (Sklar 1989).

If one does want to treat FAP as an empirical hypothesis, it helps if one charitably deletes the first part of the definition, the part that says that intelligent information processing must come into existence. If one does this, one gets what Milan M. Ćirković and I have dubbed the Final Anthropic Hypothesis (FAH). It simply says that intelligent information processing will never cease, making no pretense of being anything other than an interesting empirical hypothesis. We find (Ćirković and Bostrom 2000) that the current balance of evidence tips towards a negative answer. For instance, the recent evidence for a large cosmological constant (Perlmutter, Aldering et al. 1998; Riess 1998, 2000)6 only makes things worse for FAH. There are, however, some other possible ways in which FAH may be true which cannot be ruled out at the present time, involving poorly understood mechanisms in quantum cosmology.

6 A non-zero cosmological constant has been considered desirable from several points of view in recent years, because it would be capable of solving the cosmological age problem and because it would arise naturally from quantum field processes (see e.g. (Klapdor and Grotz 1986; Singh 1995; Martel, Shapiro et al. 1998)). A universe with a cosmological density parameter Ω ≈ 1 and a cosmological constant of about the suggested magnitude ΩΛ ≈ 0.7 would allow the formation of galaxies (Weinberg 1987; Efstathiou et al. 1995) and would last long enough for life to have a chance to develop.

Freak observers and why earlier formulations are inadequate

The relevant anthropic principles for our purposes are those that describe observation selection effects. The formulations mentioned in the first section of this chapter are all in that category, yet they are insufficient. They cover only a small fraction of the cases that we would want to have covered. Crucially, in all likelihood they don’t even cover the actual case: they cannot be used to make interesting inferences about the world we are living in. This section explains why that is so, and why it constitutes a serious gap in earlier accounts of anthropic methodology and a fortiori in scientific reasoning generally.

Space is big. It is very, very big. On the currently most favored cosmological theories, we are living in an infinite world, a world that contains an infinite number of planets, stars, galaxies, and black holes. This is an implication of most multiverse theories. But it is also a consequence of the standard big bang cosmology, if combined with the assumption that our universe is open, as recent evidence suggests it is. An open universe—assuming the simplest topology7—is spatially infinite at every point in time and contains infinitely many planets etc.8

7 I.e. that space is simply connected. There is a recent spate of interest in the possibility that our universe might be multiply connected, in which case it could be both finite and hyperbolic. A multiply connected space could lead to a telltale pattern consisting of a superposition of multiple images of the night sky seen at varying distances from Earth (roughly, one image for each lap around the universe that the light has traveled). Such a pattern has not been found, although the search continues. For an introduction to multiply connected topologies in cosmology, see (Lachièze-Rey and Luminet 1995). There is an obvious methodological catch in trying to gain high confidence about the global topology of spacetime—if it is so big that we observe but a tiny, tiny speck of it, then how can we be sure that the whole resembles this particular part that we are in? A large sphere, for example, appears flat if you look at a small patch of it.


8 A widespread misconception is that the open universe in the standard big bang model becomes spatially infinite only in the temporal limit. The observable universe is finite, but only a small part of the whole is observable (by us). One fallacious intuition that might be responsible for this misconception is that the universe came into existence at some spatial point in the big bang. A better way of picturing things is to imagine space as an infinite rubber sheet, with gravitationally bound groupings, such as stars and galaxies, as buttons glued on. As we move forward in time, the sheet is stretched in all directions so that the separation between the buttons increases. Going backwards in time, we imagine the buttons coming closer together until, at “time zero”, the density of the (still spatially infinite) universe becomes infinite everywhere. See e.g. (Martin 1995).

Until recently, it appeared that the mass density of the universe fell far short of the critical density and thus that the universe is open (Coles and Ellis 1994). Recent evidence, however, suggests that the missing mass might be in the form of vacuum energy, a cosmological constant (Zehavi and Dekel 1999; Freedman 2000). This is supported by studies of supernovae and the microwave background radiation. If this is confirmed, it would bring the actual density very close to the critical density, and it may thus be hard to tell whether the universe is open, flat, or closed.

Some additional backing for the infinite-universe hypothesis can be garnered if we consider models of eternal inflation, in which an infinite number of galaxies are produced over time.

Most modern philosophical investigations relating to the vastness of the cosmos have focused on the fine-tuning of our universe. As we saw in chapter 2, a philosophical cottage industry has sprung up around controversies over issues such as whether fine-tuning is in some sense “improbable”, whether it should be regarded as surprising, whether it calls out for explanation and if so whether a multiverse theory could explain it, whether it suggests ways in which current physics is incomplete, or whether it is evidence for the hypothesis that our universe resulted from design.

Here we shall turn our attention to a more fundamental problem: How can vast-world cosmologies have any observational consequences at all? We shall show that these cosmologies imply, or give a very high probability to, the proposition that every possible observation is in fact made. This creates a challenge: if a theory is such that for any possible human observation that we specify, the theory says that that observation will be made, then how do we test the theory? What could possibly count as negative evidence? And if all theories that share this feature are equally good at predicting the data we will get, then how can empirical evidence distinguish between them?

I call this a “challenge” because cosmologists are constantly modifying and refining theories in light of empirical findings, and they are surely not irrational in doing so. The challenge is to explain how that is possible, i.e. to find the missing methodological link that enables a reliable connection to be established between cosmological theories and astronomical observation.

Consider a random phenomenon, for example Hawking radiation. When black holes evaporate, they do so in a random manner9 such that for any given physical object there is a finite (although, typically, astronomically small) probability that it will be emitted by any given black hole in a given time interval. Such things as boots, computers, or ecosystems have some finite probability of popping out from a black hole. The same holds true, of course, for human bodies, or human brains in particular states.10 Assuming that mental states supervene on brain states, there is thus a finite probability that a black hole will produce a brain in a state of making any given observation. Some of the observations made by such brains will be illusory, and some will be veridical. For example, some brains produced by black holes will have the illusory experience of reading a measurement device that does not exist. Other brains, with the same experiences, will be making veridical observations—a measurement device may materialize together with the brain and may have caused the brain to make the observation. But the point that matters here is that any observation we could make has a finite probability of being produced by any given black hole.

9 Admittedly, a complete understanding of black holes probably requires new physics. For example, the so-called information loss paradox is a challenge for the view that black hole evaporation is totally random (see e.g. (Belot, Earman et al. 1999) for an overview). But even pseudo-randomness, like that of the trajectories of molecules in gases in a deterministic universe, would be sufficient for the present argument.


10 See e.g. (Hawking and Israel 1979): “[I]t is possible for a black hole to emit a television set or Charles Darwin” (p. 19). (To avoid making a controversial claim about personal identity, Hawking and Israel ought perhaps to have weakened this to “. . . an exact replica of Charles Darwin”.) See also (Garriga and Vilenkin 2001).

The probability of anything macroscopic and organized appearing from a black hole is, of course, minuscule. The probability of a given conscious brain-state being created is even tinier. Yet even a low-probability outcome has a high probability of occurring if the random process is repeated often enough. And that is precisely what happens in our world, if the cosmos is very vast. In the limiting case where the cosmos contains an infinite number of black holes, the probability of any given observation being made is one.11

11 In fact, there is a probability of unity that infinitely many tokens of each observation-type will appear. But one of each suffices for our purposes.
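The amplification at work here can be made vivid with a quick numerical sketch (the numbers and function name below are illustrative, not drawn from the text): for an event of per-trial probability p, the chance of at least one occurrence in N independent trials is 1 - (1 - p)^N, which climbs toward one as N grows.

```python
import math

def prob_at_least_one(p, n):
    """Chance that an event of per-trial probability p occurs at
    least once in n independent trials: 1 - (1 - p)**n.
    Computed via log1p/expm1 to stay accurate when p is tiny."""
    return -math.expm1(n * math.log1p(-p))

# Even a one-in-a-billion event becomes near-certain given
# enough independent trials:
for n in (10**6, 10**9, 10**12):
    print(n, prob_at_least_one(1e-9, n))
```

With p = 10^-9 the event is nearly certain by N = 10^12 trials; in an infinite ensemble of trials the probability is exactly one, which is the limiting case the text describes for black holes. (Ordinary floating point cannot represent a chance as small as Hart’s 1 in 10^3,000, but the limiting behavior is the same.)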

There are good grounds for believing that our universe is infinite and contains an infinite number of black holes. Therefore, we have reason to think that any possible human observation is in fact instantiated in the actual world.12 Evidence for the existence of a multiverse would only add further support to this proposition.

12 I restrict the assertion to human observations in order to avoid questions as to whether there may be other kinds of possible observations that perhaps could have infinite complexity or be of some alien or divine nature that does not supervene on stuff that is emitted from black holes—such stuff is physical and of finite size and energy.

It is not necessary to invoke black holes to make this point. Any random physical phenomenon would do. It seems we don’t even have to limit the argument to quantum fluctuations. Classical thermal fluctuations could, presumably, in principle lead to the molecules in a gas cloud containing the right elements to bump into each other so as to form a biological structure such as a human brain.

The problem is that it seems impossible to get any empirical evidence that could distinguish between different Big World theories. For any observation we make, all such theories assign a probability of one to the hypothesis that that observation be made. That means that the fact that the observation is made gives us no reason whatever for preferring one of these theories to the others. Experimental results appear totally irrelevant.13

13 Some cosmologists have recently become aware of the problem that this section describes, e.g. (Linde and Mezhlumian 1996; Vilenkin 1998). See also (Leslie 1992).

We can see this formally as follows. Let B be the proposition that we are in a Big World, defined as one that is big enough and random enough to make it highly probable that every possible human observation is made. Let T be some theory that is compatible with B, and let E be some proposition asserting that some specific observation is made. Let P be an epistemic probability function. Bayes’ theorem states that

P(T|E&B) = P(E|T&B)P(T|B) / P(E|B).

In order to determine whether E makes a difference to the probability of T (relative to the background assumption B), we need to compute the difference P(T|E&B) - P(T|B). By some simple algebra, it is easy to see that

P(T|E&B) - P(T|B) ≈ 0 if and only if P(E|T&B) ≈ P(E|B).

This means that E will fail to give empirical support to T (modulo B) if E is about equally probable given T&B as it is given B. We saw above that P(E|T&B) ≈ P(E|B) ≈ 1. Consequently, whether E is true or false is irrelevant to whether we should believe T, given that we know that B.
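The algebra can be checked with a small numerical sketch. The prior and likelihood values here are arbitrary illustrations; the point is that when both P(E|T&B) and P(E|B) are close to 1, conditioning on E leaves the probability of T essentially unchanged:

```python
def posterior(prior_T: float, lik_E_given_T: float, prob_E: float) -> float:
    """Bayes' theorem: P(T|E&B) = P(E|T&B) P(T|B) / P(E|B),
    with the background assumption B left implicit in every term."""
    return lik_E_given_T * prior_T / prob_E

# In a Big World, every possible observation is almost surely made,
# so both P(E|T&B) and P(E|B) are ~1 regardless of which theory T is.
prior = 0.2  # P(T|B), an arbitrary illustrative prior
post = posterior(prior, 0.999999, 0.999999)
print(post)  # essentially equal to the prior: E carries no evidential weight
```

The posterior coincides with the prior whenever the two likelihood terms cancel, which is exactly the situation the Big World argument engineers.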

Let T2 be some perverse permutation of an astrophysical theory T1 that we actually accept. T2 differs from T1 by assigning a different value to some physical constant. To be specific, let us suppose that T1 says that the current temperature of the cosmic microwave background radiation is about 2.7 degrees Kelvin (which is the observed value) whereas T2 says it is, say, 3.1 K. Suppose furthermore that both T1 and T2 say that we are living in a Big World. One would have thought that our experimental evidence favors T1 over T2. Yet, the above argument seems to show that this view is mistaken. Our observational evidence supports T2 just as much as T1. We really have no reason to think that the background radiation is 2.7 K rather than 3.1 K.

At first blush, it could seem as if this simply rehashes the lesson, made familiar by Duhem and Quine, that it is always possible to rescue a theory from falsification by modifying some auxiliary assumption, so that strictly speaking no scientific theory ever implies any observational consequences. The above argument would then merely have provided an illustration of how this general result applies to cosmological theories. But that would be to miss the point.

If the argument given above is correct, it establishes a much more radical conclusion. It purports to show that all Big World theories are not only logically compatible with any observational evidence, but they are also perfectly probabilistically compatible. They all give the same conditional probability (namely one) to every observation statement E defined as above. This entails that no such observation statement can have any bearing, whether logical or probabilistic, on whether the theory is true. If that were the case, it would not be worthwhile to make astronomical observations if what we are interested in is determining which Big World theory to accept. The only reasons we could have for choosing between such theories would be either a priori (simplicity, elegance etc.) or pragmatic (such as ease of calculation).

Nor is the argument merely restating the ancient point that human epistemic faculties are fallible, that we can never be certain that we are not dreaming or that we are not brains in a vat. No, the point here is not that such illusions could occur, but rather that we have reason to believe that they do occur, not just some of them but all possible ones. In other words, we can be fairly confident that the observations we make, along with all possible observations we could make in the future, are being made by brains in vats and by humans who have spontaneously materialized from black holes or from thermal fluctuations. The argument would entail that this abundance of observations makes it impossible to derive distinguishing observational consequences from contemporary cosmological theories.

I trust that most readers will find this conclusion unacceptable. Cosmologists certainly appear to be doing experimental work and to modify their theories in light of new empirical findings. The COBE satellite, the Hubble Space Telescope, and other devices are showering us with a wealth of data that is causing a renaissance in the world of astrophysics. Yet the argument described above would show that the empirical import of this information could never go beyond the limited role of providing support for the hypothesis that we are living in a Big World, for instance by showing that the universe is open. Nothing apart from this one fact could be learnt from such observations. Once we have established that the universe is open and infinite, then any further work in observational astronomy would be a waste of time and money.

Worse still, the leaky connection between theory and observation in cosmology spills over into other domains. Since nothing hinges on how we defined T in the derivation above, the argument can easily be extended to prove that observation does not have a bearing on any empirical scientific question so long as we assume that we are living in a Big World.

This consequence is absurd, so we should look for a way to fix the methodological pipeline and restore the flow of testable observational consequences from Big World theories. How can we do that?

Taking into account the selection effects expressed by SAP, much less those expressed by WAP or the Superweak AP, will not help us. It isn’t true that we couldn’t have observed a universe that wasn’t fine-tuned for life. For even “uninhabitable” universes can contain the odd, spontaneously materialized “freak observer”, and if they are big enough or if there are sufficiently many such universes, then it is indeed highly likely that they contain infinitely many freak observers making all possible human observations. It is even logically consistent with all our evidence that we are such freak observers.

It may appear as if this is a fairly superficial problem. It is based on the technical point that some infrequent freak observers will appear even in non-tuned universes. Couldn't it be thought that this shouldn't really matter, because it is still true that the overwhelming majority of all observers are regular observers, not freak observers? While we cannot interpret "the majority" in the straightforward cardinal sense, since the class of freak observers may well be of the same cardinality as the class of regular observers, nonetheless, in some natural sense, "almost all" observers in a multiverse live in the fine-tuned parts and have emerged via ordinary evolutionary processes, not from Hawking radiation or bizarre thermal fluctuations. So if we modify SAP slightly, to allow for a small proportion of observers living in non-tuned universes, maybe we could repair the methodological pipeline and make the anthropic fine-tuning explanation (among other useful results) go through?

In my view, this response suggests the right way to proceed. The presence of the odd observer in a non-tuned universe changes nothing essential. SAP should be modified or strengthened to make this clear. Let’s set aside for the moment the complication of infinite numbers of observers and assume that the total number is finite. Then the idea is that so long as the vast majority of observers are in fine-tuned universes, and the ones in non-tuned universes form a small minority, then what the multiverse theory predicts is that we should with overwhelming probability find ourselves in one of the fine-tuned universes. That we observe such a universe is thus what such a multiverse theory predicts, and our observations, therefore, tend to confirm it to some degree. A multiverse theory of the right kind, coupled with this ramified version of the anthropic principle, can potentially account for the apparent fine-tuning of our universe and explain how our scientific theories are testable even when conjoined with Big World hypotheses. (In chapter 5 we shall explain how this idea works in more detail.)
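Under the finite-numbers simplification, the prediction amounts to a simple counting exercise. The observer counts below are made-up illustrative figures, not estimates:

```python
# If almost all observers inhabit fine-tuned universes, a multiverse
# theory of this kind predicts with overwhelming probability that we
# find ourselves in a fine-tuned universe.

regular_observers = 10**12  # observers in fine-tuned universes (illustrative)
freak_observers = 10**3     # freak observers in non-tuned universes (illustrative)

p_fine_tuned = regular_observers / (regular_observers + freak_observers)
print(p_fine_tuned)  # overwhelmingly close to 1
```

However the particular numbers are chosen, as long as freak observers form a tiny minority, the probability that a randomly sampled observer is in a fine-tuned universe is nearly 1, which is what licenses treating our observation of a fine-tuned universe as confirmation.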

How to formulate the requisite kind of anthropic principle? Astrophysicist Richard Gott III has taken one step in the right direction with his “Copernican anthropic principle”:

[T]he location of your birth in space and time in the Universe is privileged (or special) only to the extent implied by the fact that you are an intelligent observer, that your location among intelligent observers is not special but rather picked at random from the set of all intelligent observers (past, present and future) any one of whom you could have been. (Gott 1993), p. 316

This definition comes closer than any of the others we have examined to giving an adequate expression of the basic idea behind anthropic reasoning. It introduces a notion of randomness that can be applied to the Big World theories. Yes, you could have lived in a non-tuned universe; but if the vast majority of observers live in fine-tuned universes, then the multiverse theory predicts that you should (very probably) find yourself in a fine-tuned universe.

One drawback with Gott’s definition is that it makes problematic claims which are not essential to anthropic reasoning. It says your location was “picked at random”. But who or what did the picking? Maybe that reading is too naïve. Yet the expression does suggest that there is some kind of physical randomization mechanism at work, which, so to speak, picks out a birthplace for you. We can imagine a possible world where this would be a good description of what was going on. Suppose that God, after having created a multiverse, posts a world-map on the door to His celestial abode. He takes a few steps back and starts throwing darts at the map. Wherever a dart hits, He creates a body, and sends down a soul to inhabit it. Alternatively, maybe one could imagine some sort of physical apparatus, involving a time travel machine, that could putter about in spacetime and distribute observers in a truly random fashion. But of course, there is no evidence that any such randomization mechanism exists. Perhaps some less farfetched story could be spun to the same end, but anthropic reasoning would be tenuous indeed had it to rely on such suppositions—which, thankfully, it doesn’t.

Further, the assertion that “you could have been” any of these intelligent observers who will ever have existed is also problematic. Ultimately, we may have to confront this problem. But it would be nicer to have a definition that doesn’t preempt that debate.

Both these points are relatively minor quibbles. I think one could reasonably explicate Gott’s definition so that it comes out right in these regards.14 There is, however, a much more serious problem with Gott’s approach which we shall discuss during the course of our examination of the Doomsday argument in chapter 6. We will therefore work with a different principle, which sidesteps these difficulties.

14 In his work on inflationary cosmology, Alexander Vilenkin has proposed a “Principle of Mediocrity” (Vilenkin 1995), which is similar to Gott’s principle.

The self-sampling assumption

The preferred explication of the anthropic principle that we shall use as a starting point for subsequent investigations is the following, which we term the Self-Sampling Assumption:

(SSA) One should reason as if one were a random sample from the set of all observers in one’s reference class.

This is a preliminary formulation. Anthropic reasoning is about taking observation selection effects into account, which tend to creep in when we evaluate evidence that has an indexical component. In chapter 10 we shall replace SSA with another principle that takes more indexical information into account. That principle will show that only under certain special conditions is SSA a permissible simplification. However, in order to get to the point where we can appreciate the more general principle, it is necessary to start by thoroughly examining SSA—both the reasons for accepting it, and the consequences that flow from its use. Wittgenstein’s famous ladder, which one must first climb and then kick away, is a good metaphor for how to view SSA. Thus, rather than inserting qualifications everywhere, let it simply be declared here that we will revisit and reassess SSA when we reach chapter 10.

SSA as stated leaves open what the appropriate reference class might be and what sampling density should be imposed over this reference class. Those are crucial issues that we will need to examine carefully, an enterprise we shall embark on in the next chapter.

The other observational selection principles discussed above are special cases of SSA. Take first WAP (in Carter and Leslie’s rendition). If a theory T says that there is only one universe and some regions of it contain no observers, then WAP says that T predicts that we don’t observe one of those observerless regions. (That is, that we don’t observe them “from the inside”. If the region is observable from a region where there are observers, then obviously, it could be observable by those observers.) SSA yields the same result, since if there is no observer in a region, then there is zero probability that a sample taken from the set of all observers will be in that region, and hence zero probability that you should observe that region given the truth of T.

Similarly, if T says there are multiple universes, only some of which contain observers, then SAP (again in Carter and Leslie’s sense) says that T predicts that what you should observe is one of the universes that contain observers. SSA says the same, since it assigns zero sampling density to being an observer in an observerless universe.
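The sense in which WAP and SAP fall out of SSA as special cases can be sketched as follows. The region labels and observer counts are invented for illustration:

```python
# SSA: reason as if you were a random sample from the set of all observers.
# The probability of finding yourself in a region is then proportional to
# the number of observers there; observerless regions get probability zero.

observer_counts = {          # hypothetical regions or universes
    "universe_A": 5 * 10**9,
    "universe_B": 2 * 10**9,
    "universe_C": 0,         # observerless: no one finds themselves here
}

total = sum(observer_counts.values())
ssa_probs = {region: n / total for region, n in observer_counts.items()}
print(ssa_probs["universe_C"])  # 0.0 -- the WAP/SAP prediction recovered
```

Setting a region's observer count to zero automatically drives its sampling probability to zero, which is precisely the prediction WAP (for observerless regions) and SAP (for observerless universes) deliver.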

The meaning, significance, and use of SSA will be made clearer as we proceed. We can already state, however, that SSA and its strengthenings and specifications are to be understood as methodological prescriptions. They state how reasonable epistemic agents ought to assign credence in certain situations and how we should make certain kinds of probabilistic inferences. As will appear from subsequent discussion, SSA is not (in any straightforward way at least) a restricted version of the principle of indifference. Although we will provide arguments for adopting SSA, it is not a major concern for our purposes whether SSA is strictly a “requirement of rationality”. It suffices if many intelligent people do in fact—upon reflection—have subjective prior probability functions that satisfy SSA. If that much is acknowledged, it follows that investigating the consequences for important matters that flow from SSA can potentially be richly rewarding.