The Doomsday Argument: a Literature Review

(c) Nick Bostrom
Dept. of Philosophy, Logic and Scientific Method
London School of Economics; Houghton St.; WC2A AE; London; UK
Email: [email protected]
Homepage: http://www.hedweb.com/nickb

20 October, 1998


Overview

    The most important application of the Copernican anthropic principle is the so-called doomsday argument (DA). It has been independently discovered at least three times. Brandon Carter was first, but he did not publish. The other independent co-discoverers are H. B. Nielsen and Richard Gott.

    The credit for being the first person to clearly enunciate it in print belongs to John Leslie who had heard about Carter's discovery from Frank Tipler. Leslie has been by far the most prolific writer on the topic with one monograph and a dozen or so academic papers.

    Nielsen has only hinted at the DA in print [1989]. Gott has published a couple of articles specifically about the DA (Gott [1993, 1997]). As we shall see, there are some differences in how Gott and Leslie present the DA. It’s clear that they put forth the same argument, but they approach the issue from somewhat different angles.

    The basic idea behind the DA is easy enough to grasp:

    Imagine that two big urns are put in front of you, and you know that one of them contains ten balls and the other a million, but you are ignorant as to which is which. You know the balls in each urn are numbered 1, 2, 3, 4 ... etc. Now you take a ball at random from the left urn, and it is number 7. Clearly, this is a strong indication that that urn contains only ten balls. If originally the odds were fifty-fifty, a swift application of Bayes' theorem gives you the posterior probability that the left urn is the one with only ten balls: P_posterior(L = 10) ≈ 0.999990. But now consider the case where instead of the urns you have two possible human races, and instead of balls you have individuals, ranked according to birth order. As a matter of fact, you happen to find that your rank is about sixty billion. Now, say Carter and Leslie, we should reason in the same way as we did with the urns. That you should have a rank of sixty billion or so is much more likely if only 100 billion persons will ever have lived than if there will be many trillion persons. Therefore, by Bayes' theorem, you should update your beliefs about humankind’s prospects and realize that an impending doomsday is much more probable than you have hitherto thought.
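The Bayesian step in the urn example can be sketched in a few lines of Python (the urn sizes and the fifty-fifty prior are those of the example above):

```python
# Two-urn example: equal priors, ball number 7 drawn from the left urn.
def posterior_small_urn(prior_small=0.5, n_small=10, n_large=1_000_000):
    like_small = 1 / n_small        # P(ball #7 | ten-ball urn)
    like_large = 1 / n_large        # P(ball #7 | million-ball urn)
    prior_large = 1 - prior_small
    return (prior_small * like_small) / (
        prior_small * like_small + prior_large * like_large
    )

print(posterior_small_urn())  # approximately 0.999990
```

The same calculation, with birth ranks in place of ball numbers and hypotheses about the total number of humans in place of urns, is what drives the probability shift toward earlier doom.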

    While the core idea can thus easily be stated in a single paragraph, a large part of what makes up the corpus of "the argument" consists of replies to numerous objections. If we regard these replies as part of the argument then a concise exposition would easily fill a chapter of a book, even without any attempt to evaluate the various claims that have been made.

    Here I will only mention those objections that seem most alive: the ones that arguably haven't been uncontroversially refuted. For a more exhaustive list of objections, see Leslie [1996], chapters 5 and 6.

    My exposition will take the following path: We begin by looking at Gott’s version of the argument. We then move on to Leslie’s version and we’ll see how he supports it by imaginative analogies and thought experiments. Then we consider the most important objections that have been advanced – by William Eckhardt, Dennis Dieks, Korb & Oliver and others. We shall see how the proponents of the DA have answered these objections. One set of objections, the ones related to the so-called Shooting-room paradox (which Leslie attributes to Derek Parfit), is given a separate section at the end, since it introduces a new thought experiment and a number of new issues.

The doomsday argument as presented by Richard Gott III

    Astrophysicist Richard Gott III, who independently discovered the DA, first published his ideas in a brilliant Nature paper [1993] (see also the responses and Gott’s replies: Goodman [1994], Buch [1994], Mackay [1994], Gott [1994]) and later popularized some of them in an article in New Scientist [1997]. In the Nature paper he not only sets forth a version of the DA but also considers its implications for the search for extraterrestrial intelligence (SETI) and for the prospects of space travel. Here we will focus on what he has to say about the DA. Gott’s version of the DA is based on a more general argument type that he calls the delta t argument.

The Delta t argument

    Gott first explains an argument form that he calls the "Delta t argument". It is extremely simple and yet Gott thinks it can be applied to make a very wide range of predictions about most everything in heaven and on earth. It goes as follows:

    Suppose we want to estimate how long some series of observations (measurements) is going to last. Then,

    Assuming that whatever we are measuring can be observed only in the interval between times tbegin and tend, if there is nothing special about tnow we expect tnow to be randomly located in this interval. (p. 315)

    Using this randomness assumption, we can make the estimate tfuture = tpast, where tfuture = (tend – tnow) is the estimated value of how much longer the series will last and tpast = (tnow – tbegin) is how long it has already lasted. This means that we make the estimate that the series will continue for as long as it has already lasted when we make the random observation. This estimate will overestimate the true value half the time and underestimate it half the time. It also follows that a 50% confidence interval is given by

    1/3 tpast < tfuture < 3 tpast

    And a 95% confidence interval is given by

    1/39 tpast < tfuture < 39 tpast
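These intervals follow from the single assumption that tnow is uniform between tbegin and tend, so that the elapsed fraction tpast/(tpast + tfuture) is uniform on (0, 1). A small sketch (the function name is mine; the Berlin Wall figure is the one Gott uses below):

```python
# Gott's delta t rule: with confidence c, the elapsed fraction lies in
# ((1-c)/2, (1+c)/2), which translates into bounds on t_future.
def delta_t_interval(t_past, confidence):
    r = (1 - confidence) / (1 + confidence)
    return r * t_past, t_past / r    # (lower, upper) bound on t_future

print(delta_t_interval(8, 0.50))   # Berlin Wall in 1969: (8/3, 24) years
print(delta_t_interval(8, 0.95))   # (8/39, 312) years
```

Setting c = 0.5 gives the factor-of-3 bounds, and c = 0.95 the factor-of-39 bounds quoted above.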

    Gott gives some illustrations of how this reasoning can be applied in the real world:

    [In] 1969 I saw for the first time Stonehenge (tpast ≈ 3,868 years) and the Berlin Wall (tpast = 8 years). Assuming that I am a random observer of the Wall, I expect to be located randomly in the time between tbegin and tend (tend occurs when the Wall is destroyed or there are no visitors left to observe it, whichever comes first). (p. 315)

    At least in these two cases, the delta t argument seems to have worked! The New Scientist article also features an inset that invites the reader to use the arrival date of that issue of the magazine to predict how long their current relationship will last. You can presumably use my paper for the same purpose. How long has your present relationship lasted? Use that value for tpast and you get your prediction from the expressions above, with the precise confidence intervals.

    Wacky? Yes, but all this does indeed follow from the assumption that tnow is randomly sampled from the interval tbegin to tend. This imposes some restrictions on the applicability of the delta t argument:

    [At] a friend’s wedding, you couldn’t use the formula to forecast the marriage’s future. You are at the wedding precisely to witness its beginning. Neither can you use it to predict the future of the Universe itself – for intelligent observers emerged only long after the Big Bang, and so witness only a subset of its timeline. (Gott [1997], p. 39)

    Gott does not discuss in any more detail the all-important question of when, in practice, the delta t argument is applicable. We shall return to this issue in a later chapter.

The Copernican anthropic principle

      Underlying the delta t argument is what Gott calls the Copernican anthropic principle, which says that you should consider yourself as being randomly sampled from the set of all intelligent observers:

      [T]he location of your birth in space and time in the Universe is privileged (or special) only to the extent implied by the fact that you are an intelligent observer, that your location among intelligent observers is not special but rather picked at random from the set of all intelligent observers (past, present and future) any one of whom you could have been. (p. 316)

      The Copernican anthropic principle says that you are more likely to be where there are many observers than where there are few. This can be seen as a strengthening of the weak anthropic principle, which says that you will be where there are observers. The Copernican anthropic principle and the weak anthropic principle both assert that the prior probability that you should find yourself as anything other than an observer is zero. But whereas the weak anthropic principle is silent as to the prior probability that you should find yourself as a particular observer, the Copernican anthropic principle makes an assertion about this too. It says that this prior probability should be 1/N, where N is the total number of observers that will ever have existed. In other words, the Copernican anthropic principle says that all (intelligent) observers should be assigned equal sample density.

The doomsday argument as presented by Gott

If we want to apply the delta t argument to the life-expectancy of the human species, we have to measure time on a "population clock" where one unit of time corresponds to the birth of one human. This modification is necessary because the human population is not constant. Due to population growth, most humans that have been born so far find themselves later rather than earlier in the history of our species. According to the Copernican anthropic principle, we should consequently assign a higher prior probability to finding ourselves at these later times. By measuring time as the number of births, we regain a scale where you should assign a uniform sampling density to all points of time.

There have been something like 70 billion humans so far. Using this value as tpast, the delta t argument gives the 95% confidence interval

    1.8 billion < tfuture < 2.7 trillion.

The units here are human beings. In order to convert this to years, we would have to estimate what the future population figures will be given that a total of N humans will have existed. In the absence of such an estimate, the DA leaves room for alternative interpretations. If the world population levels out at say 12 billion then disaster is likely to put an end to our species fairly soon (within 1400 years with 75% probability). If population figures rise higher, the prognosis is even worse. But if population decreases drastically, then the delta t argument could be compatible with survival for many millions of years. However, such a population collapse could perhaps itself be called a "doomsday".
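The quoted interval is simply tpast divided and multiplied by 39. A quick check of the figures:

```python
# 95% interval on the number of future births, per the delta t rule
t_past = 70e9                              # births to date
lo, hi = t_past / 39, t_past * 39
print(f"{lo:.2e} < t_future < {hi:.2e}")   # roughly 1.79e+09 .. 2.73e+12
```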

The probability of space colonization looks abysmal in the light of Gott’s version of the DA. Reasoning via the delta t argument, Gott concludes that the probability that we will colonize the galaxy is of order P ≈ 10^-9, since if we did manage such a feat we would expect there to be at least a billion times more humans in the future than have been born to date.
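The figure of order 10^-9 follows from the same uniform-sampling assumption: if your birth rank is uniform over all births, the probability that the future holds more than k times as many births as the past is 1/(1+k). The sketch below just evaluates this (the function name is mine):

```python
# P(t_future > k * t_past) = P(elapsed fraction < 1/(1+k)) = 1/(1+k)
def prob_future_exceeds(k):
    return 1 / (1 + k)

print(prob_future_exceeds(1e9))    # about 1e-09: Gott's colonization bound
```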

The doomsday argument as presented by John Leslie

Leslie’s presentation of the DA differs in several respects from Gott’s. On a stylistic level, Leslie makes less use of mathematics than does Gott. Leslie’s writing is informal and his arguments often take the form of analogies or thought experiments. Leslie is, however, much more explicit about the philosophical underpinnings. He places the argument in a Bayesian framework and devotes considerable attention to the empirical considerations that determine what the priors are. One important feature of Leslie’s approach is his doctrine of how the DA would be affected if the world happens to be radically indeterministic.

The doomsday argument à la Leslie

Leslie’s version runs as follows:

One might at first expect the human race to survive, no doubt in evolutionary much modified form, for millions or even billions of years, perhaps just on Earth but, more plausibly, in huge colonies scattered through the galaxy and maybe even through many galaxies. Contemplating the entire history of the race – future as well as past history – I should in that case see myself as a very unusually early human. I might well be among the first 0.00001 per cent to live their lives. But what if the race is instead about to die out? I am then a fairly typical human. Recent population growth has been so rapid that, of all human lives lived so far, anything up to about 30 per cent ... are lives which are being lived at this very moment. Now, whenever lacking evidence to the contrary one should prefer to think of one’s own position as fairly typical rather than highly untypical. To promote the reasonable aim of making it quite ordinary that I exist where I do in human history, let me therefore assume that the human race will rapidly die out. ([1990], pp. 65f; emphasis in the original.)

Leslie emphasizes the point that the DA does not show that Doom will strike soon. It only argues for a probability shift. If we started out being extremely certain that the human species will survive for a long time, we might still be fairly certain after having taken the DA into account – though less certain than before. Also, it is possible for us to improve our prospects. Leslie hopes that if the DA convinces us that the risks are greater than was previously thought, then we will become more willing to take steps to diminish the dangers – perhaps through protecting the ozone layer, pushing for nuclear disarmament, setting up a meteor early warning system, or being careful with future very-high-energy particle physics experiments which could possibly upset a metastable vacuum and destroy the world. So Leslie does not see the DA as a reason for despair, but rather as a call for greater caution and concern about potential species-annihilating disasters. (People who think that too little is done today to safeguard against the possible extinction of our species might agree with this recommendation even if they don’t themselves believe in the DA. They could even use the DA as an ad hominem argument aimed at people who do believe in it.)

A large part of The End of the World, Leslie’s monograph on the doomsday argument, consists of an examination of various concrete potential threats to our survival. Such an examination is necessary if we are to derive some definite prediction from the DA, since we will only get a realistic posterior probability distribution if we put in a realistic prior. It is convenient, however, not to regard these empirical considerations as part of the DA as such. It is more reasonable to define the DA to be just the part of the reasoning that argues that you should not, ceteris paribus, expect to be an untypical human observer and goes from there to argue that the risk of human extinction has been systematically underestimated. This is where anthropic deliberation comes in, and the philosophical problems associated with this line of reasoning can profitably be separated out from the empirical question of how likely, say, an all-out nuclear war is to wipe out the species. This reading is generally consistent with what various authors have written on the topic.

This is not in any way to downplay the importance of delving into the empirical evaluation of the risk factors. What makes the DA so important is that it gives a strong prediction about an issue that we care a lot about. Abstracting from the empirical content, we are left with a mere philosophical puzzle. So once we have solved the philosophical puzzle in its pure form, we want to connect it back to the empirical information we have and see what concrete implications there might be for human policy making and rational expectations about our future. Since this is our goal (or at least my goal), empirical considerations will be included in the discussion; I will, however, occasionally set them to one side in order to focus on the underlying logic of the reasoning.

Often, Leslie’s arguments take the shape of a thought experiment in which it is supposed to be intuitively clear that the rational judgement to make is in accordance with what is required for the DA to work. Many of his thought experiments are variations on the following theme:

Imagine an experiment planned as follows. At some point in time, three humans would each be given an emerald. Several centuries afterwards, when a completely different set of humans was alive, five thousand humans would each be given an emerald. Imagine next that you have yourself been given an emerald in the experiment. You have no knowledge, however, of whether your century is the earlier century in which just three people were to be in this situation, or in the later century in which five thousand were to be in it. ...

Suppose you in fact betted that you lived [in the earlier century]. If every emerald-getter in the experiment betted in this way, there would be five thousand losers and only three winners. The sensible bet, therefore, is that yours is instead the later century of the two. (p. 20)
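The arithmetic behind the "sensible bet" is worth making explicit (the batch sizes are those of the thought experiment):

```python
# Emerald experiment: 3 recipients in the early century, 5000 in the later one.
early, late = 3, 5000
total = early + late

# If every emerald-getter bets "early", only the 3 early ones win;
# if every emerald-getter bets "late", the 5000 later ones win.
print(early / total)   # fraction of winners under the "early" policy
print(late / total)    # fraction of winners under the "late" policy
```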

Leslie introduces this example to refute the objection that the DA fails because future humans aren’t yet alive so one couldn’t possibly have found oneself being one of them. It can also be used to counter several other simple objections, such as:

(Some people could be tempted to make that objection after having read only about the analogous spatial form of the thought experiments where the batches exist at the same time.)

(This latter objection was actually advanced in a recent Mind-paper by Korb & Oliver, as we shall see in a later section.)

Note that what Leslie’s example shows is that following the recommended line of reasoning will increase the ratio of winners to losers. At this point the argument ends, maybe because Leslie deems that a point has been reached where any reasonable objector would roll over and admit that he was wrong.

But maybe there are other ways of making guesses that would give as good a result as the recommended one, or better? Or one could perhaps object on the grounds that it is not obvious that just because one principle maximizes the number of people who are right, it is rational for a particular individual in a particular situation to use that principle. Leslie never attempts to go beyond analogies and give a more rigorous formulation of the DA. It seems that at this stage it would be worthwhile to sharpen up the debate a bit by introducing a little more rigor: to actually write down a doomsday argument, and argue for each step, rather than just give a sketch of an argument and then try to patch it up with analogies. This hasn’t been done to date. (I will try to do that in a later chapter.)

Leslie on the problem with the reference class

In my opinion, a major open question in observer self-selection in general and for the DA in particular is how to define the reference class: what should count as an observer for the purposes of the DA?

Looking backwards in time, we see a big stretch of human prehistory where it is not clear whether our ancestors who were living then should be called "human". It’s not just that we don’t know; it’s that the decision where to draw the line seems to be largely arbitrary and conventional. Yet, the prediction that the DA sets out to establish is not conventional. The odds that nuclear war will wipe out intelligent life on Earth should not depend on how paleontologists choose to classify some old bones.

Looking in the future direction, the zone of uncertainty about what counts as an observer is even greater. There we have to take into account the possibility that humans evolve into posthuman life-forms. Will artificial intelligences count as observers? If so, which kinds of artilects will count? Should smarter, more comprehensive minds be given more weight than less intelligent beings? What if the conventional principles that we use to individuate minds become inapplicable due to increased bandwidth of communication that allows minds to share memories, to copy parts of each other, to fuse, or to delegate some part of their normally conscious functions to separate and autonomous agents? If the DA is to give us any concrete information about the future, we want to have at least the outline of an answer to these questions.

The very difficulty of thinking of a way to settle these questions may even encourage us to doubt the validity of the DA itself, not just to be uncertain about exactly what it would show if it were right. For these are questions that, if the DA is right, it seems there ought to be objectively right answers to. It would be strange if there were no fact of the matter about such crucial parameters as whether more intelligent minds should be given more weight (i.e. a higher sampling density in the set of all observers) than less intelligent minds. If there is no fact of the matter about such things, then one would have an additional reason for suspecting that the whole DA, and perhaps many other forms of anthropic reasoning as well, is built of air and rests on some sort of confusion. (This suspicion could be overridden if a very close examination showed that there was nothing wrong with the DA after all; but it would still encourage some degree of lingering metalevel doubt.)

So how does Leslie answer the question of how the reference class should be determined?

As a first remark, Leslie suggests that "perhaps nothing too much hangs on it." (p. 257):

[The DA] can give us an important warning even if we confine our attention to the human race’s chances of surviving for the next few centuries. All the signs are that these centuries would be heavily populated if the race met with no disaster, and they are centuries during which there would presumably be little chance of transferring human thought-processes to machines in a way which would encourage people to call the machines ‘human’. (p. 258)

This clearly won’t do as a reply. First, the premise that there is little chance of creating machines with human-level and human-like thought processes within the next few centuries is something that many of those who have thought seriously about these matters would dispute. Many thinkers in this field believe that such developments will happen well within the first half of the next century (Moravec [1998a, 1998b, 1988], Drexler [1985], Minsky [1994], Bostrom [1997a, 1997b]).

Second, the comment does nothing to soothe the suspicion that the difficulty of determining an appropriate reference class might be symptomatic of an underlying more fundamental difficulty with the DA itself.

Leslie does proceed, however, to offer a positive proposal for how to settle the question of which reference class to choose.

The first part of this proposal is best understood by expanding the urn analogy that we used to introduce the DA. Suppose that the balls in the urns came in different colors. And suppose your task was to guess how many red balls there are in the urn in front of you. Now, ‘red’ is clearly a vague concept – what shades of pink or purple count as red? This vagueness could be seen as corresponding to the vagueness about what to classify as an observer for the purposes of the DA. So, if some vagueness like this is present in the urn example, does that mean that the Bayesian induction used in the original example can no longer be made to work at all? Clearly not.

The right response in this case is that you have a choice of how you want to define the reference class. Your choice depends on what hypothesis you are interested in testing. Suppose that what you are interested in finding out is how many balls there are in the urn of the color light-pink-to-dark-purple. Then all you have to do is to classify the random sample you select as being either light-pink-to-dark-purple or not light-pink-to-dark-purple. Once you have made this classification, the Bayesian calculation proceeds exactly as before. If instead you are interested in knowing how many light-pink-to-light-red balls there are, then you classify the sample according to whether it has that property; and then you proceed as before. The Bayesian apparatus is perfectly neutral as to how you define the hypotheses. There is not a right or wrong way here, just different questions you might be interested in asking.
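The neutrality of the Bayesian apparatus with respect to how the reference property is drawn can be illustrated with a toy update. All numbers below are hypothetical; the only point is that once the sample has been classified as in-class or not, the calculation is the same whatever class was chosen:

```python
# Generic Bayesian update over a set of hypotheses
def update(prior, likelihoods):
    joint = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(joint.values())
    return {h: w / z for h, w in joint.items()}

# Two hypotheses about how many of 100 balls fall in the chosen color class
# (e.g. "light-pink-to-dark-purple"); the drawn ball turns out to be in-class.
prior = {"few_in_class": 0.5, "many_in_class": 0.5}
likelihoods = {"few_in_class": 10 / 100, "many_in_class": 90 / 100}
print(update(prior, likelihoods))   # posterior favors "many_in_class"
```

Redrawing the class boundary only changes the likelihoods you plug in, not the machinery of the update itself.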

Applying this reasoning to the DA, Leslie writes:

The moral could seem to be that one’s reference class might be made more or less what one liked for doomsday argument purposes. What if one wanted to count our much-modified descendants, perhaps with three arms or with godlike intelligence, as ‘genuinely human’? There would be nothing wrong with this. Yet if we were instead interested in the future only of two-armed humans, or of humans with intelligence much like that of humans today, then there would be nothing wrong in refusing to count any others. (p. 260)

This suggests that if we are interested in the survival-prospects of just a special kind of observers, we are entitled to apply the DA to this subset of the reference class. Suppose you are black and you want to know how many black people there will have been. Answer: Count the number of black people that have existed before you, and use the doomsday-style calculation to update your prior probability (given by empirical considerations) to take account of the fact that this random sample from the set of all blacks – you – turned out to live when just so many blacks have yet lived.

How far can we push this mode of reasoning, though, before we end up in absurdity? What if I want to know how many people-born-on-the-tenth-of-March-in-1973-or-later there will have been, and decide to use as reference class the set of all people born-on-the-tenth-of-March-in-1973-or-later? My temporal position among the people in this set is extraordinarily early and will quickly become even more extraordinarily early if humans continue to be born for much longer. Should I therefore conclude that the population of people-born-on-the-tenth-of-March-in-1973-or-later will almost certainly go extinct within a few years? That would obviously be absurd!

How can the doomsdayer avoid this absurd conclusion? According to Leslie, by adjusting the prior probabilities in a suitable way:

No inappropriately frightening doomsday argument will result from narrowing your reference class ... provided you adjust your prior probabilities accordingly. Imagine that you'd been born knowing all about Bayesian calculations and about human history. The prior probability of the human race ending in the very week you were born ought presumably to have struck you as extremely tiny. And that's quite enough to allow us to say the following: that although, if the human race had been going to last for another century, people born in the week in question would have been exceptionally early in the class of those-born-either-in-that-week-or-in-the-following-century, this would have been a poor reason for you to expect the race to end in that week, instead of lasting for another century. (p. 262)
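Leslie's point here can be put numerically. In the sketch below, every specific figure (the prior, the weekly and century birth counts) is a hypothetical stand-in; the point is only that a sufficiently tiny prior swamps the likelihood shift produced by one's "earliness" in the narrow class:

```python
# Narrow reference class: those born this week or in the following century.
prior_doom_this_week = 1e-8          # "extremely tiny", as Leslie suggests
prior_century = 1 - prior_doom_this_week

n_week = 2e6                         # hypothetical births this week
n_century = 1e10                     # hypothetical births over the century

like_doom = 1.0                      # under doom-this-week, everyone in the
                                     # class is born this week
like_century = n_week / n_century    # under survival, being born this week
                                     # makes you exceptionally early

post_doom = prior_doom_this_week * like_doom / (
    prior_doom_this_week * like_doom + prior_century * like_century
)
print(post_doom)   # still tiny, despite the large likelihood ratio
```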

I will criticize this solution in a later chapter and suggest another solution that I think does the trick.

The possibility of choosing too narrow a reference class is only half of the problem with the reference class. It is also possible to choose too wide a reference class, so we need to know how much we can include. What about pre-historic humans? Neanderthals? Our common ancestors with modern apes? Do these guys count as observers? Where do we draw the line? (And again, the gray area might be much more extensive in the future direction.) Writes Leslie,

Widening of the reference classes can easily be taken too far. For example, we ought to think twice before accepting any widening which counted as ‘observers’ even primitive forms of animal life. These might not be conscious at all. Furthermore it could be held that full consciousness involves introspective ability of a kind which chimpanzees haven’t yet acquired. (pp. 260-1)

This is about all that Leslie says about the problem of excessive widening of the reference class. It appears that he thinks that having "full consciousness" is a necessary requirement for being counted as an "observer". This condition is too vague to be very useful.

Once we have settled on an appropriately justified reference class we have still not reached the end of our troubles. We will also need to select and justify some particular sampling density over the chosen reference class. This is a problem that Leslie does not address. He implicitly assumes a uniform sampling density, i.e. that your prior probability that you are observer X should be the same for all X in the reference class. But this could be disputed. Perhaps clarity of mind, long life span, or time spent thinking about the DA should result in an observer being given more weight, i.e. having a higher sampling density in the reference class? Or maybe not, but it is by no means obvious that the uniform distribution is always the right one. And before we specify the sampling density we can’t derive any prediction from the DA. We will come back to this issue in a later chapter. For now it’s enough to note that there is a considerable gap at this point in Leslie’s reasoning.

Leslie on the effect on the doomsday argument of physical indeterminism

One prominent feature of Leslie’s exposition of the DA is that throughout he keeps stressing that if the world is indeterministic, as quantum physics might lead us to believe, then the DA is seriously weakened though not completely obliterated. We shall return to Leslie’s reasoning about this when we discuss the shooting room paradox.

Attempted refutations of the doomsday argument

There have been many attempted refutations of the DA, yet no one refutation seems to have convinced many people. Most of the purported refutations can easily be seen to be wrong, but there are a few that are more serious. I survey below those objections that I deem to be most serious. These include all objections that have actually been raised (as opposed to merely reported) in academic publications, with the exception of the objections by Korb and Oliver. (I will try to show in a later chapter that those objections can easily be seen to be wrong.)

The self-indication assumption

    The idea behind this objection is that the probability shift in favor of earlier doom that the DA leads us to make is offset by another probability shift that has likewise been overlooked. This other probability shift is in the direction of a greater probability for the hypothesis that there will have been many humans. According to this objection, the more humans there will ever have existed, the more "slots" there would be that you could have been "born into". Your existence is more probable if there are many humans (or observers) than if there are few. Since you do in fact exist, the Bayesian rule has to be applied and the posterior probability of the hypothesis according to which many people exist must be increased.

    The neat thing is that these two probability shifts cancel each other precisely, as first noted by Dieks [1992] and shown by Kopf et al. [1994].
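The cancellation can be verified with a toy calculation. The DA likelihood of having birth rank r, given a total of N observers, is 1/N (for r ≤ N); the self-indication shift weights each hypothesis in proportion to N; the two factors cancel, returning the prior. The totals and the 50/50 prior below are hypothetical:

```python
# Posterior over hypotheses about the total number of humans N,
# given one's birth rank, with and without the self-indication shift
def posterior(prior, rank, apply_sia):
    weights = {}
    for n, p in prior.items():
        if rank > n:                 # rank impossible under this hypothesis
            weights[n] = 0.0
            continue
        w = p / n                    # DA: P(rank | N) = 1/N
        if apply_sia:
            w *= n                   # SIA: weight proportional to N
        weights[n] = w
    z = sum(weights.values())
    return {n: w / z for n, w in weights.items()}

prior = {200e9: 0.5, 200e12: 0.5}    # "doom soon" vs "doom late"
print(posterior(prior, 60e9, apply_sia=False))  # shifted toward doom soon
print(posterior(prior, 60e9, apply_sia=True))   # back to the 50/50 prior
```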

    The principle that this objection depends on can be dubbed the self-indication assumption:

    (SIA) The fact that you are an observer gives you some reason to believe that the world contains many observers.

    Whether the objection succeeds depends on how strong reasons can be given for accepting or rejecting this assumption. Leslie argues ([1996], pp. 224-8) that adopting SIA, we have to conclude that the probability that the world contains infinitely many observers is one, and that this is an unacceptable consequence. There are also other considerations that make SIA hard to accept. We will discuss SIA further in a later chapter.

Andrei Linde’s suggestion

    Andrei Linde first suggested an interesting variant of the objection based on the self-indication assumption. He thinks the universe is such that it is technologically feasible for the human species to continue for infinitely long (see also Barrow & Tipler [1986] and Tipler [1994]). If that is right, then no matter when you were born you would still be "infinitely early". Finding yourself alive in the late twentieth century would be no more improbable, conditional on this hypothesis, than finding yourself alive in, say, the year 34,898,836 AD. The DA would therefore not yield any probability shift, although it would still be formally valid.

    Leslie’s reply is that if we aren’t initially certain that the universe contains infinitely many observers then the fact that on Linde’s theory we would in some sense be "infinitely early" gives us "superbly strong probabilistic grounds for rejecting the theory" (Leslie [1996], p. 264). Note that it looks as if this reply is implausible in a way symmetric to the alleged implausibility of the SIA-objection. The SIA implied that the probability of there being infinitely many observers is one; which seemed wrong. Now Leslie’s reply to Linde implies that the probability of there being infinitely many observers is zero, which seems equally wrong.

    Infinities create problems for probability theory and decision theory in many contexts; just think of Pascal’s wager or the St Petersburg paradox. As we shall see, part of the puzzlement in the so-called Shooting Room paradox also derives from the presence of infinite possibilities.

No meaningful objective probabilities

    An objection that I have heard several people advance is that the DA requires the existence of determinate probabilities where none exist. This objection may be phrased in different ways, but the basic sentiment is as expressed by Torbjörn Tännsjö:

    Leslie may well find it very improbable that we are born exceptionally early in the history of the species (what we speak of here are subjective probabilities), but I don’t. When he claims that ‘no observer should at all expect to find that he or she or it had to come into existence very exceptionally early in his, her or its species,’ I fully agree. But this does not mean that he or she or it should expect not to have come into existence very exceptionally in his, her, or its species either. The most natural attitude to adopt here is agnosticism. What we are contemplating is a matter of radical uncertainty, not risk. (Tännsjö [1997], p. 248)

    I expect that this sort of objection will look more attractive to people who aren’t Bayesians. But it’s true that one can’t take the meaningfulness of probability assignments for granted in these kinds of very unusual applications. A proper presentation of the DA should contain some account of why and how the probability assignments it postulates make sense and are the right ones.

    Even for a person who is convinced that the DA is valid, there is an important problem in determining exactly what probability assignments make sense in this context. The reason is that there is a connection here to the problem of the reference class; or so at least I shall argue in a later chapter.

Prima facie implausibility

One obvious objection (also made in Tännsjö [1997], p. 249) against the DA is that it leads to an intuitively surprising, even implausible, conclusion. This is, of course, one reason to be somewhat reluctant to accept it immediately as valid. Most people, when they first hear about the DA, think that it is wrong. That is a perfectly healthy reaction. We are asked to make major changes to our worldview, and we rightly demand a pretty good justification before conceding anything to the doomsdayer.

It’s also clear, however, that this defense only goes so far. Thinking would be boring if it didn’t occasionally lead us to accept conclusions that originally seemed implausible. Some level of lingering metalevel doubt may be appropriate, but at least for the sake of philosophical discussion the burden of proof now rests equally on those who believe in the DA and those who don’t. The issue is no longer about whether the DA should be taken seriously. We already know that the DA is interesting enough that it would be worth refuting if it were false.

(Is a perceived need to soften the bite of the DA part of the reason why Leslie argues that physical indeterminism will dramatically reduce the probability shift that the DA requires? With this modification, the DA is agreeably spicy, yet not so absurdly hot as to be impossible to swallow.)

Interpreting the doomsday argument: alternative conclusions

The DA might be seen as setting out to establish that terrestrial intelligent life (in which I include all possible future intelligent life forms that might live off-earth but are descended from us Earth-bound humans) is likely to go extinct fairly soon. If this is the conclusion aimed for, then it is not at all clear that it succeeds, even if the basic structure of the DA is correct. The reason is that there seem to be alternative conclusions, each of which is a possible way of accommodating the DA. We don’t know what the DA really shows until we have decided which of the alternative conclusions (or which disjunction of alternative conclusions) is the right lesson to draw from it. Here are some of these possible alternative conclusions:

Swamping by other considerations

The DA can be overridden if we have sufficiently strong empirical grounds for thinking that a doomsday won’t happen. This is most clearly seen in Leslie’s version of the argument. If the prior we feed into Bayes’ formula for the hypothesis that we will go extinct within, say, 50,000 years is small enough, then even after taking the DA into account we can still have a very high degree of confidence that we will not go extinct within this period.

This is all well as far as it goes. Seen as a refutation of the DA, however, it doesn’t go very far.

First, it is not an objection against the DA, but rather a point about how the DA should be interpreted.

Second, even with a very big prior probability of survival, the posterior will become desperately tiny for a large range of scenarios. These scenarios include those suggested by transhumanists and others who think that either we ourselves or our electronic successors will go on to colonize the galaxy and beyond. If that happens, then there could well be more than 10^10 times as many observers as have existed so far. Even if our prior estimate of the likelihood that space colonization would fail were as low as one in a million, we would still become virtually certain that large-scale space colonization will not happen once we take the DA into account.

The point is that even with very advantageous priors, every scenario that implies the existence of very many observers will be refuted with virtual certainty by the DA. It doesn’t matter how good our other sources of information are (within limits); for sufficiently long durations of the human species, the posterior probability will approach zero. (That this is so can easily be seen by inspecting Bayes’ formula.) So even though low empirical priors can reassure us about the near future, they don’t help for the long run.
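Plugging illustrative numbers into Bayes’ formula (my own hypothetical figures, not from any of the papers discussed) shows how quickly the swamping works:

```python
# Sketch of the swamping point: even a one-in-a-million prior for "doom
# soon" dominates once the rival hypothesis implies 10**10 times as many
# observers. All figures below are made up for illustration.
rank = 6e10                 # our approximate birth rank
n_doom = 2e11               # total observers if we go extinct fairly soon
n_col = n_doom * 1e10       # total observers if space colonization succeeds
prior_doom = 1e-6           # a very optimistic prior against early doom

# Likelihood of our rank under each hypothesis (uniform on 1..N):
like_doom, like_col = 1 / n_doom, 1 / n_col
post_doom = (prior_doom * like_doom) / (
    prior_doom * like_doom + (1 - prior_doom) * like_col)
print(round(post_doom, 4))  # 0.9999: the many-observer scenario is swamped
```

The likelihood ratio n_col/n_doom = 10^10 simply overwhelms any realistic prior, which is the sense in which empirical optimism cannot help for the long run.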

This conclusion is subject to certain qualifications that may prove absolutely decisive. We examine them below. They are based on the fact that there seem to be other possible interpretations of what the DA shows than the one typically associated with it (i.e. that our species will soon go extinct). When the swamping consideration is combined with one or more of these other considerations, it is possible that one could consistently (and maybe even plausibly) interpret the DA as not giving us good grounds for thinking that doomsday will happen either in the near future or in the far future.

Infinite duration of the human species

We have already mentioned Linde’s suggestion that the universe will allow the human species to continue to exist forever. If that is true then one can argue for the position that the DA is valid but inapplicable, since everybody would be in some sense "equally early" in an ever-lasting species.

Indeterminism

We have also mentioned Leslie’s doctrine that quantum-mechanical indeterminism leads to a major weakening of the DA.

Decrease in birth rates

Another possibility, if the reference class consists of all humans or all observers that will ever have existed, is that we will turn out to be fairly typically positioned in the reference class, not because some disaster causes us to go extinct, but because of a decrease in population. This could in itself be calamitous if the decrease were the result of a total collapse of civilization through nuclear war or ecological breakdown, for example. But it is possible to imagine a scenario where population figures are voluntarily reduced. The barriers that separate one human mind from another might begin to erode once we start to create direct links between our brains, on the one hand, and between our brains and computers on the other. Neuro/chip interfaces are already under development, and it has been argued that molecular nanotechnology will in all likelihood make it possible to upload the biological neural network (the human mind) onto an artificial neural network, perhaps running as a simulation on a computer (Drexler [1985]). This would be done by creating a 3-d scan of the human brain at an atomic level of resolution. Once we exist as uploads, it’s imaginable that high-bandwidth communication, the ability to change mental parameters at will, and perhaps the ability to copy and paste cognitive modules from one individual to another, will lead to a gradual fusion of all minds into one.

This loophole might be blocked if the future global mind runs at an extremely high clock speed and if as a reference class we use "observer-moments" (i.e. time-segments of observers) rather than observers. For then, even if there were just one mind, it would quickly accumulate many observer-moments. I shall return in a later chapter to the question of whether the reference class should consist of observers or observer-moments.

Metamorphosis

However, there is still the ambiguity of what counts as an observer/observer-moment. When will the mental life of our successors have changed so much that they don’t qualify as observers for the purposes of the DA? We can’t conclusively answer that question until we have settled the problem of the reference class.

It is by no means implausible that human descendants will evolve or technologically metamorphose into something very different from our current human form. For example, Alexander Chislenko (in a commentary [1996] on a forthcoming book by Hans Moravec [1998]) envisions that biological intelligences will become obsolete and that the society of the future will be a kind of functional soup populated by "infomorphs", distributed information-processing entities existing on vast computer networks. The infomorphs would be of varying degrees of complexity and durability; they might be able to buy and sell knowledge and share many functions with each other. The human concept of personal identity might not be at all useful in such a world. It would be extremely hard to determine what should go into the reference class. It would not be clear what complexes should count as observers or how to individuate these observers.

One would be tempted to say that the DA is not applicable to these kinds of entities, that they should not be counted in the reference class. If that’s so, then the DA doesn’t have any effect on hypotheses according to which our society will soon be replaced by such functional soup. (Hypotheses according to which this metamorphosis is further in the future will still be partially affected by the DA; the more so the more humans or other clear-cut observers they imply will exist before the metamorphosis.)

One can perhaps imagine much less radical transformations than the one suggested by Chislenko that would still be sufficient to turn us into something that falls outside the reference class and is thus immune to the DA. Exactly how much of a metamorphosis is necessary in order to put us outside the reference class cannot be specified until we have solved the reference class problem.

There is a dilemma facing anybody who looks to the metamorphosis interpretation to lift the gloom from our outlook on the future. The dilemma is that if the metamorphosis is too small, the beings we metamorphose into would still be in the reference class; but if the metamorphosis is too big, the beings we metamorphose into will not be us, or even the sort of beings we care about. The infomorphs, for example, could seem too inhuman to give us much comfort.

Yet, it is by no means obvious that the borders of the reference class coincide with the borders of what we care about. If they don’t, then there could be room for hypotheses that could be largely unaffected by the DA and still leave room for arbitrarily many beings, of the sorts we care about, to exist in the future. But the reference class problem has to be solved before this issue can be settled conclusively.

Modifying the priors by considering a larger hypothesis space

It has been argued by Dieks [1992], Korb & Oliver [1998], and Eastmond [1997] that we can obviate the dark conclusion of the DA simply by considering a larger hypothesis space. If we consider only the hypotheses h1, h2, …, hn, where hi says that there will have been i observers, and we assume a uniform distribution over the chosen hypothesis space, then we can push the expected number of observers upward by making n larger. The same thing happens, although to a lesser degree, if we use a prior distribution such as h·(1/n^2) (where h is a normalizing constant), or if we use the so-called "unbiassed" improper prior, 1/n.
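The dependence on the choice of hypothesis space can be checked numerically. The following is my own quick sketch, not taken from any of the cited papers:

```python
# How the choice of hypothesis space and prior moves the expected total
# number of observers (illustrative sketch of the Dieks / Korb & Oliver
# point): under a uniform prior the expectation grows linearly with the
# cutoff n; under a 1/i**2 prior it grows only logarithmically.
def expected_observers(n, weight):
    """Expected i over hypotheses h_1..h_n, h_i given unnormalized weight(i)."""
    z = sum(weight(i) for i in range(1, n + 1))
    return sum(i * weight(i) for i in range(1, n + 1)) / z

for n in (10, 1000, 100000):
    uniform = expected_observers(n, lambda i: 1.0)           # grows like n/2
    quadratic = expected_observers(n, lambda i: 1.0 / i**2)  # grows like log n
    print(n, uniform, round(quadratic, 2))
```

Enlarging the cutoff n pushes the uniform-prior expectation up without bound, while the 1/i^2 prior is far less sensitive, matching the "to a lesser degree" qualification above.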

Does this show that the conclusion of the DA is arbitrary? I will argue in a later chapter that it does not. The priors are not arbitrary or conventional – they are supposed to represent our actual empirical knowledge of the situation. I therefore think that this last alternative conclusion is definitely flawed.

The idea of annulling the DA by introducing the self-indication assumption could also be characterized as a proposal to change the priors from what we would naively take them to be. But that case is different since an independent motivation was supplied there for that particular choice of prior. It was not a matter of changing the prior for the sole purpose of avoiding the standard DA conclusion.

The shooting room paradox

The shooting-room paradox was introduced by John Leslie (e.g. [1996], pp. 251ff), who says he developed the idea with help from David Lewis, who considers it "a good, hard paradox".

In the shooting room experiment we are to imagine a room of infinite capacity. First a batch of ten people is led into this room. A pair of dice is thrown in front of their eyes. If a double six comes up, they are all shot. Otherwise they leave the room safely and a new batch, this one containing a hundred people, is thrust in. The process continues, with each consecutive batch ten times larger than the previous one, until there is a double six, whereupon the people then in the room are shot and the experiment ends.

Suppose you have been thrust into the room. You are asked to estimate the odds of leaving safely. On the one hand, since whether you will leave or not will be determined by the throw of a fair pair of dice, it seems that you have a 35/36 chance of exiting alive. On the other hand, 90% of all people who are in your situation will be shot, so it seems you have only a 10% chance of exiting alive. That is the paradox.
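Both horns of the paradox can be exhibited in a single simulation. This is my own finite-horizon sketch: the real setup has no cap on the number of batches, so I discard the (rare at this cap) runs in which no double six ever occurs.

```python
import random

# Finite-horizon simulation of the shooting room: every batch faces only a
# 1/36 chance of a double six, yet roughly 90% of all the people who ever
# enter the room end up shot.
random.seed(0)
MAX_BATCHES = 30        # runs with no double six within the cap are discarded
shot = survived = 0
for _ in range(100_000):
    size, earlier = 10, 0
    for _ in range(MAX_BATCHES):
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            shot += size          # the current batch is shot...
            survived += earlier   # ...everyone before them walked out safely
            break
        earlier += size
        size *= 10
print(shot / (shot + survived))   # ≈ 0.9
```

The tenfold growth guarantees that the final batch always outnumbers all its predecessors combined by about nine to one, which is where the 90% figure comes from regardless of how the dice happen to fall.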

The connection to the DA is obvious. Except for the fact that each consecutive batch in the shooting room is postulated to be ten times bigger than its predecessor (which corresponds to an indefinite exponential population growth in the case of the DA), the two situations are structurally very similar.

Leslie on the shooting room paradox

    Leslie thinks there is a radical difference in what the person in the shooting room should believe depending on whether the random mechanism (the two dice) is deterministic or not.

    Consider first the indeterministic case. Suppose the outcome of the dice is determined by a radically indeterministic quantum process. (And we suppose that everybody is aware of this fact.) Then according to Leslie you should expect to get out of the room alive. Leslie admits that "this is in a way wildly paradoxical, given that at least 90 per cent of those who betted they would get out alive would lose their bets." (Leslie [1996], p.252). As Bas van Fraassen remarked, an insurance agent who insured all of them would be making a costly mistake. Despite this, Leslie maintains that the chances of not being shot are 35/36:

    It can nevertheless seem fairly plain that you personally should expect the dice not to fall double-six if you do know for sure that they are fair, radically indeterministic dice. For there the dice are, resting in the Devil’s hand; they haven’t yet been thrown; and there is either no ‘fact of the matter’ of how they are going to fall, or else (see the discussion [in an earlier chapter of The End of the World] of the irrelevance of the B-theory of Time) no fact of the matter to which you can properly appeal. All you can say is that you have thirty-five chances out of thirty-six of leaving the room safely. End of argument, I think. (Leslie [1996], pp. 252-3).

    This result is supposed to hold under the assumption that the Devil can continue to create new people forever, and fit them into the room, so that the process can continue indefinitely. (Leslie thinks the result would also hold if this assumption is relaxed and we stipulate some finite but very large maximal number of people that could enter the room (whereafter the experiment would end); he does not press that point, however.)

    Next, consider the deterministic case. For example, instead of the dice we might use two consecutive decimal places in the expansion of pi. These decimal places could be chosen to be far beyond what anybody has ever calculated, and we could disregard all decimals that are not between one and six.

    Here Leslie’s verdict is reversed: if you enter the room under these conditions, expect to be shot – "For now there’s no need for you to accept the paradoxical conclusion which seemed forced on you in the indeterministic version of the experiment. You cannot say that, when you arrived in the room, whether you’d exit from it safely hadn’t yet been fixed by factors working deterministically." (p. 254). And "Disaster is what will come to over 90 per cent of those who will ever have been in your situation." (p. 255).

    There is a connection between Leslie’s views on the shooting room paradox and his doctrine of "observer-relative chances". In the indeterministic case of the shooting room, how can it be right that you should expect to come out alive, while at the same time an insurance company wanting to insure all people who entered the room would be making a costly mistake? Leslie’s answer is that chances are observer-relative in a paradoxical way, so that the rational probability estimates of how the dice are likely to fall will differ depending on whether you are the insurance company or a person in the room.

    In a later chapter I will critically examine Leslie’s views on the shooting room, and I will also argue that the paradox can be resolved without any appeal to paradoxical observer-relative chances.

Eckhardt’s critique

William Eckhardt [1997] argues against Leslie’s doctrine that the probabilities in the deterministic shooting room are different from the ones in the indeterministic variant. Eckhardt thinks that the probability in each case is 35/36 that you will get out alive, conditional on finding yourself in the shooting room. In Eckhardt’s view, "the shooting room is not a paradox at all; rather, it is a cogent line of reasoning alongside an utterly spurious one, masquerading as horns of a dilemma." (p. 253).

How does this square with the fact that a bookie who bet against all the people in the room at these odds would be certain to make a profit? Since the bookie’s gain is the punters’ loss, if both expect to make a profit then it would seem that at least one of them must be mistaken about the odds. Eckhardt points out that this problem only arises in an imaginary world where the house, if need be, can continue to raise the stakes forever. In the real world, the house would risk running out of credit. So what we have here is basically a pyramid scam. "It has long been known that by successively increasing bet size in a sequence of unfavorable bets, one can theoretically obtain winning results; [footnote omitted] this is the basis of various infamous doubling systems in roulette and other games." (Eckhardt [1997], p. 253.)
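Eckhardt’s point about doubling systems is easy to verify. Here is my own toy simulation (the game and its numbers are my assumptions, not Eckhardt’s): a martingale wins almost every session, but with a finite credit line the rare long losing streak wipes out the accumulated gains.

```python
import random

# Martingale ("doubling system") on an unfavorable even-money bet: with
# unlimited credit every session would end +1, but a finite credit line
# restores the house edge.
random.seed(1)
P_WIN = 18 / 37          # e.g. betting on red at single-zero roulette

def martingale_session(credit):
    """Bet 1 unit, double after each loss; stop on a win or when broke."""
    bet, net = 1, 0
    while bet <= credit + net:   # net is <= 0 here, so this checks our funds
        if random.random() < P_WIN:
            return net + bet     # one win recoups all prior losses, plus 1
        net -= bet
        bet *= 2
    return net                   # busted: cannot cover the next doubled bet

results = [martingale_session(credit=1000) for _ in range(200_000)]
print(sum(results) / len(results))  # negative: the house edge still wins
```

Only in the imaginary world where the stakes can be raised forever does the strategy guarantee a profit, which is exactly the disanalogy Eckhardt presses against the shooting-room bookie.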

To suppose that the odds depend on whether the random mechanism is deterministic or not has unpalatable consequences according to Eckhardt:

If there existed a mode of statistical inference that were valid according to the extent that determinism were true … then by repeatedly testing the accuracy of this type of statistical inference, one could gauge the correctness of determinism. Since this conclusion is highly implausible, it is a safe bet that statistical inferences, including those which underlie the doomsday argument, do not hinge on the truth of determinism. (Eckhardt [1997], pp. 245-6.)

Since this objection presupposes repeatability, it doesn’t immediately strike Leslie’s doctrine on the shooting room with full force. At least in its original formulation, the shooting room cannot be repeated, since once there is a double-six, you’ll be shot. Leslie, however, seems to think that the same point about the relevance of physical determinism can also be made in other contexts where the game is repeatable. In that case it could look as if Leslie is forced into the unattractive position of having to maintain that physicists could set up some kind of gambling institution in order to ascertain whether the world is indeterministic or not.

I shall later argue that there is a clear sense in which unrepeatability is an essential feature of observer self-selection.

A second objection that Eckhardt advances against Leslie’s view on the relevance of determinism is as follows. We can imagine that a sequence of indeterministic dice throws is recorded. Then we have two cases. In one shooting room, the fate of the people who enter is determined directly by the original indeterministic process. In another shooting room, at some later time, the fate of the people who enter is determined by the transcript of the outcomes of the indeterministic process. The sequence of dice outcomes is the same in both cases, and we can assume that the people involved know this. Yet, according to Leslie, what you should believe depends on whether you are in the deterministic shooting room or the indeterministic one. Thus, Leslie’s claims lead to contradictory statements about what amounts to the same game; i.e. Leslie’s doctrine is self-refuting.

Leslie responds to this briefly in a footnote. He dismisses the objection as question-begging: "I reject Eckhardt’s question-begging claim that betting games are ‘the same games’ regardless of whether they are played (a) with indeterministic dice or else (b) with records" (Leslie [1997], pp. 435f). While this defense might save Leslie from outright contradiction, it doesn’t remove the perceived implausibility of the consequence that he is committed to treating the two games differently.


References

Barrow, J. D. & Tipler, F. J. 1986. The Anthropic Cosmological Principle. Oxford: Clarendon Press.

Barvinsky, A.O. 1998. "Open inflation without anthropic principle". Physics preprint archive (http://xxx.lanl.gov), hep-th/9806093 v2, 14 June.

Bostrom, N. 1998. "Investigations into the Doomsday Argument". Forthcoming. (Preprint available at http://www.anthropic-principles.com/preprints/inv/investigations.html)

Bostrom, N. 1997. "How long before Superintelligence?". International Journal of Future Studies, Vol. 2. (Also available at http://www.hedweb.com/nickb/superintelligence.htm)

Brin, G. D. 1983. "The 'Great Silence': The Controversy Concerning Extraterrestrial Intelligent Life". Q. Jl. R. astr. Soc. 24:283-309.

Buch, P. 1994. "Future prospects discussed". Nature, vol. 368, 10 March, p.108.

Carter, B. 1983. "The anthropic principle and its implications for biological evolution". Phil. Trans. Roy. Soc. Lond., A310, pp. 347-363.

Carter, B. 1974. "Large Number Coincidences and the Anthropic Principle in Cosmology". In Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

Chislenko, A. 1996. "Networking in the Mind Age". Available at http://www.lucifer.com/~sasha/mindage.html.

Davis, P. 1984. "What caused the Big Bang". In Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

De Garis, H. 1996. http://www.hip.atr.co.jp/~degaris/

Delahaye, J-P. 1996. "Recherche de modèles pour l'argument de l'Apocalypse de Carter-Leslie". Unpublished manuscript.

Dieks, D. 1992. "Doomsday - Or: the Dangers of Statistics". Phil. Quart. 42 (166), pp. 78-84.

Drexler, E. 1985. Engines of Creation: The Coming Era of Nanotechnology. Fourth Estate, London. Also available at http://www.foresight.org/EOC/index.html.

Drexler, E. 1992. Nanosystems, John Wiley & Sons, Inc., NY.

Dyson, F. 1979 "Time without end: physics and biology in an open universe" Reviews of Modern Physics, 51:3, July.

Earman, J. 1987. "The SAP also rises: a critical examination of the anthropic principle". Am. Phil. Quart., Vol. 24, No. 4, pp. 307-17.

Eckhardt, W. 1997 "A Shooting-Room view of Doomsday". Journal of Philosophy, Vol. XCIV, No. 5, pp. 244-259.

Eckhardt, W. 1993. "Probability Theory and the Doomsday Argument". Mind, 102, 407, pp. 483-88.

Ellis, G. F. R. 1975. "Cosmology and Verifiability" Quarterly Journal of the Royal Astronomical Society, Vol. 16, no. 3. Reprinted in Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

Garriga, J. 1998. "The density parameter and the Anthropic Principle". Physics preprint archive (http://xxx.lanl.gov), astro-ph/9803268, 23 March.

Goodman, S. N. 1994. "Future prospects discussed". Nature, vol. 368, 10 March, p.108.

Gott III, J. R. 1994. "Future prospects discussed". Nature, vol. 368, 10 March, p. 108.

Gott III, J. R. 1993. "Implications of the Copernican principle for our future prospects". Nature, vol. 363, 27 May, pp. 315-319.

Gott III, R. 1982. Nature (London) 295, 304.

Gould, S. J. 1985. "Mind and Supermind". Reprinted in Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

Guth, A. 1981. Phys. Rev. D, 23, 347.

Hanson, R. 1996. "The Great Filter". Work in progress. Available from http://hanson.berkeley.edu/.

Hawking, S. W. & Turok, N. 1998. "Open Inflation Without False Vacua". Physics preprint archive (http://xxx.lanl.gov), hep-th/9802030, 5 Feb.

Howson, C. & Urbach, P. 1993. Scientific Reasoning: The Bayesian Approach, 2 ed. Open Court, Chicago, Illinois.

Hoyle 1975. Astronomy and Cosmology. San Francisco.

Kopf, T. & Krtous, P. & Page, D. N. 1994. "Too soon for doom gloom". Physics preprint archive (http://xxx.lanl.gov), gr-qc/9407002 v3, 4 Jul.

Korb, K. B. & Oliver, J. J. 1998. "A refutation of the Doomsday Argument". Mind. Forthcoming.

Landsberg, P. T. & Park, D. 1975. "Entropy in an Oscillating Universe". Proc. R. Soc. Lon. A 346, pp. 485-495.

Leslie, J. 1996. The End of the World: The Ethics and Science of Human Extinction. Routledge. London.

Leslie, J. 1993. "Doom and Probabilities". Mind, 102, 407, pp. 489-91

Leslie, J. 1992. "Doomsday Revisited". Phil. Quart. 42 (166), pp. 85-87.

Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

Leslie, J. 1989. Universes. Routledge: London.

Leslie, J. 1985. The Scientific Weight of Anthropic and Teleological Principles. Proc. 1984 Conference on Teleology in Natural Science. (Center for Philosophy of Science, Pittsburgh)

Lewis, D. 1986. On the Plurality of Worlds. Oxford.

Linde, A. D. 1990. Inflation and Quantum Cosmology. Academic Press, San Diego.

Linde, A. 1985. The Universe: Inflation Out Of Chaos. New Scientist, Vol. 105, No. 1446, 7 March, pp. 14-18.

Linde, A. 1983. In Gibbons, G. W. et al. The Very Early Universe. Cambridge: Cambridge University Press.

Mach, R. 1993. "Big Numbers and the induction case for extraterrestrial intelligence". Philosophy of Science, 60, pp. 204-222.

Mackay, A. L. 1994. "Future prospects discussed". Nature, vol. 368, 10 March, p.108.

Markov, M. A. 1985. In Markov et al. Proceedings of the Third Seminar on Quantum Gravity. Moscow.

Minsky, M. 1994. "Will Robots Inherit the Earth?" Scientific American, Oct., Available from http://www.ai.mit.edu/people/minsky/papers/sciam.inherit.html.

Misner, C. et al. 1973. Gravitation. San Francisco: W. H. Freeman and Co.

Moravec, H. 1998a. Being, Machine. Forthcoming. Cambridge University Press (?)

Moravec, H. 1998b. "When will computer hardware match the human brain?" Journal of Transhumanism, Vol. 1 http://www.transhumanist.com/volume1/moravec.htm

Moravec, H. 1989. Mind Children. Harvard University Press, Harvard.

Mukhanov, V. F. 1985. In Markov et al. Proceedings of the Third Seminar on Quantum Gravity. Moscow.

Nielsen, H. B. 1989. "Did God have to fine tune the laws of nature to create light?". Acta Physica Polonica B20, 347-363.

Oliver, J. J. & Korb, K. B. 1997. "A Bayesian analysis of the Doomsday Argument". Technical Report 97/323, Department of Computer Science, Monash University.

Pagels, H. R. 1985. "A Cosy Cosmology". In Leslie, J. 1990. (edt.) Physical Cosmology and Philosophy. Macmillan Publishing Company.

Parfit, D. 1997. The Sharman Memorial Lectures. UCL, March 1997, London.

Perry, R. M. 1995. "An Alternative to Deism based on the Anthropic Principle". Venturist monthly News. Feb. Available at http://www.alcor.org/11.

Rescher, N. 1982. "Extraterrestrial Science". In Regis, E. Jr. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press.

Regis, E. Jr. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press.

Sagan, C. & Newman, W. I. 1982. "The Solipsist Approach to Extraterrestrial Intelligence", Reprinted In Regis, E. Jr. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press. pp. 151-163.

Sagan, C. 1977. The dragons of Eden. Ballantine, NY.

Sato et al. 1982. Physics Letters, vol. 108B, 14 January, pp. 103-7.

Schopf, J. W. 1992. Major Events in the History of Life. Jones and Bartlett, Boston.

Silk, J. 1980. The Big Bang. San Francisco: W. H. Freeman and Co.

Smolin, L. 1997. The Life of the Cosmos. New York: Oxford University Press.

Tegmark, M. 1997. "Is 'the theory of everything' merely the ultimate ensemble theory?". Physics preprint archive, gr-qc/9704009, 3 Apr.

Tipler, F. J. 1994. The Physics of Immortality. Doubleday.

Tryon, E. P. 1973. "Is the universe a vacuum fluctuation?". Nature, 14 December, pp. 396-7.

Turok, N. & Hawking, S. W. 1998. "Open Inflation, the Four Form and the Cosmological Constant". Physics preprint archive (http://xxx.lanl.gov), hep- th/ 9803156 v4 1 Apr 1998.

Tännsjö, T. 1997. "Doom Soon?". Inquiry, 40, pp. 243-52.

Vilenkin, A. 1995. Phys. Rev. Lett. 74, 846.

Weinberg, S. 1993. Dreams of a Final Theory. Hutchinson.

Wesson, P. S. 1990. "Cosmology, Extraterrestrial Intelligence, and a Resolution of the Fermi-Hart Paradox". Quart. J. Roy. Astr. Soc., 31, pp. 161-170.

Wilson, P. A. 1994. "Carter on Anthropic Principle Predictions". Brit. J. Phil. Sci., 45, pp. 241-253.
