# Chapter 6: The Doomsday Argument

By now we have seen several examples where SSA gives intuitively plausible results. If SSA is applied to our actual situation and the future prospects of the human species, however, we get disturbing consequences. Coupled with a few seemingly quite weak empirical assumptions, SSA generates (given that we use the universal reference class) the Doomsday argument (DA), which purports to show that the life expectancy of the human species has been systematically overestimated. That is a shocking claim. The prediction is derived from premises which one would have thought too weak to entail such a thing. Moreover, under some not-so-implausible empirical assumptions, the reduction in our species’ life expectancy is quite drastic.

Most people who hear about DA at first think there must be something wrong with it. A small but significant minority think it is obviously right.^{1} What everybody must agree on is that if the argument works, it would be a momentous result, since it has major empirical consequences for an issue that we care a lot about: our survival.

^{1 }The ranks of distinguished supporters of DA include among others: J.J.C. Smart, Anthony Flew, Michael Lockwood, John Leslie, Alan Hájek (philosophers); Werner Israel, Brandon Carter, Stephen Barr, Richard Gott, Paul Davies, Frank Tipler, H.B. Nielsen (physicists); and Jean-Paul Delahaye (computer scientist). (John Leslie, personal communication.)

Up until now, DA remains unrefuted. Not for a lack of trying; the attempts to refute it are legion. In the next chapter, we will analyze in detail some of the more recent objections and explain why they fail. In the present chapter, we shall spell out the Doomsday argument, identify its assumptions, and examine various related issues.

We can distinguish two forms of DA that have been presented in the literature, one due to Richard Gott and one to John Leslie. Gott’s version is incorrect. Leslie’s version, while a great improvement on Gott’s, also falls short on several points. Correcting these shortcomings does not, however, destroy the basic idea of the argument. So we shall try to fill in some of the gaps and set forth DA in a way that gives it a maximum run for its money. But to lay my cards on the table, I think DA ultimately fails. However, it is crucial that it not be dismissed for the wrong reasons.

DA has been independently discovered many times over. Brandon Carter was first, but did not publish on the issue. John Leslie gets the credit for being the first to clearly enunciate it in print (Leslie 1989). Leslie, who had heard rumors of Carter’s discovery from Frank Tipler, has been the most prolific writer on the topic, with one monograph and over a dozen academic papers. Richard Gott III independently discovered and published a version of DA in 1993 (Gott 1993). The argument also appears to have been conceived by H.B. Nielsen (Nielsen 1981) (although Nielsen might have been influenced by Tipler), and again more recently by Stephen Barr. Saar Wilf (personal communication) has convinced me that he, too, independently discovered the argument a few years ago.

Although Leslie has the philosophically most sophisticated exposition of DA, it is instructive to first take a look at the version expounded by Gott.

Gott’s version of DA^{2} is based on a more general argument-type which he calls the “delta *t* argument”. Notwithstanding its extreme simplicity, Gott reckons it can be used to make predictions about most everything in heaven and on earth. It goes as follows.

^{2 }Gott’s version of DA is set forth in a paper in *Nature* dating from 1993 (Gott 1993); see also the responses (Buch 1994; Goodman 1994; Mackay 1994) and Gott’s replies (Gott 1994). A popularized exposition by Gott appeared in 1997 (Gott 1997). In the original article, Gott not only sets forth a version of DA but also pursues its implications for the search for extraterrestrial life and for the prospects of space travel. Further elaborations by Gott can be found in (Gott 1996, 2001).

Suppose we want to estimate how long some series of observations (or “measurements”) is going to last. Then,

Assuming that whatever we are measuring can be observed only in the interval between times t_{begin }and t_{end}, if there is nothing special about t_{now} we expect t_{now} to be randomly located in this interval. (Gott 1993), p. 315

Using this randomness assumption, we can make the estimate t_{future} = (t_{end} − t_{now}) ≈ t_{past} = (t_{now} − t_{begin}).

t_{future} is the estimated value of how much longer the series will last. What this means is that we make the estimate that the series will continue for roughly as long as it has already lasted when we make the random observation. This estimate will overestimate the true value half of the time and underestimate it half of the time. It also follows that a 50% confidence interval is given by 1/3 t_{past} < t_{future} < 3 t_{past}, and a 95% confidence interval is given by 1/39 t_{past} < t_{future} < 39 t_{past}.
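These interval bounds follow from assuming that the fraction r = t_{past}/(t_{past} + t_{future}) is uniformly distributed on (0, 1). A minimal sketch in Python (the function name is mine, not Gott's):

```python
def delta_t_interval(t_past, confidence):
    """Gott's delta t argument: if r = t_past / t_total is uniform on (0, 1),
    then with probability `confidence` we have q < r < 1 - q, where q is the
    mass in each tail.  Since t_future = t_past * (1 - r) / r, this yields
    bounds on t_future in the same units as t_past."""
    q = (1 - confidence) / 2          # probability mass in each tail
    lower = t_past * q / (1 - q)      # from r = 1 - q
    upper = t_past * (1 - q) / q      # from r = q
    return lower, upper

print(delta_t_interval(1.0, 0.50))    # roughly (1/3, 3): the 50% interval
print(delta_t_interval(1.0, 0.95))    # roughly (1/39, 39): the 95% interval
```

For t_past = 1 the bounds reproduce the 1/3 < t_future < 3 and 1/39 < t_future < 39 intervals quoted above.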

Gott gives some illustrations of how this reasoning can be applied:

[In] 1969 I saw for the first time Stonehenge (t_{past} ≈ 3,868 years) and the Berlin Wall (t_{past} ≈ 8 years). Assuming that I am a random observer of the Wall, I expect to be located randomly in the time between t_{begin} and t_{end} (t_{end} occurs when the Wall is destroyed or there are no visitors left to observe it, whichever comes first). (Gott 1993), p. 315

At least in the case of the Berlin Wall, the delta *t* argument seems to have worked! (We may have to wait a while for the results to come in on Stonehenge, though.) A popular exposition that Gott wrote for *New Scientist* also features a sidebar inviting the reader to use the arrival date of that issue of the magazine to predict how long their current romantic relationship will last. Presumably you can use this book for the same purpose. How long has your present relationship lasted? Use that value for t_{past} and you get your prediction from the expressions above, complete with an exact confidence interval.

Wacky? Yes, but all this does indeed follow from the assumption that t_{now} is randomly (and uniformly) sampled from the interval t_{begin} to t_{end}. Gott admits that this imposes some restrictions on the applicability of the delta *t* argument:

[At] a friend’s wedding, you couldn’t use the formula to forecast the marriage’s future. You are at the wedding precisely to witness its beginning. Neither can you use it to predict the future of the Universe itself—for intelligent observers emerged only long after the Big Bang, and so witness only a subset of its timeline. (Gott 1997), p. 39

Unfortunately, Gott does not discuss in any more detail the all-important question of when, in practice, the delta *t* argument is applicable. Yet it is clear from his examples that he thinks it should be applied in a very broad range of real-world situations.

In order to apply the delta *t *argument to estimate the life-expectancy of the human species, we must measure time on a “population clock” where one unit of time corresponds to the birth of one human. This modification is necessary because the human population is not constant. Thanks to population growth, most humans who have been born so far find themselves later rather than earlier in the history of our species. According to SSA, we should consequently assign a higher prior probability to finding ourselves at these later times. By measuring time as the number of humans who have come into existence, we obtain a scale where we can assign a uniform sampling density to all points of time.

There have been something like 60 billion humans so far. Using this value as t_{past}, the delta *t* argument gives the 95% confidence interval

1.5 *billion* < t_{future} < 2.3 *trillion*.

The units are human births. To convert this into years, we would have to estimate what the future population figures will be at different times given that a total of *N* humans will have existed. Absent such an estimate, DA leaves room for alternative interpretations. If the world population levels out at 12 billion and human life-expectancy stabilizes at approximately 80 years, then disaster is likely to put an end to our species fairly soon (within 1,200 years, with 75% probability). If population grows larger, the prognosis is even worse. But if population decreases drastically, or individual human life-spans get much longer, then the delta *t* argument would be compatible with survival for millions of years.
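The 1,200-year figure can be checked directly. With a constant population of 12 billion and 80-year life-spans, births per year are population/life-expectancy, and the delta *t* argument gives t_{future} < 3 t_{past} with 75% probability (since t_{past}/t_{total} > 1/4 with probability 3/4). A sketch under those stated assumptions (names are mine):

```python
PAST_BIRTHS = 60e9                     # humans born so far (rough figure from the text)

def years_until_doom(future_births, population=12e9, life_expectancy=80):
    """Convert a number of future human births into years, assuming a constant
    population and life expectancy, so that births per year equal
    population / life_expectancy."""
    births_per_year = population / life_expectancy   # 1.5e8 per year
    return future_births / births_per_year

# 75% one-sided bound: t_future < 3 * t_past (in births)
bound_births = 3 * PAST_BIRTHS
print(years_until_doom(bound_births))  # 1200.0
```

That is, 180 billion future births at 150 million births per year works out to the 1,200 years quoted above.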

The probability of space colonization looks dismal in the light of Gott’s version of DA. Reasoning via the delta *t *argument, Gott concludes that the probability that we will colonize the galaxy is about *p* = 10^{-9}, because if we did, we would expect there to be at least a billion times more humans in the future than have been born to date.
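The 10^{-9} figure falls out of the same uniform-rank assumption: if our birth rank is uniform on (0, *N*], then the probability that the total *N* exceeds *k* times the births to date is 1/*k*. A one-line sketch (the function name is mine):

```python
def prob_total_exceeds(k):
    """Under Gott's assumption that our birth rank is uniformly distributed
    on (0, N], P(N > k * past_births) = P(rank/N < 1/k) = 1/k for k >= 1."""
    return 1.0 / k

# Galactic colonization would mean at least a billion times more humans
# than have been born to date:
print(prob_total_exceeds(1e9))  # 1e-09
```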

## The incorrectness of Gott’s argument

A crucial flaw in Gott’s argument is that it fails to take into account the empirical prior probabilities of the hypotheses under consideration. Even granting that SSA is applicable in all the situations and in the manner that Gott suggests (and we shall argue in a later chapter that that is not generally the case, because the “no-outsider requirement” is not satisfied), the conclusion would not necessarily be the one intended by Gott once this omission is rectified.

And it is clear, once we focus our attention on it, that our prior probabilities must be considered. It would be foolish when estimating the future duration of Stonehenge or the Berlin Wall not to take into account any other information you might have. Say you are part of a terrorist organization that is planning to destroy Stonehenge. Everything has been carefully plotted. The explosives are in the truck, the detonators are in your suitcase; tonight at 11 P.M. your confederates will pick you up from King’s Cross St. Pancras... Knowing this, surely the odds of Stonehenge lasting another year are different from, and much lower than, what a straightforward application of the delta *t* argument would suggest. In order to save the delta *t* argument, Gott would have to restrict its applicability to situations where we in fact lack other relevant information. But then the argument cannot be used to estimate the future longevity of the human species, for we certainly have plenty of extraneous information that is relevant to that. So Gott's version of DA fails.

That leaves open the question whether the delta *t* argument might not perhaps provide interesting guidance in some other estimation problems. Suppose we are trying to guess the future duration of some phenomenon, and that we have a "prior" probability distribution (after taking into account all other empirical information available) that is uniform for total duration *T* in the interval 0 ≤ *T* ≤ *T_{max}*, and is zero for *T* > *T_{max}*. Suppose you make an observation at time *T_{0}* and find that the phenomenon at that time has lasted for (*T_{0}* − 0) and is still ongoing. Let us assume, further, that there is nothing "special" about the time you choose to make the observation. That is, we assume that the case is not like using the delta *t* argument to forecast the prospects of a friend's marriage at his wedding. We have made quite a few assumptions here, but if the argument could be shown to work under these conditions it might still find considerable practical use. Some real-world cases at least approximate this ideal setting.

Even under these favorable conditions, however, the argument is inconclusive, because it neglects a potentially important observation selection effect. The probability of your observation occurring at a time when the phenomenon is taking place may be positively correlated with the duration of the phenomenon. We will discuss this in more detail in the next chapter, in the context of what we shall call the "no-outsider" requirement. For now, it suffices to note that if your observation is sampled from a time interval that is longer than the minimum guaranteed duration of the phenomenon - so that you could have made your observation before the phenomenon started or after it ended - then finding that the phenomenon is still in progress when you make your observation gives you some reason to think that the phenomenon probably lasts relatively long. The delta *t* argument fails to take account of this effect. The argument, hence, is flawed, unless we make the additional assumption (not made by Gott) that your observation point is sampled from a time interval that does not exceed the duration of the phenomenon. And this entails that in order to legitimately apply Gott's method, you must be convinced that your observation point's sampling interval co-varies with the duration of the phenomenon. That is to say, you must be convinced that *given* that the phenomenon lasts from *t_{a}* to *t_{b}*, *then* your observation point is sampled from the interval [*t_{a}*, *t_{b}*]; and that *given* that the phenomenon lasts from *t_{a'}* to *t_{b'}*, *then* your observation point is sampled from the interval [*t_{a'}*, *t_{b'}*]; and similarly for any other start- and end-points to which you assign a non-zero prior probability. This imposes a strong additional constraint on the situations where the delta *t* argument can be applied.^{3}

^{3 }I made these two points—that Gott’s argument fails to take into account the empirical prior and that it ignores the selection effect just described—in a paper of 1997 (Bostrom 1997). More recently, Carlton Caves has independently rediscovered these two objections and presented them elegantly in (Caves 2000). See also (Ledford, Marriott, et al. 2001; Olum 2002), and for a reply by Gott, see (Gott 2000).

The failures of Gott’s approach to take into account the empirical prior probabilities and to respect the no-outsider requirement constitute the more serious difficulties with the “Copernican Anthropic Principle” alluded to in chapter 3 and are part of the reason why we replaced that principle with SSA.

Leslie’s presentation of DA differs in several respects from Gott’s. Stylistically, Leslie’s writing is more informal and his arguments often take the form of analogies. But he is much more explicit than Gott about the philosophical underpinnings and he places the argument in a Bayesian framework. Leslie also devotes considerable attention to the empirical considerations that determine the priors, as well as to the ethical imperative of working to reduce the risk of human extinction.

Leslie presents DA through a loosely arranged series of thought experiments and analogies, and a large part of the argumentation consists in refuting various objections that could be advanced against his preferred way of reasoning. This makes it hard to do justice to Leslie’s version of DA in a brief summary, but a characteristic passage runs as follows:

One might at first expect the human race to survive, no doubt in evolutionary much modified form, for millions or even billions of years, perhaps just on Earth but, more plausibly, in huge colonies scattered through the galaxy and maybe even through many galaxies. Contemplating the entire history of the race—future as well as past history—I should in that case see myself as a very unusually early human. I might well be among the first 0.00001 per cent to live their lives. But what if the race is instead about to die out? I am then a fairly typical human. Recent population growth has been so rapid that, of all human lives lived so far, anything up to about 30 per cent . . . are lives which are being lived at this very moment. Now, whenever lacking evidence to the contrary one should prefer to think of one’s own position as fairly typical rather than highly untypical. To promote the reasonable aim of making it quite ordinary that I exist where I do in human history, let me therefore assume that the human race will rapidly die out. (Leslie 1990), pp. 65f.

Leslie emphasizes that DA does not show that doom *will* strike soon. It only argues for a probability shift. If we started out being extremely confident that humans will survive for a long time, we might still be fairly confident after having taken DA into account—though less confident than before. Also, it is possible for us to improve our prospects. Leslie hopes that, having been convinced that the risks are greater than we previously thought, we will become more willing to take steps to diminish them. This could perhaps be done by pushing for nuclear disarmament, setting up an early-warning system for meteors on collision course with Earth, being careful with future very-high-energy particle physics experiments (which might, conceivably, knock our cosmic region out of a metastable vacuum state and destroy the world), and developing workable strategies for dealing with the weapons potential of future nanotechnology (Drexler 1985, 1992; Freitas, Jr. 1999). So we should not take DA as a ground for despair but as a call for greater caution and concern about potential species-annihilating disasters.

A major advantage of Leslie’s version over Gott’s is that it stresses that the empirical priors must be taken into account. Bayes’ theorem tells us how to do that. Suppose we are entertaining two hypotheses about how many humans there will have been in total:

*H_{1}*: There will have been a total of 200 billion humans.

*H_{2}*: There will have been a total of 200 trillion humans.

For simplicity, let us assume that these are the only possibilities. The next step is to assign prior probabilities to these hypotheses on the basis of available empirical information (but ignoring, for the moment, information about your birth rank). For example, you might think that:

P(*H_{1}*) = 5%

P(*H_{2}*) = 95%

All that remains now is to factor in the information about your birth rank *R*, which is in the neighborhood of 60 billion for those of us who are alive at the beginning of the 21^{st} century. By SSA, the probability of having a given birth rank is 1/*N*, conditional on there being a total of *N* humans (and zero if the rank exceeds *N*); so P(*R*|*H_{1}*) = 1/(200 billion) and P(*R*|*H_{2}*) = 1/(200 trillion). Bayes’ theorem then gives

P(*H_{1}*|*R*) = P(*R*|*H_{1}*) P(*H_{1}*) / [P(*R*|*H_{1}*) P(*H_{1}*) + P(*R*|*H_{2}*) P(*H_{2}*)] (#)

In this example, the prior probability of Doom soon (*H_{1}*) of 5% is increased to about 98% when you take into account your birth rank.
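The update can be verified numerically; a minimal sketch (the function name and default totals are taken from the two hypotheses in the text):

```python
def posterior_doom(prior_h1, n1=200e9, n2=200e12):
    """Bayesian update on birth rank.  By SSA, the likelihood of any given
    birth rank compatible with both hypotheses is 1/N_i under hypothesis H_i,
    so the rank's exact value cancels out of the posterior."""
    like1, like2 = 1 / n1, 1 / n2
    num = like1 * prior_h1
    return num / (num + like2 * (1 - prior_h1))

print(posterior_doom(0.05))   # roughly 0.98
```

Starting from a 5% prior for *H_{1}*, the posterior comes out at about 98%, as stated above.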

This is how calculations are to be made on Leslie’s version of DA. The calculation is not the argument, however. Rather, the calculation is a derivation of a specific prediction from assumptions which DA seeks to justify. Let’s look in more detail at what these assumptions are and whether they can be supported.

## The premisses of DA, and the Old evidence problem

Leslie talks of the principle that, lacking evidence to the contrary, one should think of one’s position as “fairly typical rather than highly untypical”. SSA can be viewed as an explication of this rather vague idea. The crucial question now is whether SSA can be applied in the context of DA in the way the above calculation presupposes.

Let’s suppose for a moment that it can. What other assumptions does the argument use? Well, an assumption was made about the prior probabilities of *H_{1}* and *H_{2}*. This assumption is no doubt incorrect, since there are other hypotheses to which we want to assign non-zero probability. However, it is clear that choosing different values of the prior will not change the fact that hypotheses that postulate fewer observers will gain probability relative to hypotheses that postulate more observers.^{4} The absolute posterior probabilities depend on the precise empirical prior, but the fact that there is this probability shift does not. Further, (#) is merely a formulation of Bayes’ theorem. So once we have the empirical priors and the conditional probabilities, the prediction follows mathematically.
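The robustness of the shift's direction is easy to confirm by sweeping the prior through the same Bayes update (a sketch; the function and the particular prior values are illustrative):

```python
def posterior_doom(prior_h1, n1=200e9, n2=200e12):
    """Posterior for the fewer-observers hypothesis after conditioning on
    birth rank, with SSA likelihoods 1/n1 and 1/n2."""
    like1, like2 = 1 / n1, 1 / n2
    num = like1 * prior_h1
    return num / (num + like2 * (1 - prior_h1))

# For every non-trivial prior, probability shifts toward the hypothesis
# postulating fewer observers:
for prior in (0.001, 0.05, 0.5, 0.95):
    assert posterior_doom(prior) > prior
    print(prior, "->", round(posterior_doom(prior), 4))
```

Only the degenerate priors 0 and 1 are left unmoved, which is the point made in footnote 4.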

^{4 }Provided, of course, that the prior probabilities are non-trivial, i.e. not equal to zero for all but one hypothesis. But that is surely a very reasonable assumption. The probabilities in question are subjective probabilities, credences, and I for one am uncertain about how many humans there will have been in total; my prior is smeared out—non-zero—over a wide range of possibilities.

The premiss that bears the responsibility for the surprising conclusion is that SSA can be applied to justify these conditional probabilities. Can it?

Recall that we argued for Model 2 in version I of *Incubator* in chapter 4. If DA could be assimilated to this case, it would be justified to the extent that Model 2 is justified. The cases are in some ways similar, but there are also differences. The question is whether the differences are relevant. In this section, we shall examine whether the arguments that were made in favor of Model 2 can be adapted to support DA. We will find that there are significant disanalogies between the two cases. It might be possible to bridge these disanalogies, but until that is done the attempt to support the assumptions of DA by assimilating it to something like Model 2 remains inconclusive. This is not to say that the similarities between the two cases cannot be *persuasive* for some people. So this section is neither an attack on nor a defense of DA. (On the other hand, in chapter 9 we will find that the reasoning used in Model 2 leads to quite strongly counterintuitive results, and in chapter 10 we will develop a new way of thinking about cases like *Incubator* that need not lead to DA-like conclusions. Those results will suggest that even if we are persuaded that DA could be assimilated to Model 2, we may still not accept DA because we reject Model 2!)

One argument that was used to justify Model 2 for *Incubator *was that if you had at first been ignorant of the color of your beard, and you had assigned probabilities to all the hypotheses in this state of ignorance, and you then received information about your beard color and updated your beliefs using Bayesian conditionalization, then you would end up with the probability assignments that Model 2 prescribes. This line of reasoning does not presuppose that you actually were, at some point in time, ignorant of your beard color. Rather, considering what you would have thought if you had been once ignorant of your beard color is merely a way of clarifying your current conditional probabilities of being in a certain room given a certain outcome of the coin flip in *Incubator*.

I hasten to stress that I’m not suggesting a counterfactual analysis as a general account of conditional degrees of belief. I am not saying that P(*e*|*h*) should in general be identified with the credence you would have assigned to *e* had you not known whether *e* but known that *h*. A solution to the so-called Old evidence problem (see e.g. (Eells 1990; Howson 1991; Schlesinger 1991; Earman 1992; Achinstein 1993)) no doubt requires a much more complicated account than that. Nonetheless, thinking in terms of such counterfactuals can in *some* cases be a useful way of getting clearer about what your subjective probabilities are. Take the following case.

Two indistinguishable urns are placed in front of Mr. Simpson. He is credibly informed that one of them contains ten balls and the other a million balls, but he is ignorant as to which is which. He knows the balls in each urn are numbered consecutively 1, 2, 3, 4… and so on. Simpson flips a coin, which he is convinced is fair, and based on the outcome he selects one of the urns—as it happens, the left one. He picks a ball at random from this urn. It is ball number 7. Clearly, this is a strong indication that the left urn contains only ten balls. If originally the odds were fifty-fifty (which is reasonable given that the urn was selected randomly), a swift application of Bayes’ theorem gives the posterior probability: P(Left urn contains 10 balls | Sample ball is #7) = 99.999%.
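The 99.999% figure follows from a fifty-fifty prior and the likelihoods 1/10 and 1/1,000,000 of drawing any particular numbered ball. A minimal sketch of the calculation (the function name is mine):

```python
def posterior_ten_ball(sample_number):
    """Mr. Simpson's urns: prior 1/2 for each urn.  The likelihood of drawing
    a given numbered ball is 1/10 from the ten-ball urn (zero if the number
    exceeds 10) and 1/1,000,000 from the million-ball urn."""
    p_ten     = 0.5 * (1 / 10 if sample_number <= 10 else 0.0)
    p_million = 0.5 * (1 / 1_000_000)
    return p_ten / (p_ten + p_million)

print(posterior_ten_ball(7))   # roughly 0.99999
```

Note that a sample number above 10 would instead prove the urn to be the million-ball one, driving the posterior to zero.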

Simpson, however, never had much of a proclivity for cognitive exertions. When he picks ball number 7 and is asked to give his odds for that urn being the one with only ten balls, he says: “D’oh, fifty-fifty!”

Before Mr. Simpson stakes his wife’s car on these inclement odds, what can we say to him to help him come to his senses? When we start explaining about conditional probabilities, Simpson decides to stick to his guns rather than admit that his initial response is incorrect. He accepts Bayes’ theorem, and he accepts that the probability that the ten-ball urn would be selected by the coin toss was 50%. What he refuses to accept is that the conditional probability of selecting ball number 7 is one in ten (one in a million), given that the urn contains ten (one million) balls. Instead he thinks that there was a 50% probability of selecting ball number 7 on each hypothesis about the total number of balls in the urn. Or maybe he declares that he simply doesn’t have any such conditional credence.

One way to proceed from here is to ask Simpson, “What probability would you have assigned to the sample you have just drawn being number 7 if you hadn’t yet looked at it but you knew that it had been picked from the urn with ten balls?” Suppose Simpson says, “One in ten.” We may then appropriately ask, “So why then does not your conditional probability of picking number 7 given that the urn contains ten balls equal one in ten?”

There are at least two kinds of reasons that one could give to justify a divergence of one’s conditional probabilities from what one thinks one would have believed in a corresponding counterfactual situation. First, one may think that one would have been irrational in the counterfactual situation. What one thinks one would have believed in a counterfactual situation in which one was drugged into a state of irrationality is usually irrelevant for the purpose of determining one’s current conditional credences.^{5 }In the case of Simpson, this response is unavailable, because Simpson does not believe he would have been irrational in the counterfactual situation where he hadn’t yet observed the number on the selected ball; in fact (let’s suppose) Simpson thinks that in the counterfactual situation, he would have believed precisely that which it would have been rational for him to believe.

^{5 }One obvious exception is in evaluating hypotheses *about how one would behave if one were drugged, *etc.

A second reason for divergence is if the counterfactual situation (where one doesn’t know that *e*) doesn’t exactly “match” the conditional probability P(*h*|*e*) being assessed. The corresponding counterfactual situation might contain features—other than one’s not knowing that *e*—that would rationally influence one’s degree of belief in *h*. For instance, suppose we add the following feature to the example: Simpson has been credibly informed at the beginning of the experiment that *if *there is a gap in time (*“Delay”*) between the selection of the ball and his observing what number it is (so that he has the opportunity to be for a while in a state of ignorance as to the number of the selected ball), *then *the experiment has been rigged in such a way that he was bound to have selected either ball number 6 or 7. Then in the counterfactual situation where Simpson is ignorant of the number on the selected ball, *Delay *would be true; and Simpson would have known that. In the counterfactual situation he would therefore have had the additional information that the experiment was rigged (an event to which, we can assume, he assigned a low prior probability). Clearly, what he would have thought in that counterfactual situation does not determine the value that he should, in the actual case, assign to the conditional probability P(*h*|*e*), since in the actual case (where *Delay *is false) he does not have that extra piece of information. (What he thinks he would have thought in the counterfactual situation would rather be relevant to what value he should now give to the conditional probability P(*h*|*e*&*Delay*); but that is not what he needs to know in the present case.)

This second source of divergence suggests a more general limitation of the counterfactual-test of what your current conditional probabilities should be. In many cases, there is no clearly defined, unique situation that would have obtained if you had not known some data that you in fact know. There are many ways of not knowing something. Take “the counterfactual situation” where you don’t know whether there have ever been any clouds in the sky. Is that a situation where you have never been outdoors and don’t know whether there is a sky? Or is it a situation where you don’t know what condensation is? Or perhaps a situation where you are unsure about whether the fluffy things you see up there are really clouds rather than, say, large chunks of cotton candy? It seems clear that we have not specified the hypothetical state of “you not knowing whether clouds have ever existed in the sky” sufficiently to get an unambiguous answer to what else you would or would not believe if you were in that situation.

In *some* cases, however, the counterfactual situation *is* sufficiently specified. Take the original case with Mr. Simpson again (where there is no complication such as the selection potentially being rigged). Is there a counterfactual situation that we can point to as the counterfactual situation that Simpson would be in if he didn’t know the number on the selected ball? It seems there is. Suppose that in the actual course of the experiment there was a one-minute interval of ignorance between Simpson’s selecting a ball and his looking to see what number it was. Suppose that during this minute Simpson contemplated his probability assignments to the various hypotheses and reached a reflective equilibrium. Then one can plausibly maintain that, at the later stage when Simpson has looked at the ball and knows its number, what he *would have* rationally believed if he didn’t know its number is what he *did* in fact believe a moment earlier before he learned what the number was. Moreover, even if, in fact, there never was an interval of ignorance where Simpson didn’t know that *e*, it can still make sense to ask what he would have thought if there *had* been one. At least in this kind of example, there is a suitably definite counterfactual from which we can read off the conditional probability P(*h*|*e*) that Simpson was once implicitly committed to.

If this is right, then there are at least some cases where P(*h*|*e*) can be meaningfully assigned a non-trivial probability even if there never was any time when *e *was not known. The “Old evidence problem” retains its bite in the general case, but in some special cases it can be tamed. This is indeed what one should have expected, since otherwise the Bayesian method could never be applied except in cases where one had *in advance *contemplated and assigned probabilities to all relevant hypotheses and possible evidence. That would fly in the face of the fact that we are often able to plausibly model the evidential bearing of old evidence on new hypotheses within the Bayesian framework.

Returning now to the *Incubator *(version I) gedanken, recall that it was not assumed that there actually was a point in time when the people created in the rooms were ignorant about the color of their beards. They popped into existence, we could suppose, right in front of the mirror and gradually formed a system of beliefs as they reflected on their circumstances.^{6 }Nonetheless, we can use an argument involving a counterfactual situation where they were ignorant about their beard color to motivate a particular choice of conditional probability.

^{6 }That this is possible is not entirely uncontroversial. Some hold the view that knowledge requires that the knower and her epistemic faculties have a particular kind of causal origin. For the purposes of the present investigation, we can set such scruples aside.

Let’s look more closely at how this can be done. Let *I* be the set of all information that you have received up to the present time. *I* can be decomposed in various ways. For example, if *I* is logically equivalent to *I_{1}*&*I_{2}*, then *I* can be decomposed into *I_{1}* and *I_{2}*. You currently have some credence function that specifies your present degree of belief in various hypotheses (conditional or otherwise), and this credence is conditionalized on the background information *I*. Call this credence function *C_{I}*. But although this is the credence function you have, it may not be the credence function you ought to have. You may have failed to understand all the probabilistic connections among the facts that you have learnt. Let *C_{I}^{*}* be a rival credence function, conditionalized on the same information *I*. The task is now to determine whether on reflection you ought to switch to *C_{I}^{*}* or stick with *C_{I}*.
The relation to DA should be clear. *C _{I }*can be thought of as your credence function before you heard about DA; *C _{I}^{*}*, the credence function that the proponent of DA (the “doomsayer”) seeks to persuade you to adopt. Both these functions are based on the same background information *I*, which includes everything you have learnt up until now. What the doomsayer argues is not that she can give you some new piece of relevant information that you didn’t have before, but rather that she can point out a probabilistic implication of information you already have that you hitherto have failed to fully realize or take into account—in other words, that you have been in error in your assessment of the probabilistic bearing of your evidence on hypotheses about how long the human species will last. How can she go about that? Since, presumably, you haven’t made any explicit calculations to decide what credence to attach to these hypotheses, she cannot point to any mistakes that you’ve made in some mathematical derivation.

But here is one method she *can *use. She can specify some decomposition of your evidence into *I _{1 }*and *I _{2}*. She can then ask you what you think you ought to have rationally believed if all the information you had were *I _{1 }*(and you didn’t know *I _{2}*). (This thought operation involves reference to a counterfactual situation, and, as we saw above, whether such a procedure is legitimate depends on the particulars; sometimes it works, sometimes it doesn’t. Let’s assume for the moment that it works in the present case.) What she is asking for, thus, is what credence function *C _{I1 }*you think you ought to have had if your total information were *I _{1}*. In particular, *C _{I1 }*assigns values to certain conditional probabilities of the form *C _{I1}*(·|*I _{2}*). This means we can then use Bayes’ theorem to conditionalize on *I _{2 }*and update the credence function. If the result of this updating is *C _{I}^{*}*, then she will have shown that you are committed to jettisoning your present credence function *C _{I }*and replacing it with *C _{I}^{* }*(provided you choose to adhere to *C _{I1}*(·|*I _{2}*) even after realizing that this obligates you to change *C _{I}*). For *C _{I }*and *C _{I}^{* }*are based on the same information, and you have just acknowledged that you think that if you were ignorant of *I _{2 }*you should set your credence equal to *C _{I1}*, which results in *C _{I}^{* }*when conditionalized on *I _{2}*. One may summarize this, roughly, by saying that the order in which you choose to consider the evidence should not make any difference to the probability assignment you end up with.^{7}

^{7 }Subject to the obvious restriction that none of the hypotheses under consideration is about the order in which you consider the evidence. For instance, the probability you assign to the hypothesis “I considered evidence *e _{1 }*before I considered evidence *e _{2}*.” is not independent of the order in which you consider the evidence!
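The order-invariance claim in this summary can be checked with a toy calculation. All numbers below are invented for illustration, and the two hypothesis labels are placeholders; the point is only that conditionalizing a credence function on two pieces of evidence yields the same posterior whichever piece is taken first.

```python
# Toy check of order-invariance in Bayesian conditionalization.
# All numbers are invented; "h1"/"h2" stand for two rival hypotheses.
priors = {"h1": 0.5, "h2": 0.5}
likelihood_e1 = {"h1": 0.8, "h2": 0.3}  # P(e1 | h); e1, e2 assumed independent given h
likelihood_e2 = {"h1": 0.1, "h2": 0.6}  # P(e2 | h)

def conditionalize(credence, likelihood):
    # Bayes' theorem: P(h | e) = P(e | h) P(h) / sum over h' of P(e | h') P(h')
    unnormalized = {h: credence[h] * likelihood[h] for h in credence}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Take the evidence in either order; the final credences coincide.
e1_then_e2 = conditionalize(conditionalize(priors, likelihood_e1), likelihood_e2)
e2_then_e1 = conditionalize(conditionalize(priors, likelihood_e2), likelihood_e1)
```

Both routes end at the same credence function, which is the formal fact the doomsayer's method exploits.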
This method can be applied to the case of Mr. Simpson. *I _{1 }*is all the information he would have had up to the time when the ball was selected from the urn. *I _{2 }*is the information that this ball is number 7. If Simpson firmly maintains that what would have been rational for him to believe had he not known the number of the selected ball (i.e. if his information were *I _{1}*) is that the conditional probability of the selected ball being number 7 given that the selected urn contains ten balls (a million balls) is one in ten (one in a million), then we can show that his present credence function ought to assign a 99.999% credence to the hypothesis that the left urn, the urn from which the sample was taken, contains only ten balls.
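If Simpson grants those conditional probabilities, the 99.999% figure follows directly from Bayes’ theorem. A minimal sketch, assuming (as in the original setup) equal 50% priors for the two possible urn contents:

```python
# Mr. Simpson's urn: posterior probability that the urn holds ten balls,
# given that ball number 7 was drawn. Equal priors assumed, as in the setup.
prior_ten = 0.5
prior_million = 0.5
p_seven_given_ten = 1 / 10              # P(ball 7 | ten-ball urn)
p_seven_given_million = 1 / 1_000_000   # P(ball 7 | million-ball urn)

posterior_ten = (prior_ten * p_seven_given_ten) / (
    prior_ten * p_seven_given_ten + prior_million * p_seven_given_million
)
print(f"P(ten balls | ball 7 drawn) = {posterior_ten:.5%}")
```

The result agrees with the 99.999% credence stated in the text.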
In order for the doomsayer to use the same method to convince somebody who resists DA on the grounds that the conditional probabilities used in DA do not agree with his actual conditional probabilities, she’d have to define some counterfactual situation *S *such that the following holds:

(1) In *S *he does not know his birth rank.

(2) The probabilities assumed in DA are the probabilities he now thinks that it would be rational for him to have in *S*.

(3) His present information is logically equivalent to the information he would have in *S *conjoined with information about his birth rank (modulo information which he thinks is irrelevant to the case at hand).

The probabilities referred to in (2) are of two sorts. There are the “empirical” probabilities that DA uses—the ordinary kind of estimates of the risks of germ warfare, asteroid impact, abuse of military nanotechnology, etc. And then there are the conditional probabilities of having a particular birth rank given a particular hypothesis about the total number of humans that will have lived. The conditional probabilities presupposed by DA are the ones given by applying SSA to that situation. *S *should therefore ideally be a situation where he possesses all the evidence he actually has which is relevant to establishing the empirical prior probabilities, but where he lacks any indication as to what his birth rank is.

Can such a situation *S *be conceived? That is what is unclear. Spot the flaw in the following beguiling but unworkable argument:

*An erroneous argument*

What if we in actual fact don’t know our birth ranks, even approximately? What if we actually *are *in a situation *S *that is characterized by precisely the sort of partial ignorance that the argument urging a DA-like choice of conditional probabilities presupposes? “But,” you object, “didn’t you say that our birth ranks are about 60 billion? If I know that this is (approximately) the truth, how can I be ignorant about my birth rank?”

Well, what I said was that your birth rank *in the human species *is about 60 billion. Yet that does not imply that your birth rank *simpliciter *is anywhere near 60 billion. There could be other intelligent species in the universe, extraterrestrials who count as observers, and I presume you would not assert with any confidence that your birth rank *within this larger group *is about 60 billion. You presumably agree that you are highly uncertain about your relative temporal position in the set of all observers in the cosmos, if there are many alien civilizations out there.

Now, if you go back and re-examine the arguments that were given in chapters 4 and 5, you will find that they can be adapted to show that intelligent aliens should be included in the reference class to which SSA is applied, at least if they are not too different from human observers. Indeed, the arguments that were based on how SSA seems to be the most plausible way of deriving observational predictions from multiverse theories and of making sense of the objection against Boltzmann’s attempted explanation of the arrow of time *presuppose *such an inclusive reference class. And the arguments that were based on the thought experiments can easily be adapted to include extraterrestrials—draw antennas on some of the people in the illustrations, adjust the terminology accordingly, and these arguments go through as before.

We should consequently propose for Mr. Simpson’s consideration (who now plays the role of a skeptic about DA) the following hypothetical situation *S *(which might be a counterfactual situation or a situation that will actually occur in the future): Scientists report having obtained evidence strongly favoring the disjunction *H _{1 }*∨ *H _{2}*, where *H _{1 }*is the hypothesis that our species is the only intelligent life-form in the world, and *H _{2 }*is the hypothesis that our species is one out of a total of one million intelligent species throughout spacetime, each of which is pretty much like our own in terms of its nature and population size. Mr. Simpson knows what his birth rank would be given *H _{1}*, namely 60 billion; but he does not know, even approximately, what his birth rank would be given *H _{2}*.

By considering various sequences of additional incoming evidence favoring either *H _{1 }*or *H _{2}*, we can thus probe how Simpson does or does not take into account the information about his birth rank in evaluating hypotheses about how long the human species will last.

Suppose first that evidence comes in strongly favoring *H _{2}*. We then have a situation *S *satisfying the three criteria listed above. Mr. Simpson acknowledges that he is ignorant about his birth rank, and so he now thinks that in this situation it would be rational for him to apply SSA. This gives him the conditional probabilities required by DA. The empirical priors are, let us assume, not substantially affected by the information favoring *H _{2}*, so they are the same in *S *as they are in his actual situation.

Suppose, finally, that scientists a while later and contrary to expectation obtain new evidence that very strongly favors *H _{1}*. When Simpson learns about this, his evidence becomes equivalent to the information he has in the actual situation (where we assume that Simpson does not believe there are any extraterrestrials). All the input needed by the DA-calculation has now been supplied, and Bayes’ theorem yields a posterior probability (that is properly conditionalized on all available information, including the indexical information about Simpson’s birth rank). This posterior reflects the probability shift in favor of hypotheses of impending doom, which Simpson and other DA-skeptics had thought they could avoid.
It could seem as if this argument has successfully described a hypothetical situation *S *that satisfies criteria (1)–(3) and thus verifies DA. Not so. The weakness of the scenario is that although Simpson doesn’t know even approximately what his birth rank is in *S*, he still knows in *S *his *relative *rank within the human species: he knows that he is about the 60 billionth human.

Thus, the option remains for Simpson to maintain that when he applies SSA, he should assign probabilities that are invariant between various specifications of our species’ position among all the extraterrestrial species—since he is ignorant about that—but that the probabilities should not be uniform over various positions within the human species—since he is not ignorant about that. For example, if we suppose that the various species are temporally non-overlapping so that they exist one after another, then he might assign a probability close to one that his absolute birth rank is either about 60 billion, or about 120 billion, or about 180 billion, or... Suppose this is what he now thinks it would be rational for him to do in *S*. Then the DA-calculation does not get the conditional probabilities it needs in order to produce the intended conclusion, and DA fails. For after conditioning on the strong evidence for *H _{1}*, the conditional probability of having a birth rank of roughly 60 billion will be the same given any of the hypotheses about the total size of the human species that he might entertain.

It might be possible to find some other hypothetical situation *S *that would really satisfy the three constraints, and that could thereby serve to compel a person like Simpson to adopt the conditional probabilities that DA requires.^{8 }But unless and until such a situation is described (or some other argument is provided for why we should accept those probabilities), this is a loose end to which those may gladly cling whose intuitions do not drive them to adopt the requisite probabilities without argument.

^{8 }In order for *S *to do this, it would have to be the case that the subject decides to retain his initial views about *S *even after it is pointed out to him that those views commit him to accepting the DA-conclusion given he accepts Model 2 for *Incubator*. Some might elect to revise their views about a situation *S, *which *prima facie *satisfies the three conditions, rather than to change their minds about DA.

Leslie’s views on the reference class problem

Returning to the problem of the reference class (the class from which one should reason as if one were randomly selected), let’s consider what John Leslie has to say on the topic. As a first remark, Leslie suggests that “perhaps nothing too much hangs on it.” ((Leslie 1996), p. 257):

[DA] can give us an important warning even if we confine our attention to the human race’s chances of surviving for the next few centuries. All the signs are that these centuries would be heavily populated if the race met with no disaster, and they are centuries during which there would presumably be little chance of transferring human thought-processes to machines in a way which would encourage people to call the machines ‘human’. (Leslie 1996), p. 258

There are two problems with this reply. First, the premise that there is little chance of creating machines with human-level thought processes within the next few centuries is a claim that many of those who have thought seriously about these things disagree with. Many thinkers in this field believe that these developments will happen within the first half of the present century (e.g. (Drexler 1985; Moravec 1989, 1998, 1999; Minsky 1994; Bostrom 1998; Kurzweil 1999)). Second, the comment does nothing to allay the suspicion that the difficulty of determining an appropriate reference class might be symptomatic of an underlying ill in DA itself.

Leslie proceeds, however, to offer a positive proposal for how to settle the question of which reference class to choose. The first part of this proposal is best understood by expanding our urn analogy in which we previously made the acquaintance of Mr. Simpson. Suppose that the balls in the urns come in different colors (while still being numbered consecutively as before). Your task is to guess how many red balls there are in the left urn. Now, “red” is a vague concept; when does red become orange, brown, purple, or pink? This vagueness could be seen as corresponding to the vagueness about what to classify as an observer for the purposes of DA. So, if some vagueness like this is present in the urn example, does that mean that the Bayesian induction used in the original example can no longer be made to work?

By no means. The right response in this case is that you get to choose how you define the reference class. The choice depends on what hypothesis you are interested in testing. Suppose that you want to know how many balls there are in the urn of the color faint-pink-to-dark-purple. Then all you have to do is to classify the random sample you select as being either faint-pink-to-dark-purple or not faint-pink-to-dark-purple. Once the classification is made, the calculation proceeds as before. If instead you are interested in knowing how many faint-pink-to-medium-red balls there are, then you classify the sample according to whether it has *that *property, and proceed as before. The Bayesian apparatus is neutral as to how you define hypotheses. There is no right or wrong way, just different questions you might be interested in asking.

Applying this idea to DA, Leslie writes:

The moral could seem to be that one’s reference class might be made more or less what one liked for doomsday argument purposes. What if one wanted to count our much-modified descendants, perhaps with three arms or with godlike intelligence, as ‘genuinely human’? There would be nothing wrong with this. Yet if we were instead interested in the future only of two-armed humans, or of humans with intelligence much like that of humans today, then there would be nothing wrong in refusing to count any others. (Leslie 1996), p. 260

This passage seems to suggest that if we are interested in the survival-prospects of just a special kind of observers, we are entitled to apply DA to this subset of the reference class. Suppose you are a person with hemophilia and you want to know how many hemophiliacs there will have been. Solution: Count the number of hemophiliacs that have existed before you and use the DA-style calculation to update your prior probabilities (given by ordinary empirical considerations) to take account of the fact that this random sample from the set of all hemophiliacs—*you*—turned out to be living when just so many hemophiliacs had already been born.

How far can one push this mode of reasoning though, before crashing into absurdity? If the reference class is defined to consist of all those people who were born on the same day as you or later, then you should expect doom to strike quite soon. Worse still, let’s say you want to know how many people there will have been with the property of being born either on the day when you were born or after the year 2002. If humans continue to be sired after the year 2002, you will become “improbably early” in this “reference class” alarmingly soon. Should you therefore have to conclude that humankind is likely to go extinct in the first few months of 2003? Crazy!
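How strong the distortion is can be seen with invented numbers. Suppose you give only a 1% empirical prior to the hypothesis that the class born-on-your-day-or-later stops at ten million members rather than reaching ten billion; since you are (roughly) the first member of the class either way, SSA would still drive you toward near-certainty of the smaller total:

```python
# Illustrative numbers only: SSA applied to the gerrymandered reference
# class "people born on your day or later", in which your rank is ~1.
prior = {10_000_000: 0.01, 10_000_000_000: 0.99}  # prior over total class size N
weights = {n: prior[n] * (1 / n) for n in prior}   # SSA: P(rank 1 | N) = 1/N
total = sum(weights.values())
posterior = {n: w / total for n, w in weights.items()}
```

With these made-up figures, the 1% prior for the small class inflates to roughly 91%, which is exactly the absurd doom-within-months conclusion the text describes.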

How can the doomsayer avoid this conclusion? According to Leslie, by adjusting the prior probabilities in a suitable way, a trick that he says was suggested to him by Carter ((Leslie 1996), p. 262). Leslie thinks that defining the reference class as humans-born-as-late-as-you-or-later is fine and that ordinary inductive knowledge will make the priors so low that no absurd consequences will follow:

No inappropriately frightening doomsday argument will result from narrowing your reference class . . . provided you adjust your prior probabilities accordingly. Imagine that you’d been born knowing all about Bayesian calculations and about human history. The prior probability of the human race ending in the very week you were born ought presumably to have struck you as extremely tiny. And that’s quite enough to allow us to say the following: that although, if the human race had been going to last for another century, people born in the week in question would have been exceptionally early in the class of those-born-either-in-that-week-or-in-the-following-century, this would have been a poor reason for you to expect the race to end in that week, instead of lasting for another century. (Leslie 1996), p. 262

But alas, it is a vain hope that the prior will cancel out the distortions of a gerrymandered reference class. Suppose that you are convinced that the population of beings who know that Francis Crick and James Watson discovered the structure of DNA will go extinct no sooner and no later than the human species. You want to evaluate the hypothesis that this will occur before the year 2100. Based on ordinary empirical considerations, you assign, say, a 25% credence to this hypothesis. The doomsayer then presents you with DA. Now, should you use the reference class consisting of human beings, or the reference class consisting of human beings who know that Francis Crick and James Watson discovered the structure of DNA? You get a different posterior probability for the hypothesis depending on which of these reference classes you use. The problem is not that you have chosen the wrong prior probability, one giving “too frightening” a conclusion when used with the latter reference class. The problem is that for any prior probability, you get many different—incompatible—predictions depending on which reference class you use.
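The point can be made concrete with a toy calculation (all population figures below are invented): hold the 25% empirical prior fixed and run the DA update once with each reference class. Unless the two classes happen to grow by exactly the same factor under the no-doom hypothesis, the posteriors disagree.

```python
# Invented figures: the same 25% prior gives different DA posteriors
# depending on which reference class is used.
def da_posterior(prior_doom, total_if_doom, total_if_no_doom):
    # SSA makes P(your rank | total N) uniform, i.e. 1/N; apply Bayes' theorem.
    w_doom = prior_doom * (1 / total_if_doom)
    w_no_doom = (1 - prior_doom) * (1 / total_if_no_doom)
    return w_doom / (w_doom + w_no_doom)

# Reference class 1: all human beings (hypothetical totals;
# the class grows 100-fold if there is no doom before 2100).
p_all_humans = da_posterior(0.25, 2e11, 2e13)
# Reference class 2: humans who know of Crick and Watson's discovery
# (hypothetical totals; this class grows 1000-fold if there is no doom).
p_knowers = da_posterior(0.25, 1e10, 1e13)
```

The two reference classes turn the same 25% prior into noticeably different posteriors (roughly 97% versus 99.7% with these made-up numbers), which is the incompatibility the text describes.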

Of course, it is trivially true that given any non-trivial reference class one can always pick some numbers such that when one plugs them into Bayes’ formula together with the conditional probabilities based on that chosen reference class, one gets any posterior probability function one pleases. But these numbers one plugs in will not in general be one’s prior probabilities. They’ll just be arbitrary numbers of no significance or relevance.

The example in which a hemophiliac applies DA to predict how many hemophiliacs there will have been may at first sight appear to work quite well and to be no more implausible than applying DA to predict the total number of observers. Yet it would be a mistake to take this as evidence that the reference class varies depending on what one is trying to predict. If the hemophiliac example has an air of plausibility, it is only because one tacitly assumes that the hemophiliac population constitutes a roughly constant fraction of the human population. Suppose one thinks otherwise. Genetic treatments for hemophilia being currently in clinical trial, one may speculate that one day a germ-line therapy will be used to eliminate the hemophiliac type from the human gene pool, long before the human species goes extinct. Does a hemophiliac reading these lines have especially strong reason for thinking that the speculation will come true, on grounds that it would make her position within the class of all hemophiliacs that will ever have lived more probable than the alternative hypothesis, that hemophilia will always be a part of the human condition? It would seem not.

So the idea that it doesn’t matter how we define the reference class because we can compensate by adjusting the priors is misconceived. We saw in chapter 4 that your reference class must not be too wide. It can’t include rocks, for example. Now we have seen that it must not be too narrow either, such as by excluding everybody born before yourself. We also know that a given person at a given time cannot have multiple reference classes for the same application of DA-reasoning, on pain of incoherence. Between these constraints there is still ample space for divergent definitions, which further study may or may not restrict. (We shall suggest in chapter 10 that there is an ineludible subjective component in a thinker’s choice of reference class, and moreover that the same thinker can legitimately use different reference classes at different times.)

It should be pointed out that *even if *DA were basically correct, there would still be room for other interpretations of the result than that humankind is likely to go extinct soon. For example, one may think that:

- The priors are so low that even after a substantial probability shift in favor of earlier doom, we remain likely to survive for quite a while.
- The size of the human population will decrease in the future; this reconciles DA with even extremely long survival of the human species.
- Humans evolve (or we reengineer ourselves using advanced technology) into “posthumans”, who belong in a different reference class than humans. All that DA would show in this case is that the posthuman transition is likely to happen before there have been vastly more humans than have lived to date.
- There will be infinitely many humans, in which case it is unclear what DA amounts to. In some sense, each observer would be “infinitely early” if there are infinitely many.^{9}

^{9 }Further, John Leslie thinks that DA is seriously weakened if the world is indeterministic. I don’t accept that that would be the case.

A better way of expressing what DA aims to show is therefore as a disjunction of possibilities rather than as the simple statement “Doom will probably strike soon.” Of course, even this more ambiguous prediction would be a remarkable result from both a practical and a philosophical perspective.

Bearing in mind that we understand by DA the general form of reasoning described in this chapter, one that is not necessarily wedded to the prediction that doomsday is impending, let us consider some objections from the recent literature.