Chapter 9

Paradoxes of the Self-Sampling Assumption1

1 An early ancestor of this chapter was presented at a conference organized by the London School of Advanced Study on the Doomsday argument (London, Nov. 6, 1998). I’m grateful for comments from the participants there, and for referee comments on a more recent ancestor published in Synthese (Bostrom 2001), parts of which are used here with permission.

The function of this chapter is that of a wrecking ball. In order to prepare the site for the construction work that we will do in the next two chapters, we must level those current structures that aren’t robust enough to build on.

Less metaphorically, we shall present several thought experiments that tease out some counterintuitive consequences of adopting SSA with the universal reference class (the reference class containing all intelligent observers that will have existed). The existence of these consequences is a reason for moving to the more general theory of observation selection effects that we will develop in chapter 10. That theory will permit the reference class to be relativized in a way that makes it possible to avoid the paradoxical consequences we pursue in this chapter.

Among the prima facie consequences of applying SSA with the universal reference class is that we have reason to believe in paranormal causation (such as psychokinesis) and that SSA recommends actions that seem radically foolish. A careful analysis, however, reveals that most of these prima facie consequences are merely apparent. We show how SSA manages to extricate itself from all of the worst incriminations (we apply a wrecking ball to the wrecking ball).

A subset of counterintuitive consequences remains after the dust has settled. I view them as sufficiently repugnant to motivate going beyond SSA. However, should somebody be willing to accept those implications that remain after we have explained away that which can be explained away, then I don’t have any further argument that would compel her to give up SSA. Yet the theory we develop in the next chapter should still be acceptable to her, for she could then hold that all the cases are the “special” kind of cases in which SSA applies, so that the more general theory is sound (albeit superfluously general and containing an otiose degree of freedom). For the rest of us, who don’t accept the consequences of SSA that remain at the end of this chapter, the added analytic power of the more general theory is necessary for giving a completely satisfactory account of observation selection effects.

The Adam & Eve experiments

The three Adam & Eve thought experiments that follow are variations on the same theme; they put different problematic aspects of SSA into focus.

First experiment: Serpent’s Advice

Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery.2 One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’ theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”

2 We assume that Eve and Adam and whatever descendants they have are the only inhabitants of this world. If we assume, as the Biblical language suggests, that they were placed in this situation and given the knowledge they have by God, we should therefore also assume that God doesn’t count as an “observer”. Note that for the reasoning to work, Adam and Eve must be extremely confident that if they have a child they will in fact spawn a huge species. One could modify the story so as to weaken this requirement, but empirical plausibility is not an objective in this gedanken.

Given SSA and the stated assumptions, it is easy to see that the serpent’s argument is sound. We have P(R ≤ 2 | N = 2) = 1 and, using SSA, P(R ≤ 2 | N > 2×10^9) < 10^-9 (where “R” stands for “my birth rank” and “N” for “the total number of observers in my reference class”). We can assume that the prior probability of getting pregnant (based on ordinary empirical considerations) after congress is very roughly one half, P(N = 2) ≈ P(N > 2×10^9) ≈ .5. Thus we have

P(N > 2×10^9 | R ≤ 2) = P(R ≤ 2 | N > 2×10^9) P(N > 2×10^9) / [P(R ≤ 2 | N > 2×10^9) P(N > 2×10^9) + P(R ≤ 2 | N = 2) P(N = 2)] < (10^-9 × .5) / (10^-9 × .5 + 1 × .5) < 10^-9

Eve has to conclude that the risk of her getting pregnant is negligible.
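For concreteness, here is a minimal sketch of the serpent’s update in Python (the fifty-fifty prior and the two-billion figure are the gedanken’s stipulations, rounded for the computation):

```python
# The serpent's Bayesian update under SSA (a sketch). The 50/50 prior and
# the ~2 billion post-expulsion population are stipulations of the
# thought experiment, not empirical estimates.

prior_child = 0.5            # P(N > 2e9): Eve conceives, billions follow
prior_no_child = 0.5         # P(N = 2): Eve and Adam remain the only humans

# SSA likelihoods of the indexical evidence "my birth rank is among
# the first two" (R <= 2) under each hypothesis:
p_rank_given_child = 2 / 2e9     # two favorable ranks out of ~2e9 observers
p_rank_given_no_child = 1.0      # certain if only two observers ever exist

posterior_child = (p_rank_given_child * prior_child) / (
    p_rank_given_child * prior_child + p_rank_given_no_child * prior_no_child
)
print(posterior_child)   # ~1e-9: the post-update "risk" of pregnancy
```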

This result is counterintuitive. Most people’s intuition, at least at first glance, is that it would be irrational for Eve to think that the risk is that low. It seems foolish of her to act as if she were extremely unlikely to get pregnant—it seems to conflict with empirical data. And we can assume she is fully aware of these data, at least to the extent to which they are about past events. We can assume that she has access to a huge pool of statistics, maybe based on some population of lobotomized human drones (lobotomized so that they don’t belong to the reference class, the class from which Eve should consider herself a random sample). Yet all this knowledge, combined with everything there is to know about the human reproductive system, would not change the fact that it would be irrational for Eve to believe that the risk of her getting pregnant is anything other than effectively nil. This is a strange result, but it follows from SSA.3

3 John Leslie does not accept this result and thinks that Eve should not regard the risk of pregnancy as negligible in these circumstances, on the grounds that the world is indeterministic and the SSA-based reasoning runs smoothly only if the world is deterministic or at least the relevant parts of the future are already “as good as determined” (personal communication; compare also (Leslie 1996), pp. 255–6, where he discusses a somewhat similar example). I disagree with his view that the question about determinism is relevant to the applicability of SSA. But in any case, we can legitimately evaluate the plausibility of SSA by considering what it would entail if we knew that the world were deterministic.

The next example effects another turn of the screw, deriving a consequence that has an even greater degree of initial counterintuitiveness:

Second experiment: Lazy Adam

Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer—an easy target for his spear—will soon stroll by.

One can verify this result the same way as above, choosing appropriate values for the prior probabilities. The prior probability of a wounded deer limping by their cave that morning is one in ten thousand, say.

In the first experiment we had an example of what looked like anomalous precognition. Here we also have (more clearly than in the previous case) the appearance of psychokinesis. If the example works, which it does if we assume SSA, it almost seems as if Adam is causing a wounded deer to walk by. For how else could one explain the coincidence? Adam knows that he can repeat the procedure morning after morning and that he should expect a deer to appear each time. Some mornings he may not form the relevant intention and on those mornings no deer turns up. It seems too good to be mere chance; Adam is tempted to think he has magical powers.

Third experiment: Eve’s Card Trick

One morning, Adam shuffles a deck of cards. Later that morning, Eve, having had no contact with the cards, decides to use her willpower to retroactively choose which card lies on top. She decides that it shall have been the queen of spades. In order to ordain this outcome, Eve and Adam form the firm intention to have a child unless the queen of spades is on top. They can then be virtually certain that when they look at the first card, they will indeed find the queen of spades.

Here it looks as if the couple is in one and the same act performing both psychokinesis and backward causation. No mean feat before breakfast.

These three thought experiments seem to show that SSA has bizarre consequences: strange coincidences, precognition, psychokinesis, and backward causation in situations where we would not expect such phenomena. If these consequences are genuine, they must surely count heavily against the unrestricted version of SSA, with ramifications for DA and other forms of anthropic reasoning that rely on that principle.

However, we shall now see that such an interpretation misreads the experiments. The truth is more intricate. A careful look at the situation reveals that SSA, in subtle ways, wiggles its way out of the worst of the imputed implications.

Analysis of Lazy Adam: predictions and counterfactuals

This section discusses the second experiment, Lazy Adam. The first and the third experiments could be analyzed along similar lines.

Adam can repeat the Lazy Adam experiment many mornings. We note that if he intends to repeat the experiment, the number of offspring that he would have to intend to create increases. If the prior probability of a deer appearing is one in ten thousand and the trials are independent, then if he wants to do the experiment twice, he would have to intend to create at least on the order of a hundred million offspring. If he wants to repeat it ten times, he would have to intend to create 10^40 offspring to get the odds to work out in his favor.
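The scaling is just the reciprocal of the prior probability of an unbroken run of coincidences, as the following order-of-magnitude sketch illustrates (the per-morning prior is the one stipulated above):

```python
# Order-of-magnitude arithmetic for repeated Lazy Adam trials (a sketch).
# Assumes independent mornings, each with prior 1e-4 of a deer appearing;
# the population Adam must threaten to create is roughly the reciprocal
# of the prior probability of an unbroken run of coincidences.

prior_deer = 1e-4

for repetitions in (1, 2, 10):
    p_unbroken_run = prior_deer ** repetitions
    required_offspring = 1 / p_unbroken_run   # order of magnitude only
    print(repetitions, f"{required_offspring:.0e}")
# 1 -> 1e+04, 2 -> 1e+08, 10 -> 1e+40
```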

The experiment seems prima facie to show that, given SSA, there will be a series of remarkable coincidences between Adam’s procreational intentions and appearances of wounded deer. It was suggested that such a series of coincidences could be a ground for attributing paranormal causal powers to Adam.

The inference from a long series of coincidences to an underlying causal link can be disputed. Whether such an inference is legitimate would depend on how long the series of coincidences is, what the circumstances are, and also what theory of causation one should hold. If the series were sufficiently long and the coincidences sufficiently remarkable, intuitive pressure would mount to give the phenomenon a causal interpretation. One can fix the thought experiment so that these conditions are satisfied. For the sake of argument, we may assume the worst case for SSA, namely that if the series of coincidences occurs then Adam has anomalous causal powers. I shall argue that even if we accept SSA, we can still think that neither strange coincidences nor anomalous causal powers would have existed if the experiment had been carried out.

We need to be careful when stating what is implied by the argument given in the thought experiment. All that was shown is that Adam would have reason to believe that his forming the intentions will have the desired outcome. The argument can be extended to show that Adam would have reason to believe that the procedure can be repeated: provided he keeps forming the right intentions, he should think that morning after morning, a wounded deer will turn up. If he doesn’t form the intention on some mornings, then on those mornings he should expect deer not to turn up. Adam thus has reason to think that deer turn up on those and only those mornings for which he formed the relevant intention. In other words, Adam has reason to believe there will be a coincidence. However, we cannot jump from this to the conclusion that there will actually be a coincidence. Adam could be mistaken. And he could be mistaken even though he is (as the argument in Lazy Adam showed, assuming SSA) perfectly rational.

Imagine for a moment that you are looking at the situation from an external point of view. That is, suppose (per impossibile?) that you are an intelligent observer who is not a member of the reference class. Suppose you know the same non-indexical facts as Adam; that is, you know the same things as he does except such things as that “I am Adam” or “I am among the first two humans”, etc. Then the probability you should assign to the proposition that a deer will limp by Adam’s cave one specific morning, conditional on Adam having formed the relevant intention earlier that morning, is the same as what we called Adam’s prior probability of deer walking by—one in ten thousand. As an external observer, you would have no reason to believe that there would be a coincidence.4

4 The reason why there is a discrepancy between what Adam should believe and what the external observer should believe is of course that they have different information. If they had the same information, they could agree; cf. chapter 8.

Adam and the external observer, both being rational but having different information, make different predictions. At least one of them must be mistaken (although both may be “right” in the sense of doing the best they can with the evidence available to them). In order to determine who was in fact mistaken, we should have to decide whether there would be a coincidence or not. Nothing said so far settles this question. There are possible worlds where a deer does turn up on precisely those mornings when Adam forms the intention, and there are other possible worlds where there is no such coincidence. The description of the thought experiment does not specify which of these two kinds of possible worlds we are referring to.
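The two updates can be put side by side in a small sketch (the one-in-ten-thousand prior is from the text; the two-billion population is an assumed stand-in for “billions of descendants”):

```python
# Adam's credence vs. the external observer's (a sketch). Both share the
# non-indexical facts; only Adam conditions on the indexical datum
# "I am among the first two observers".

prior_deer = 1e-4       # prior probability that a wounded deer limps by
n_if_deer = 2           # observers ever, if the deer shows up (no child)
n_if_no_deer = 2e9      # observers ever, if Adam carries out his threat

# External observer: no indexical evidence, so the prior stands.
p_deer_external = prior_deer

# Adam: weight each hypothesis by the SSA likelihood of rank <= 2.
w_deer = prior_deer * (2 / n_if_deer)              # = prior_deer * 1
w_no_deer = (1 - prior_deer) * (2 / n_if_no_deer)  # heavy SSA penalty

p_deer_adam = w_deer / (w_deer + w_no_deer)

print(p_deer_external)   # 1e-4
print(p_deer_adam)       # ~0.99999: Adam rationally expects the deer
```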

So far so good, but we want to be able to say something stronger. Let’s pretend that there actually once existed these two first people, Eve and Adam, and that they had the reproductive capacities described in the experiment. We would want to say that if the experiment had actually been done (i.e. if Adam had formed the relevant intentions on certain mornings) then almost certainly he would have found no coincidence. Almost certainly, no wounded deer would have turned up. That much seems common sense. If SSA forced us to relinquish that conviction, it would count quite strongly as a reason for rejecting SSA.

We therefore have to evaluate a counterfactual: If Adam had formed the relevant intentions, would there have been a coincidence? To answer this, we need a theory of conditionals. I will use a simplified version of David Lewis’ theory5 but I think what I will say generalizes to other accounts of conditionals. Let w denote the actual world. (We are pretending that Adam and Eve actually existed and that they had the appropriate reproductive abilities etc.) To determine what would have happened had Adam formed the relevant intentions, we look at the closest6 possible world w’ where he did do the experiment. Let t be the time when Adam would have formed the intentions. When comparing worlds for closeness to w, we are to disregard features of them that exclusively concern what happens after t. Thus we seek a world in which Adam forms the intentions and which is maximally similar to w in two respects: first, in its history up to t; and, second, in its laws. Is the closest such world w’, where Adam forms the intentions, one in which deer turn up accordingly, or is it one that lacks an Adam-deer correlation?

5 The parts of Lewis’ theory that are relevant to the discussion here can be found in chapters 19 and 21 of (Lewis 1986).


6 I’m simplifying in some ways, for instance by disregarding certain features of Lewis’ analysis designed to deal with cases where there is no closest possible world, but perhaps an infinite sequence of possible worlds, each closer to the actual world than the preceding ones in the sequence. This and other complications are not relevant to the present discussion.

The answer is quite clearly that there is no Adam-deer correlation in w’. For such a w’ can be more similar to w on both accounts than can any world containing the correlation. Regarding the first account, whether there is a coincidence or not in a world presumably makes little difference as to how similar it can be to w with respect to its history up to t. But what difference it makes is in favor of no coincidence. This is so because in the absence of a correlation, the positions and states of the deer in the neighborhood at or shortly before t could be exactly as in w (where none happened to stroll past Adam’s cave on the mornings when he did the experiment). The presence of a correlation, on the other hand, would entail a world that is somewhat different from w with regard to the initial states of the deer.

Perhaps more decisively, a world with no Adam-deer correlation would tend to win out on the second account as well. w doesn’t (as far as we know) contain any instances of anomalous causation. The laws of w do not support anomalous causation. The laws of any world containing an Adam-deer correlation, at least if the correlation were of the sort that would prompt us to ascribe it to an underlying causal connection, include laws supporting anomalous causation. By contrast, the laws of a world lacking the Adam-deer correlation could easily be exactly like the laws in w. Similarity of laws would therefore also favor a w’ that lacks the correlation.

Since there is no correlation in w’, the following statement is true: “If Adam had formed the intentions, he would have found no correlation”. Although Adam would have had reason to think that there would be a coincidence, he would have found that he was mistaken.

One might wonder: if we know all this, why can’t Adam reason in the same way? Couldn’t he too figure out that there will be no coincidence?

He couldn’t, and the reason is that he is lacking some knowledge you and I have. Adam has no knowledge of the future that will show that his innovative hunting technique will fail, whereas we can infer its failure from the fact that many people were born after Adam (ourselves included). If he does his experiment and deer do turn up on precisely those mornings he forms the intention, then it could (especially if the experiment were successfully repeated many times) be the case that the effect should be ascribed to a genuine psychokinetic capacity. If he does the experiment and no deer turns up, then of course he has no such capacity. But he has no means of knowing that no deer turns up. The evidence available to him strongly favors the hypothesis that there will be a coincidence. So although Adam may understand the line of reasoning that we have been pursuing here, it will not lead him to the conclusion we arrived at, because he lacks a crucial premiss.

There is a puzzling point here that needs to be addressed. Adam knows that if he forms the intentions then he will very likely witness a coincidence. But he also knows that if he doesn’t form the intentions then he will live in a world like w, where it is true that had he done the experiment he would most likely not have witnessed a coincidence. That looks paradoxical. Adam’s forming (or not forming) the conditional procreational intentions gives him relevant information. Yet, the only information he gets is about what choice he made. If that information makes a difference as to whether he should expect to see a coincidence, isn’t that just to say that his choice affects whether there will be a coincidence or not? If so, it would seem he has paranormal powers after all.

A more careful analysis reveals that this conclusion doesn’t follow. True, the information Adam gets when he forms the intentions is about what choice he made. This information has a bearing on whether to expect a coincidence or not, but that doesn’t mean that the choice is a cause of the coincidence. It is simply an indication of a coincidence. Some things are good indicators of other things without causing them. Take the stock example: the barometer’s falling may be a good indicator of impending rain, but it is certainly not a cause of the rain. Similarly, there is no need to think of Adam’s decision to procreate if and only if no deer walks by as a cause of that event, although it will lead Adam to rationally believe that that event will happen.

One may still perceive a lingering whiff of mystery. Maybe we can put it into words as follows. Let E be the proposition that Adam forms the reproductive intention at time t = 1. Let C stand for the proposition that there is a coincidence at time t = 2 (i.e. that a deer turns up). It would seem that the above discussion commits one to the view that at t = 0 Adam knows (probabilistically) the following:

(1) If E then C.

(2) If ¬E then ¬C.

(3) If ¬E then “if E then it would have been the case that ¬C”.

And there seems to be a conflict between (1) and (3).

I suggest that the appearance of a conflict is due to an equivocation in (3). To shed some light on this, we can paraphrase (1) and (2) as:

(1’) P_Adam(C|E) ≈ 1

(2’) P_Adam(¬C|¬E) ≈ 1

But we cannot paraphrase (3) as:

(3’) P_Adam(¬C|E) ≈ 1

When we said earlier, “If Adam had formed the intentions, he would have found no correlation”, we were asserting this on the basis of information that is available to us but not to Adam. Our background knowledge differs from Adam’s with respect to both non-indexical facts (we have observed the absence of any subsequent correlation between persons’ intentions and the behavior of deer) and indexical facts (we know that we are not among the first two people). Therefore, if (3) is to have any support in the preceding discussion, it must be explicated as:

(3’’) P_We(¬C|E) ≈ 1

This is not in conflict with (1’). We also asserted that Adam could know this. This gives:

(4) P_Adam(“P_We(¬C|E) ≈ 1”) ≈ 1

At first sight, it might seem as if there is a conflict between (4) and (1). However, appearances in this instance are deceptive.

Let’s first see why it could appear as if there is a conflict. It has to do with the relationship between P_Adam and P_We. We have assumed that P_Adam is a rational probability assignment (in the sense: not just coherent but “reasonable, plausible, intelligent” as well) relative to the background knowledge that Adam has at t = 0. And P_We is a rational probability assignment relative to the background knowledge that we have, say at t = 3. (And of course, we pretend that we know that there actually was this fellow, Adam, at t = 0 and that he had the appropriate reproductive abilities etc.) But now, if we know everything Adam knew, and if in addition we have some extra knowledge, and if Adam knows that, then it is irrational of him to persist in believing what he believes. Instead he ought to adopt our beliefs, which he knows are based on more information. At least this follows if we assume, as we may in this context, that our a priori probability function is identical to Adam’s, and that we haven’t made any computational error, and that Adam knows all this. That would then imply (3’) after all, which contradicts (1’).

The fallacy in this argument is that it assumes that Adam knows that we know everything he knows. Adam doesn’t know that, because he doesn’t know that we exist. He may well know that if we exist then we will know everything (at least every objective—non-indexical—piece of information) that he knows and then some. But as far as he is concerned, we are just hypothetical beings.7 So all that Adam knows is that there is some probability function, the one we designated ‘P_We’, that gives a high conditional probability of ¬C given E. That gets him nowhere. There are infinitely many probability functions. Not knowing that we will actually exist, he has no more reason to tune his own credence to our probability function than to any other.

7 If he did know that we exist, then it would definitely not be the case that he should give a high conditional probability to C given E! Quite the opposite: he would have to set that conditional probability equal to zero. This is easy to see. For by the definition of the thought experiment, we are here only if Adam has a child. Also by stipulation, Adam has a child only if either he doesn’t form the intention or he does and no deer turns up. It follows that if he forms the intention and we are here, then no deer turns up. So in this case, his beliefs would coincide with ours; we too know that if he formed the intentions then no deer turned up.

To summarize, what we have shown so far is the following: Granting SSA, we should think that if Adam and Eve had carried out the experiment, there would almost certainly not have been any strange coincidences. There is consequently no reason to ascribe anomalous causal powers to Adam. Eve and Adam would rationally think otherwise but they would simply be mistaken. Although they can recognize the line of reasoning we have been pursuing, they won’t be moved by its conclusion, because it hinges on a premiss that we, but not they, know is true. Good news for SSA.

One more point needs to be addressed in relation to Lazy Adam. We have seen that what the thought experiments demonstrate is not strange coincidences or anomalous causation but simply that Adam and Eve would be misled. Now, there might be a temptation to see this by itself as a ground for rejecting SSA—if a principle misleads people it is unreliable and should not be adopted. This temptation is to be resisted. There is a good answer available to the SSA-proponent, as follows: It is in the nature of probabilistic reasoning that some people using it, if they are in unusual circumstances, will be misled. Eve and Adam were in highly unusual circumstances—they were the first two humans—so we shouldn’t be too impressed by the fact that the reasoning based on SSA didn’t work for them. For a fair assessment of the reliability of SSA, we have to look at how it performs not only in exceptional cases but in more normal cases as well.

Compare the situation to the Dungeon gedanken. There, remember, one hundred people were placed in different cells and were asked to guess the color of the outside of their own cell. Ninety cells were blue and ten red. SSA recommended that a prisoner think that with 90% probability he is in a blue cell. If all prisoners bet accordingly, 90% of them will win their bets. The unfortunate 10% who happen to be in red cells lose their bets, but it would be unfair to blame SSA for that. They were simply unlucky. Overall, SSA leads 90% to win, compared to merely 50% if SSA is rejected and people bet at random. This consideration works in favor of SSA.
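The frequency claim is easy to check with a toy simulation of the Dungeon setup:

```python
# Toy check of the betting frequencies in the Dungeon gedanken: prisoners
# who follow SSA and bet "blue" win 90% of the time; random betting wins
# about 50% of the time.
import random

cells = ["blue"] * 90 + ["red"] * 10

ssa_wins = sum(1 for cell in cells if cell == "blue")
random_wins = sum(1 for cell in cells
                  if random.choice(["blue", "red"]) == cell)

print(ssa_wins)      # 90 of 100 prisoners win following SSA
print(random_wins)   # ~50 of 100 on average with random bets
```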

What about the “overall effect” of everybody adopting SSA in the three experiments pondered above? Here the situation is more complicated because Adam and Eve have much more information than the people in the dungeon cells. Another complication is that these are stories where there are two competing hypotheses about the total number of observers. In both of these respects, the thought experiments are similar to the Doomsday argument and presumably no easier to settle. But here we are trying to find out whether there are some other problematic consequences of SSA that are not salient in DA—such as strange coincidences and anomalous causation.

The UN++ gedanken: reasons and abilities

We shall now discuss a thought experiment that is similar to Adam & Eve, except that we might one day actually be able to carry it out.

UN++

It is the year 2100 A.D. Technological advances have enabled the formation of an all-powerful and extremely stable world government, UN++.

Any decision about human action taken by the UN++ will certainly be implemented. Bad news flash: signs have been detected that a series of n violent gamma ray bursts is about to take place at uncomfortably close quarters, threatening to damage (but not completely destroy) human settlements. For each hypothetical gamma ray burst in this series, astronomical observations give a 90% chance of it coming about. UN++ rises to the occasion and passes the following resolution: It will create a list of hypothetical gamma ray bursts, and for each entry on this list it decides that if the burst happens, it will build more space colonies so as to increase the total number of humans that will ever have lived by a factor of m. By arguments analogous to those in the earlier thought experiments, UN++ can then be confident that the gamma ray bursts will not happen, provided m is sufficiently great compared to n.
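To get a rough feel for how large m must be relative to n, one can run the same kind of SSA update as in the earlier experiments. The following toy model is my own simplification (it assumes per-burst independence and an m-fold population increase per occurring burst); the particular numbers are merely illustrative:

```python
# Rough model of the UN++ plan (a sketch; the model and numbers are
# simplifications, not from the text). Each of n potential bursts has
# prior 0.9 of occurring; each occurring burst triggers an m-fold
# population increase, which under SSA penalizes that hypothesis by a
# factor of m.

def p_all_fizzle(n: int, m: float) -> float:
    odds_burst = 9.0 / m        # prior odds 9:1 for "burst", divided by m
    return (1.0 / (1.0 + odds_burst)) ** n

print(p_all_fizzle(n=10, m=1e6))   # ~0.9999: near-certain that all fizzle
print(p_all_fizzle(n=10, m=10))    # ~0.002: m too small relative to n
```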

The UN++ experiment introduces a new difficulty. For although creating UN++ and persuading it to adopt the plan would no doubt be a daunting undertaking, it is the sort of project that we could quite conceivably carry out by non-magical means. The UN++ experiment places us in more or less the same situation that Adam and Eve occupied in the other three experiments. This twist compels us to carry the investigation one step further.

Let us suppose that if there is a long series of coincidences (“C”) between items on the UN++ target list and failed gamma ray bursts, then there is anomalous causation (“AC”). This supposition is more problematic than was the corresponding assumption in our discussion of Adam & Eve. For the point of the UN++ experiment is that it has some claim to practical possibility, and it is not clear that this supposition could be satisfied in the real world. It depends on the details and on the nature of causation, but it could well be that the list of coincidences would have to be quite long before one would be inclined to regard it as a manifestation of an underlying causal link. And since the number of people that UN++ would have to create in case of failure increases rapidly as the list grows longer, it is not clear that such a plan is feasible. But let’s shove this scruple to one side in order to give the objector to SSA as good a shot as he can hope to have.

A first point is that even if we accept SSA, it doesn’t follow that we have reason to believe that C will happen. For we might think that it is unlikely both that UN++ will ever be formed and that, if formed, it will adopt and carry out the relevant sort of plan. Without UN++ being set up to execute the plan, there is of course no reason to expect C (and consequently no reason to believe that there will be AC).

But there is a more subtle way of attempting to turn this experiment into an objection against SSA. One could argue that we know that we now have the causal powers to create UN++ and make it adopt the plan; and we have good reason (given SSA) to think that if we do this then there will be C and hence AC. But if we now have the ability to bring about AC then we now, ipso facto, have AC. Since this is absurd, we should reject SSA.

This reasoning is fallacious. Our forming UN++ and making it adopt the plan would be an indication to us that there is a correlation between the list and gamma ray bursts.8 But it would not cause there to be a correlation unless we do in fact have AC. If we don’t have AC, then forming UN++ and making it adopt the plan (call this event “A”) has no influence whatever on astronomical phenomena, although it misleads us into thinking that we have such influence. If we do have AC of the relevant sort, then of course the same actions would influence astronomical phenomena and cause a correlation. But the point is this: the fact that we have the ability to do A does not determine whether we have AC. It doesn’t even imply that we have reason to think that we have AC.

8 Under the supposition that if there is AC then there is C, the hypothesis that there will be C conflicts, of course, with our best current physical theories, which entail that the population policies of UN++ have no significant causal influence on distant gamma ray bursts. However, a sufficiently strong probability shift (resulting from applying SSA to the hypothesis that UN++ will create a sufficiently enormous number of observers if C doesn’t happen) would reverse any prior degree of confidence in current physics (so long as we assign it a credence of less than unity).

In order to be perfectly clear about this point, let me explicitly write down the inference I am rejecting. I’m claiming that from the following two premises:

(5) We have strong reasons to think that if we do A then we will have brought about C.

(6) We have strong reasons to think that we have the power to do A.

one cannot legitimately infer:

(7) We have strong reasons to think that we have the power to bring about C.

My reason for rejecting this inference is that one can consistently hold the conjunction of (5) and (6) together with the following:

(8) If we don’t do A then the counterfactual “Had we done A then C would have occurred” is false.

There might be a temptation to think that the counterfactual in (8) would have been true even if we don’t do A. I suggest that this is due to the fact that (granting SSA) our conditional probability of C given that we do A is large. Let’s abbreviate this conditional probability ‘P(C|A)’. If P(C|A) is large, doesn’t that mean that C would (probably) have happened if we had done A? Not so. We must not confuse the conditional probability P(C|A) with the counterfactual “C would have happened if A had happened”. For one thing, the reason why your conditional probability P(C|A) is large is that you have included indexical information (about your birth rank) in the background information. Yet one may well choose to exclude indexical information from the set of facts upon which counterfactuals are to supervene. (Especially so if one intends to use counterfactuals to define causality, which should presumably be an objective notion and therefore independent of indexical facts—see the next section for some further thoughts on this.)

So, to reiterate, even though P(C|A) is large (as stated in (5)) and even though we can do A (as stated in (6)), we still know that, given that we don’t do A, C almost certainly does not happen and would not have happened even if we had done A. As a matter of fact, we have excellent grounds for thinking that we won’t do A. The UN++ experiment, therefore, does not show that we have reason to think that there is AC. Good news for SSA, again.

Finally, although it may not be directly relevant to assessing whether SSA is true, it is interesting to ask: Would it be rational (given SSA) for UN++ to adopt the plan? 9

9 The reason this question doesn’t seem relevant to the evaluation of SSA is that the answer is likely to be “spoils to the victor”: proponents of SSA will say that whatever SSA implies is rational, and its critics may dispute this. Both would be guilty of question-begging if they tried to use it as an argument for or against SSA.

The UN++ should decrease its credence in the proposition that a gamma ray burst will occur if it decides to adopt the plan. Its conditional credence P(Gamma ray burst | A) is smaller than P(Gamma ray burst); that is what the thought experiment showed. Provided a gamma ray burst has a sufficiently great negative utility, non-causal decision theories would recommend that UN++ adopt the plan if it can.
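A small sketch makes the evidential calculation explicit (the utilities and the SSA-shifted credence are arbitrary illustrative values, not figures from the text):

```python
# Evidential expected-utility comparison for UN++ (a sketch; all numbers
# are arbitrary illustrations, not from the text).

u_burst = -1e9                 # disutility of the gamma ray bursts
u_plan_overhead = -1e3         # cost of standing ready to build colonies

p_burst_given_adopt = 1e-4     # SSA-shifted credence, P(burst | A)
p_burst_given_not = 0.9        # the astronomical prior, as stipulated

eu_adopt = p_burst_given_adopt * u_burst + u_plan_overhead
eu_not_adopt = p_burst_given_not * u_burst

print(eu_adopt)                # -101000.0
print(eu_not_adopt)            # -900000000.0
print(eu_adopt > eu_not_adopt) # True: the non-causal theory says adopt
```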

What about causal decision theories? If our theory of causation is one on which no AC would be involved even if C happens, then obviously causal decision theories would say that the plan is misguided and shouldn’t be adopted. The case is more complicated on a theory of causation that says that there is AC if C happens. UN++ should then believe the following: If it adopts the plan, it will have caused the outcome of averting the gamma ray burst; if it doesn’t adopt the plan, then it is not the case that had it adopted the plan it would have averted the gamma ray bursts. (This essentially just repeats (5) and (8).) The question is whether causal decision theories would under these circumstances recommend that UN++ adopt the plan.

The decision that UN++ makes gives it information about whether it has AC or not. Yet, when UN++ deliberates on the decision, it can only take into account information available to it prior to the decision, and this information doesn’t suffice to determine whether it has AC. UN++ therefore has to make its decision under uncertainty. Since on a causal decision theory UN++ should do A only if it has AC, UN++ would have to act on some preliminary guess about how likely it is that it has AC; and since AC is strongly correlated with what decision UN++ makes, it would also base its decision, implicitly at least, on a guess about what its decision will be. If it thinks it will eventually choose to do A, it has reason to think it has AC, and thus it should do A. If it thinks it will eventually choose not to do A, it has reason to think that it hasn’t got AC, and thus should not do A. UN++ is therefore faced with a somewhat degenerate decision problem in which it should choose whatever it initially guesses it will come to choose. More could no doubt be said about the decision-theoretic aspects of this scenario, but we will leave it at that. Interested readers may compare the situation to the partly analogous case of the Meta-Newcomb problem presented in an appendix to this chapter.

Quantum Joe: SSA and the Principal Principle

Our final thought experiment probes the connection between SSA and objective chance:

Quantum Joe

Joe, the amateur scientist, has discovered that he is alone in the cosmos so far. He builds a quantum device which according to quantum physics has a one-in-ten chance of outputting any single-digit integer. He also builds a reproduction device which when activated will create ten thousand clones of Joe. He then hooks up the two so that the reproductive device will kick into action unless the quantum device outputs a zero; but if the output is a zero, then the reproductive machine will be destroyed. There are not enough materials left for Joe to reproduce in some other way, so he will then have been the only observer.

We can assume that quantum physics correctly describes the objective chances associated with the quantum device, and that Everett-type interpretations (including the many-worlds and the many-minds interpretations) are false; and that Joe knows this. Using the same kinds of argument as before, we can show that Joe should expect a zero to come up, even though the objective (physical) chance is a mere 10%.
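Spelled out, Joe’s update runs as follows (a minimal sketch, with the ten thousand clones as stipulated):

```python
# Quantum Joe's SSA update (a sketch). The device outputs zero with
# objective chance 0.1; any other digit triggers creation of 10,000
# clones, swelling the reference class.

chance_zero = 0.1        # objective chance, per quantum physics
n_if_zero = 1            # Joe remains the only observer ever
n_if_nonzero = 10_001    # Joe plus ten thousand clones

# SSA likelihoods of Joe's indexical evidence "I am the first observer":
w_zero = chance_zero * (1 / n_if_zero)
w_nonzero = (1 - chance_zero) * (1 / n_if_nonzero)

credence_zero = w_zero / (w_zero + w_nonzero)
print(credence_zero)     # ~0.999: far above the 10% objective chance
```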

Our reflections on the Adam & Eve and UN++ experiments apply to this gedanken also. But here we shall focus on another problem: the apparent conflict between SSA and David Lewis’ Principal Principle.

The Principal Principle requires, roughly, that one proportion one’s credence in a proposition B in accordance with one’s estimate of the objective chance that B will come true (Mellor 1971; Lewis 1980). For example, if you know that the objective chance of B is x%, then your subjective credence of B should be x%, provided you don’t have “inadmissible” information. An early formalization of this idea turned out to be inconsistent when applied to so-called “undermining” futures, but this problem has recently been solved through the introduction of the “new Principal Principle”, which states that:

P(B|HT) = Ch(B|T)

where H is a proposition giving a complete specification of the history of the world up to time t, T is the complete theory of chance for the world (giving all the probabilistic laws), P is a rational credence function, and Ch is the chance function specifying the world’s objective probabilities at time t. (For an explanation of the modus operandi of this principle and of how it can constitute the centerpiece of an account of objective chance, see (Hall 1994; Lewis 1994; Thau 1994).)

Now, Quantum Joe knows all the relevant aspects of the history of the world up to the time when he is about to activate the quantum device. He also has complete knowledge of quantum physics, the correct theory of chance for the world in which he is living. If we let B be the proposition that the quantum device outputs a zero, the new Principal Principle thus seems to recommend that he set his credence in B equal to Ch(B|T) ≈ 1/10. Yet the SSA-based argument shows that his credence should be ≈ 1. Does SSA therefore require that we give up the Principal Principle?

I think this can be answered in the negative, as follows. True, Joe’s credence of getting a zero should diverge from the objective chance of that outcome, even though he knows what that chance is. But that is because he is basing his estimate on inadmissible information. That being so, the new Principal Principle does not apply to Joe’s situation. The inadmissible information is indexical information about Joe’s own position in the human species. Normally, indexical information does not affect one’s subjective credence in propositions whose objective chances are known. But in certain kinds of cases, such as the one we are dealing with here, indexical information turns out to be relevant and must be factored in. It is not really surprising that the Principal Principle, which expresses the connection between objective chance and rational subjective credence, is trumped by other considerations in cases like these. For objective chances can be seen as concise, informative summaries of patterns of local facts about the world. (That is how they are seen in Lewis’ analysis.) But the facts that form the supervenience base for chances are rightly taken not to include indexical facts, for chances are meant to be objective. Since indexical information is not baked into chances, it is only to be expected that one’s subjective credence may have to diverge from known objective chances if one has additional information of an indexical character that needs to be taken into account. So Quantum Joe can coherently believe that the objective chance (as given by quantum physics) of getting a zero is 10% and yet set his credence in that outcome close to one; he can accept both the Principal Principle and SSA.

Upshot

We have considered some challenges to SSA. In Lazy Adam, it looked as though on the basis of SSA we should think that Adam had the power to produce anomalous coincidences by will, exerting a psychokinetic influence on the nearby deer population. On closer inspection, it turned out that SSA implies no such thing. It gives us no reason to think that there would have been coincidences or psychic causation if Adam had carried out the experiment. SSA does lead Adam to think otherwise, but he would simply have been mistaken. We argued that the fact that SSA would have misled Adam is no good argument against SSA. For it is in the nature of probabilistic reasoning that exceptional users will be misled, and Adam is such a user. To assess the reliability of SSA-based reasoning one has to look not only at the special cases where it fails but also at the normal cases where it succeeds. As we noted, in the Dungeon experiment (chapter 4) SSA does well in that regard.

With the UN++ gedanken, the scene was changed to one where we ourselves might actually have the ability to step into the role of Adam. We found that SSA does not give us reason to think that there will be strange coincidences or that we (or UN++) have anomalous causal powers. However, there are some hypothetical (empirically implausible) circumstances under which SSA would entail that we had reason to believe these things. If we knew for certain that UN++ existed, had the power to create observers in the requisite numbers, and possessed sufficient stability to certainly follow through on its original plan, and that the other presuppositions behind the thought experiment were also satisfied (particularly, that all observers created would be in our reference class), then SSA implies that we should expect to see strange coincidences, namely that the gamma ray bursts on the UN++ target list would fizzle. (Intuitively: because this would make it enormously less remarkable that we should have the birth ranks we have.)

We should think it unlikely, however, that this situation will arise. In fact, if we accept SSA we should think this situation astronomically unlikely— about as unlikely as the coincidences would be! We can see this without going into details. If we ever get into the situation where UN++ executes the plan, then one out of two things must happen, both of which have extremely low probabilities: a series of strange coincidences, or—which is even more unlikely given SSA—we happen to be among the very first few out of an astronomically large number of humans. If P1 implies that either P2 or P3, and we assign very low probability both to P2 and to P3, then we must assign a low probability to P1 as well.10

10 Even if in objective respects we had been in a position to carry out the UN++ experiment, there would remain the epistemological problem of how we could ever be sufficiently certain that all preconditions were met. It may seem that only by means of an irrationally exaggerated faith in our capacity to know these things could we ever convince ourselves to the requisite level of confidence that UN++ will forever stick to the plan, that no aliens lurk in some remote corner of the universe, and so on. Likewise in the case of Adam & Eve, we may question whether Adam could realistically have known enough about his world for the example to work. Sure, Adam might receive a message from God (or rather the non-observer automaton that has created the world) but can Adam be sufficiently sure that the message is authentic? Or that he is not dreaming it all?

Milan Ćirković (Ćirković 2001) has suggested that “coherence gaps” like these might take some of the sting out of the consequences displayed in this chapter. Maybe so, but my suspicion is that choosing more realistic parameters will not do away with the weirdness so much as make it harder to perceive. The probability shifts would be smaller but they would still be there. One can also consider various ways of fleshing out the stories so that fairly large probability shifts could be attained, e.g. by postulating that the people involved have spent a great deal of time and effort verifying that all the preconditions are met, that they have multiple independent strands of evidence showing that to be the case, and so on.

The bottom line, however, is that if somebody can live comfortably with the SSA-implications discussed in this chapter, there is nothing to prevent them from continuing to use SSA with the universal reference class. The theory we’ll present in the next chapter subsumes this possibility as a special case while also allowing other solutions that avoid these implications.

Finally, in Quantum Joe we examined an ostensible conflict between SSA and the Principal Principle. It was argued that this conflict is merely apparent because the SSA-line of reasoning relies on indexical information that should properly be regarded as “inadmissible” and thus outside the scope of the Principal Principle.

These results are at least partially reassuring. All the same, I think it is fair to characterize as deeply counterintuitive the SSA-based advice to Eve, that she need not worry about pregnancy, and its recommendation to Adam, that he should expect a deer to walk by given that the appropriate reproductive intentions are formed, and Quantum Joe’s second-guessing of quantum physics. And yet we seem to be forced to these conclusions by the arguments given in support of SSA in chapters 4 and 5 (and against SIA in chapter 7).

The next chapter shows a way out of this dilemma. We don’t have to accept any of the counterintuitive implications discussed above, and we can still have a workable, unified theory of observation selection effects. The key to this is to take more indexical information into account than does SSA.

Appendix: The meta-Newcomb problem11

11 This appendix was first published in Analysis (Bostrom 2001) and is reprinted here with permission.

The following variant of the Newcomb problem may be compared to the decision problem faced by UN++ discussed above, for the case where C would constitute a causal connection.

Meta-Newcomb. There are two boxes in front of you and you are asked to choose between taking only box B or taking both box A and box B. Box A contains $1,000. Box B will contain either nothing or $1,000,000.

What B will contain is (or will be) determined by Predictor, who has an excellent track record of predicting your choices. There are two possibilities. Either Predictor has already made his move by predicting your choice and putting a million dollars in B iff he predicted that you will take only B (as in the standard Newcomb problem); or else Predictor has not yet made his move but will wait and observe what box you choose and then put a million dollars in B iff you take only B. In cases like this, Predictor makes his move before the subject roughly half of the time. However, there is a Metapredictor, who has an excellent track record of predicting Predictor’s choices as well as your own. You know all this. Metapredictor informs you of the following truth functional: Either you choose A and B, and Predictor will make his move after you make your choice; or else you choose only B, and Predictor has already made his choice. Now, what do you choose?

“Piece of cake!” says the naïve non-causal decision theorist. She takes just box B and walks off, her pockets bulging with a million dollars.

But if you are a causal decision theorist you seem to be in for a hard time. The additional difficulty you face compared to the standard Newcomb problem is that you don’t know whether your choice will have a causal influence on what box B contains. In a sense, the decision problem presented here is the opposite of the one faced by UN++. There, a preliminary belief about what you will choose would be transformed into a reason for making that choice. Here, a preliminary decision would seem to undermine itself (given a causal decision theory). If Predictor made his move before you made your choice, then (let us assume) your choice doesn’t affect what’s in the box. But if he makes his move after yours, by observing what choice you made, then you certainly do causally determine what B contains. If you think you will choose two boxes then you have reason to think that your choice will causally influence what’s in the boxes, and hence that you ought to take only one box. But if you think you will take only one box then you should think that your choice will not affect the contents, and thus you would be led back to the decision to take both boxes; and so on ad infinitum.