Chapter 8
Observer-Relative Chances in Anthropic Reasoning?1
1 This chapter is adapted from a paper previously published in Erkenntnis (Bostrom 2000), with permission.
Here we examine an argument by John Leslie (Leslie 1997) purporting to show that anthropic reasoning gives rise to paradoxical “observer-relative chances”.2 We show that the argument trades on the sense/reference ambiguity and is fallacious. We then describe a different case where chances are observer-relative in an interesting, but not paradoxical way. The result can be generalized: at least for a very wide range of cases, SSA does not engender paradoxical observer-relative chances.
2 Leslie uses “chances” as synonymous with “epistemic probabilities”. I will follow his usage in this chapter and in later passages that refer to the results obtained here. Elsewhere in the book, I reserve the word “chance” for objective probabilities.
Leslie's argument and why it fails
Leslie seeks to establish the following conclusion:
Estimated probabilities can be observer-relative in a somewhat disconcerting way: a way not depending on the fact that, obviously, various observers often are unaware of truths which other observers know. (p. 435)
Leslie does not regard this as a reductio of anthropic reasoning and recommends that we bite the bullet: “Any air of paradox must not prevent us from accepting these things.” (p. 428)
Leslie’s argument takes the form of a thought experiment. We start with a batch of one hundred women and divide them randomly into two groups, one with ninety-five and one with five women. By flipping a fair coin, we then assign the name ‘the Heads group’ randomly to one of these groups and the name ‘the Tails group’ to the other. According to Leslie, it is now the case that an external observer, i.e. a person not in either of the two groups, ought to derive radically different conclusions than an insider:
All these persons—the women in the Heads group, those in the Tails group, and the external observer—are fully aware that there are two groups, and that each woman has a ninety-five per cent chance of having entered the larger. Yet the conclusions they ought to derive differ radically. The external observer ought to conclude that the probability is fifty per cent that the Heads group is the larger of the two. Any woman actually in [either the Heads or the Tails group], however, ought to judge the odds ninety-five to five that her group, identified as ‘the group I am in’, is the larger, regardless of whether she has been informed of its name. (p. 428)
Even without knowing her group’s name, a woman could still appreciate that the external observer estimated its chance of being the larger one as only fifty per cent—this being what his evidence led him to estimate in the cases of both groups. The paradox is that she herself would then have to say: ‘In view of my evidence of being in one of the groups, ninety-five per cent is what I estimate.’ (p. 429)
Somewhere within these two paragraphs a mistake has been made. It is not hard to locate the error if we look at the structure of the reasoning. Let’s say there is a woman in the larger group who is called Liz. The “paradox” then takes the following form:
(1) P_Liz (“The group that Liz is in is the larger group”) = 95%
(2) The group that Liz is in = the Heads group
(3) Therefore: P_Liz (“The Heads group is the larger group”) = 95%
(4) But P_External observer (“The Heads group is the larger group”) = 50%
(5) Hence chances are observer-relative.
Where it goes wrong is in step (3). The group that Liz is in is indeed identical to the Heads group, but Liz doesn’t know that. P_Liz (“The Heads group is the larger group”) = 50%, not 95% as claimed in step (3). There is nothing mysterious about this, at least not subsequent to Gottlob Frege’s classic discussion of Hesperus and Phosphorus. One need not have rational grounds for assigning probability one to the proposition “Hesperus = Phosphorus”, even though as a matter of fact Hesperus = Phosphorus. For one might not know that Hesperus = Phosphorus. The expressions “Hesperus” and “Phosphorus” present their denotata under different modes of presentation; they denote the same object while connoting different concepts. While there is still some dispute over how best to characterize this difference and over what general lessons we can draw from it, the basic observation that you can learn something from being told “a = b” (even if a = b) is old hat.
Let’s see if Leslie’s conclusion can be resuscitated in some way by modifying the thought experiment.
Suppose that we change the example so that Liz knows that the sentence “Liz is in the Heads group” is true. Then step (3) will be correct. But now we run into trouble when we try to take step (5). It is no longer true that Liz and the external observer know about the same facts. Liz now has the information “Liz is in the Heads group”; the external observer doesn’t. No interesting observer-relative chances have been produced.
What if we change the example again by assuming that the external observer, too, knows that Liz is in the Heads group? Well, if Liz and the external observer agreed on the chance that the Heads group is the large group before they both learnt that Liz is in the Heads group, they will continue to agree about this chance after they have received that information—provided they agree about the conditional probability P(The Heads group is the larger group | Liz is in the Heads group). Do they?
First, look at it from Liz’s point of view. Let’s go along with Leslie and assume that she should think of herself as a random sample from the batch of one hundred women. Suppose she knows that her name is Liz (and that she is the only woman in the batch with that name). Then, before she learns that she is in the Heads group, she should assign a probability of 50% to the hypothesis that she is in the Heads group. (Recall that which group should be called “the Heads group” was determined by the toss of a fair coin.) She should think that the chance that the sentence “Liz is in the larger group” is true is 95%, since ninety-five out of the hundred women are in the larger group, and she can regard herself as a random sample from these hundred women. After learning that she is in the Heads group, the chance of her being in the larger group remains 95%. (“The Heads group” and “the Tails group” are just arbitrary labels at this point. Randomly calling one group the Heads group doesn’t change the likelihood that it is the big group.) Hence, the probability she should give to “The Heads group is the larger group” is now 95%. Therefore, the conditional probability which we were looking for is P_Liz (“The Heads group is the larger group” | “Liz is in the Heads group”) = 95%.
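To spell the update out, here is the Bayes calculation from Liz’s standpoint (the event symbols are introduced here for convenience and are not in the original text). Let $B$ be the event that the larger group was named “the Heads group” (prior 1/2, fixed by the coin toss), and let $G_H$ be the event that Liz is in the Heads group:

$$P(B \mid G_H) = \frac{P(G_H \mid B)\,P(B)}{P(G_H \mid B)\,P(B) + P(G_H \mid \neg B)\,P(\neg B)} = \frac{0.95 \times 0.5}{0.95 \times 0.5 + 0.05 \times 0.5} = 0.95$$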
Next, consider the situation from the external observer’s point of view. What is the probability for the external observer that the Heads group is the larger one, given that Liz is in it? Well, what’s the probability that Liz is in the Heads group? In order to answer these questions, we need to know something about (the external observer’s beliefs about) how this woman Liz was selected.
Suppose that she was selected as a random sample, with uniform sampling density, from among all the hundred women in the batch. Then the external observer would arrive at the same conclusion as Liz: if the random sample “Liz” is in the Heads group then there is a 95% chance that the Heads group is the bigger group.
If, instead, we suppose that Liz was selected randomly from some subset of the hundred women, then it might happen that the external observer’s estimate diverges from Liz’s. For example, if the external observer randomly selects one individual x (whose name happens to be “Liz”) from the large group, then, when he finds that x is in the Heads group, he should assign a 100% probability to the sentence “The Heads group is the larger group.” This is indeed a different conclusion than the one that the insider Liz draws. She thought the conditional probability of the Heads group being the larger one given that Liz is in the Heads group was 95%.
In this case, however, we have to question whether Liz and the external observer know about the same evidence. (If they don’t, then the disparity in their conclusions doesn’t signify that chances are observer-relative in any paradoxical sense.) But it is clear that their information does differ in a relevant way. For suppose Liz got to know what the external observer is stipulated to already know: that Liz had been selected by the external observer through some random sampling process from among a certain subset of the hundred women. That implies that Liz is a member of that subset. This information would change her probability assignment so that it once again becomes identical to the external observer’s. In the above case, for instance, the external observer selected a woman randomly from the large group. Now, evidently, if Liz gets this extra piece of information, that she has been selected as a random sample from the large group, then she knows with certainty that she is in that group. So her conditional probability that the Heads group is the larger group given that Liz is in the Heads group should then be 100%, the same as what the outside observer should believe.
We see that as soon as we give the two persons access to the same evidence, their disagreement vanishes. No paradoxical observer-relative chances are to be found in this thought experiment.3
3 Maintaining that there are observer-relative chances in a nontrivial sense in Leslie’s example is possible, it seems, only on pain of opening oneself up to systematic exploitation, at least if one is prepared to put one’s money where one’s mouth is. Suppose there is someone who insists that the odds are different for an insider than they are for an outsider, and not only because the insider and the outsider don’t know about the same facts. Let’s call this hypothetical person Mr. L. (John Leslie, we hope, would not take this line of defence.)
At the next major philosophy conference that Mr. L attends we select a group of one hundred philosophers and divide them into two subgroups which we name by means of a coin toss, just as in Leslie’s example. We let Mr. L observe this event. Then we ask him what is the probability—for him as an external observer, one not in the selected group—that the large group is the Heads group. Let’s say he claims this probability is p. We then repeat the experiment, but this time with Mr. L as one of the hundred philosophers in the batch. Again we ask him what he thinks the probability is, now from his point of view as an insider, that the large group is the Heads group. (Mr. L doesn’t know at this point whether he is in the Heads group or the Tails group. If he did, he would know about a fact that the outsiders do not know about, and hence the chances involved would not be observer-relative in any paradoxical sense.) Say he answers p’.
If either p or p’ is anything other than 50% then we can make money out of him by repeating the experiment many times with Mr. L either in the batch or as an external observer, depending on whether it is p or p’ that differs from 50%. For example, if p’ is greater than 50%, we repeat the experiment with Mr. L in the batch, and we keep offering him the same bet, namely that the Heads group is not the larger group, and Mr. L will happily bet against us, e.g. at odds determined by p* = (50% + p’) / 2 (the intermediate odds between what Mr. L thinks are fair odds and what we think are fair odds). If, on the other hand, p’ < 50%, we bet (at odds determined by p*) that the Heads group is the larger group. Again Mr. L should willingly bet against us.
In the long run (with probability asymptotically approaching one), the Heads group will be the larger group approximately half the time. So we will win approximately half of the bets. It is easy to verify that the odds to which Mr. L has agreed are such that this will earn us more money than we need pay out. We will be making a net gain, Mr. L a net loss.
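The long-run claim in this footnote is easy to verify numerically. Here is a minimal Python sketch (the unit-payout bet structure and the function name are our own illustrative assumptions); it confirms that if Mr. L quotes any insider probability p’ other than 50%, betting against him at the intermediate odds p* yields a positive average gain:

```python
import random

def exploit_mr_l(p_prime, n_trials=100_000, seed=0):
    """Average profit per bet from betting against Mr. L, who (as an
    insider) quotes probability p_prime that the Heads group is the
    larger group. Bets are struck at the intermediate odds
    p_star = (0.5 + p_prime) / 2, with a total payout of $1 per bet."""
    rng = random.Random(seed)
    p_star = (0.5 + p_prime) / 2
    net = 0.0
    for _ in range(n_trials):
        # The names are assigned by a fair coin, so the Heads group is
        # the larger group with probability 1/2 on every repetition.
        heads_is_larger = rng.random() < 0.5
        if p_prime > 0.5:
            # We back "the Heads group is NOT the larger group":
            # stake 1 - p_star, collect $1 if we are right.
            net += (0.0 if heads_is_larger else 1.0) - (1 - p_star)
        else:
            # We back "the Heads group IS the larger group": stake p_star.
            net += (1.0 if heads_is_larger else 0.0) - p_star
    return net / n_trials

print(exploit_mr_l(0.95))  # ~0.225 per bet (= p_star - 0.5)
print(exploit_mr_l(0.05))  # ~0.225 per bet (= 0.5 - p_star)
```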
Observer-relative chances: another go
In this section we shall give an example where chances could actually be said to be observer-relative in an interesting—though by no means paradoxical—sense. What philosophical lessons we should or shouldn’t learn from this phenomenon will be discussed in the next section. Here is the example:
Suppose the following takes place in an otherwise empty world. A fair coin is flipped by an automaton and if it falls heads, ten humans are created; if it falls tails, one human is created. In addition to these people, one other human is created, independently of how the coin falls. This latter human we call the bookie. The people created as a result of the coin toss we call the group. Everybody knows these facts. Furthermore, the bookie knows that she is the bookie, and the people in the group know that they are in the group.
The question is, what would be the fair odds if the people in the group want to bet against the bookie on how the coin fell? One could think that everybody should agree that the chance of it having fallen heads is fifty-fifty, since it was a fair coin. That overlooks the fact that the bookie obtains information from finding that she is the bookie rather than one of the people in the group. This information is relevant to her estimate of how the coin fell. It is more likely that she should find herself being the bookie if one out of two is a bookie than if the ratio is one out of eleven. So finding herself being the bookie, she obtains reason to believe that the coin probably fell tails, leading to the creation of only one other human. In a similar way, the people in the group, by observing that they are in the group, obtain some evidence that the coin fell heads, resulting in a large fraction of all observers observing that they are in the group.
It is a simple exercise to calculate the posterior probabilities once this information has been taken into account. If the coin fell heads there are eleven observers (the ten in the group plus the bookie); if it fell tails there are two. Applying SSA and Bayes’ theorem:

P(H | “I am the bookie”) = (1/11)(1/2) / [(1/11)(1/2) + (1/2)(1/2)] = 2/13

P(H | “I am in the group”) = (10/11)(1/2) / [(10/11)(1/2) + (1/2)(1/2)] = 20/31

We see that the bookie should think there is a 2/13 chance that the coin fell heads while the people in the group should think that the chance is 20/31.
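These figures can also be checked by simulation. The following minimal Python sketch (names are ours, not from the text) models SSA by drawing “me” uniformly from the observers who exist in each randomly generated world, which is just SSA restated:

```python
import random
from collections import Counter

def ssa_posteriors(n_trials=1_000_000, seed=0):
    """Monte Carlo check of the 2/13 and 20/31 posteriors."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        group_size = 10 if heads else 1
        # Observers: the bookie plus the group; pick 'me' uniformly (SSA).
        i_am_bookie = rng.randrange(group_size + 1) == 0
        counts[('bookie' if i_am_bookie else 'group', heads)] += 1
    for role in ('bookie', 'group'):
        n_heads = counts[(role, True)]
        n_total = n_heads + counts[(role, False)]
        print(role, n_heads / n_total)

ssa_posteriors()
# bookie ~ 0.1538 (= 2/13), group ~ 0.6452 (= 20/31)
```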
Discussion: indexical facts - no conflict with physicalism
While it might be slightly noteworthy that the bookie and the people in the group are rationally required to disagree in the above scenario, it isn’t the least bit paradoxical, as they have different information. For instance, the bookie knows that “I am the bookie”. This piece of information is clearly different from the corresponding one—“I am in the group”—known by the people in the group. So chances have not been shown to be observer-relative in the sense that people with the same information can be rationally required to disagree. And if we were to try to modify the example so as to give the participants the same information, we would see that their disagreement evaporates, as it did when we attempted various twists of the Leslie gedanken.
There is a sense, though, in which the chances in the present example can be said to be observer-relative. The sets of evidence that the bookie and the people in the group have, while not identical, are quite similar. They differ only in regard to such indexical facts4 as “I am the bookie” or “I am in the group”. We could say that the example demonstrates, in an interesting way, that chances can be relative to observers in the sense that people whose sets of evidence are the same up to indexical facts can be rationally required to disagree about non-indexical facts.
4 The metaphysics of indexical facts is not our topic here, but a good starting point for studying that is chapter 10 in (Lewis 1986). David Lewis argues that one can know which possible world is actual and still learn something new when one discovers which person one is in that world. Lewis, borrowing an example from John Perry (Perry 1977) (who in turn is indebted to Hector-Neri Castañeda (Castañeda 1966, 1968)) discusses the case of the amnesiacs in the Stanford library. We can imagine (changing the example slightly) that two amnesiacs are lost in the library on the first and second floor respectively. From reading the books they have learned precisely which possible world is actual—in particular they know that two amnesiacs are lost in the Stanford library. Nonetheless, when one of the amnesiacs sees a map of the library saying “You are here” with an arrow pointing to the second floor, he learns something new despite already knowing all non-indexical facts.
This kind of observer-relativity is not particularly counterintuitive and should not be taken to cast doubt on SSA, from which it was derived. That indexical matters can have implications for what we should believe about nonindexical matters should not surprise us. It can be shown by a trivial example. From “I have blue eyes” it follows that somebody has blue eyes.
The rational odds in the example above being different for the bookie than for the punters in the group, we might begin to wonder whether it is possible to formulate some kind of bet on which all parties would calculate a positive expected payoff. This would not necessarily be an unacceptable consequence, since the bettors have different information. Still, it could seem a bit peculiar if we had a situation where, purely by applying SSA, rational people were led to start placing bets against one another. So it is worth calculating the odds to see if there are cases where they do indeed favour betting. This is done in an appendix to this chapter. The result is negative—no betting. In the quite general class of cases considered, there is no combination of parameter values for which a bet is possible in which both parties would rationally expect a positive payoff.5
5 One could also worry about another thing: doesn’t the doctrine defended here commit one to the view that observation reports couched in the first person should be evaluated by different rules from those pertaining to third-person reports of what is apparently the same evidence? Yes and no. Maybe the best way of putting it is that the evaluation rule is the same in both cases but its application is more complicated in the case of third-person reports (by which we here mean statements about some other person’s observations). Third-person reports can become evidence for somebody only by first coming to her knowledge. While you may know your own observations directly, there is an additional step that other people’s observations must go through before they become evidence for you: they must somehow be communicated to you. That extra step may involve additional selection effects that are not present in the first-person case. This accounts for the apparent evidential difference between first- and third-person reports. For example, what conclusions you can draw from the third-person report “Mr. Ping observes a red room” depends on what your beliefs are about how this report came to be known (as true) to you—why you didn’t find out about Mr. Pong instead, who observes a green room. By contrast, there is no analogous underspecification of the first-person report “I observe a red room”. There is no relevant story to be told about how it came about that you got to know about the observation that you are making.
This is an encouraging finding for the anthropic theorizer. Yet we are still left with the fact that there are cases where observers come to disagree with one another just because of applying SSA. While it is true that these disagreeing observers will have different indexical information, and while there are trivial examples in which a difference in indexical information implies a difference in non-indexical information, it might nonetheless be seen as objectionable that anthropic reasoning should lead to these kinds of disagreements. Doesn’t that presuppose that we ascribe some mysterious quality to the things we call “observers”, some property of an observer’s mind that cannot be reduced to objective observer-independent facts?
The best way to allay this worry is by demonstrating how the above example, in which the “observer-relative” chances appeared, can be recast in purely physicalistic terms:
A coin is tossed and either one or ten human brains are created. These brains make up “the group”. Apart from these there is only one other brain, the “bookie”. All the brains are informed about the procedure that has taken place. Suppose Alpha is one of the brains that have been created and that Alpha remembers recently having been in the brain states A1, A2, ..., An. (I.e. Alpha recognizes the descriptions “A1”, “A2”, ..., “An” as descriptions of these states, and Alpha knows “this brain was recently in states A1, A2, ..., An” is true. Cf. Perry (1979).)
At this stage, Alpha should obviously think the probability of Heads is 50%, since it was a fair coin. But now suppose that Alpha is informed that he is the bookie, i.e. that the brain that has recently been in the states A1, A2, ..., An is the brain that is labeled “the bookie”. Then Alpha will reason as follows: “Let A be the brain that was recently in states A1, A2, ..., An. The conditional probability of A being labeled ‘the bookie’ given that A is one of two existing brains is greater than the conditional probability of A being the brain labeled ‘the bookie’ given that A is one out of eleven brains. Hence, since A does indeed turn out to be the brain labeled ‘the bookie’, there is a greater than 50% chance that the coin fell tails, creating only one brain.”
A parallel line of reasoning can be pursued by a brain labeled “a brain in the group”. The argument can be quantified in the same way as in the earlier example and will result in the same “observer-relative” chances. This shows that anthropic reasoning can be understood in a physicalistic framework.
The observer-relative chances in this example too are explained by the fact that the brains have access to different evidence. Alpha, for example, knows that (S_Alpha:) the brain that has recently been in the states A1, A2, ..., An is the brain that is labeled “the bookie”. A brain, Beta, who comes to disagree with Alpha about the probability of Heads, will have a different information set. Beta might for instance rather know that (S_Beta:) the brain that has recently been in the states B1, B2, ..., Bn is a brain that is labeled “a member of the group”. S_Alpha is clearly not equivalent to S_Beta. It is instructive to see what happens if we take a step further and eliminate from the example not only all non-physicalistic terms but also its ingredient of indexicality:
In the previous example we assumed that the proposition (S_Alpha) which Alpha knows but Beta does not know was a proposition concerning the brain states A1, A2, ..., An of Alpha itself. Suppose now instead that Alpha does not know what label the brain Alpha has (whether it is “the bookie” or “a brain in the group”) but that Alpha has been informed that there are some recent brain states G1, G2, ..., Gn of some other existing brain, Gamma, and that Gamma is labeled “the bookie”.
At this stage, what conclusion Alpha should draw from this piece of information is underdetermined by the given specifications. It depends on what Alpha knows or guesses about how this other brain, Gamma, had been selected to come to Alpha’s notice. Suppose we specify the thought experiment further by stipulating that, as far as Alpha’s knowledge goes, Gamma can be regarded as a random sample from the set of all existing brains. Alpha may know, say, that one ball for each existing brain was put in an urn and that one of these balls was drawn at random and it turned out to be the one corresponding to Gamma. Reasoning from this information, Alpha will arrive at the same conclusion as if Alpha had learnt that Alpha was labeled “the bookie” as in the previous version of the thought experiment. Similarly, Beta may know about another random sample, Epsilon, that is labeled “a brain in the group”. This will lead Alpha and Beta to differ in their probability estimates, just as before. In this version of the thought experiment no indexical evidence is involved. Yet Alpha’s probabilities differ from Beta’s.
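That the non-indexical, urn-style sampling gives Alpha the very same posterior as learning “I am the bookie” can be checked with a small Monte Carlo sketch (the function name and the convention that index 0 designates the bookie are our own assumptions):

```python
import random

def external_sample_check(n_trials=1_000_000, seed=0):
    """Non-indexical variant: a ball is drawn for a randomly chosen
    existing brain; condition on the drawn brain being labeled
    'the bookie'."""
    rng = random.Random(seed)
    n, n_heads = 0, 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        brains = (10 if heads else 1) + 1   # the group plus the bookie
        drawn_is_bookie = rng.randrange(brains) == 0
        if drawn_is_bookie:
            n += 1
            n_heads += heads
    print(n_heads / n)   # ~2/13, matching the indexical calculation

external_sample_check()
```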
What we have here is hardly distinct from any humdrum situation where John and Mary know different things and therefore estimate probabilities differently. The only difference from a standard urn game is that instead of balls or raffle tickets we are randomizing brains—surely not a philosophically relevant difference.
But what exactly did change when we removed the indexical element? If we compare the two last examples, we see that the essential disparity is in how the random samples were produced.
In the second of the two examples, there was a physical selection mechanism that generated the randomness. We said that Alpha knew that there was one ball for each brain in existence, that these balls had been put in an urn, and that one of these balls had then been selected randomly and had turned out to correspond to a brain that was labeled “the bookie”.
In the other example, by contrast, there was no such physical mechanism. Instead, the randomness somehow arose from each observer considering herself a random sample from the set of all observers. Alpha and Beta observed their own states of mind (i.e. their own brain states). Combining this information with other, non-indexical, information allowed them to draw conclusions about non-indexical states of affairs that they could not have drawn without the indexical information obtained from observing their own states of mind. But there was no physical randomization mechanism at work analogous to selecting a ball from an urn.
Not that it is unproblematic how such reasoning can be justified or explained—that is after all the subject matter of this book. However, SSA is what is used to get anthropic reasoning off the ground in the first place; so the discovery that SSA leads to “observer-relative” chances, and that these chances arise without an identifiable randomization mechanism, is not something that should add new suspicions. It is merely a restatement of the assumption from which we started.
Conclusions
Leslie’s argument that anthropic reasoning gives rise to paradoxical observer-relative chances does not hold up to scrutiny. We argued that it rests on a sense/reference ambiguity and that when this ambiguity is resolved, the purported observer-relativity disappears. Several ways in which one could try to salvage Leslie’s conclusion were explored and it turned out that none of them would work.
We then considered an example where observers applying SSA end up disagreeing about the outcome of a coin toss. The observers’ disagreement depends on their having different information and is not paradoxical; there are completely trivial examples of the same kind of phenomenon. We also showed that (at least for a wide range of cases) this disparity in beliefs cannot be marshaled into a betting arrangement where all parties involved would expect to make a gain.
This example was given a physicalistic reformulation, showing that the observers’ disagreement does not imply some mysterious irreducible role for the observers’ consciousness. What does need to be presupposed, however, unless the situation be utterly trivialized, is SSA. This is not a finding that should be taken to cast doubt on anthropic reasoning. Rather, it simply elucidates one aspect of what SSA really means. The absence of the sort of paradoxical observer-relative chances that Leslie claimed to have found could even be taken to give some indirect support for SSA.
Appendix: The no-betting results
This appendix shows, for a quite general set of cases, that adopting and applying SSA does not lead rational agents to bet against one another.
Consider again the case where a fair coin is tossed and a different number of observers are created depending on how the coin falls. The people created as a result of the coin toss make up “the group”. In addition to these, there exists a set of people we call the “bookies”. Together, the people in the group and the bookies make up the set of people who are said to be “in the experiment”. To make the example more general, we also allow there to be (a possibly empty) set of observers who are not in the experiment (i.e. who are not bookies and are not in the group); we call these observers “outsiders”.
We introduce the following abbreviations:
Number of people in the group if coin falls heads = h
Number of people in the group if coin falls tails = t
Number of bookies = b
Number of outsiders = u
For “The coin fell heads”, write H
For “The coin fell tails”, write ¬H
For “I am in the group”, write G
For “I am a bookie”, write B
For “I am in the experiment (i.e. I’m either a bookie or in the group)”, write E
First we want to calculate P(H|G&E) and P(H|B&E), the probabilities that the group members and the bookies, respectively, should assign to the proposition that the coin fell heads. Since G implies E, and B implies E, we have P(H|G&E) = P(H|G) and P(H|B&E) = P(H|B). We can derive P(H|G) from the following equations:
P(H|G) = P(G|H) P(H) / P(G) (Bayes’ theorem)
P(G|H) = h / (h + b + u) (SSA)
P(G|¬H) = t / (t + b + u) (SSA)
P(H) = P(¬H) = 1/2 (Fair coin)
P(G) = P(G|H) P(H) + P(G|¬H) P(¬H) (Theorem)
This gives us

P(H|G&E) = h(t + b + u) / [h(t + b + u) + t(h + b + u)]

In analogous fashion, using P(B|H) = b / (h + b + u) and P(B|¬H) = b / (t + b + u), we get

P(H|B&E) = (t + b + u) / [(h + b + u) + (t + b + u)]
We see that P(H|B&E) is not in general equal to P(H|G&E). The bookies and the people in the group will arrive at different estimates of the probability of Heads. For instance, if we have the parameter values {h = 10, t = 1, b = 1, u = 10} we get P(H|G&E) ≈ 85% and P(H|B&E) ≈ 36%. In the limiting case when the number of outsiders is zero, {h = 10, t = 1, b = 1, u = 0}, we have P(H|G&E) ≈ 65% and P(H|B&E) ≈ 15%. In the opposite limiting case, when the number of outsiders is large, {h = 10, t = 1, b = 1, u → ∞}, we get P(H|G&E) ≈ 91% and P(H|B&E) = 50%. In general, we should expect the bookies and the group members to disagree about the outcome of the coin toss.
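The quoted figures follow directly from the two formulas above. Here is a small Python sketch reproducing them exactly (the function name is ours; a very large u stands in for the u → ∞ limit):

```python
from fractions import Fraction

def posteriors(h, t, b, u):
    """Posterior probability of Heads for a group member and for a
    bookie, per the SSA-derived formulas above (exact arithmetic)."""
    p_h_given_g = Fraction(h * (t + b + u),
                           h * (t + b + u) + t * (h + b + u))
    p_h_given_b = Fraction(t + b + u,
                           (h + b + u) + (t + b + u))
    return p_h_given_g, p_h_given_b

for params in [(10, 1, 1, 10), (10, 1, 1, 0), (10, 1, 1, 10**9)]:
    g, b = posteriors(*params)
    print(params, float(g), float(b))
# (10, 1, 1, 10)   -> ~0.851 and ~0.364
# (10, 1, 1, 0)    -> ~0.645 and ~0.154
# u very large     -> ~0.909 and ~0.500
```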
Now that we know the probabilities, we can check whether a bet occurs. There are two types of bet that we will consider. In a type 1 bet, a bookie bets against the group as a whole, and the group members bet against the set of bookies as a whole. In a type 2 bet, an individual bookie bets against an individual group member.
Let’s look at the type 1 bet first. The maximum amount $x that a person in the group is willing to pay to each bookie if the coin fell heads in order to get $1 from each bookie if it fell tails is given by
P(H|G)(–x)b + P(¬H|G)b = 0.
When calculating the rational odds for a bookie, we have to take into account the fact that depending on the outcome of the coin toss, the bookie will turn out to have betted against a greater or a smaller number of group members. Keeping this in mind, we can write down a condition for the minimum amount $y that a bookie has to receive (from every group member) if the coin fell heads in order to be willing to pay $1 (to every group member) if it fell tails:
P(H|B) y h + P(¬H|B)(–1) t = 0.
Solving these two fairness equations, we find that

x = y = t(h + b + u) / [h(t + b + u)]

which means that nobody expects to win from a bet of this kind.
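As a check on this solution, the following Python sketch (our own names; stakes and payouts as defined by the two fairness equations) computes both expected payoffs exactly and confirms that they vanish:

```python
from fractions import Fraction

def type1_expected_payoffs(h, t, b, u):
    """Expected payoffs at the stake x = y = t(h+b+u)/(h(t+b+u)),
    for a group member and for a bookie (type 1 bet)."""
    p_h_g = Fraction(h * (t + b + u), h * (t + b + u) + t * (h + b + u))
    p_h_b = Fraction(t + b + u, (h + b + u) + (t + b + u))
    x = Fraction(t * (h + b + u), h * (t + b + u))
    # Group member: pays x to each of the b bookies on Heads,
    # receives $1 from each of them on Tails.
    ev_group = p_h_g * (-x) * b + (1 - p_h_g) * b
    # Bookie: receives y = x from each of the h group members on Heads,
    # pays $1 to each of the t group members on Tails.
    ev_bookie = p_h_b * x * h + (1 - p_h_b) * (-1) * t
    return ev_group, ev_bookie

print(type1_expected_payoffs(10, 1, 1, 10))  # (Fraction(0, 1), Fraction(0, 1))
```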
Turning now to the type 2 bet, where individual bookies and individuals in the group bet directly against each other, we have to take into account an additional factor. To keep things simple, we assume that it is assured that all of the bookies get to make a type 2 bet and that no person in the group bets against more than one bookie. This implies that the number of bookies isn’t greater than the smallest number of group members that could have resulted from the coin toss; for otherwise there would be no guarantee that all bookies could bet against a unique group member. But this means that if the coin toss generated more than the smallest possible number of group members, a selection has to be made as to which of the group members get to bet against a bookie. Consequently, a group member who finds that she has been selected obtains reason for thinking that the coin fell in such a way as to maximize the proportion of group members that get selected to bet against a bookie. (The bookies’ probabilities remain the same as in the previous example.)
Let’s say that it is the Tails outcome that produces the smallest group. Let s denote the number of group members that are selected; since every bookie bets against exactly one group member, s = b, and we require that s ≤ t. Writing S for “I am selected”, we want to calculate the probability for the selected people in the group that the coin fell heads, i.e. P(H|G&E&S). Since S implies both G and E, we have P(H|G&E&S) = P(H|S). From
P(H|S) = P(S|H) P(H) / P(S) (Bayes’ theorem)
P(S|H) = s / (h + b + u) (SSA)
P(S|¬H) = s / (t + b + u) (SSA)
P(H) = P(¬H) = 1/2 (Fair coin)
P(S) = P(S|H)P(H) + P(S|¬H)P(¬H) (Theorem)
we then get

P(H|G&E&S) = (t + b + u) / [(h + b + u) + (t + b + u)]
Comparing this to the result for the type 1 bet, we see that P(H|G&E&S) = P(H|B&E). This means that the bookies and the group members that are selected now agree about the odds. So there is no possible bet between them for which both parties would calculate a positive expected payoff.
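As a sanity check on this equality, here is a minimal Monte Carlo sketch of the type 2 setup (parameter names follow the appendix; the uniform within-world sampling of “me” is again just SSA restated):

```python
import random

def type2_check(h=10, t=1, b=1, u=10, s=1, n_trials=2_000_000, seed=0):
    """Monte Carlo check that a selected group member's posterior for
    Heads equals the bookie's: (t+b+u)/(h+t+2b+2u)."""
    rng = random.Random(seed)
    n_sel, n_sel_heads = 0, 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        g = h if heads else t
        # SSA: 'me' is a random observer among group, bookies, outsiders.
        me = rng.randrange(g + b + u)
        in_group = me < g
        # s of the g group members are selected at random to bet,
        # so each group member is selected with probability s/g.
        selected = in_group and rng.randrange(g) < s
        if selected:
            n_sel += 1
            n_sel_heads += heads
    print(n_sel_heads / n_sel, (t + b + u) / (h + t + 2*b + 2*u))

type2_check()  # both ~0.364 for these parameter values
```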
We conclude that adopting SSA does not lead observers to place bets against each other. Whatever the number of outsiders, bookies, group members, and selected group members, there are no bets, either of type 1 or of type 2, from which all parties should expect to gain.