
## Coincidences and the Lottery

Coincidences happen surprisingly often. Yet they are usually not meaningful: they are “just a coincidence” and do not imply that we should change our worldview. For example, suppose a million people enter a lottery, and John Smith turns out to be the winner. Before learning this, our probability for that outcome is $10^{-6}$:

$P(\textnormal{John Smith wins} | \textnormal{fair lottery}) = 10^{-6}$

People often balk at this tiny probability, proclaiming something like “it’s not the probability of John Smith winning the lottery that is relevant, but the probability that someone wins”. However, this is anti-Bayesian nonsense. By Bayes’ rule, this tiny probability is exactly what is relevant for computing a posterior probability for $\textnormal{fair lottery}$. So how is it that we often still believe in the fair lottery (or that a coincidence is not meaningful)?

The answer is quite simple: the likelihood under the alternative, $\textnormal{unfair lottery}$, hypothesis is just as small:
$P(\textnormal{John Smith wins} | \textnormal{unfair lottery}) = 10^{-6}$.
The reason is that before we knew who won, we had no reason to single out John Smith, and had to spread the total probability (1) over roughly a million alternatives (that the lottery was rigged in favor of any particular entrant). By analogous reasoning, yes, coincidences have tiny probability, but they also have tiny probability given the hypothesis of a mysterious force operating, because before the coincidence happened we didn’t know which of a multitude of possible coincidences was going to occur.
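The argument above can be checked with a few lines of arithmetic. This is a minimal sketch, not anything from the original post: the prior of 1% on the unfair-lottery hypothesis is an assumed number purely for illustration, and the key point is that when the two likelihoods are equal, the posterior comes out identical to the prior.

```python
# Bayes' rule for the lottery example.
# Assumed numbers for illustration: 10^6 entrants, and a prior that
# gives the "unfair lottery" hypothesis 1% probability.

n_entrants = 10**6
prior_fair = 0.99
prior_unfair = 0.01

# Likelihood of the data ("John Smith wins") under each hypothesis.
# Fair lottery: every entrant is equally likely to win.
like_fair = 1 / n_entrants
# Unfair lottery: before the draw, the rigging could have favored any
# entrant, so the chance it favored John Smith in particular is also
# about one in a million.
like_unfair = 1 / n_entrants

# Unnormalised posteriors, then normalise.
post_fair = prior_fair * like_fair
post_unfair = prior_unfair * like_unfair
total = post_fair + post_unfair
post_fair /= total
post_unfair /= total

print(post_fair)    # ≈ 0.99: unchanged from the prior
print(post_unfair)  # ≈ 0.01
```

Because the tiny factor of $10^{-6}$ multiplies both hypotheses, it cancels in the normalisation, and the data moves our beliefs not at all.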

For more on this topic, you may be interested in this paper (by myself and Matt).

### 3 Responses

1. […] has made this point quite nicely in a previous post, using the example of a lottery. Note, in particular, that we […]

2. […] The slogan I want to invoke here is “don’t treat a likelihood as if it were a posterior”. That’s a bit too jargon-y. The likelihood is the probability of what we know, assuming that some theory is true. The posterior is the reverse – the probability of the theory, given what we know. It is the posterior that we really want, since it reflects our situation: the theory is uncertain, the data is known. The likelihood can help us calculate the posterior (using Bayes’ theorem), but in and of itself, a small likelihood doesn’t mean anything. The calculation Maudlin alludes to above is a likelihood: what is the probability that I would exist, given that the events that led to my existence came about by chance? The reason that this small likelihood doesn’t imply that the posterior – the probability of my existence by chance, given my existence – is small is that the theory has no comparable rivals. Brendon has explained this point elsewhere. […]

3. […] super-simple example will help here (I’ve used this example before, and it’s basically Ed Jaynes’ “sure thing hypothesis”). Consider a lottery […]