## Coincidences and the Lottery

November 29, 2010 by Brendon J. Brewer

Coincidences happen surprisingly often. Yet they are often not meaningful, i.e. they are "just a coincidence" and do not imply that we should change our worldview. For example, suppose there are a million people in contention for a lottery, and John Smith turns out to be the winner. Before knowing this, our probability for it is:

$$P(\text{John Smith wins} \mid \text{fair}) = \frac{1}{10^6}.$$

People often recoil from this tiny probability, and proclaim something like "it's not the probability of John Smith winning the lottery that is relevant, but the probability that *someone* wins". However, this is anti-Bayesian nonsense. This tiny probability is, by Bayes' rule, relevant for getting a posterior probability for the hypothesis that the lottery was fair. So how is it that we often still believe in the fair lottery (or that a coincidence is not meaningful)?
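To spell out the Bayes' rule step (the notation here is mine, not from the original argument: $F$ for "the lottery was fair", $R$ for "it was rigged", $D$ for "John Smith won"):

$$P(F \mid D) = \frac{P(D \mid F)\,P(F)}{P(D \mid F)\,P(F) + P(D \mid R)\,P(R)}.$$

The tiny likelihood $P(D \mid F) = 10^{-6}$ does appear in the numerator, but whether it drags the posterior down depends entirely on how it compares with the rival likelihood $P(D \mid R)$.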

The answer is quite simple: the likelihood under the alternative hypothesis, that the lottery was rigged, is just as small:

$$P(\text{John Smith wins} \mid \text{rigged}) = \frac{1}{10^6}.$$

The reason is that before we knew who won, we had no reason to single out John Smith, and had to spread the total probability (1) over a million sub-hypotheses (that the lottery was rigged in favor of each particular entrant), leaving the rigged-in-favor-of-John-Smith possibility with probability of only about $10^{-6}$. Using analogous reasoning: yes, coincidences have tiny probability, but they also have tiny probability given the hypothesis of a mysterious force operating, because before the coincidence happened we didn't know *which* of the multitude of possible coincidences was going to occur.
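The argument above can be checked numerically. This is a minimal sketch, assuming (as an illustration, not from the original post) a generous 50% prior on the rigged hypothesis:

```python
# Numerical check of the lottery argument.
# The variable names and the 0.5 prior are illustrative assumptions.
N = 1_000_000  # number of entrants

# Likelihood of "John Smith wins" under the fair hypothesis.
p_win_given_fair = 1 / N

# Under "rigged", we didn't know in advance who it was rigged for,
# so the same probability is spread over all N entrants.
p_win_given_rigged = 1 / N

prior_rigged = 0.5  # generous prior, for illustration
prior_fair = 1 - prior_rigged

# Bayes' rule: posterior is proportional to prior * likelihood.
evidence = (prior_fair * p_win_given_fair
            + prior_rigged * p_win_given_rigged)
posterior_fair = prior_fair * p_win_given_fair / evidence

print(posterior_fair)  # 0.5 -- the two tiny likelihoods cancel exactly
```

Because the likelihoods are equal, the tiny numbers cancel and the posterior equals the prior: observing John Smith win tells us nothing about whether the lottery was fair.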

For more on this topic, you may be interested in this paper (by myself and Matt).


### Pingbacks

- *10 Nice things about Bayes' theorem* (Letters to Nature, October 26, 2013): "[…] has made this point quite nicely in a previous post, using the example of a lottery. Note, in particular, that we […]"
- *Reply to Maudlin: The Calibrated Cosmos* (Letters to Nature, November 13, 2013): "[…] The slogan I want to invoke here is 'don't treat a likelihood as if it were a posterior'. That's a bit too jargon-y. The likelihood is the probability of what we know, assuming that some theory is true. The posterior is the reverse – the probability of the theory, given what we know. It is the posterior that we really want, since it reflects our situation: the theory is uncertain, the data is known. The likelihood can help us calculate the posterior (using Bayes' theorem), but in and of itself, a small likelihood doesn't mean anything. The calculation Maudlin alludes to above is a likelihood: what is the probability that I would exist, given that the events that lead to my existence came about by chance? The reason that this small likelihood doesn't imply that the posterior – the probability of my existence by chance, given my existence – is small is that the theory has no comparable rivals. Brendon has explained this point elsewhere. […]"
- *Methinks it is like a weasel* (Plausibility Theory, April 14, 2016): "[…] super-simple example will help here (I've used this example before, and it's basically Ed Jaynes' 'sure thing hypothesis'). Consider a lottery […]"