More about Bayes’ theorem; an introduction was given here. Once again, I’m not claiming any originality.
You can’t save a theory by stapling some data to it, even though this will improve its likelihood. Let’s consider an example.
Suppose, having walked into my kitchen, I know a few things.
D1 = There is a cake in my kitchen.
D2 = The cake has “Happy Birthday Luke!” on it, written in icing.
B = My name is Luke + Today is my birthday + whatever else I knew before walking into the kitchen.
Obviously, p(D1 | D2 B) = 1, i.e. D2 presupposes D1. Now, consider two theories of how the cake got there.
W = my wife made me a birthday cake.
A = a cake was accidentally delivered to my house.
Consider the likelihoods of these two theories. Using the product rule, we can write:

p(D1 D2 | W B) = p(D2 | D1 W B) × p(D1 | W B)
p(D1 D2 | A B) = p(D2 | D1 A B) × p(D1 | A B)

Both theories are equally able to place a cake in my kitchen, so p(D1 | W B) ≈ p(D1 | A B). However, a cake made by my wife on my birthday is likely to have “Happy Birthday Luke!” on it, while a cake chosen essentially at random could have anything or nothing at all written on it. Thus, p(D2 | D1 W B) ≫ p(D2 | D1 A B). This implies that p(D1 D2 | W B) ≫ p(D1 D2 | A B), and the probability of W has increased relative to A since learning D1 and D2.
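As a sanity check, the product-rule comparison can be run with made-up numbers. The specific probabilities below are illustrative assumptions of mine, not part of the argument; only their qualitative pattern matters.

```python
# Illustrative (assumed) probabilities for the two theories.
p_D1_given_WB = 0.9     # p(D1 | W B): the wife theory puts a cake in the kitchen
p_D1_given_AB = 0.9     # p(D1 | A B): accidental delivery does so equally well
p_D2_given_D1WB = 0.8   # p(D2 | D1 W B): a wife-made birthday cake likely says this
p_D2_given_D1AB = 1e-6  # p(D2 | D1 A B): a random cake almost never says this

# Product rule: p(D1 D2 | theory, B) = p(D2 | D1 theory B) * p(D1 | theory B)
likelihood_W = p_D2_given_D1WB * p_D1_given_WB  # roughly 0.72
likelihood_A = p_D2_given_D1AB * p_D1_given_AB  # roughly 9e-7

# W is favoured by the data by a huge factor.
print(likelihood_W / likelihood_A)
```

Whatever exact numbers one assumes, so long as the two theories place a cake in the kitchen about equally well, the comparison is driven by how well each explains the inscription.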
So far, so good, and hopefully rather obvious. Let’s look at two ways to try to derail the Bayesian account.
Details Details
Before some ad hoc-ery, consider the following objection. We know more than D1 and D2, one might say. We also know,

D3 = there is a swirly border of piped icing on the cake, with a precisely measured pattern and width.
Now, there is no reason to expect my wife to make me a cake with that exact pattern, so our likelihood takes a hit:

p(D1 D2 D3 | W B) ≪ p(D1 D2 | W B)

Alas! Does the theory that my wife made the cake become less and less likely, the closer I look at the cake? No, because there is no reason for an accidentally delivered cake to have that pattern, either. Thus,

p(D3 | D1 D2 W B) ≈ p(D3 | D1 D2 A B)

And so it remains true that,

p(D1 D2 D3 | W B) ≫ p(D1 D2 D3 | A B)
and the wife hypothesis remains the preferred theory. This is point 5 from my “10 nice things about Bayes’ Theorem” – ambiguous information doesn’t change anything. Additional information that lowers the likelihood of a theory doesn’t necessarily make the theory less likely to be true. It depends on its effect on the rival theories.
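The point can be seen numerically with the same style of toy numbers (again, assumed values): however small the probability of the exact pattern is, as long as it is the same under both theories, the ratio of the likelihoods, and hence the posterior comparison, is untouched.

```python
# Assumed likelihoods of D1 D2 under each theory (toy values, for illustration).
likelihood_W = 0.72   # p(D1 D2 | W B)
likelihood_A = 9e-7   # p(D1 D2 | A B)

# The precise icing pattern D3 is equally (im)probable under both theories.
p_D3 = 1e-4           # p(D3 | D1 D2 W B) = p(D3 | D1 D2 A B)

new_likelihood_W = likelihood_W * p_D3  # both likelihoods take the same hit...
new_likelihood_A = likelihood_A * p_D3

# ...so the ratio that drives the posterior comparison is unchanged.
ratio_before = likelihood_W / likelihood_A
ratio_after = new_likelihood_W / new_likelihood_A
print(ratio_before, ratio_after)
```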
Ad Hoc Theories
What if we crafted another hypothesis, one that could better handle the data? Consider this theory.
A′ = a cake with “Happy Birthday Luke!” on it was accidentally delivered to my house.
Unlike A, A′ can explain both D1 and D2. Thus, the likelihoods of A′ and W are about equal: p(D1 D2 | A′ B) ≈ p(D1 D2 | W B). Does the fact that I can modify my theory to give it a near-perfect likelihood sabotage the Bayesian approach?
Intuitively, we would think that however unlikely it is that a cake would be accidentally delivered to my house, it is much less likely that it would be delivered to my house and have “Happy Birthday Luke!” on it. We can show this more formally, since A′ is a conjunction of propositions, A′ = A X, where

X = The cake has “Happy Birthday Luke!” on it, written in icing.

But the statement X is simply the statement D2. Thus A′ = A D2. Recall that, for Bayes’ Theorem, what matters is the product of the likelihood and the prior. Thus,

p(D1 D2 | A′ B) × p(A′ | B) = p(D1 D2 | A D2 B) × p(A D2 | B)
= p(D1 D2 A D2 | B)
= p(D1 D2 A | B)
= p(D1 D2 | A B) × p(A | B)
Thus, the product of the likelihood and the prior is the same for the ad hoc theory A′ and the original theory A. You can’t win the Bayesian game by stapling the data to your theory. Ad hoc theories, by purchasing a better likelihood at the expense of a worse prior, get you nowhere in Bayes’ theorem. It’s the postulates that matter. Bayes’ Theorem is not distracted by data smuggled into the hypothesis.
Too strong?
While all this is nice, it does assume rather strong conditions. It requires that the theory in question explicitly includes the evidence. If we look closely at the statements that make up A′, we will find D2 amongst them, i.e. we can write the theory as A′ = A D2. A theory can be jerry-rigged without being this obvious. I’ll have a closer look at this in a later post.