
## Bayes’ Theorem: what is this “background information”?

I laid down the basics of probability theory and Bayes’ theorem in a previous post. Here’s the story, as a reminder. We have some background information B and some data D. We want to know: what is the probability that some theory T is true, given the background information and the data? We write this as $p(T | DB)$, and expand it using Bayes’ theorem:

$p(T | DB) = \frac{p(D | TB) p(T | B)} {p(D | B)}$
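To make the pieces of the theorem concrete, here is a minimal numerical sketch in Python. The numbers are invented for illustration, and the function name `posterior` is my own; the theory T is compared against its negation, with the marginal likelihood $p(D | B)$ computed by the law of total probability.

```python
# A minimal numerical sketch of Bayes' theorem (names and numbers are
# illustrative, not standard). Everything is implicitly conditioned
# on the background information B.

def posterior(likelihood, prior, likelihood_alt, prior_alt):
    """p(T | DB), comparing a theory T against its negation ~T.

    likelihood     = p(D | TB)     prior     = p(T | B)
    likelihood_alt = p(D | ~TB)    prior_alt = p(~T | B)
    """
    # Marginal likelihood p(D | B), by the law of total probability
    evidence = likelihood * prior + likelihood_alt * prior_alt
    return likelihood * prior / evidence

print(round(posterior(0.8, 0.3, 0.2, 0.7), 3))  # 0.632
```

Note that the marginal likelihood is just a normalising constant: it guarantees that the posteriors for T and ~T sum to one.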

However, this “background information” is a little vague. What puts something in the background? How much background do I need to dig up? How do we divide our knowledge into background information B and data D? Does it matter? Is it that background information is being assumed as known, while for the data, as with all measurements, we must acknowledge a degree of uncertainty?

Here are a few things about background information.

1. Tell me everything

2. The posterior views data and background equally

3. Calculate probabilities of data, with background information

4. Both background and data are taken as given

5. You should divide K cleanly

### 1. Tell me everything

The question is this: given everything I know $K$, what is the probability that some statement $T$ is true? The idea is that a rational thinker, in evaluating some statement $T$, will take into account everything they know. Remember that one of the desiderata of probability theory, taken as a rational approach to reasoning with uncertainty, is that information must not be arbitrarily ignored. In principle, everything we know should be in $K$ somewhere. So tell me everything.

In practice, thankfully, irrelevant information can be ignored as it will factor out anyway (Point 5). That gives us the definition of “relevant” in probability theory: a statement is relevant if including it as given changes our probabilities.
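As a sketch of this definition of relevance, here is a small made-up joint distribution in which a statement R is independent of T, so including R as given leaves the probability of T unchanged (the names and numbers are mine, purely for illustration):

```python
# A sketch of "relevance": a statement R is irrelevant to T when
# conditioning on R leaves the probability of T unchanged, i.e.
# p(T | RB) = p(T | B). The joint table below is made up so that
# T and R are independent (the background B is left implicit).

joint = {  # p(T, R): keys are (T true?, R true?)
    (True,  True):  0.12, (True,  False): 0.18,
    (False, True):  0.28, (False, False): 0.42,
}

def p(event):
    """Probability that a predicate over outcomes holds."""
    return sum(pr for outcome, pr in joint.items() if event(outcome))

p_T         = p(lambda o: o[0])                    # p(T)
p_R         = p(lambda o: o[1])                    # p(R)
p_T_given_R = p(lambda o: o[0] and o[1]) / p_R     # p(T | R)

print(round(p_T, 3), round(p_T_given_R, 3))  # 0.3 0.3: R is irrelevant
```

Change any one entry of the table and the two printed numbers will generally differ: R would then be relevant to T.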

### 2. The posterior views data and background equally

Why, then, have we decided to break up everything we know $K$ into “data” and “background” $DB$? (Remember: DB means “both D and B are true”). The reason is that probabilities don’t grow on trees. If we had a black box that handed out posteriors $p(T | K)$ for any information K and theory T we care to think of, then we wouldn’t need to worry about Bayes’ theorem or background and data. Remember: the whole point of probability identities is to take the probability we want and write it in terms of probabilities we have.

If K contains a great many statements (as it always does) then there will be multiple ways to divide it into “background” and “data”. We could have $K = D_1 B_1$, or $K = D_2 B_2$, or … . It is one of the desiderata of probability theory that identical states of knowledge should result in identical assigned probabilities. Since knowing $D_1$ and $B_1$ is the same state of knowledge as knowing $D_2$ and $B_2$, the posterior must be the same. If you like, $K = D_1 B_1 = D_2 B_2 \ldots$. In particular, $D B = B D$. So take heart: there is no right way and wrong way to divide K. There are, however, easier and harder ways.

### 3. Calculate probabilities of data, with background information

There is a unique posterior probability of T, given K. Thus, any division of K into D and B that allows us to calculate the likelihood, prior and marginal likelihood (i.e. the right hand side of Bayes’ theorem) will do. The question to ask is: how can I divide K into parts such that I can calculate the probability of one part with the other part (and T)? That’s all there is to it – call the first part the “data” and the second part the “background”.

These names – background and data – are simply the way it usually works out in practice. I am in principle free to reverse the labels of the background and data. In practice, however, I will usually be unable to calculate the terms of Bayes’ theorem in that case.

### 4. Both background and data are taken as given

Looking again at the terms of Bayes’ theorem, notice that the background information appears as a given in each term, while two of the terms consider the probability of the data. Does this mean that, while the uncertainty of the data is taken into account, the background is treated as certain for the sake of this calculation?

The discussion above shows that the background and data are simply labels, and thus they cannot have a different status in the calculation. In particular, nothing would change in principle if I reversed the labels.

The posterior $p(T | DB)$ shows that both the data and the background are taken as given. The posterior answers the question of any rational enquiry – what follows from what I know, and with what degree of certainty? Calculating the probability of D given TB (the likelihood) doesn’t imply that we are treating D as uncertain. Rather, it is really T we are probing. We are testing the strength of the connection between T and D by asking: “how probable would D be if all I knew were T and B?”. I’ll have more to say about this in a future post.

### 5. You should divide K cleanly

I’ve been assuming that, when apportioning everything you know K into B and D, there is no overlap between the pieces. This is a good idea: not because overlap would be wrong – as long as everything is in there somewhere, you’re OK – but because you’d be wasting your time. Here’s what happens.

Suppose:

$B = B'~C$

$D = D'~C$

where C represents the common statement which you’ve unwisely left in both B and D. Recall that, in a Boolean algebra, AA = A; that is, A is true if and only if “A and A” is true. Now, here’s what happens in Bayes’ theorem, for any theory T:

$p(T | DB) = \frac{p(D | TB) p(T | B)} {p(D | B)}$   (Bayes’ theorem)

$= \frac{p(D'C | TB'C) p(T | B)} {p(D'C | B'C)}$      (substituting from above)

$= \frac{p(D' | TB'C) p(C | TB'C) p(T | B)} {p(D' | B'C) p(C | B'C)}$ (product rule, CC = C)

$= \frac{p(D' | TB'C) p(T | B)} {p(D' | B'C)}$            ($p(C | TB'C) = p(C | B'C) = 1$)

$= p(T | D'B)$

Since D and B are just labels, I could swap them and prove that $p(T | DB) = p(T | DB')$. In other words, if there is an overlap between B and D, you’ll just end up doing the calculations as if you’d put the overlap C into one or the other. So you might as well divide K cleanly to start with.
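For the sceptical, the cancellation can be checked numerically. The sketch below (entirely my own construction, for illustration) builds a random joint distribution over T, D′ and C, with B′ left implicit, and confirms that running Bayes’ theorem with the redundant C kept in both data and background gives the same posterior as simply conditioning on D′ and C once:

```python
# Numerical check of the cancellation above, with a made-up joint
# distribution over (T, D', C). The "long way" keeps the redundant C
# in both the data and the background; the "short way" conditions on
# D' and C once. They must agree.

import itertools
import random

random.seed(0)
outcomes = list(itertools.product([True, False], repeat=3))  # (T, D', C)
weights = [random.random() for _ in outcomes]
total = sum(weights)
joint = {o: w / total for o, w in zip(outcomes, weights)}

def p(event, given=lambda o: True):
    """Conditional probability p(event | given) under the joint table."""
    num = sum(pr for o, pr in joint.items() if event(o) and given(o))
    den = sum(pr for o, pr in joint.items() if given(o))
    return num / den

T  = lambda o: o[0]
Dp = lambda o: o[1]   # D'
C  = lambda o: o[2]

# Long way: Bayes' theorem with D = D'C as data and C also in background
lhs = p(lambda o: Dp(o) and C(o), given=lambda o: T(o) and C(o)) \
      * p(T, given=C) \
      / p(lambda o: Dp(o) and C(o), given=C)

# Short way: the overlap removed, p(T | D'C)
rhs = p(T, given=lambda o: Dp(o) and C(o))

assert abs(lhs - rhs) < 1e-12  # passes: the C terms cancel exactly
```

Because the joint table is random, the agreement isn’t a coincidence of the chosen numbers; re-run with any seed and the assertion still holds.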
