
## Richard Carrier: One Liners, Part 3

Continuing my response to Carrier (here’s Part 1 and Part 2).

### Part Four: The Real Heart of the Matter

Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other.

Wow! Two “studies”! (In academia, we call them “papers”. Though neither was published in a peer-reviewed journal, so perhaps “articles”.) Three mathematicians! Except that Elliott Sober is a philosopher (and a fine one), not a mathematician – he has never published a paper in a mathematics journal. More grasping at straws.

Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn’t.

Those sentences fail Bayesian Probability 101. Prior probabilities are probabilities of hypotheses. Always. In every probability textbook there has ever been.[1] Probabilities of data given a hypothesis – such as the probability that this universe contains observers given naturalism – are called likelihoods. So, there is the prior probability of naturalism, and there is the likelihood of observers given naturalism, but there is no such thing as the “prior probability of observers”.

This is not a harmless slip in terminology. Carrier treats a likelihood as if it were a prior. He has confused the concepts, not just the names. Carrier states that “the only way the prior probability of observers can be low, is if the prior probability of observers is high on some alternative hypothesis.”[2] This is true of prior probabilities, but it is not true of likelihoods. Put another way: likelihoods are not normalised with respect to hypotheses. They are normalised with respect to evidence: p(e|h.b) + p(~e|h.b) = 1.

It follows that this entire section on the “prior probability of observers” and the need to consider “some alternative hypothesis” is garbage. There is simply no argument to respond to, only a hopeless mess of Carrier’s confusions. It’s an extended discussion about prior probabilities from a guy who doesn’t know what a prior probability is. Given that he has previously confused priors and posteriors, he’s zero from three on the fundamentals of Bayes theorem. You cannot keep getting the basics of probability theory wrong and expect to be taken seriously.

Technical details: For any hypothesis h, and its negation ~h (which we can think of as the disjunction or union of all alternatives to h), p(h|b) + p(~h|b) = 1. So, the prior p(h|b) is small if and only if p(~h|b) is large, and vice versa. The same applies to posteriors: p(h|e.b) + p(~h|e.b) = 1. But there is no corresponding rule for likelihoods and hypotheses: that p(e|h.b) is small does not imply that p(e|~h.b) is large. “p(e|h.b) + p(e|~h.b) = 1” is not an identity of probability theory.
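To see the difference concretely, here is a minimal numerical sketch (the numbers are purely illustrative, chosen only to make the point):

```python
# Priors over a hypothesis h and its negation ~h must sum to 1.
p_h = 0.3
p_not_h = 1 - p_h                      # forced: p(h|b) + p(~h|b) = 1

# Likelihoods are normalised over the EVIDENCE, not over hypotheses:
# for each hypothesis, p(e|h.b) + p(~e|h.b) = 1.
p_e_given_h = 0.01
p_not_e_given_h = 1 - p_e_given_h      # = 0.99

p_e_given_not_h = 0.02                 # may ALSO be small...
p_not_e_given_not_h = 1 - p_e_given_not_h

# ...so p(e|h.b) + p(e|~h.b) need not equal 1.
print(round(p_e_given_h + p_e_given_not_h, 2))  # 0.03, not 1
```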

This is where note 23 in my chapter comes in … Barnes never mentions this argument and never responds to this argument.

Addressed in Part 2, under “Bayes’ Theorem Omits Redundancies” and Part 4, under “The Main Attraction” and “My Reply”. I’ve put Carrier’s argument in mathematical notation, so it should be easy to demonstrate where my response falls short. No such demonstration is forthcoming, only repetition.

… [when you] remove even our knowledge of ourselves existing from b [the background evidence]. You end up making statements about universes without observers in them. Which can never be observed. … Either you are making statements about universes that have a ZERO% chance of being observed (and therefore cannot be true of our universe), or you are making statements that are 100% guaranteed to be observed.

This is exactly the point I discussed in detail in Part 4. Since Bayes theorem is an identity – that is, it can be used with any propositions – moving a particular fact between e and b can never be wrong. Carrier’s objections must be mistaken, since you can’t fight a mathematical identity.
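The identity can be checked numerically. The sketch below uses a made-up joint distribution over a hypothesis h and two facts e1, e2; it computes the posterior with both facts in the evidence, then again with e2 folded into the background first, and confirms the two agree:

```python
# Arbitrary toy joint distribution over (h, e1, e2); probabilities sum to 1.
joint = {
    (True,  True,  True):  0.20,
    (True,  True,  False): 0.05,
    (True,  False, True):  0.10,
    (True,  False, False): 0.15,
    (False, True,  True):  0.02,
    (False, True,  False): 0.08,
    (False, False, True):  0.18,
    (False, False, False): 0.22,
}

def p(pred):
    """Probability of the event picked out by pred(h, e1, e2)."""
    return sum(q for k, q in joint.items() if pred(*k))

# One step: p(h | e1.e2), both facts treated as evidence.
one_step = p(lambda h, e1, e2: h and e1 and e2) / p(lambda h, e1, e2: e1 and e2)

# Two steps: move e2 into the background b, then update on e1 alone.
p_h_given_e2 = p(lambda h, e1, e2: h and e2) / p(lambda h, e1, e2: e2)
likelihood   = p(lambda h, e1, e2: h and e1 and e2) / p(lambda h, e1, e2: h and e2)
marginal     = p(lambda h, e1, e2: e1 and e2) / p(lambda h, e1, e2: e2)
two_step = likelihood * p_h_given_e2 / marginal

assert abs(one_step - two_step) < 1e-12  # identical, as the identity demands
```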

And we can see where they are mistaken. In Bayesian probability theory, hypotheses are penalised for declaring as “highly likely” statements that are in fact false. For example, the hypothesis “the burglar guessed the 12-digit combination to the safe” implies that it is highly likely that the burglar didn’t open the safe. It is heavily penalised, then, if security camera footage shows the burglar opening the safe on the first attempt. We end up talking about burglars who didn’t open the safe because those kinds of burglars are the most likely on the stated hypothesis.

If naturalism implies that, given only that a universe exists, it is highly likely that the universe does not contain life forms, then it is heavily penalised by the falsity of that statement. (We all understand background information, right?) We end up talking about universes without observers because those kinds of universes are the most likely on naturalism. The fact that they cannot be observed does not matter; likelihoods are normalised over an exhaustive set of possibilities, not merely over the set of observable possibilities. Any possible state of affairs to which your theory assigns probability is relevant, observable or not, because there is only 1 unit of probability to go round. (This is the prior vs. likelihood blunder coming home to roost.) If a large part of your likelihood is assigned to false statements, then your theory is penalised. That’s the fine-tuning argument.
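The burglar example can be run as a calculation. The priors below are an illustrative 50/50 split, and the rival hypothesis (“the burglar knew the code”) is assumed to make opening the safe certain:

```python
p_guess = 0.5                 # prior: burglar guessed at the 12-digit code
p_knew  = 0.5                 # prior: burglar knew the code

p_open_given_guess = 1e-12    # chance a random 12-digit guess opens the safe
p_open_given_knew  = 1.0      # knowing the code, opening is certain

# Evidence: the footage shows the safe opened on the first attempt.
marginal = p_open_given_guess * p_guess + p_open_given_knew * p_knew
posterior_guess = p_open_given_guess * p_guess / marginal

print(posterior_guess)        # ~1e-12: the guessing hypothesis is crushed
```

The hypothesis assigned almost all of its likelihood to “safe stays shut”, a statement the footage falsified, and the posterior punishes it accordingly.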

### Summary

Let’s recap some highlights of these three posts.

• Carrier has not addressed the charge of inconsistency with probability theory. In fact, he has given more examples of inconsistency by introducing “hypothetical reference classes”. He has not addressed the reference class problem.
• He has made up probability concepts that no one has ever heard of before, including “transfinite frequentism” and “existential probability calculus”.
• He has abandoned his previous claim that “all the scientific models we have … show life-bearing universes to be a common result of random universe variation, not a rare one.”
• He completely misunderstands my rather obvious point that “for a given possible universe, we specify the physics”, and in so doing, shows that he does not understand fine-tuning at its most basic level.
• And, finally, Carrier’s argument regarding the “Real Heart of the Matter” is rendered meaningless by a deep misunderstanding of probability theory’s basics.

Carrier, demonstrably, understands neither probability theory nor fine-tuning.

Barring a minor miracle, my next post will be my last about Richard Carrier. I’ll explain why there.

### Footnotes

1. I’m taking the term “hypotheses” in a general sense, so that it could include the hypothesis that an unknown parameter has a certain value. That is, priors can be distributions over unknown parameters.
2. This talk of “some alternative hypothesis” precludes the possibility that Carrier is actually referring to p(e|b), the marginal likelihood. If “e = this universe contains observers”, then p(e|b) could – I suppose – be referred to as the prior probability of observers, though no one would and Carrier’s argument would still be wrong.

### 11 Responses

1. That he can’t get his basic terminology right certainly doesn’t inspire confidence. Unless I’ve misunderstood your point here though, I think we can make sense of his discussion if we take him to be talking about the likelihoods rather than prior probabilities. It doesn’t change the fact that the prior he uses in his version of the Ikeda & Jeffreys (IJ) argument is wrong though.

So, the IJ argument can be modeled by p(N|FB) = p(F|NB) p(N|B)/p(F|B), where B contains the fact that observers exist (O). And if I understand correctly, all parties agree that since p(F|NB) = 1, p(N|FB) >= p(N|B).

We can always split apart our set of background propositions so p(N|B) = p(N|OB’) = p(O|NB’) p(N|B’)/(p(O|NB’) p(N|B’) + p(O|~NB’) p(~N|B’)). B’ is B without proposition O. There is the likelihood p(O|NB’), but its smallness will only matter in relationship to p(O|~NB’). If p(O|~NB’) is identically small then p(N|OB’) = p(N|B’). In other words, N will not be penalized after all. N will only be penalized if p(O|~NB’) > p(O|NB’). Carrier denies that we know this.

The problem is, his chapter (footnote 8, I think) has given us values for most of the numbers we need (though I don’t think he realizes that). Via the principle of indifference he has set p(N|B’) = p(~N|B’) = 0.5. So, even if we accept his claim that we know nothing of the actual difference between p(O|NB’) and p(O|~NB’), and therefore should consider them equal, then we see he should have been using p(N|B) = 0.5 in the IJ model. Instead he has used p(N|B) = 0.75 and p(~N|B) = 0.25.

From the same footnote we can infer that he thinks p(O|~NB’) = 0.5 (at least I think that’s what it says; it is ambiguous, imo). If we are to take his other comments seriously then he is also saying p(O|NB’) = 0.5.
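That arithmetic can be checked directly; the inputs below are just my reading of his footnote, not established values:

```python
p_N_prior      = 0.5   # p(N|B')  via the principle of indifference
p_notN_prior   = 0.5   # p(~N|B')
p_O_given_N    = 0.5   # p(O|NB'), as read from the footnote
p_O_given_notN = 0.5   # p(O|~NB')

# p(N|B) = p(N|OB') by the expansion above
p_N_B = (p_O_given_N * p_N_prior) / (
    p_O_given_N * p_N_prior + p_O_given_notN * p_notN_prior
)

print(p_N_B)  # 0.5 - not the 0.75 actually used in the IJ model
```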

I’m more sympathetic* than you (but less sympathetic than Carrier) to the idea that we don’t really know that p(O|NB’) differs much from p(O|~NB’), but that doesn’t change the fact that he has ignored his own arguments when deriving his prior for p(~N|B).

*I’m more sympathetic to the idea for a reason similar to one of the criticisms of frequentism. No finite series puts any constraints on the limiting relative frequency of the infinite series. Similarly, it would seem that the likelihood in some subset of what is surely an infinity of possible laws should not put any constraint on the likelihood as estimated against that full set of possible laws. Now, on the other hand, my intuition says that surely there are a lot of possible laws that go nowhere, but maybe there are a lot that make life too. Anyway, maybe I’ve misunderstood what the physics research has shown, but thought I’d throw my 2 cents in there to close.

2. […] Part 2 is here. Part 3 is here. […]

3. […] Part 3 is here. […]

4. So Carrier has responded, and part of his response is that you, Luke Barnes, are a kook because you believe hypothetical situations can’t be added to a reference class. He says:

“Because if he actually thinks you can’t put hypothetical possibilities in a reference class, then he must conclude the fine tuning argument invalid. Because if you can’t put hypothetical possibilities in a reference class, you can’t put hypothetical universes in a reference class. And if you can’t put hypothetical universes in a reference class, you can’t make any claim about the frequency of those universes that would or would not bear life.”

I could be wrong, but I think that’s a rather bizarre misreading. I left a comment on the post, and if it mischaracterizes your position in any way, Luke, I’ll be happy to correct myself.

The comment follows:

Richard, I think it’s crystal clear from Luke’s posts that he does not think “you can’t put hypothetical possibilities in a reference class, then he must conclude the fine tuning argument invalid.” In fact he’s pretty clear he thinks the EXACT opposite of that. Rather, he’s saying that’s YOUR position, because you are claiming probability only measures the frequency of things happening or of things being true. Here’s the paragraph:

“Carrier’s approach to probability is inconsistent. He keeps shifting the goalposts. In TEC, when talking about a cosmic designer, he says “Probability measures frequency (whether of things happening or of things being true)”. Only known cases, verified by science, can be allowed in a reference class. But now, in OBR, it’s OK to put hypothetical possibilities in a reference class.”

I invite Luke to come and correct me if I’ve misstated his position here at all.

That being said, it is, however, impressive that you took the time to respond to him at all, and I think we’re all grateful for your time in doing so. But really, take a deep breath next time before responding.

• Pretty much spot on, I’d say.

• This is also a good example of why it is so hard to get anywhere in this discussion. You start with what is just a basic application of Bayes theorem in TEC, and you end up having to interpret arguments such as:

Because if he actually thinks you can’t put hypothetical possibilities in a reference class, then he must conclude the fine tuning argument invalid.

So instead of using Bayes theorem to clarify an argument, it is used as a vehicle to bring in very dense, poorly defined and seemingly changing terminology about what probabilities ought to mean (and supposedly, as shown in “Proving History”, can only mean), and subsequent discussion in that terminology is translated back into assertions about what one THEN must commit to believing (for instance, in the above, Luke “must” conclude the fine-tuning argument is invalid).

5. Speaking of publishing too quickly, that should read: … “I think it’s crystal clear from Luke’s posts that he does not think “you can’t put hypothetical possibilities in a reference class.” Full stop.

6. […] Richard Carrier: One Liners, Part 3 […]

7. […] sometimes using probability, and Barnes responded with two posts mainly on the fine-tuning and two (here and here) on probability – which is what I’m focusing on […]