Archive for the ‘Uncategorized’ Category


In what follows, I’ll consider Carrier’s claims about the mathematical foundations of probability theory. What Carrier says about probability is at odds with every probability textbook (or lecture notes) I can find. He rejects the foundations of probability laid by frequentists (e.g. Kolmogorov’s axioms) and Bayesians (e.g. Cox’s theorem). He is neither, because we’re all wrong – only Carrier knows how to do probability correctly. That’s why he has consistently refused my repeated requests to provide scholarly references – they do not exist. As such, Carrier cannot borrow the results and standing of modern probability theory. Until he has completed his revolution and published a rigorous mathematical account of Carrierian probability theory, all of his claims about probability are meaningless.

Carrier’s version of Probability Theory

I intend to demonstrate these claims, so we’ll start by quoting Carrier at length. I won’t be relying on previous posts. In TEC, Carrier says:

Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.

In OBR, he says:

[E]ver since the Principia Mathematica it has been an established fact that nearly all mathematics reduces to formal logic … The relevant probability theory can be deduced from Willard Arithmetic … anyone familiar with both Bayes’ Theorem (hereafter BT) and conditional logic (i.e. syllogisms constructed of if/then propositions) can see from what I show there [in Proving History] that BT indeed is reducible to a syllogism in conditional logic, where the statements of each probability-variable within the formula is a premise in formal logic, and the conclusion of the equation becomes the conclusion of the syllogism. In the simplest terms, “if P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y, then P(h|e.b) is z,” which is a logically necessary truth, becomes the concluding major premise, and “P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y” are the minor premises. And one can prove the major premise true by building syllogisms all the way down to the formal proof of BT, again by symbolic logic (which one can again replace with old-fashioned propositional logic if one were so inclined).

More specifically it is a form of argument, that is, a logical formula that describes a particular kind of argument. The form of this argument is logically valid. That is, its conclusion is necessarily true when its premises are true. Which means, if the three variables in BT are true (each representing a proposition about a probability, hence a premise in an argument), the epistemic probability that results is then a logically necessary truth. So, yes, Bayes’ Theorem is an argument.

He links to, and later shows, the following “Proof of Bayes Theorem … by symbolic logic”, saying that “the derivation of the theorem is this.”


For future reference, we’ll call this “The Proof”. Of his mathematical notation, Carrier says:

P(h|b) is symbolic notation for the proposition “the probability that a designated hypothesis is true given all available background knowledge but not the evidence to be examined is x,” where x is an assigned probability in the argument.

Like nothing we’ve ever seen

I have 13 probability textbooks/lecture notes open in front of me: Bain and Engelhardt; Jaynes (PDF); Wall and Jenkins; MacKay (PDF); Grinstead and Snell; Ash; Bertsekas and Tsitsiklis; Rosenthal; Bayer; Dembo; Sokol and Rønn-Nielsen; Venkatesh; Durrett; Tao. I recently stopped by Sydney University’s Library to pick up a book on nuclear reactions, and took the time to open another 15 textbooks. I’ve even checked some of the philosophy of probability literature, such as Antony Eagle’s collection of readings (highly recommended), Arnborg and Sjodin, Caticha, Colyvan, Hajek (who has a number of great papers on probability), and Maudlin.

When presenting the foundations of probability theory, these textbooks and articles roughly divide along Bayesian vs frequentist lines. The purely mathematical approach, typical of frequentist textbooks, begins by thinking about relative frequencies before introducing measure theory, explaining Kolmogorov’s axioms, motivating the definition of conditional probability, and then – in one line of algebra – giving “The Proof” of Bayes theorem. Says Mosteller, Rourke and Thomas: “At the mathematical level, there is hardly any disagreement about the foundations of probability … The foundation in set theory was laid in 1933 by the great Russian probabilist, A. Kolmogorov.” With this mathematical apparatus in hand, we use it to analyse relative frequencies of data.

Bayesians take a different approach (e.g. Probability Theory by Ed Jaynes). We start by thinking about modelling degrees of plausibility. The frequentist, quite rightly, asks what the foundations of this approach are. In particular, why think that degrees of plausibility should be modelled by probabilities? Why think that “plausibilities” can be mathematised at all, and why use Kolmogorov’s particular mathematical apparatus? Bayesians respond by motivating certain “desiderata of rationality”, and use these to prove via Cox’s theorem (or perhaps via de Finetti’s “Dutch Book” arguments) that degrees of plausibility obey the usual rules of probability. In particular, the product rule is proven, p(A and B | C) = p(A|B and C) p(B|C), from which Bayes theorem follows via “The Proof”.
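To make “The Proof” concrete, here is the standard one-line derivation, written in the same notation as the quotes above (symmetry of conjunction plus the product rule, then divide):

```latex
% The product rule, applied both ways to the same conjunction:
\begin{align}
p(A \wedge B \mid C) &= p(A \mid B \wedge C)\, p(B \mid C)
                      = p(B \mid A \wedge C)\, p(A \mid C) \\
\intertext{Divide by $p(B \mid C) \neq 0$ to obtain Bayes theorem:}
p(A \mid B \wedge C) &= \frac{p(B \mid A \wedge C)\, p(A \mid C)}{p(B \mid C)}
\end{align}
```

Substituting A = h, B = e, C = b, and expanding the denominator with the law of total probability, p(e|b) = p(e|h.b) p(h|b) + p(e|~h.b) p(~h|b), gives the familiar four-premise form quoted from TEC above.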

In precisely none of these textbooks and articles will you find anything like Carrier’s account. When presenting the foundations of probability theory in general and Bayes Theorem in particular, no one presents anything like Carrier’s version of probability theory. Do it yourself, if you have the time and resources. Get a textbook (some of the links above are to online PDFs), find the sections on the foundations of probability and Bayes Theorem, and compare to the quotes from Carrier above. In this company, Carrier’s version of probability theory is a total loner. We’ll see why. (more…)

Read Full Post »

Continuing my response to Carrier (here’s Part 1 and Part 2).

Part Four: The Real Heart of the Matter

Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other.

Wow! Two “studies”! (In academia, we call them “papers”. Though neither was published in a peer-reviewed journal, so perhaps “articles”.) Three mathematicians! Except that Elliott Sober is a philosopher (and a fine one), not a mathematician – he has never published a paper in a mathematics journal. More grasping at straws.


Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn’t.


Those sentences fail Bayesian Probability 101. Prior probabilities are probabilities of hypotheses. Always. In every probability textbook there has ever been.[1] Probabilities of data given a hypothesis – such as the probability that this universe contains observers given naturalism – are called likelihoods. So, there is the prior probability of naturalism, and there is the likelihood of observers given naturalism, but there is no such thing as the “prior probability of observers”.

This is not a harmless slip in terminology. Carrier treats a likelihood as if it were a prior. He has confused the concepts, not just the names. Carrier states that “the only way the prior probability of observers can be low, is if the prior probability of observers is high on some alternative hypothesis.”[2] This is true of prior probabilities, but it is not true of likelihoods. In the vernacular, likelihoods are not normalised with respect to hypotheses. They are normalised with respect to evidence: p(e|h.b) + p(~e|h.b) = 1.
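The distinction is easy to see with numbers. Here is a toy sketch (the probabilities are invented purely for illustration) of what normalises over what: priors sum to one across hypotheses, while likelihoods sum to one across the evidence for each fixed hypothesis.

```python
# Toy illustration (numbers invented for the example): priors normalise across
# hypotheses; likelihoods normalise across evidence, NOT across hypotheses.

priors = {"h": 0.3, "~h": 0.7}        # p(h|b), p(~h|b): must sum to 1
assert abs(sum(priors.values()) - 1.0) < 1e-12

# Likelihoods of the same evidence e under each hypothesis:
likelihood_e = {"h": 0.9, "~h": 0.8}  # p(e|h.b), p(e|~h.b)

# These need NOT sum to 1 across hypotheses:
print(sum(likelihood_e.values()))     # 1.7 -- perfectly legal

# What does sum to 1 is p(e|h.b) + p(~e|h.b), for each fixed hypothesis:
for h in priors:
    p_e, p_not_e = likelihood_e[h], 1.0 - likelihood_e[h]
    assert abs(p_e + p_not_e - 1.0) < 1e-12
```

So a likelihood can be low under one hypothesis without being high under any alternative; that constraint only binds priors (and posteriors).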

It follows that this entire section on the “prior probability of observers” and the need to consider “some alternative hypothesis” is garbage. There is simply no argument to respond to, only a hopeless mess of Carrier’s confusions. It’s an extended discussion about prior probabilities from a guy who doesn’t know what a prior probability is. Given that he has previously confused priors and posteriors, he’s zero from three on the fundamentals of Bayes theorem. You cannot keep getting the basics of probability theory wrong and expect to be taken seriously. (more…)

Read Full Post »

Looking for a romantic evening on (the day after) Valentine’s day? Why not try the Macarthur Astronomy Forum!

Location: Western Sydney University, Lecture theatre, Building 30

Date: Monday 15th February, 7.30 pm

Title: There is more to the Universe than its good looks.

Abstract: The planets, stars and galaxies that fill the night sky obey elegant mathematical patterns: the laws of nature. Why does our Universe obey these particular laws? As a clue to answering this question, scientists have asked a similar question: what if the laws were slightly different? What if the universe had begun with more matter, had heavier particles, or space had four dimensions?

In the last 30 years, scientists have discovered something astounding: the vast majority of these changes are disastrous. We end up with a universe containing no galaxies, no stars, no planets, no atoms, no molecules, and most importantly, no intelligent life-forms wondering what went wrong. This is called the fine-tuning of the universe for life. After explaining the science of what happens when you change the way our universe works, we will ask: what does all this mean?

Read Full Post »

Continuing my response to Carrier.

Part Three

Barnes claims to have hundreds of science papers that refute what I say about the possibility space of universe construction, and Lowder thinks this is devastating, but Barnes does not cite a single paper that answers my point.

My comment was in response to the claim that the statement “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range” has been “refuted by scientists”, not about what Carrier has to say about “universe construction”. The references are in my review paper.


Because we don’t know how many variables there are.

Carrier doesn’t – he still thinks that there are 6 fundamental constants of nature, but can’t say what they are. Actual physicists have no problem counting the free parameters of fundamental physics as we know it, which is what fine-tuning is all about.


We don’t know all the outcomes of varying them against each other.

We know enough, thanks to a few decades of scientific research. It is not an argument from ignorance – extensive calculations have been performed, which overwhelmingly support fine-tuning.


And, ironically for Barnes, we don’t have the transfinite mathematics to solve the problem.

This is probably a reference to “transfinite frequentism”, a term that, as we saw last time, Carrier invented.

In any case, we don’t need transfinite arithmetic here. Bayesian probability deals with free parameters with infinite ranges in physics all the time; fine-tuning is not a unique case. Many of the technical probability objections aimed at fine-tuning, such as those of the McGrews, would preclude a very wide range of applications of probability in physics.
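As a toy sketch of the point (my example, with invented numbers, not anyone's actual fine-tuning calculation): a free parameter with the infinite range [0, ∞) is handled by an ordinary normalised prior, and the probability of landing in a narrow "life-permitting" window is then perfectly well-defined, with no transfinite machinery in sight.

```python
# Toy sketch: a parameter ranging over [0, inf) under a normalised exponential
# prior. The window below is hypothetical, chosen only for illustration.
import math

def prior_cdf(x, scale=1.0):
    """CDF of an exponential prior over [0, inf); the density integrates to 1."""
    return 1.0 - math.exp(-x / scale)

# Probability the parameter falls in a narrow "life-permitting" window:
lo, hi = 0.10, 0.11
p_window = prior_cdf(hi) - prior_cdf(lo)
print(f"p(parameter in window) = {p_window:.5f}")  # small, but well-defined
```

The interesting (and genuinely hard) questions are about which normalised measure to use, which is precisely what the measure-problem literature below addresses.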


I am not aware of any paper in cosmology that addresses these issues.

It’s called the “measure problem”. There are literally hundreds of papers on it, too. For example, here’s a relevant paper with over 100 citations: “Measure problem in cosmology”. Aguirre (2005), Tegmark (2005), Vilenkin (2006) and Olum (2012) are good places to start. The problem of infinities in cosmology (including in fine-tuning and the multiverse) is tricky, but few cosmologists believe that it is unsolvable.


Read Full Post »

In January 2014, I finished a series of four posts (one, two, three, four) critiquing some articles on fine-tuning by Richard Carrier, including one titled “Neither Life nor the Universe Appear Intelligently Designed” in The End of Christianity (following Carrier, I’ll refer to it as TEC). In May 2014, Jeffery Jay Lowder of The Secular Outpost reviewed these posts and Carrier’s responses, concluding that my posts were “a prima facie devastating critique”. Carrier recently responded to my posts on his blog (“On the Bayesian Reversal …”, hereafter OBR).

(I don’t mind the delay. We’re all busy. I’ve still got posts I began in 2014 that I haven’t finished.)

First, a few short replies. I’ll skim through Carrier’s comments and provide a few one(-ish)-line responses. I’m assuming you’ve read Carrier’s post, so the quotes below (from OBR unless otherwise noted) are meant to point to (rather than reproduce) the relevant section. My discussion here is incomplete; later posts will go into more detail.

Part 1

Carrier notes that his argument is a popularisation of other works, saying later that “Barnes … ignores the original papers I’m summarizing.”

I’ve responded to Ikeda and Jefferys’ article here and here. Their reasoning is valid, but is not about fine-tuning. I show how the fine-tuning argument, properly formulated, avoids their critique. My response to Sober would be similar.


Lowder agrees with Barnes on a few things, but only by trusting that Barnes actually correctly described my argument. He didn’t.

The first of umpteen “Barnes just doesn’t understand me” complaints. The reader will have to decide for themselves. Note both the numerous lengthy quotes I typed out in my posts, and my many attempts to formulate Carrier’s arguments in precise, mathematical notation.


On the general problem of deriving frequencies from reference classes, Bayesians have written extensively.

Deriving frequencies from reference classes is trivial – you just count members and divide. The problem that reference classes create for finite frequentism is their definition, not how one counts their members. So, Carrier doesn’t understand the reference class problem.
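The count-and-divide step really is trivial, as a few lines of code show; the data and attributes below are invented purely for illustration. What the sketch also shows is where the actual problem lives: the same individual belongs to many reference classes, and different class choices yield different frequencies.

```python
# Deriving a frequency from a reference class: count members and divide.
# All data here are invented for illustration. The hard finite-frequentist
# problem is which class to use, since different classes give different answers.

people = [
    {"smoker": True,  "age": 70, "heart_disease": True},
    {"smoker": True,  "age": 35, "heart_disease": False},
    {"smoker": False, "age": 72, "heart_disease": True},
    {"smoker": False, "age": 30, "heart_disease": False},
    {"smoker": True,  "age": 68, "heart_disease": True},
    {"smoker": False, "age": 40, "heart_disease": False},
]

def frequency(population, in_class):
    """Relative frequency of heart disease within the given reference class."""
    members = [p for p in population if in_class(p)]
    hits = [p for p in members if p["heart_disease"]]
    return len(hits) / len(members)

# Same 70-year-old smoker, two different reference classes, two frequencies:
print(frequency(people, lambda p: p["smoker"]))                    # 2/3
print(frequency(people, lambda p: p["smoker"] and p["age"] > 60))  # 1.0
```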


Read Full Post »

A very interesting essay from Alex Vilenkin on whether the universe has a beginning and what this implies. If you want my opinion, “nothing” does not equal “physical system with zero energy”.

Read Full Post »

I’ll be speaking at the Sutherland Astronomical Society on Thursday 5th November. The meeting is at Green Point Observatory, Oyster Bay at 7:30 pm.

Title: The Fine-Tuning of the Universe for Intelligent Life

Abstract: Let’s make the universe slightly different from the one that we are familiar with. We could change the laws of nature, just a little bit. We could change how the universe begins, or make it four-dimensional. In the last 30 years, scientists have discovered something astounding: the vast majority of these changes are disastrous. We end up with a universe containing no galaxies, no stars, no planets, no atoms, no molecules, and most importantly, no intelligent life-forms wondering what went wrong. This fact is called the fine-tuning of the universe for life. After explaining the science of what happens when you change the way our universe works, we will ask: what does all this mean?

Read Full Post »


