Continuing my response to Carrier (here’s Part 1 and Part 2).

Part Four: The Real Heart of the Matter

Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other.

Wow! Two “studies”! (In academia, we call them “papers”. Though neither was published in a peer-reviewed journal, so perhaps “articles”.) Three mathematicians! Except that Elliott Sober is a philosopher (and a fine one), not a mathematician – he has never published a paper in a mathematics journal. More grasping at straws.


Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn’t.


Those sentences fail Bayesian Probability 101. Prior probabilities are probabilities of hypotheses. Always. In every probability textbook there has ever been.[1] Probabilities of data given a hypothesis – such as the probability that this universe contains observers given naturalism – are called likelihoods. So, there is the prior probability of naturalism, and there is the likelihood of observers given naturalism, but there is no such thing as the “prior probability of observers”.

This is not a harmless slip in terminology. Carrier treats a likelihood as if it were a prior. He has confused the concepts, not just the names. Carrier states that “the only way the prior probability of observers can be low, is if the prior probability of observers is high on some alternative hypothesis.”[2] This is true of prior probabilities, but it is not true of likelihoods. Likelihoods are not normalised with respect to hypotheses; they are normalised with respect to evidence: p(e|h.b) + p(~e|h.b) = 1.
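The two normalisation rules can be checked in a few lines of Python. This is a toy sketch with made-up numbers – neither the hypothesis weights nor the likelihoods come from anyone’s actual argument – but it shows which sums are forced to equal 1 and which are not:

```python
# Toy illustration (invented numbers). Priors are probabilities of
# hypotheses and must sum to 1 over the hypotheses; likelihoods must
# sum to 1 over the evidence (e vs ~e), NOT over the hypotheses.

# Prior probabilities of two hypotheses, N and G:
prior = {"N": 0.5, "G": 0.5}
assert abs(sum(prior.values()) - 1.0) < 1e-12  # normalised over hypotheses

# Likelihoods of evidence e given each hypothesis (illustrative values):
likelihood = {"N": {"e": 0.001, "not_e": 0.999},
              "G": {"e": 0.9,   "not_e": 0.1}}
for h in likelihood:
    # normalised over evidence, not over hypotheses:
    assert abs(likelihood[h]["e"] + likelihood[h]["not_e"] - 1.0) < 1e-12

# Note: likelihood["N"]["e"] + likelihood["G"]["e"] = 0.901, not 1.
# Nothing forces a low likelihood on one hypothesis to be "made up"
# by a high likelihood on some alternative.

# Posterior via Bayes' theorem:
evidence = sum(prior[h] * likelihood[h]["e"] for h in prior)
posterior = {h: prior[h] * likelihood[h]["e"] / evidence for h in prior}
print(posterior)
```

The posteriors, like the priors, sum to 1 over the hypotheses; the likelihoods never did.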

It follows that this entire section on the “prior probability of observers” and the need to consider “some alternative hypothesis” is garbage. There is simply no argument to respond to, only a hopeless mess of Carrier’s confusions. It’s an extended discussion about prior probabilities from a guy who doesn’t know what a prior probability is. Given that he has previously confused priors and posteriors, he’s zero from three on the fundamentals of Bayes’ theorem. You cannot keep getting the basics of probability theory wrong and expect to be taken seriously.

Looking for a romantic evening on (the day after) Valentine’s day? Why not try the Macarthur Astronomy Forum!

Location: Western Sydney University, Lecture theatre, Building 30

Date: Monday 15th February, 7.30 pm

Title: There is more to the Universe than its good looks.

Abstract: The planets, stars and galaxies that fill the night sky obey elegant mathematical patterns: the laws of nature. Why does our Universe obey these particular laws? As a clue to answering this question, scientists have asked a similar question: what if the laws were slightly different? What if it had begun with more matter, had heavier particles, or space had four dimensions?

In the last 30 years, scientists have discovered something astounding: the vast majority of these changes are disastrous. We end up with a universe containing no galaxies, no stars, no planets, no atoms, no molecules, and most importantly, no intelligent life-forms wondering what went wrong. This is called the fine-tuning of the universe for life. After explaining the science of what happens when you change the way our universe works, we will ask: what does all this mean?

Continuing my response to Carrier.

Part Three

Barnes claims to have hundreds of science papers that refute what I say about the possibility space of universe construction, and Lowder thinks this is devastating, but Barnes does not cite a single paper that answers my point.

My comment was in response to the claim that the statement “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range” has been “refuted by scientists”, not about what Carrier has to say about “universe construction”. The references are in my review paper.


Because we don’t know how many variables there are.

Carrier doesn’t – he still thinks that there are 6 fundamental constants of nature, but can’t say what they are. Actual physicists have no problem counting the free parameters of fundamental physics as we know it, which is what fine-tuning is all about.


We don’t know all the outcomes of varying them against each other.

We know enough, thanks to a few decades of scientific research. It is not an argument from ignorance – extensive calculations have been performed, which overwhelmingly support fine-tuning.


And, ironically for Barnes, we don’t have the transfinite mathematics to solve the problem.

This is probably a reference to “transfinite frequentism”, a term that, as we saw last time, Carrier invented.

In any case, we don’t need transfinite arithmetic here. Bayesian probability deals with free parameters with infinite ranges in physics all the time; fine-tuning is not a unique case. Many of the technical probability objections aimed at fine-tuning, such as those of the McGrews, would preclude a very wide range of applications of probability in physics.
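As a minimal sketch of the point (my own example, standard-library Python): a proper prior can be placed on a parameter with an infinite range, because a density like the Cauchy distribution integrates to 1 over the whole real line – no transfinite arithmetic required.

```python
import math

# A proper (normalisable) prior on a parameter with an infinite range:
# the Cauchy density 1/(pi*(1+x^2)) integrates to 1 over (-inf, inf).
def cauchy(x):
    return 1.0 / (math.pi * (1.0 + x * x))

# Crude composite trapezoidal integration over a wide symmetric range:
def integrate(f, a, b, n=200000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

mass = integrate(cauchy, -1e4, 1e4)
print(mass)  # close to 1; the tails beyond +/-1e4 carry only ~6e-5 of the mass
```

The infinite range is handled by letting the density fall off fast enough, which is exactly how Bayesian analyses in physics routinely treat unbounded parameters.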


I am not aware of any paper in cosmology that addresses these issues.

It’s called the “measure problem”. There are literally hundreds of papers on it, too. For example, here’s a relevant paper with over 100 citations: “Measure problem in cosmology”. Aguirre (2005), Tegmark (2005), Vilenkin (2006) and Olum (2012) are good places to start. The problem of infinities in cosmology (including in fine-tuning and the multiverse) is tricky, but few cosmologists believe that it is unsolvable.


In January 2014, I finished a series of four posts (one, two, three, four) critiquing some articles on fine-tuning by Richard Carrier, including one titled “Neither Life nor the Universe Appear Intelligently Designed” in The End of Christianity (following Carrier, I’ll refer to it as TEC). In May 2014, Jeffery Jay Lowder of The Secular Outpost reviewed these posts and Carrier’s responses, concluding that my posts were “a prima facie devastating critique”. Carrier recently responded to my posts on his blog (“On the Bayesian Reversal …”, hereafter OBR).

(I don’t mind the delay. We’re all busy. I’ve still got posts I began in 2014 that I haven’t finished.)

First, a few short replies. I’ll skim through Carrier’s comments and provide a few one(-ish)-line responses. I’m assuming you’ve read Carrier’s post, so the quotes below (from OBR unless otherwise noted) are meant to point to (rather than reproduce) the relevant section. My discussion here is incomplete; later posts will go into more detail.

Part 1

Carrier notes that his argument is a popularisation of other works, saying later that “Barnes … ignores the original papers I’m summarizing.”

I’ve responded to Ikeda and Jefferys’ article here and here. Their reasoning is valid, but is not about fine-tuning. I show how the fine-tuning argument, properly formulated, avoids their critique. My response to Sober would be similar.


Lowder agrees with Barnes on a few things, but only by trusting that Barnes actually correctly described my argument. He didn’t.

The first of umpteen “Barnes just doesn’t understand me” complaints. The reader will have to decide for themselves. Note both the numerous lengthy quotes I typed out in my posts, and my many attempts to formulate Carrier’s arguments in precise, mathematical notation.


On the general problem of deriving frequencies from reference classes, Bayesians have written extensively.

Deriving frequencies from reference classes is trivial – you just count members and divide. The problem that reference classes create for finite frequentism is their definition, not how one counts their members. So, Carrier doesn’t understand the reference class problem.
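A toy example (entirely my own, with made-up data) makes the distinction concrete: computing a frequency from a given class is one line of counting, but the same case can fall into different reference classes that yield different frequencies – and frequentism offers no rule for choosing between them.

```python
# Invented data: which reference class should a smoker who runs use to
# assess their risk? Counting is trivial; choosing the class is not.
people = [
    {"smoker": True,  "runner": True,  "heart_disease": False},
    {"smoker": True,  "runner": False, "heart_disease": True},
    {"smoker": True,  "runner": False, "heart_disease": True},
    {"smoker": False, "runner": True,  "heart_disease": False},
    {"smoker": False, "runner": False, "heart_disease": False},
]

def frequency(population, in_class, outcome):
    """Count members of the class and divide -- the 'trivial' step."""
    members = [p for p in population if in_class(p)]
    return sum(outcome(p) for p in members) / len(members)

# Same outcome, two different reference classes for the same individual:
f_smokers = frequency(people, lambda p: p["smoker"], lambda p: p["heart_disease"])
f_runners = frequency(people, lambda p: p["runner"], lambda p: p["heart_disease"])
print(f_smokers, f_runners)  # 2/3 among smokers, 0 among runners
```

The division is the easy part; the ambiguity over which class to use is the actual problem.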


Just in time for Christmas, I’ve had a paper accepted by the Journal of Cosmology and Astroparticle Physics. It’s called “Binding the Diproton in Stars: Anthropic Limits on the Strength of Gravity”. Here’s the short version.

Diproton Disaster?

In 1971, Freeman Dyson discussed a seemingly fortunate fact about nuclear physics in our universe. Because two protons won’t stick to each other, when they collide inside stars, nothing much happens. Very rarely, however, in the course of the collision the weak nuclear force will turn a proton into a neutron, and the resulting deuterium nucleus (proton + neutron) is stable. The star can then combine these into helium, releasing energy.

If a super-villain boasted of a device that could bind the diproton (proton + proton) in the Sun, then we’d better listen. The Sun, subject to such a change in nuclear physics, would burn through the entirety of its fuel in about a second. Ouch.

A very small change in the strength of the strong force or the masses of the fundamental particles would bind the diproton. This looks like an outstanding case of fine-tuning for life: a very small change in the fundamental constants of nature would produce a decidedly life-destroying outcome.

Asking the Right Question

However, this is not the right conclusion. The question of fine-tuning is this: how would the universe have been different if the constants of nature had different values? In the example above, we took our universe and abruptly changed the constants half-way through its life. The Sun would explode, but would a bound-diproton universe create stars that explode?

A very interesting essay from Alex Vilenkin on whether the universe has a beginning and what this implies. If you want my opinion, “nothing” does not equal “physical system with zero energy”.

I recently commented on Neil deGrasse Tyson’s chiding of Isaac Newton for failing to anticipate Laplace’s discovery of the stability of the Solar System. He has commented further on this episode and others in this article for Natural History Magazine.

Tyson’s thesis is as follows:

… a careful reading of older texts, particularly those concerned with the universe itself, shows that the authors invoke divinity only when they reach the boundaries of their understanding.

To support this hypothesis, Tyson quotes Newton, 2nd century Alexandrian astronomer Ptolemy and 17th century Dutch astronomer Christiaan Huygens. The remarkable thing about Tyson’s article is that none of the quotes come close to proving his thesis; in fact, they prove the opposite.

Newton and God

Tyson quotes from Newton’s General Scholium, an essay appended to the end of the second and third editions of the Principia.

But in the absence of data, at the border between what he could explain and what he could only honor—the causes he could identify and those he could not—Newton rapturously invokes God:

“Eternal and Infinite, Omnipotent and Omniscient; … he governs all things, and knows all things that are or can be done. … We know him only by his most wise and excellent contrivances of things, and final causes; we admire him for his perfections; but we reverence and adore him on account of his dominion.”

To be blunt, what part of “he governs all things” doesn’t Tyson understand? God’s “dominion” – the extent of his rule – is “always and everywhere”. Clearly, Newton is not invoking God only at the edge of scientific knowledge, but everywhere and in everything. The Scholium is not long, so I invite you to read it; you will nowhere find Newton saying that God is only found where science has run out of answers. You will find him saying (echoing Paul) that “In him are all things contained and moved.”

