While Craig has done his homework on fine-tuning, the video has problems. I’ll be commenting here on the physics of fine-tuning, not the fine-tuning *argument for God*. I’ll leave the metaphysics to the philosophers, for now. (The previous two sentences will be copied and pasted into the comments section as many times as necessary.)

Before addressing the video, I’ve heard Craig say a few times that “there are about **fifty** constants and physical quantities simply given in the Big Bang themselves that if they were altered even to one part in a hundred million million million the universe would not have permitted the existence of life.” There can’t be 50 fine-tuned constants. There aren’t even 50 fundamental constants of nature, including cosmic initial conditions. There are, in the usual count, 31. (I have a sneaking suspicion that Craig is thinking of the large numbers of fine-tuning criteria compiled by Hugh Ross, which are of varying quality.)

Let’s look at the video; all quotes are from the transcript.

From galaxies and stars, down to atoms and subatomic particles, the very structure of our universe is determined by these numbers.

So far, so good.

Speed of Light:

Gravitational Constant:

Planck’s Constant:

The final value is actually the reduced Planck constant ℏ = h/2π, and the units are wrong; it should be m^2 kg / s (i.e. J s). But there’s a bigger problem here.

[Edit (14/4/2016). I don’t know why I didn’t realise this sooner, but the list of constants and their values is from my review paper. Including the mistake in the units of the reduced Planck constant. So the culprit, it turns out, is me! Oops.]

There is no fact about what number we should attach to a certain length, mass or interval of time. There is only a fact about the *ratio* of two such quantities. When I report that I am 1.78 m tall, what I am really reporting is the ratio of my height to a standard *unit of measurement.* I can use any units I like, and so the number 1.78 is not a property of me alone. I am also 5 feet 10 inches, and 0.97 fathoms. The number only has meaning in a certain system of units, and the choice of units is arbitrary. It is for convenience only. (I thoroughly recommend the metric system.)
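To make the point concrete, here is a small sketch (the second height, 1.60 m, is an illustrative number of my own): the number attached to a height depends entirely on the unit, while the ratio of two heights does not.

```python
# A sketch: the number reported for a height depends entirely on the unit,
# but the ratio of two like quantities is unit-independent.
height_m = 1.78                     # height in metres

METRES_PER_FOOT = 0.3048            # exact, by definition
METRES_PER_FATHOM = 1.8288          # 6 feet, exact

height_ft = height_m / METRES_PER_FOOT       # ~5.84 ft (5 ft 10 in)
height_fa = height_m / METRES_PER_FATHOM     # ~0.97 fathoms

# Same person, three different numbers:
print(height_m, round(height_ft, 2), round(height_fa, 2))

# But a ratio of two heights is the same in any system of units:
other_m = 1.60                      # a second, hypothetical height
ratio_metric = height_m / other_m
ratio_imperial = height_ft / (other_m / METRES_PER_FOOT)
assert abs(ratio_metric - ratio_imperial) < 1e-12
```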

Thus, we cannot simply talk about the fine-tuning of any constant with units. Until we have specified the system of units, changing the number is meaningless. In fact, in the metric system, c = 299,792,458 m/s is true *by definition*. Given the definition of the second, this equation doesn’t really state the speed of light; it defines the metre.

In fine-tuning, we want to explore different universes, that is, other ways that the universe could have been. Asking ‘what if c had a different numerical value?’ makes no sense – it would merely redefine our unit of length. More generally, specifying the numerical values of G, c or h doesn’t fully specify the physics of the universe in question. We might merely have the same universe as ours, but described in different units.

To avoid these problems, we should first fix a system of units in a given possible universe. The best way is to choose three constants whose dimensions involve some combination of length, mass and time, and set their values. I think that the most useful system is Planck units, which set the three constants above (c, G, h) to one. These, then, *cannot be changed*, but any change to the other constants of nature really does change the physics of the universe.

Planck Mass-Energy: 1.22 × 10^22 MeV

The Planck mass is m_Pl = √(ħc/G). Thus, it is not an independent constant; it depends only on the arbitrary constants (c, G, h). Once we have specified that we are using Planck units, m_Pl = 1. It is doubly not fine-tuned: it is not fundamental, and it is not a free parameter.
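A quick sketch of this dependence, assuming the standard definition m_Pl = √(ħc/G) and approximate CODATA values in SI units:

```python
import math

# Sketch: the Planck mass is fixed entirely by (c, G, hbar) -- it is not an
# independent constant. Approximate CODATA values, SI units:
hbar = 1.0546e-34    # reduced Planck constant, J s
c    = 2.9979e8      # speed of light, m/s
G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)            # ~2.18e-8 kg
E_planck_MeV = m_planck * c**2 / 1.602e-13    # J -> MeV (1 MeV = 1.602e-13 J)

print(f"{m_planck:.3e} kg, {E_planck_MeV:.3e} MeV")
```

Changing any of (c, G, ħ) changes m_Pl, which is the sense in which it is derived rather than free.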

Mass of Electron, Proton, Neutron: 0.511; 938.3; 939.6 MeV

The mass of the electron is a fundamental constant, and can be expressed in Planck units as m_e ≈ 4.2 × 10^-23. However, the masses of the proton and neutron are not fundamental, but derived. They have contributions from the strong force binding energy (about 780 MeV), the masses of the constituent (~10 MeV) and virtual (~150 MeV) quarks, and, for the proton, electromagnetism (~1 MeV).
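As a rough sanity check, the approximate contributions quoted above do add up to roughly the proton mass (this is bookkeeping, not a QCD calculation):

```python
# Rough bookkeeping of the proton mass contributions quoted above, in MeV:
strong_binding = 780    # strong force binding/field energy
valence_quarks = 10     # masses of the constituent quarks
virtual_quarks = 150    # virtual (sea) quark contribution

total = strong_binding + valence_quarks + virtual_quarks
print(total)    # 940, close to the measured 938.3 MeV
```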

Some very important fine-tuning cases consider the masses of the proton and the neutron, and particularly their mass difference. Most recently, see Hall, Pinner, and Ruderman. But these should be translated into limits on the relevant fundamental constants: up, down and strange quark mass, Higgs vev, QCD scale and strength of electromagnetism.

Mass of Up, Down, Strange Quark: 2.4; 4.8; 104 MeV (Approx.)

Now we’re talking. See, for example, Figure 2 of my review paper.

Ratio of Electron to Proton Mass: ~1/1836

Clearly not a fundamental parameter. Again, we can consider the effects of changing this number, but we should translate our findings to be in terms of the fundamental constants. Otherwise, we will overstate the degree of fine-tuning by artificially inflating the number of required tunings.

Gravitational Coupling Constant: ~5.9 × 10^-39

This number, known as α_G, is in fact the ratio of the proton mass to the Planck mass (squared). So, having decided to work in Planck units, it simply is the proton mass squared, and is a derived rather than fundamental parameter.
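A sketch of that ratio, again with approximate SI values (the proton mass here is an input, not something derived):

```python
import math

# Sketch: the "gravitational coupling constant" is (m_proton / m_Planck)^2,
# so in Planck units it is just the proton mass squared.
hbar = 1.0546e-34    # J s
c    = 2.9979e8      # m/s
G    = 6.674e-11     # m^3 kg^-1 s^-2
m_p  = 1.6726e-27    # proton mass, kg

m_planck = math.sqrt(hbar * c / G)
alpha_G = (m_p / m_planck) ** 2
print(f"{alpha_G:.2e}")    # ~5.9e-39
```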

Cosmological Constant:

Good. A fundamental constant and perhaps the best example of a fine-tuned parameter.

Hubble Constant: 71 km/s/Mpc (today)

Nope. Given a certain set of cosmological parameters, we can compute the entire expansion history of the universe. That is, we can calculate its relative size at all times. But there is one parameter that remains: when is now? At what point in cosmic history are we observing?

This is the age of the universe, which can equivalently be specified by the Hubble constant, since (roughly) the age is 1/H₀. So this is not a fundamental parameter. To consider a universe with its age increased by 86,400 seconds is simply to consider this universe tomorrow.
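A back-of-the-envelope sketch: the quoted H₀ = 71 km/s/Mpc corresponds to a timescale 1/H₀ of order the age of the universe.

```python
# Back-of-the-envelope: 1/H0 is a timescale of order the age of the universe.
H0 = 71.0                    # km/s/Mpc, as quoted
km_per_Mpc = 3.0857e19       # kilometres in a megaparsec
sec_per_yr = 3.156e7

H0_per_sec = H0 / km_per_Mpc               # H0 in 1/s
hubble_time_Gyr = 1.0 / H0_per_sec / sec_per_yr / 1e9
print(f"{hubble_time_Gyr:.1f} Gyr")        # ~13.8 Gyr
```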

I think that the video means to refer to the fine-tuning of the universe’s expansion rate. But this fine-tuning case is about the expansion rate at very early times, not today. (Victor Stenger made exactly this mistake in his book on fine-tuning. See Appendix A.1 of my review paper.)

Higgs Vacuum Expectation Value: 246.2 GeV

An excellent example of a fine-tuned constant; see, for example, Agrawal et al. 1998.

Scientists have come to the shocking realization that each of these numbers have been carefully dialed to an astonishingly precise value – a value that falls within an exceedingly narrow, life-permitting range. If any one of these numbers were altered by even a hair’s breadth, no physical, interactive life of any kind could exist anywhere.

Actually, some of the fundamental constants don’t seem to have a significant effect on life – e.g. the various matrix angles, the QCD vacuum phase.

*Don’t get me wrong:* fine-tuning, as a phenomenon in physics, is real. But not every fundamental constant is fine-tuned.

Consider gravity, for example. The force of gravity is determined by the gravitational constant. If this constant varied by just one part in 10^60, none of us would exist. … the universe would either have expanded and thinned out so rapidly that no stars could form and life couldn’t exist, or it would have collapsed back on itself with the same result: no stars, no planets, no life.

This is referring to the flatness problem, though expressing it in terms of the gravitational constant is a little unusual. The full case is as follows: for the universe to live long enough and create structure, we require that at early times,

| 8πGρ / 3H^2 − 1 | ≲ ε,

where H is the expansion rate, ρ is the mass-energy density, and ε is a small number. If we take the “early time” to be one second (as BBN starts), then ε ≲ 10^-16; if the Planck time, then ε ≲ 10^-60. So, if (not taking my advice above) we do not fix G with our choice of units, and we consider the density and expansion rate to be fixed at a certain time, then we do indeed have a tight constraint on the gravitational constant. However, this is an unusual way of presenting the case, and in particular, it obscures the dependence on the initial conditions of the universe.
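The flatness constraint tightens enormously at earlier times. Assuming the rough radiation-era scaling |Ω − 1| ∝ t (a sketch, not a full cosmological calculation), a one-part-in-10^16 constraint at t = 1 s becomes roughly one part in 10^60 at the Planck time:

```python
# Sketch, assuming |Omega - 1| grows roughly proportionally to t in the
# radiation era. Push a 1e-16 flatness constraint at t = 1 s back to the
# Planck time:
t_planck = 5.4e-44      # seconds
eps_at_1s = 1e-16       # flatness constraint at t = 1 s (around BBN)

eps_at_planck = eps_at_1s * (t_planck / 1.0)
print(f"{eps_at_planck:.0e}")    # ~5e-60, i.e. of order 10^-60
```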

However, this case of fine-tuning also implies that our cosmological model only explains our observations of the universe (not just that it is life-permitting) for a seemingly narrow range of its parameters. It has thus motivated the development of inflationary theory, according to which the universe undergoes a rapid burst of accelerating expansion in its earliest stages. *If* inflation happened (and lasted long enough), then the condition above is not particularly surprising. Also, there are further theoretical reasons to view the flatness problem with suspicion. This is not a typical fine-tuning case. A priori arguments and dynamical mechanisms *might* explain it, without simply shifting the problem elsewhere. Note, however, that many inflationary models do exactly this, exchanging one fine-tuned parameter for another.

Or consider the expansion rate of the universe. This is driven by the cosmological constant. A change in its value by a mere 1 part in 10^120 would cause the universe to expand too rapidly or too slowly. In either case, the universe would, again, be life-prohibiting.

Two minor quibbles. Firstly, the cosmological constant does not necessarily drive the expansion of the universe, so this case should not be referred to as fine-tuning “the expansion rate of the universe”. It is the fine-tuning of the cosmological constant. Secondly, the problem is not that the universe might expand too slowly. The problem is that a negative value for the cosmological constant will, regardless of the expansion rate at any particular time, eventually cause the universe to transition from expansion to contraction and recollapse. If the cosmological constant is too large and negative, the universe will not live long enough to create life (E.g. Peacock).

Or, another example of fine-tuning: If the mass and energy of the early universe were not evenly distributed to an incomprehensible precision of 1 part in 10^10^123, the universe would be hostile to life of any kind.

Again, a choice has been made here between technical precision and explaining at a popular level. This case is really about the initial entropy of the universe, which is known to be very low because the early universe is very nearly perfectly homogeneous. I think I can forgive this one.

For future reference, the fundamental parameters upon which interesting fine-tuning constraints can be placed are as follows. From particle physics: the Higgs vev, the masses (or, equivalently, Yukawa parameters) of the electron, the up, down and strange quarks, and the neutrinos, the strong force coupling constant, and the fine-structure constant. From cosmology: the cosmological constant, the scalar fluctuation amplitude (“lumpiness”, Q), the number of spacetime dimensions, the baryonic and dark matter mass-to-photon ratios, and the initial entropy of the universe.

After a few quotes, the rest of the video is philosophy. While the video needs a minor overhaul, the science upon which Craig wants to make his case is sound, in my opinion. And the opinion of many other scientists, *believing and non-believing alike*. The pressing question is: what, if anything, should we conclude from fine-tuning?

Filed under: Uncategorized

We have asked three scientists to discuss some of the latest research and scholarship regarding the place of life, including human life, in the universe. Sara Seager describes the search for Earth-like planets orbiting distant stars and explains what led her to join the hunt. Marcelo Gleiser shows why the findings of physics should help ease our sense of cosmic angst. And Luke A. Barnes (below) explains what it means to say that the universe appears “fine-tuned” for life.

Enjoy!

Filed under: Uncategorized

In an age when a number of prominent scientists have said profoundly idiotic things about philosophy, Bill Nye “the Science Guy” has produced the Gettysburg address of philosophical ignorance. It would be hard to write a parody that compressed more stupidity and shallowness into 4 minutes.

I’m no philosopher, but even I can see that almost every sentence is a complete misrepresentation of what philosophy is and what philosophers do. As a scientist, I find Nye’s comments – and those of some of his idols – deeply embarrassing. If you are a philosopher, please don’t judge all scientists by these philistines. (Nye, if it helps, is an engineer by training).

Let’s watch the trainwreck; all quotes are from Nye.

I’m not sure that Neil deGrasse Tyson and Richard Dawkins [actually, the questioner asked about Stephen Hawking], two guys I’m very well acquainted with, have declared philosophy to be irrelevant and are ‘blowing it off’.

Tyson said that philosophy can “mess you up” and thinks that “there is no such thing as consciousness” is a live option for explaining the nature of consciousness. (His history isn’t much better). Dawkins, who had no problem critiquing Aquinas for a few fact-free pages in TGD (no, Aquinas did not assume that “There must have been a time when no physical things existed”), admitted 4 years later that he didn’t know what the word “epistemic” means. Stephen Hawking announced that “philosophy is dead” at the beginning of a book, before spending a few tens of pages doing some philosophy himself. Lawrence Krauss complained about “moronic philosophers” who criticised his book, before exhibiting a wide range of elementary fallacies in a debate with a philosopher.

Not all scientists are antagonistic to philosophy. George Ellis has written intelligently on the philosophy of cosmology and on philosophy more broadly, and I’m expecting good things from Sean Carroll‘s forthcoming book. There seems to be a very strong correlation among scientists between knowledge of philosophy and respect for philosophy.

I think that they’re just concerned that it doesn’t always give an answer that’s surprising. It doesn’t always lead you someplace that is inconsistent with common sense.

This is a common and worrying trope among popularisers – you’re not really doing science unless you’re contradicting what people naturally or normally believe. Rubbish. This idea has no place whatsoever in the actual practice of science. Imagine one astrophysicist criticising another’s model of the Sun on the grounds that it predicts that the sun is very bright and “that’s consistent with common sense”. This only feeds into the stereotype that science is incomprehensible, wacky, contradictory, likely to change and – obviously – opposed to common sense. (See Ben Goldacre on this point.) Yes, sometimes science is surprising. But sometimes it isn’t. And sometimes complete nonsense is surprising, too.

What is the nature of consciousness? Can we know that we know? Are we aware that we are aware? Are we not aware that we are aware?

The first question is actually meaningful. But Nye immediately demonstrates that he hasn’t the slightest clue what it means. “Can we know that we know?”, if it means anything, is about epistemology (theory of knowledge), not consciousness. “Are we aware that we are aware?” Yes. Yes we are. Because I am aware of my thoughts immediately, that is, without any intermediary. I can’t be mistaken about being aware of my thoughts, because the very act of *being mistaken* would involve a thought of which I am aware. This is Philosophy 101 – consciousness is data, not theory.

Is reality real or is reality not real and we are all living on a ping pong ball as part of a giant interplanetary ping pong game and we cannot sense it?

I suspect that part of the reason that philosophy can seem pointless is that some non-philosophers don’t understand the idea of a thought experiment. No, philosophers don’t sit around wondering whether we are really brains in vats or what the sound of one hand clapping is or imagining other fanciful scenarios to waste their time. (No one in the middle ages debated how many angels can dance on the head of a pin. That’s a joke from the 17th century.) Similarly, physicists aren’t obsessed with trains struck by lightning and unusual ways of killing cats. The point of the brains in vats thought experiment is to explore the relationship between our sensory experiences and extra-mental reality. If you want to explore this idea further, read Daniel Dennett’s marvellous essay “Where am I?“.

But the idea that reality is not real or what you sense and feel is not authentic is something I’m very skeptical of. I mean I think that your senses, the reality that you interact with with light, heat, sense of touch, taste, smell, hearing, absolutely hearing. These are real things.

In the development of a philosopher, learning that “assertion is not an argument” is a bit like learning to crawl. Nye isn’t quite there yet, and so can only … ahem … appeal to common sense. Nye’s position is known as “Philosophical realism”, and Nye would probably also affirm “Scientific realism”, which states that “we ought to believe in the unobservable entities posited by our most successful scientific theories.” The interested reader should start with Putnam’s “no miracles” argument, and work towards finding Nye on this diagram.

And to make a philosophical argument that they may not be real because you can’t prove – like for example you can’t prove that the sun will come up tomorrow. Not really, right. You can’t prove it until it happens. But I’m pretty confident it will happen. That’s part of my reality. The sun will come up tomorrow.

Nye confuses the problem of skepticism (are the objects of our sensory experience real?) with the problem of induction, which goes as follows. Will the Sun rise tomorrow? Probably yes, because it always has in our past experience. But why think the future will be like the past? Because in the past, the future has been like the past.

It shouldn’t take too much effort to convince yourself that this argument is circular: it should only convince us of the conclusion if we already know that the conclusion is true. This does not mean that philosophers sit around wondering whether the Sun will rise tomorrow, as if they’re just waiting for an engineer to burst in and announce “I’m pretty confident it will.” The point is that scientific induction cannot be justified **in this way**. If it can be justified, it must be on other grounds.

… you start arguing in a circle where I think therefore I am. What if you don’t think about it? Do you not exist anymore? You probably still exist even if you’re not thinking about existence.

If you write that in a Philosophy 101 essay, you will get zero marks. Your tutor will show your paper to as many colleagues as she can find, and they will all have a good laugh. (Yes, we do this in academia.)

Here is Descartes’ point. Can we build our knowledge on sure, certain foundations? Is there anything we know with certainty? The “brain in the vat” thought experiment shows that I could be mistaken about the existence of the objects that I perceive. But, says Descartes, I cannot be mistaken about the fact that I exist. If I doubt that I exist, then I must exist, because otherwise there wouldn’t be anyone to do the doubting. I think, therefore I am. Moreover, whatever else I am, I am a thinking kind of thing. On this foundation, Descartes attempts to build his world view.

It does not follow, and Descartes does not go on to suggest, that anything that is not thinking does not exist. This is a textbook logical fallacy called denying the antecedent. The following is *not* a valid argument:

If A then B

Not A

Therefore, not B.
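The form is small enough to check by brute force; a quick sketch that enumerates the truth table and exhibits the counterexample (A false, B true):

```python
from itertools import product

# Brute-force the truth table of "If A then B; not A; therefore not B"
# (denying the antecedent). A valid form has no row where all premises are
# true and the conclusion false; this one has exactly such a row.
counterexamples = [
    (A, B)
    for A, B in product([True, False], repeat=2)
    if ((not A) or B)      # premise 1: if A then B (material conditional)
    and (not A)            # premise 2: not A
    and B                  # conclusion "not B" is false here
]
print(counterexamples)     # [(False, True)]
```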

Also, none of this has anything to do with “arguing in a circle”. Nye has completely missed Descartes’ point.

And so, you know, this gets into the old thing if you drop a hammer on your foot is it real or is it just your imagination? You can run that test, you know, a couple of times and I hope you come to agree that it’s probably real.

(There’s that common sense, again.) The whole point of the brain in a vat thought experiment is that it could *feel* real every time and yet not be real. You could be Neo in the Matrix. We cannot justify realism by continually dropping hammers on our feet and asserting realism again. *So what now?* Should we accept some beliefs as properly basic? Should we be idealists (in the philosophical sense) and believe that reality is fundamentally mental?

A philosophy degree may not lead you on a career path.

An astronomy degree might not either – I’m still finding that out. Neither might an interest in poetry, history, music, literature, archaeology, or a thousand other good things.

*The unexamined life is not worth living* – Socrates.

Humans discovered or invented the process of science.

*Which is it!?* We discovered Pluto and we invented cricket. Is science a process that actually gives us knowledge of the external world, or are we playing a game of our own invention?

To defend science, you have to think about science. And thinking about science is not doing science. It is doing philosophy. Nye says “it’s important I think for a lot of people to be aware of philosophy”. A great idea; he should try it sometime.

Filed under: Uncategorized

When I was a postdoc at ETH Zurich in 2011, Kip Thorne gave a wonderful set of lectures to scientists and laypersons on gravitational-wave astronomy. He was good enough to have lunch with the students and postdocs as well, where he regaled us with stories of working with the Russians in the 1970s and a movie he was working on with Steven Spielberg. Given his decades of remarkable work in the field, I remember thinking “I really hope that he sees gravitational waves observed in his lifetime”. So it was great to see him sharing the stage at the LIGO press conference.

It’s also been a big 24 hours for me turning up in unusual places. The New York Times reported the trend, kicked off by Katie Mack, of anticipating the announcement by mimicking the LIGO chirp. I was at Monash University for the 10th conference-workshop of the Australian National Institute for Theoretical Astrophysics (ANITA), and joined an enthusiastic bunch of students and staff (including Katie) in staying up until 2:30 am to hear the announcement. We made our own chirping video, complete with background noise. And so, somehow, I ended up on the New York Times website.

Also today, my post about the effect of altitude on cricket ball trajectories was linked by ESPN’s cricinfo.com, previewing a game at the Wanderers Stadium in Johannesburg:

At 1633m above sea level, the Wanderers Stadium is at an unusually high altitude. Scientific models have worked out that a shot that would just reach the boundary at the Wanderers (approx. 65m) would fall some four metres short at lower-altitude venues.

Beer bottle performance art and sports science aren’t really my research focus at the moment, but I’m happy to branch out.

Filed under: Uncategorized

In what follows, I’ll consider Carrier’s claims about the mathematical foundations of probability theory. What Carrier says about probability is at odds with every probability textbook (or lecture notes) I can find. He rejects the foundations of probability laid by frequentists (e.g. Kolmogorov’s axioms) and Bayesians (e.g. Cox’s theorem). He is neither, because we’re all wrong – only Carrier knows how to do probability correctly. That’s why he has consistently refused my repeated requests to provide scholarly references – they do not exist. As such, Carrier cannot borrow the results and standing of modern probability theory. Until he has completed his revolution and published a rigorous mathematical account of *Carrierian probability theory*, all of his claims about probability are meaningless.

I intend to *demonstrate* these claims, so we’ll start by quoting Carrier at length. I won’t be relying on previous posts. In TEC, Carrier says:

Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.

In OBR, he says:

[E]ver since the Principia Mathematica it has been an established fact that nearly all mathematics reduces to formal logic … The relevant probability theory can be deduced from Willard Arithmetic … anyone familiar with both Bayes’ Theorem (hereafter BT) and conditional logic (i.e. syllogisms constructed of if/then propositions) can see from what I show there [in Proving History] that BT indeed is reducible to a syllogism in conditional logic, where the statements of each probability-variable within the formula is a premise in formal logic, and the conclusion of the equation becomes the conclusion of the syllogism. In the simplest terms, “if P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y, then P(h|e.b) is z,” which is a logically necessary truth, becomes the concluding major premise, and “P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y” are the minor premises. And one can prove the major premise true by building syllogisms all the way down to the formal proof of BT, again by symbolic logic (which one can again replace with old-fashioned propositional logic if one were so inclined).

More specifically it is a form of argument, that is, a logical formula that describes a particular kind of argument. The form of this argument is logically valid. That is, its conclusion is necessarily true when its premises are true. Which means, if the three variables in BT are true (each representing a proposition about a probability, hence a premise in an argument), the epistemic probability that results is then a logically necessary truth. So, yes, Bayes’ Theorem is an argument.

He links to, and later shows, the following “Proof of Bayes Theorem … by symbolic logic”, saying that “the derivation of the theorem is *this*.”

For future reference, we’ll call this **“The Proof”**. Of his mathematical notation, Carrier says:

P(h|b) is symbolic notation for the proposition “the probability that a designated hypothesis is true given all available background knowledge but not the evidence to be examined is x,” where x is an assigned probability in the argument.

I have 14 probability textbooks/lecture notes open in front of me: Bain and Engelhardt; Jaynes (PDF); Wall and Jenkins; MacKay (PDF); Grinstead and Snell; Ash; Bertsekas and Tsitsiklis; Rosenthal; Bayer; Dembo; Sokol and Rønn-Nielsen; Venkatesh; Durrett; Tao. I recently stopped by Sydney University’s Library to pick up a book on nuclear reactions, and took the time to open another 15 textbooks. I’ve even checked some of the philosophy of probability literature, such as Antony Eagle’s collection of readings (highly recommended), Arnborg and Sjodin, Caticha, Colyvan, Hajek (who has a number of great papers on probability), and Maudlin.

When presenting the foundations of probability theory, these textbooks and articles roughly divide along Bayesian vs frequentist lines. The purely mathematical approach, typical of frequentist textbooks, begins by thinking about relative frequencies before introducing measure theory, explaining Kolmogorov’s axioms, motivating the definition of conditional probability, and then – in one line of algebra – giving “The Proof” of Bayes theorem. Says Mosteller, Rourke and Thomas: “At the mathematical level, there is hardly any disagreement about the foundations of probability … The foundation in set theory was laid in 1933 by the great Russian probabilitist, A. Kolmogorov.” With this mathematical apparatus in hand, we use it to analyse relative frequencies of data.

Bayesians take a different approach (e.g. Probability Theory by Ed Jaynes). We start by thinking about modelling degrees of plausibility. The frequentist, quite rightly, asks what the foundations of this approach are. In particular, why think that degrees of plausibility should be modelled by probabilities? Why think that “plausibilities” can be mathematised at all, and why use Kolmogorov’s particular mathematical apparatus? Bayesians respond by motivating certain “desiderata of rationality”, and use these to prove via Cox’s theorem (or perhaps via de Finetti’s “Dutch Book” arguments) that degrees of plausibility obey the usual rules of probability. In particular, the product rule is proven, p(A and B | C) = p(A|B and C) p(B|C), from which Bayes theorem follows via “The Proof”.
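For reference, here is the one-line textbook derivation from the product rule, which is the entire content of “The Proof”:

```latex
% The product rule, applied both ways to the same joint probability:
%   p(h \wedge e \mid b) = p(h \mid e.b)\, p(e \mid b) = p(e \mid h.b)\, p(h \mid b).
% Equating the two right-hand sides and dividing by p(e|b) gives Bayes' theorem:
p(h \mid e.b) = \frac{p(e \mid h.b)\, p(h \mid b)}{p(e \mid b)},
\qquad
p(e \mid b) = p(e \mid h.b)\, p(h \mid b) + p(e \mid \lnot h.b)\, p(\lnot h \mid b).
```

The second equation (the law of total probability) expands the denominator into the four quantities Carrier lists.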

*In precisely none of these textbooks and articles will you find anything like Carrier’s account. *When presenting the foundations of probability theory in general and Bayes Theorem in particular, no one presents anything like Carrier’s version of probability theory. Do it yourself, if you have the time and resources. Get a textbook (some of the links above are to online PDFs), find the sections on the foundations of probability and Bayes Theorem, and compare to the quotes from Carrier above. In this company, Carrier’s version of probability theory is a total loner. We’ll see why.

To draw out the various idiosyncrasies of Carrier’s account of Bayes Theorem, consider this parallel discussion of a different mathematical theorem:

*Pythagoras Theorem (PT) is an argument in formal logic. It’s been formally proven. It follows from two premises (a and b), from which the conclusion (c) follows according to a fixed formula, where a and b are assigned a value in the argument. PT is reducible to a syllogism in conditional logic as follows:*

*(1′) If the two shorter sides of a right-angled triangle are a and b, then the hypotenuse is c.*

*(2′) The shorter sides of a right-angled triangle are a and b*

*(3′) Therefore, the hypotenuse is c*

*One can prove (1′) by building syllogisms all the way down to the formal proof of PT. So, Pythagoras theorem is an argument, or a form of argument. Its conclusion (c) is necessarily true when its premises (a and b) are true.*

The problems are legion.

- (A) This “argument form” of PT is missing PT itself; we must add “where c^2 = a^2 + b^2” to premise (1′) to give it any meaning.
- (B) It does not show that Pythagoras theorem **is** an argument or a form of argument. It shows that PT can be used **in** an argument. But that’s trivial – any statement can be a premise in an argument.
- (C) The “form” of the argument is just *modus ponens*, “If A then B. A. Therefore B”. There is no particular “form of argument” associated with PT.
- (D) Don’t call a, b and c “premises” or “symbolic notation for a proposition”. You can’t multiply and add premises or propositions, and that’s what happens in PT. They’re numbers. You put them in a formula.
- (E) The discussion above is not a proof of PT, because PT is (or should be; see A) included in premise (1′). Nor does it show how PT follows from an axiomatization of mathematics or reduces to symbolic logic. It shows how to argue **from** PT, not **to** PT.
- (F) You cannot prove PT from the axioms of arithmetic because those axioms don’t define what a triangle and a right angle are, or what to do with them. You need axioms of plane geometry, such as Euclid’s axioms (or their more modern descendants).

We can apply A-F straightforwardly to Carrier’s discussion of BT. For convenience, I’ll number Carrier’s premises:

(1) If P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y, then P(h|e.b) is z

(2) P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y

(3) Therefore, P(h|e.b) is z

- (A) This “argument form” of BT is missing BT itself. As it stands, (1) states that P(h|e.b) is equal to some unspecified, arbitrary number. We must add “where z = xw / [xw + y(1 – w)]” to premise (1) to give it any meaning.
- (B) It does not show that Bayes theorem **is** an argument or a form of argument. It shows that BT can be **used** in a syllogism … as can any other statement. It is also an unnecessary complication to use BT in this form – what Carrier calls the “reduction to a syllogism in conditional logic” every mathematician would call “putting numbers into a formula”.
- (C) The “form” of the argument is just *modus ponens*, “If A then B. A. Therefore B”. There is no particular “form of argument” associated with BT.
- (D) Don’t call P(e|h.b) etc. premises or “symbolic notation for [a] proposition”, because you can’t multiply and add and divide premises and propositions, and that’s what happens in BT. They’re numbers. You put them in a formula.
- (E) The syllogism (1)-(3) is not a proof of BT, because BT is (or should be; see A) included in premise (1). Nor does it show how BT follows from an axiomatization of mathematics or reduces to symbolic logic. It shows how to argue **from** BT, not **to** BT.
- (F) You cannot prove BT from the axioms of arithmetic because they don’t know what a probability is, or what to do with it. Read, for example, the Peano axioms. They define natural numbers, equality, succession – but not probability. You need the axioms of probability theory, such as Kolmogorov’s axioms.

All of which brings us to “The Proof”, which is nothing of the sort. It is an elementary probability exercise, the kind of thing you’d set a first year student: show that Bayes theorem follows from the product rule (Statement (1) in “The Proof”). Actually, the problem is so easy that most textbooks just do it in one line and move on. Venkatesh (page 56), for example, presents “The Proof” in one line and says that it is “… little more than a restatement of the definition of conditional probability.”
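
For reference, that one-line derivation – sketched here in the notation used elsewhere in this post – runs:

```latex
% Product rule, applied both ways to the conjunction h.e:
p(h.e|b) = p(h|e.b)\,p(e|b) = p(e|h.b)\,p(h|b)
% Divide through by p(e|b) (assuming p(e|b) > 0) to get Bayes theorem:
p(h|e.b) = \frac{p(e|h.b)\,p(h|b)}{p(e|b)}
```

The only step is dividing both sides by p(e|b) – algebra, not symbolic logic.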

Showing that a statement follows from some other statement does not prove it. You have to show that it follows from the relevant axioms, or a theorem derived from those axioms. **Given** that the product rule follows from the axioms of probability theory or Cox’s theorem or from the definition of conditional probabilities (which it does), “The Proof” does in fact establish BT. But Carrier is claiming more: “The Proof” is supposedly “the formal proof of BT … by symbolic logic”, showing how this mathematical theorem “reduces to formal logic” in the rigorous tradition of the *Principia Mathematica*.

This is just wrong. “The Proof” is a derivation of BT from the product rule. We’re a million miles from the axiomatic foundations of mathematics. Worse, Carrier thinks that “The Proof” is “by symbolic logic”, when it is quite plainly an exercise in algebra. The product rule involves the logical operator “and”, but all the manipulations are algebraic (multiplying and dividing), not logical. I’ll repeat that point: *this supposed proof “by symbolic logic” uses none of the rules of symbolic logic*.

This is not a small technicality. A trivial algebraic exercise, too easy for any competent student, is being presented by Carrier as a rigorous, formal, first-principles proof of Bayes theorem using symbolic logic alone. This is decisive evidence of Carrier’s utter cluelessness when it comes to probability theory. No mathematician, when asked about the foundations of probability theory, will point to “The Proof”.

Point F is perhaps the most important, so I’ll expand on it. As we saw above, there is a substantial academic literature on the foundations of probability, the status of Bayes Theorem, its relation to the interpretation of probability, and the various ways in which it can be derived. Generally, Bayesians go with Cox’s theorem or Dutch book arguments, while frequentists go with Kolmogorov.

**Carrier, all out on his own, needs none of this**. He is not using the established results of modern probability theory. He isn’t really a Bayesian or a frequentist or anything that mathematicians have seen before. He has his own, home-made approach – call it Carrierian probability.

In fact, in *Proving History* he makes these claims explicit. In the section “Bayesianism as Epistemic Frequentism” (Chapter 6), he outlines an approach to probabilities according to which “**all Bayesians are in fact frequentists**”. When Bayesians claim that probabilities are not frequencies, on this view, they have simply failed to notice which frequency their degrees of belief really report.

What is this frequency that all Bayesians everywhere have been ignoring all this time? According to Carrier, degrees of belief are really the ratio of the number of “beliefs that are true” to the number of “all beliefs backed by a certain comparable quantity and quality of evidence”. For example, “When a Bayesian says that the prior probability that a royal flush is fair is 95% … they are *really* saying that 95% of all royal flushes drawn (on relevantly similar occasions) are fair. Which is a physical frequency. Thus, epistemic probabilities always derive from physical probabilities.”

It may seem ungrateful to nit-pick such a monumental intellectual achievement, but he’s not quite done. To complete his exposition of Carrierian probability, and show those so-called mathematicians how probability should be done, Carrier has one more task: justify a **mathematical method** that allows us to quantify the “quality and quantity of evidence” behind a belief.

Heck, I’ll even get him started. Perhaps, to allow for comparison, “quality and quantity” could be represented by a real number q. More evidence could mean larger q values, and adding evidence should never decrease q. We should stipulate consistency: equivalent states of evidence receive equal q values. In which case, good news! Just such a method already exists! The principles that govern q are basically the premises of Cox’s theorem, from which we get Bayesian probabilities.

So let’s review Carrier’s method. To calculate probabilities, we need to define the reference classes in which to place our various beliefs. And to define those, we’ll need a * measure* of the quantity and quality of evidence. This measure, it turns out, is basically Bayesian probabilities.

Carrierian probability, as presented, is thus hopelessly incomplete. And when completed, circular. The circularity is staring us in the face when Carrier says that the relevant reference class, from which we calculate probabilities, is defined as containing those beliefs “backed by the kind of evidence and data that produces those kinds of prior and consequent probabilities.” Only once we have probabilities can we form such reference classes (ignoring the issue of when two “amounts of evidence” are “comparable” – exactly equal? Within 1%?). It follows that these reference classes cannot be used to define the probabilities.

So, it’s time to put up or shut up. When invited to outline the foundations of his approach to probability theory (which I first did 2 years ago – question 5), Carrier snubs modern probability theory. Axioms, Kolmogorov, Cox, de Finetti … who needs them! But a mathematical theory without foundations is just hot air. Until a rigorous basis for Carrierian probability theory is provided, all his probability claims are meaningless.

Carrier must publish a **series of papers** in mathematical journals that substantiate his extraordinary claims about the foundations of probability theory, proving – in the face of centuries of work by mathematicians – that:

- None of the usual axioms or arguments or theorems are needed.
- Probability reduces to formal/symbolic logic *alone*.
- Bayesianism is really a kind of frequentism.
- “Quality and quantity of evidence” can be uniquely and precisely quantified.

He made these claims, so this is a task that he has set for himself. Citing his own books is not good enough.

Or else, we’ll know that he is all bluff and bluster. Pressed to present the foundations of probability theory, he has failed utterly. He could have just plagiarised any probability textbook, but instead invented a pile of garbage about conditional logic, building syllogisms, *Principia Mathematica*, the axioms of arithmetic, and quality of evidence.

Hence, this is my final word. If Carrierian probability is hailed as a revolution by mathematicians, then I will concede Carrier’s probabilistic credentials and be forever silenced. If he continues to talk about probabilities, then – since he doesn’t mean by “probability” what the term means in any rigorous mathematical theory – this must be regarded as literally meaningless. We need only reply: **where are the papers?**

If, alternatively, he realises that he is completely out of his depth, that he hasn’t got the first clue about the foundations of probability theory, he may (after learning probability theory – for the first time, it seems – from a textbook) try to claim that he has been a follower of Cox/Kolmogorov all along. However, as we have seen, this is a complete shift in the foundations of his approach. All of his previous work that relies on Carrierian probability – including its extension to historical investigation in *Proving History* and *On the Historicity of Jesus* – must be discarded.


Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other.

Wow! Two “studies”! (In academia, we call them “papers”. Though neither was published in a peer-reviewed journal, so perhaps “articles”.) Three mathematicians! Except that Elliott Sober is a philosopher (and a fine one), not a mathematician – he has never published a paper in a mathematics journal. More grasping at straws.

Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn’t.

Those sentences fail Bayesian Probability 101. Prior probabilities are probabilities *of hypotheses*. Always. In every probability textbook there has ever been.

This is not a harmless slip in terminology. Carrier treats a likelihood as if it were a prior. He has confused the **concepts**, not just the names. Carrier states that “the only way the prior probability of observers can be low, is if the prior probability of observers is high on some alternative hypothesis.”

It follows that this entire section on the “prior probability of observers” and the need to consider “some alternative hypothesis” is garbage. There is simply no argument to respond to, only a hopeless mess of Carrier’s confusions. It’s an extended discussion about prior probabilities from a guy who doesn’t know what a prior probability is. Given that he has previously confused priors and posteriors, he’s zero from three on the fundamentals of Bayes theorem. You cannot keep getting the basics of probability theory wrong and expect to be taken seriously.

**Technical details:** For any hypothesis h, and its negation ~h (which we can think of as the disjunction or union of all alternatives to h), p(h|b) + p(~h|b) = 1. So, the prior p(h|b) is small if and only if p(~h|b) is large, and vice versa. The same applies to posteriors: p(h|e.b) + p(~h|e.b) = 1. But there is no corresponding rule for likelihoods: that p(e|h.b) is small does not imply that p(e|~h.b) is large. “p(e|h.b) + p(e|~h.b) = 1” is *not* an identity of probability theory.
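
A toy illustration of the asymmetry (the numbers are invented for the example): take h = “the coin is double-headed” against a fair coin, with e = “ten heads in a row”.

```python
p_h = 0.01                    # prior p(h|b), illustrative
p_not_h = 1 - p_h             # priors over h and ~h must sum to 1
p_e_given_h = 1.0             # likelihood p(e|h.b): certain heads
p_e_given_not_h = 0.5 ** 10   # likelihood p(e|~h.b): fair coin

# Priors (and posteriors) over a hypothesis and its negation sum to 1 ...
assert p_h + p_not_h == 1.0
# ... but likelihoods of the same e under h and ~h are not so constrained:
total = p_e_given_h + p_e_given_not_h   # ≈ 1.001, not 1
```

Both likelihoods can be large, or both small; no identity links them.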

This is where note 23 in my chapter comes in … Barnes never mentions this argument and never responds to this argument.

Addressed in Part 2, under “Bayes’ Theorem Omits Redundancies” and Part 4, under “The Main Attraction” and “My Reply”. I’ve put Carrier’s argument in mathematical notation, so it should be easy to demonstrate where my response falls short. No such demonstration is forthcoming, only repetition.

… [when you] remove even our knowledge of ourselves existing from b [the background evidence]. You end up making statements about universes without observers in them. Which can never be observed. … Either you are making statements about universes that have a ZERO% chance of being observed (and therefore cannot be true of our universe), or you are making statements that are 100% guaranteed to be observed.

This is exactly the point I discussed in detail in Part 4. Since Bayes theorem is an identity – that is, it can be used with *any* propositions – moving a particular fact between e and b can never be wrong. Carrier’s objections must be mistaken, since you can’t fight a mathematical identity.

And we can see where they are mistaken. In Bayesian probability theory, hypotheses are penalised for declaring as “highly likely” statements that are in fact false. For example, the hypothesis “the burglar guessed the 12-digit combination to the safe” implies that it is highly likely that the burglar didn’t open the safe. It is heavily penalised, then, if security camera footage shows the burglar opening the safe on the first attempt. We end up talking about burglars who didn’t open the safe because those kinds of burglars are the most likely on the stated hypothesis.
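
In numbers (all of them hypothetical, chosen only to make the penalty visible):

```python
# h = "the burglar guessed the 12-digit combination",
# k = "the burglar knew the combination" (the alternative),
# e = "footage shows the safe opened on the first attempt".
p_h, p_k = 0.5, 0.5            # even priors, purely for illustration
p_e_given_h = 10.0 ** -12      # a lucky guess at 12 digits
p_e_given_k = 1.0              # knowing the combination

posterior_h = p_e_given_h * p_h / (p_e_given_h * p_h + p_e_given_k * p_k)
# posterior_h is tiny: the guessing hypothesis is crushed by the footage
```

The hypothesis made “didn’t open the safe” highly likely, the footage falsified that, and Bayes theorem does the rest.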

If naturalism implies that, given *only* that a universe exists, it is highly likely that the universe does not contain life forms, then it is heavily penalised by the falsity of that statement. (We all understand background information, right?) We end up talking about universes without observers because those kinds of universes are the most likely on naturalism. The fact that they cannot be observed does not matter; likelihoods are normalised over an exhaustive set of possible outcomes, whether observable or not.

Let’s recap some highlights of these three posts.

- Carrier has not addressed the charge of inconsistency with probability theory. In fact, he has given more examples of inconsistency by introducing “hypothetical reference classes”. He has not addressed the reference class problem.
- He has made up probability concepts that no one has ever heard of before, including “transfinite frequentism” and “existential probability calculus”.
- He has abandoned his previous claim that “all the scientific models we have … show life-bearing universes to be a common result of random universe variation, not a rare one.”
- He completely misunderstands my rather obvious point that “for a given possible universe, we specify the physics”, and in so doing, shows that he does not understand fine-tuning at its most basic level.
- And, finally, Carrier’s argument regarding the “Real Heart of the Matter” is rendered meaningless by a deep misunderstanding of probability theory’s basics.

Carrier, demonstrably, understands neither probability theory nor fine-tuning.

Barring a minor miracle, my next post will be my last about Richard Carrier. I’ll explain why there.

- I’m taking the term “hypotheses” in a general sense, so that it could include the hypothesis that an unknown parameter has a certain value. That is, priors can be distributions of unknown parameters.
- This talk of “some alternative hypothesis” precludes the possibility that Carrier is actually referring to p(e|b), the marginal likelihood. If “e = this universe contains observers”, then p(e|b) could – I suppose – be referred to as the prior probability of observers, though no one would and Carrier’s argument would still be wrong.


**Location:** Western Sydney University, Lecture theatre, Building 30

**Date:** Monday 15th February, 7.30 pm

**Title:** There is more to the Universe than its good looks.

**Abstract:** The planets, stars and galaxies that fill the night sky obey elegant mathematical patterns: the laws of nature. Why does our Universe obey these particular laws? As a clue to answering this question, scientists have asked a similar question: what if the laws were slightly different? What if the universe had begun with more matter, had heavier particles, or space had four dimensions?

In the last 30 years, scientists have discovered something astounding: the vast majority of these changes are disastrous. We end up with a universe containing no galaxies, no stars, no planets, no atoms, no molecules, and most importantly, no intelligent life-forms wondering what went wrong. This is called the fine-tuning of the universe for life. After explaining the science of what happens when you change the way our universe works, we will ask: what does all this mean?


Barnes claims to have hundreds of science papers that refute what I say about the possibility space of universe construction, and Lowder thinks this is devastating, but Barnes does not cite a single paper that answers my point.

My comment was in response to the claim that the statement “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range” has been “refuted by scientists”, not about what Carrier has to say about “universe construction”. The references are in my review paper.

Because we don’t know how many variables there are.

Carrier doesn’t – he still thinks that there are 6 fundamental constants of nature, but can’t say what they are. Actual physicists have no problem counting the free parameters of fundamental physics as we know it, which is what fine-tuning is all about.

We don’t know all the outcomes of varying them against each other.

We know enough, thanks to a few decades of scientific research. It is not an argument from ignorance – extensive calculations have been performed, which overwhelmingly support fine-tuning.

And, ironically for Barnes, we don’t have the transfinite mathematics to solve the problem.

This is probably a reference to “transfinite frequentism”, a term that, as we saw last time, Carrier invented.

In any case, we don’t need transfinite arithmetic here. Bayesian probability deals with free parameters with infinite ranges in physics all the time; fine-tuning is not a unique case. Many of the technical probability objections aimed at fine-tuning, such as those of the McGrews, would preclude a very wide range of applications of probability in physics.

I am not aware of any paper in cosmology that addresses these issues.

It’s called the “measure problem”. There are literally hundreds of papers on it, too. For example, here’s a relevant paper with over 100 citations: “Measure problem in cosmology”. Aguirre (2005), Tegmark (2005), Vilenkin (2006) and Olum (2012) are good places to start. The problem of infinities in cosmology (including in fine-tuning and the multiverse) is tricky, but few cosmologists believe that it is unsolvable.

In this case, it’s not even an argument in my chapter in TEC … Barnes has skipped to quoting and arguing against a completely unrelated blog post of mine.

My third post discusses a post of Carrier’s that a) discusses fine-tuning and b) quotes from TEC. Unrelated? Grasping at straws …

“We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe.” Barnes can cite no paper refuting that statement.

We “do not know” only in the trivial sense that we aren’t *completely, 100% certain*, but almost the entire fine-tuning literature is evidence against that statement. For example, read just about any paper on the cosmological constant problem: Hartle et al. (2013) “Anthropic reasoning potentially explains why the observed value of the cosmological constant is small when compared to natural values set by the Planck scale as was discussed by Barrow and Tipler, and Weinberg.”

… some studies get a wide range not a narrow one … e.g. Fred Adams, “Stars in Other Universes: Stellar Structure with Different Fundamental Constants”

Adams does not get a wide range – the figure of “one fourth” mentioned in the abstract is not a measure of the life-permitting range. See this post, and my comments in the review paper.

… which suggests to me he is not being honest in what he claims to know about the literature. So we have inconsistent results.

I’m dishonest because I didn’t mention the “handful [of papers] that oppose this conclusion [of fine-tuning]”? Wait … that’s a quote from me. The results are anything but “inconsistent”.

Speaking of inconsistency: in his post from 2013, Carrier says “all the scientific models we have … show life-bearing universes to be a common result of random universe variation, not a rare one.” Now, in OBR, he says “some studies get a wide range not a narrow one … I know they exist, because I’ve read more than one.”

Then I go on to give the second reason, which is that even those papers are useless. Notice Barnes does not tell his readers this. … my very next sentence, the sentence Barnes hides until later. [Barnes] prefers to pretend [the second argument] didn’t exist than attempt to answer it.

Note the inconsistency between “doesn’t tell” and “hides until later”. Also, note my diabolical method of “hiding” Carrier’s arguments by quoting them. Read my post: I discuss the first reason. And then I *immediately* discuss the second reason, saying “For a given possible universe, we specify the physics. So we know that there are no other constants and variables.”

Carrier later responds to my reply. So he wants to complain that a) I didn’t respond and (b) my response is mistaken. You can’t have it both ways.

As an aside, if we want to talk about dishonesty: In his post and OBR (linked from “or the mathematical problem”), Carrier cites Tim and Lydia McGrew in support of his claim that infinities create serious problems for fine-tuning. What he doesn’t tell you is that Lydia describes Carrier as “styling himself some sort of probability expert” and “show[ing] a rather striking lack of understanding of probability”. Tim, meanwhile, has shown that Carrier’s attempts to teach basic probability theory are riddled with elementary errors, demonstrating that “Richard Carrier is completely out of his depth with respect to the mathematics of elementary probability. He garbles the explanation of elementary concepts, and he fumbles the computation of his own chosen examples. … Carrier has not crossed the *pons asinorum* of elementary probability. … Why on earth would anyone take Richard Carrier seriously on this topic when he’s shown himself to be wildly incompetent?”. I couldn’t agree more. Tim’s “Does Richard Carrier Exist?” is also well worth a read.

Does Carrier tell his readers this? Hostile witnesses, who admit something against their own biases, are fine, of course. But is it honest (or, indeed, a good idea) to cite, in support of your case, experts who think that you are wildly incompetent and cannot be taken seriously?

Lowder appears to have been duped by Barnes into thinking I said it was a fact now that “the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4…).” Nope. Because my very next sentence, the sentence Barnes hides until later, and pretends isn’t a continuation of the same argument, says: “And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables.”

Sorry, Jeff – my Jedi mind tricks must be better than I realised. What Carrier claimed was “When you allow all the constants to vary freely, the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4).” That claim is false. That Carrier has *more to say* does not excuse his mistake.

Carrier presents my response to his second argument as:

Walk through the thinking here. We know there cannot (!) be or ever have been or ever will be a different universe with different forces, dimensions, and particles than our universe has, because “we specify the physics” (Uh, no, sorry, nature specifies the physics; we just try to guess at what nature does and/or can do) and because “A universe with other constants would be a different universe.” WTF? Um, that’s what we are talking about … different universes! I literally cannot make any sense of Barnes’s argument here.

Yeah, no kidding. What I said was “For a given possible universe, we specify the physics”. It is manifestly not a claim about what cannot ever have been, or about what nature actually does, or about ** actual** universes other than ours, or that our universe could not have or does not have physics of which we are currently unaware. This is not a discussion of the multiverse. The context is the claim that “there is only a narrow life-permitting range of possible configurations of the universe”. A universe with different constants would be a different “possible configuration”.

Moreover, my claim is obviously about other possible universes. All fine-tuning claims are. Carrier’s huff and puff about “if Barnes has some fabulous logical proof that universes with different forces, dimensions, and particles than ours are logically impossible … ” is not just ridiculous. I said “In all the possible universes we have explored, we have found that a tiny fraction would permit the existence of intelligent life.” To misunderstand this point is to completely misunderstand not only what I wrote, but the most basic, definitional claims of fine-tuning.

So there’s a useful conclusion: *Carrier has not critiqued fine-tuning, because he does not know what it is.*


(I don’t mind the delay. We’re all busy. I’ve still got posts I began in 2014 that I haven’t finished.)

First, a few short replies. I’ll skim through Carrier’s comments and provide a few one(-ish)-line responses. I’m assuming you’ve read Carrier’s post, so the quotes below (from OBR unless otherwise noted) are meant to point to (rather than reproduce) the relevant section. My discussion here is incomplete; later posts will go into more detail.

Carrier notes that his argument is a popularisation of other works, saying later that “Barnes … ignores the original papers I’m summarizing.”

I’ve responded to Ikeda and Jefferys’ article here and here. Their reasoning is valid, but is not about fine-tuning. I show how the fine-tuning argument, properly formulated, avoids their critique. My response to Sober would be similar.

Lowder agrees with Barnes on a few things, but only by trusting that Barnes actually correctly described my argument. He didn’t.

The first of umpteen “Barnes just doesn’t understand me” complaints. The reader will have to decide for themselves. Note both the numerous lengthy quotes I typed out in my posts, and my many attempts to formulate Carrier’s arguments in precise, mathematical notation.

On the general problem of deriving frequencies from reference classes, Bayesians have written extensively.

Deriving frequencies from reference classes is trivial – you just count members and divide. The problem that reference classes create for finite frequentism is their definition, not how one counts their members. So, Carrier doesn’t understand the reference class problem.
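
“Count members and divide” really is the whole of it. A sketch, with a made-up reference class of observed draws:

```python
# Hypothetical members of a reference class of royal flushes:
reference_class = ["fair", "fair", "rigged", "fair", "fair"]

# The frequency of "fair" draws: count members and divide.
frequency = reference_class.count("fair") / len(reference_class)  # 0.8
```

The hard part – the reference class problem – is deciding which members belong in the list at all, not this division.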

This last is the more bizarre gaffe of his, because calculating the range of possible universes is a routine practice in cosmological science.

What I said was “The restriction to known, actual events creates an obvious problem for the study of unique events.” Bayesians can apply probability to the universe; finite frequentists can’t. That’s why most cosmologists are Bayesians.

Our universe is not the only logically possible one to have arisen. That in fact it is not sitting in a reference class of one, but a reference class of an infinite number of configurations of laws and constants.

Keep clearly in mind my claim in Part 1: Carrier’s approach to probability is inconsistent. He keeps shifting the goalposts. In TEC, when talking about a cosmic designer, he says “Probability measures frequency (whether of things happening or of things being true)”. Only known cases, verified by science, can be allowed in a reference class. But now, in OBR, it’s OK to put hypothetical possibilities in a reference class.

This destroys his argument on page 282-3 of TEC, in which Carrier distinguishes cases that science has verified from “alleged cases”, which must be excluded from the reference class. But alleged cases are logically possible, so they should have been included all along, according to OBR.

Barnes would notice that if he didn’t also repeatedly confuse my estimating of the prior (at 25% “God created the universe”) with the threshold probability of coincidences (a distinction I illustrated with the “miraculous machinegun” argument I discuss, a discussion Barnes never actually interacted with, in TEC, pp. 296-98).

My discussion is in Part 2, “The Firing Squad Machine” and following. I quote from Carrier’s essay at length, put Carrier’s argument into standard probability notation, and show that it is invalid. By confusing priors and posteriors, Carrier is not updating in the Bayesian way.

Barnes attacked what I addressed in the chapter as the “threshold” probability discussed in note 31 … [a complete reproduction of the note 31] … This argument Barnes never rebutted.

I never attacked that argument, because the Bayesian approach doesn’t need a probability “threshold”. Dembski’s approach is pure frequentism. His threshold applies to likelihoods; Dembski, as a frequentist, doesn’t believe in priors. As a Bayesian, I agree with Carrier that this approach is flawed. The footnote is not rebutted because there is nothing for the Bayesian to rebut.

In short, since the only universes that can ever be observed (if there is no God) are universes capable of producing life, if only fine tuned universes are capable of producing life, then if God does not exist, only fine tuned universes can ever be observed. This counter-intuitively entails that fine-tuning is 100% expected on atheism.

Again … A fine-tuned universe is 100% expected on atheism if and only if observers are 100% expected on atheism. Observers are not 100% expected on atheism, because most possible universes do not support observers – that’s the point of fine-tuning. Thus, a fine-tuned universe is not 100% expected on atheism. I formalise this argument in Part 4.
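
In symbols – a sketch of the point, not the Part 4 formalisation – with N = naturalism, L = “observers exist” and F = “the universe is fine-tuned (life-permitting)”:

```latex
% The observation-selection effect gives only the conditional
p(F \mid L.N) = 1 ,
% which must not be conflated with what "100% expected on atheism" requires:
p(F \mid N) = 1 .
% Since life requires fine-tuning, L entails F, so p(L|N) <= p(F|N);
% and fine-tuning says fine-tuned universes are rare among possibilities:
p(F \mid N) \ll 1 .
```

Conditioning on observers guarantees fine-tuning; it does not make fine-tuning likely on naturalism.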

Not only did I never argue my chapter’s conclusion from a multiverse, I explicitly said I was rejecting the existence of a multiverse for the sake of a fortiori argument. That Barnes ignored me, even though I kept telling him this, and he instead kept trying to attack some argument from multiverses.

As Lowder’s quotes [6] and [7] demonstrate, I never contend that Carrier argues the “chapter’s conclusion from a multiverse”. Rather, Carrier’s discussion of the multiverse uses a different approach to probability, one that is inconsistent with the approach to probability applied to fine-tuning elsewhere in TEC. This inconsistency undermines his entire approach – the goalposts shift at will.

Because this is where Barnes flips his lid about “finite” frequentism (in case you were wondering what that was in reference to). Note I at no point rely on transfinite frequentism in the argument of my chapter

There is no such thing as “transfinite frequentism”. Take a moment to Google that phrase – the only result is Carrier’s blog post (and possibly now this one). Literally no one ever – no mathematician, no scientist, no philosopher … not even a clueless quack – has ever used that phrase before, so far as Google (and Google Scholar, Google Books, Wikipedia, arxiv.org, Bing, Yahoo!, and even Ask Jeeves) can tell. Draw your own conclusion.

The two kinds of frequentism are called “finite frequentism” and “hypothetical frequentism”. See, for example, the entry “Interpretations of Probability” at SEP, and these two MUST READ critiques by Alan Hajek: “Fifteen Arguments against Finite Frequentism” and “Fifteen Arguments Against Hypothetical Frequentism“.

This statement [“If we are using Bayes’ theorem, the likelihood of each hypothesis is extremely relevant”] simply repeats what I myself argue in my chapter in TEC. Illustrating how much Barnes is simply not even interacting with that chapter’s actual argument.

Again, my problem is that Carrier’s approach is inconsistent. He *says* that likelihoods are relevant, but abandons this principle when convenient. See Part 1 under “Forgetting Bayes’ Theorem” to see an **example** of this inconsistency. Restating the principle does not answer my charge.

Therefore life will never observe itself being in any other kind of universe than one that’s fine tuned. … Barnes to this day has never responded to it.

As above, and in Part 4, under “The Main Attraction”. If my mathematical formulation is in error, then correct it.

… the example proves to us that fine tuning never entails [intelligent design]. To the contrary, every randomly generated universe that has life in it will be finely tuned. That is what the example illustrates. Therefore, in cosmology, there is no meaningful correlation between fine tuning and intelligent design.

No entailment, therefore no correlation. In the context of a probabilistic argument.
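For what it’s worth, “no entailment” does not imply “no correlation” in a probabilistic argument. A toy simulation – purely illustrative, not a model of universes – makes this plain: B never entails A, yet A is far more probable given B.

```python
import random

random.seed(0)

# Toy illustration: B does not entail A (P(A|B) < 1),
# yet A and B are strongly correlated (P(A|B) > P(A)).
n = 100_000
count_b = count_a = count_a_and_b = 0
for _ in range(n):
    b = random.random() < 0.5
    # A is much more likely when B holds, but never guaranteed.
    a = random.random() < (0.8 if b else 0.1)
    count_b += b
    count_a += a
    count_a_and_b += a and b

p_a = count_a / n
p_a_given_b = count_a_and_b / count_b
print(f"P(A) = {p_a:.2f}, P(A|B) = {p_a_given_b:.2f}")
```

The simulated P(A|B) is well above P(A) but well below 1: correlation without entailment.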

Part 2 is here. Part 3 is here.

Filed under: Uncategorized

In 1971, Freeman Dyson discussed a seemingly fortunate fact about nuclear physics in our universe. Because two protons won’t stick to each other, when they collide inside stars, nothing much happens. Very rarely, however, in the course of the collision the weak nuclear force will turn a proton into a neutron, and the resulting deuterium nucleus (proton + neutron) is stable. The star can then combine these deuterium nuclei into helium, releasing energy.

If a super-villain boasted of a device that could bind the *diproton* (proton + proton) in the Sun, then we’d better listen. The Sun, subject to such a change in nuclear physics, would burn through the entirety of its fuel in about a second. Ouch.

A very small change in the strength of the strong force or the masses of the fundamental particles would bind the diproton. This looks like an outstanding case of fine-tuning for life: a very small change in the fundamental constants of nature would produce a decidedly life-destroying outcome.

However, *this is not the right conclusion*. The question of fine-tuning is this: how would the universe have been different if the constants of nature had different values? In the example above, we took *our universe* and abruptly changed the constants half-way through its life. The Sun would explode, but would a bound-diproton universe create stars that explode?

In my review paper, I reported a few reasons to suspect that the diproton disaster isn’t as clear cut as we think, but noted that detailed calculations have not been performed. I’m a cosmologist/galaxy formation kind of astrophysicist, and so hoped that someone else would do it! However, a talk by Mark Krumholz (UC Santa Cruz) showed the way forward. Stars in our universe have an initial deuterium-burning phase, where they burn leftover deuterium from the big bang. They only have a very small amount, but this reaction is very similar to diproton burning in alternative universes.

So I investigated stars that are initially 50% protons, 50% deuterium, and so are primed to burn via the strong force. The result: as expected, stars don’t explode. They simply burn at a lower temperature, and with less dense cores. In particular, for stars with the same total mass, there is only a factor of three difference in the total energy output per unit time. This means that their lifetimes are also similar.
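The link from energy output to lifetime is just the scaling lifetime ≈ fuel / luminosity. A quick sketch of the arithmetic – the factor of three is from above; the 10 Gyr solar normalisation is a standard rough figure, used only to set a scale:

```python
# Main-sequence lifetime scales as fuel over luminosity: t ~ E_fuel / L.
# So equal-mass stars whose luminosities differ by a factor of three
# have lifetimes that differ by only a factor of three -- "similar",
# on the logarithmic scales relevant to fine-tuning.

T_SUN_GYR = 10.0  # rough main-sequence lifetime of the Sun, in Gyr

def lifetime_gyr(mass_solar, luminosity_solar):
    """Rough stellar lifetime, scaling t proportional to M / L."""
    return T_SUN_GYR * mass_solar / luminosity_solar

print(lifetime_gyr(1.0, 1.0))  # 10.0 Gyr
print(lifetime_gyr(1.0, 3.0))  # ~3.3 Gyr: the same order of magnitude
```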

Looking over all the stars available in parameter space – weak burning and strong burning – the most interesting constraint for a life-permitting universe is the maximum stellar lifetime. The figure below shows the strength of electromagnetism (horizontal axis) and the strength of gravity (vertical axis).

Below the dashed lines, hydrogen-burning stars are stable. Below the thick black line, deuterium/diproton burning stars are stable – this is a much larger region. Our universe is the black square. *Note the logarithmic scale!* The contour lines show the lifetime of the longest-lived (and hence smallest) stable star in a given universe. The line labelled “6” shows where the longest-lived star burns out in a million years – too short for planets and life and such. Binding the diproton does not affect chemistry, or indeed any of the physics upon which living things directly rely.

So, if the strength of gravity were not very small (less than about 10^-30), all stars would burn out too quickly. This is a conservative but very robust anthropic constraint. Actually, the “strength of gravity” here is the ratio of the proton mass to the Planck mass, so the relevant fine-tuning is the fact that the fundamental particles of nature are “absurdly light”, in the words of Leonard Susskind. These are some of the most important fine-tuning examples around.
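To see just how “absurdly light”, one can compute the dimensionless gravitational coupling from standard approximate values of the constants. (This uses the common convention α_G = G m_p² / ħc, i.e. the squared proton-to-Planck mass ratio; the numbers are standard SI values rounded to four figures.)

```python
import math

# How light is the proton compared to the Planck mass?
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34       # reduced Planck constant, J s
C = 2.998e8            # speed of light, m/s
M_PROTON = 1.673e-27   # proton mass, kg

m_planck = math.sqrt(HBAR * C / G)    # ~2.18e-8 kg
alpha_g = (M_PROTON / m_planck) ** 2  # ~5.9e-39

print(f"Planck mass ~ {m_planck:.3g} kg")
print(f"alpha_G ~ {alpha_g:.3g}")
```

The result, around 5.9 × 10^-39, sits far below the life-permitting bound, but on the logarithmic axes of the figure our universe is not far from the edge.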

Filed under: Astronomy, fine tuning, Physics, The Universe