The Conversation has published an article of mine, co-authored with Geraint Lewis, titled “Have cosmologists lost their minds in the multiverse?”. It’s a quick introduction to the multiverse in light of the recent BICEP2 results. Comments welcome!
Archive for the ‘fine tuning’ Category
Have cosmologists lost their minds in the multiverse?
Posted in Astronomy, cosmology, fine tuning, The Universe, tagged multiverse on May 12, 2014
Questions for Richard Carrier
Posted in fine tuning on January 22, 2014
Following my three critiques (one, two, three) of Richard Carrier’s view on the fine-tuning of the universe for intelligent life, we had a back-and-forth in the comments section of his blog. Just as things were getting interesting, Carrier took his ball and went home, saying that any further conversation would be “a waste of anyone’s time”. Sorry, anyone.
I still have questions. Before I forget, I’ll post them here. (I posted them as a comment on his blog but they’re still “awaiting moderation”. I guess he’ll delete them.)
The Main Attraction
What is Carrier’s main argument in response to fine-tuning, in his article “Neither Life nor the Universe Appear Intelligently Designed”? He kept accusing me of misrepresenting him, but never clarified his argument. I’ll have another go. Let,
o = intelligent observers exist
f = a finelytuned universe exists
b = background information.
NID = a Non-terrestrial Intelligent Designer caused the universe.
We want to calculate the posterior: the probability of NID given what we know. From Carrier’s footnote 29, introduced as the “probability that NID caused the universe”, we can derive (using the odds form of Bayes’ theorem),

P(NID|o.f.b) / P(~NID|o.f.b) = [P(o.f|NID.b) / P(o.f|~NID.b)] × [P(NID|b) / P(~NID|b)] . (1)

Carrier argues in footnotes 22 and 23 that,

P(o|b) = 1 implies P(o.f|NID.b) = P(o.f|~NID.b) = 1 , (2)

because o is part of “established background knowledge” and so part of b. Thus,

P(NID|o.f.b) = P(NID|b) . (3)

Conclusion: the posterior is equal to the prior (as seen in footnote 29). Learning f has not changed the probability that NID is true. Fine-tuning is irrelevant to the existence of God.
Question 1: Is the above a correct formalisation of Carrier’s argument? (If anyone has read his essay, comment!) (more…)
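To see the structure of the formalisation above numerically, here is a minimal sketch in Python of the odds form of Bayes’ theorem from equation (1). The prior value is an invented placeholder, not anything from Carrier’s essay; the point is only that forcing the likelihood ratio to 1, as in equation (2), makes the posterior odds equal the prior odds.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' theorem:
    P(H|E)/P(~H|E) = [P(E|H)/P(E|~H)] * [P(H)/P(~H)]."""
    return likelihood_ratio * prior_odds

# Invented prior odds for NID (for illustration only).
prior = 0.25

# Carrier's move: o (and hence f) sits in the background b, so
# P(o.f|NID.b) = P(o.f|~NID.b) = 1 and the likelihood ratio is 1.
print(posterior_odds(prior, 1.0))   # 0.25: posterior odds = prior odds

# If fine-tuning were instead evidence for NID (ratio > 1),
# the posterior odds would rise:
print(posterior_odds(prior, 10.0))  # 2.5
```

Whatever one thinks of premise (2), the arithmetic of (3) follows from it trivially.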
Christmas Tripe – A Fine-Tuned Critique of Richard Carrier (Part 3)
Posted in fine tuning on December 23, 2013
I thought I was done with Richard Carrier’s views on the fine-tuning of the universe for intelligent life (Part 1, Part 2). And then someone pointed me to this. It comes in response to an article by William Lane Craig. I’ve critiqued Craig’s views on fine-tuning here and here. The quotes below are from Carrier unless otherwise noted.
[H]e claims “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range,” but that claim has been refuted–by scientists–again and again. We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe. As has been pointed out to Craig by several theoretical physicists (from Krauss to Stenger), he can only get his “narrow range” by varying one single constant and holding all the others fixed, which is simply not how a universe would be randomly selected. When you allow all the constants to vary freely, the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4: see Victor Stenger’s The Fallacy of Fine-Tuning).
I’ve said an awful lot in response to that paragraph, so let’s just run through the highlights.

“Refuted by scientists again and again”. What, in the peer-reviewed scientific literature? I’ve published a review of the scientific literature, 200+ papers, and I can only think of a handful that oppose this conclusion, and piles and piles that support it. Here are some quotes from non-theist scientists. For example, Andrei Linde says: “The existence of an amazingly strong correlation between our own properties and the values of many parameters of our world, such as the masses and charges of electron and proton, the value of the gravitational constant, the amplitude of spontaneous symmetry breaking in the electroweak theory, the value of the vacuum energy, and the dimensionality of our world, is an experimental fact requiring an explanation.” [emphasis added.]

“By several theoretical physicists (from Krauss to Stenger)”. I’ve replied to Stenger. I had a chance to talk to Krauss briefly about fine-tuning but I’m still not sure what he thinks. His published work on anthropic matters doesn’t address the more general fine-tuning claim. Also, by saying “from” and “to”, Carrier is trying to give the impression that a great multitude stands with his claim. I’m not even sure if Krauss is with him. I’ve read loads on this subject and only Stenger defends Carrier’s point, and in a popular(-ish) level book. On the other hand, Craig can cite Barrow, Carr, Carter, Davies, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, and Wilczek. (See here). With regard to the claim that “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range”, the weight of the peer-reviewed scientific literature is overwhelmingly with Craig. (If you disagree, start citing papers).

“He can only get his “narrow range” by varying one single constant”. Wrong. The very thing that got this field started was physicists noting coincidences between a number of constants and the requirements of life. Only a handful of the 200+ scientific papers in this field vary only one variable. Read this.

“1 in 8 and 1 in 4: see Victor Stenger”. If Carrier is referring to Stenger’s program MonkeyGod, then he’s kidding himself. That “model” has 8 high-school-level equations, 6 of which are wrong. It fails to understand the difference between an experimental range and a possible range, which is fatal to any discussion of fine-tuning. Assumptions are cherry-picked. Crucial constraints and constants are missing. Carrier has previously called MonkeyGod “a serious research product, defended at length in a technical article”. It was published in a philosophical journal of a humanist society, and in a popular-level book, and would be laughed out of any scientific journal. MonkeyGod is a bad joke.
And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables.
In all the possible universes we have explored, we have found that a tiny fraction would permit the existence of intelligent life. There are other possible universes that we haven’t explored. This is only relevant if we have some reason to believe that the trend we have observed until now will be miraculously reversed just beyond the horizon of what we have explored. In the absence of such evidence, we are justified in concluding that the possible universes we have explored are typical of all the possible universes. In fact, by beginning in our universe, known to be life-permitting, we have biased our search in favour of finding life-permitting universes. (more…)
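The experimental-range/possible-range distinction mentioned above can be made quantitative. In the following Python toy, the constant, its life-permitting window and both ranges are invented for illustration; the point is that the “probability” of landing in a life-permitting window depends entirely on the assumed range of possible values, which is why confusing the explored range with the possible range is fatal.

```python
def life_permitting_fraction(window, possible_range):
    """Fraction of a uniformly-sampled possible range that lies inside
    the life-permitting window; both arguments are (lo, hi) tuples."""
    lo = max(window[0], possible_range[0])
    hi = min(window[1], possible_range[1])
    overlap = max(0.0, hi - lo)
    return overlap / (possible_range[1] - possible_range[0])

window = (0.9, 1.1)  # invented life-permitting window for some constant

# Narrow "experimental" range around the observed value:
print(life_permitting_fraction(window, (0.5, 1.5)))    # ~0.2
# A wider assumed range of possible values dilutes the fraction:
print(life_permitting_fraction(window, (0.0, 100.0)))  # ~0.002
```

Restricting attention to a narrow explored range can only overestimate the life-permitting fraction.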
What Chance Looks Like – A Fine-Tuned Critique of Richard Carrier (Part 2)
Posted in fine tuning on December 15, 2013
Last time, we looked at historian Richard Carrier’s article, “Neither Life nor the Universe Appear Intelligently Designed”. We found someone who preaches Bayes’ theorem but thinks that probabilities are frequencies, says that likelihoods are irrelevant to posteriors, and jettisons his probability principles at his leisure. In this post, we’ll look at his comments on the fine-tuning of the universe for intelligent life. Don’t get your hopes up.
Simulating universes
Here’s Carrier.
Suppose in a thousand years we develop computers capable of simulating the outcome of every possible universe, with every possible arrangement of physical constants, and these simulations tell us which of those universes will produce arrangements that make conscious observers (as an inevitable undesigned byproduct). It follows that in none of those universes are the conscious observers intelligently designed (they are merely inevitable byproducts), and none of those universes are intelligently designed (they are all of them constructed merely at random). Suppose we then see that conscious observers arise only in one out of every universes. … Would any of those conscious observers be right in concluding that their universe was intelligently designed to produce them? No. Not even one of them would be.
To see why this argument fails, replace “universe” with “arrangement of metal and plastic” and “conscious observers” with “driveable cars”. Suppose we could simulate the outcome of every possible arrangement of metal and plastic, and these simulations tell us which arrangements produce driveable cars. Does it follow that none of those arrangements could have been designed? Obviously not. This simulation tells us nothing about how actual cars are produced. The fact that we can imagine every possible arrangement of metal and plastic does not mean that every actual car is constructed merely at random. This wouldn’t even follow if cars were in fact constructed by a machine that produced every possible arrangement of metal and plastic, since the machine itself would need to be designed. The driveable cars it inevitably made would be the product of design, albeit via an unusual method.
Note a few leaps that Carrier makes. He leaps from bits in a computer to actual universes that contain conscious observers. He leaps from simulating every possible universe to producing universes “merely at random”. As a cosmological simulator myself, I can safely say that a computer program able to simulate every possible universe would require an awful lot of intelligent design. Carrier also seems to assume that a random process is undesigned. Tell that to these guys. Random number generators are a common feature of intelligently designed computer programs. This argument is an abysmal failure.
How to Fail Logic 101
Carrier goes on … (more…)
Probably Not – A Fine-Tuned Critique of Richard Carrier (Part 1)
Posted in fine tuning, Mathematics on December 13, 2013
After a brief back and forth in a comments section, I was encouraged by Dr Carrier to read his essay “Neither Life nor the Universe Appear Intelligently Designed”. I am assured that the title of this essay will be proven “with such logical certainty” that all opposing views should be wiped off the face of Earth.
Dr Richard Carrier is a “world-renowned author and speaker”. That quote comes from none other than the world-renowned author and speaker, Dr Richard Carrier. Fellow atheist Massimo Pigliucci says,
The guy writes too much, is too long winded, far too obnoxious for me to be able to withstand reading him for more than a few minutes at a time.
I know the feeling. When Carrier’s essay comes to address evolution, he recommends that we “consider only actual scholars with PhD’s in some relevant field”. One wonders why, when we come to consider the particular intersection of physics, cosmology and philosophy wherein we find fine-tuning, we should consider the musings of someone with a PhD in ancient history. (A couple of articles on philosophy does not a philosopher make). Especially when Carrier has stated that there are six fundamental constants of nature, but can’t say what they are, can’t cite any physicist who believes that laughable claim, and refers to the constants of the standard model of particle physics (which every physicist counts as fundamental constants of nature) as “trivia”.
In this post, we will consider Carrier’s account of probability theory. In the next post, we will consider Carrier’s discussion of fine-tuning. The mathematical background and notation of probability theory were given in a previous post, and follow the discussion of Jaynes. (Note: probabilities can be written either P(A|B) or P(AB), and both an overbar and a tilde denote negation.)
Probability theory, a la Carrier
I’ll quote Carrier at length.
Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those numbers], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.
We’re off to a dubious start. Bayes’ theorem, as the name suggests, is a theorem, not an argument, and certainly not a definition. Also, Carrier seems to be saying that P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b) are the premises from which one formally proves Bayes’ theorem. This fails to understand the difference between the derivation of a theorem and the terms in an equation. Bayes’ theorem is derived from the axioms of probability theory – Kolmogorov’s axioms or Cox’s theorem are popular starting points. Any necessity in Bayes’ theorem comes from those axioms, not from the four numbers P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b). (more…)
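For reference, the “fixed formula” itself is easy to state. Here is a minimal Python sketch (with invented numbers) of Bayes’ theorem applied to Carrier’s four terms, noting in passing that the sum rule makes one of them redundant.

```python
def bayes_posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem:
    P(h|e.b) = P(e|h.b)P(h|b) / [P(e|h.b)P(h|b) + P(e|~h.b)P(~h|b)].
    The sum rule gives P(~h|b) = 1 - P(h|b), so only three of the
    four 'premises' are actually independent."""
    p_not_h = 1.0 - p_h
    num = p_e_given_h * p_h
    return num / (num + p_e_given_not_h * p_not_h)

# Invented illustrative numbers:
print(bayes_posterior(p_h=0.5, p_e_given_h=0.9, p_e_given_not_h=0.3))  # ~0.75
```

The necessity lives in the formula, which is derived from the axioms; the four numbers are merely its inputs.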
Reply to Maudlin: The Calibrated Cosmos
Posted in fine tuning, Philosophy, The Universe on November 13, 2013
I recently read philosopher of science Tim Maudlin’s book Philosophy of Physics: Space and Time and thought it was marvellous, so I was expecting good things when I came to read Maudlin’s article for Aeon Magazine titled “The calibrated cosmos: Is our universe fine-tuned for the existence of life – or does it just look that way from where we’re sitting?”. I’ve got a few comments. Indented quotes below are from Maudlin’s article unless otherwise noted.
In a weekend?
Theories now suggest that the most general structural elements of the universe — the stars and planets, and the galaxies that contain them — are the products of finely calibrated laws and conditions that seem too good to be true. … The details of these sorts of calculations should be taken with a grain of salt. No one could sit down and rigorously work out an entirely new physics in a weekend.
Two quick things. “Theories” has a ring of “some tentative, fringe ideas” to the lay reader, I suspect. The theories on which one bases fine-tuning calculations are precisely the reigning theories of modern physics. These are not “entirely new physics” but the same equations (general relativity, the standard model of particle physics, stellar structure equations etc.) that have time and again predicted the results of observations, now applied to different scenarios. I think Maudlin has underestimated both the power of order-of-magnitude calculations in physics, and the effort that theoretical physicists have put into fine-tuning calculations. For example, Epelbaum and his collaborators, having developed the theory and tools to use supercomputer lattice simulations to investigate the structure of the C12 nucleus, write a few papers (2011, 2012) to describe their methods and show how their cutting-edge model successfully reproduces observations. They then use the same methods to investigate fine-tuning (2013). My review article cites upwards of a hundred papers like this. This is not a back-of-the-envelope operation, not starting from scratch, not entirely new physics, not a weekend hobby. This is theoretical physics.
Telling your likelihood from your posterior
It can be unsettling to contemplate the unlikely nature of your own existence … Even if your parents made a deliberate decision to have a child, the odds of your particular sperm finding your particular egg are one in several billion. … after just two generations, we are up to one chance in 10^27. Carrying on in this way, your chance of existing, given the general state of the universe even a few centuries ago, was almost infinitesimally small. You and I and every other human being are the products of chance, and came into existence against very long odds.
The slogan I want to invoke here is “don’t treat a likelihood as if it were a posterior”. That’s a bit too jargony. The likelihood is the probability of what we know, assuming that some theory is true. The posterior is the reverse – the probability of the theory, given what we know. It is the posterior that we really want, since it reflects our situation: the theory is uncertain, the data is known. The likelihood can help us calculate the posterior (using Bayes’ theorem), but in and of itself, a small likelihood doesn’t mean anything. The calculation Maudlin alludes to above is a likelihood: what is the probability that I would exist, given that the events that led to my existence came about by chance? The reason that this small likelihood doesn’t imply that the posterior – the probability of my existence by chance, given my existence – is small is that the theory has no comparable rivals. Brendon has explained this point elsewhere. (more…)
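A toy Bayesian calculation (Python, invented numbers) makes the slogan concrete: a minuscule likelihood of my existence by chance does nothing to the posterior, so long as no rival theory assigns my particular existence a higher likelihood.

```python
def posterior_chance(prior_chance, like_chance, like_rival):
    """P(chance | I exist), computed against a single rival theory."""
    prior_rival = 1.0 - prior_chance
    num = like_chance * prior_chance
    return num / (num + like_rival * prior_rival)

# The likelihood of *my* particular existence is tiny on ANY theory:
# even "my parents deliberately had a child" cannot target one sperm
# among billions. Equal tiny likelihoods leave the posterior unmoved.
tiny = 1e-27
print(posterior_chance(prior_chance=0.5, like_chance=tiny, like_rival=tiny))  # 0.5
```

The tiny likelihood cancels in numerator and denominator: it is the ratio of likelihoods between rival theories that matters, not the absolute size of any one of them.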
What to Read: The Fine-Tuning of the Universe for Intelligent Life
Posted in fine tuning, Uncategorized on September 10, 2013
I’ve spent a lot of time critiquing articles on the fine-tuning of the universe for intelligent life. I should really give the other side of the story. Below are some of the good ones, ranging from popular-level books to technical articles. I’ve given my recommendations for popular cosmology books here.
Books – Popularlevel
 Just Six Numbers, Martin Rees – Highly recommended, with a strong focus on cosmology and astrophysics, as you’d expect from the Astronomer Royal. Rees gives a clear exposition of modern cosmology, including inflation, and ends up giving a cogent defence of the multiverse.
 The Goldilocks Enigma, Paul Davies – Davies is an excellent writer and has long been an important contributor to this field. His discussion of the physics is very good, and includes a description of the Higgs mechanism. When he strays into metaphysics, he is thorough and thoughtful, even when he is defending conclusions that I don’t agree with.
 The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, Leonard Susskind – I’ve reviewed this book in detail in a previous blog post. Highly recommended. I can also recommend his many lectures on YouTube.
 The Constants of Nature, John Barrow – A discussion of the physics behind the constants of nature. An excellent presentation of modern physics, cosmology and their relationship to mathematics, which includes a chapter on the anthropic principle and a discussion of the multiverse.
 Cosmology: The Science of the Universe, Edward Harrison – My favourite cosmology introduction. The entire book is worth reading, not least the sections on life in the universe and the multiverse.
 At Home in the Universe, John Wheeler – A thoughtful and wonderfully written collection of essays, some of which touch on matters anthropic.
I haven’t read Brian Greene’s book on the multiverse but I’ve read his other books and they’re excellent. Stephen Hawking discusses fine-tuning in A Brief History of Time and The Grand Design. As usual, read anything by Sean Carroll, Frank Wilczek, and Alex Vilenkin.
Books – Advanced
 The Anthropic Cosmological Principle, Barrow and Tipler – still the standard in the field. Even if you can’t follow the equations in the middle chapters, it’s still worth a read as the discussion is quite clear. Gets a bit speculative in the final chapters, but it’s fairly obvious where to apply your grain of salt.
 Universe or Multiverse? (Edited by Bernard Carr) – the new standard. A great collection of papers by most of the experts in the field. Special mention goes to the papers by Weinberg, Wilczek, Aguirre, and Hogan.
Scientific Review Articles
The field of fine-tuning grew out of the so-called “Large numbers hypothesis” of Paul Dirac, which owes a lot to Weyl and is further discussed by Eddington, Gamow and others. These discussions evolve into fine-tuning when Dicke explains them using the anthropic principle. Dicke’s method is examined and expanded in these classic papers of the field: (more…)
Fine-Tuning on the TV: A Review of ABC’s Catalyst
Posted in Astronomy, cosmology, fine tuning, Science and the Public, tagged anthropic, fine tuning, multiverse on August 30, 2013
It’s always a nervous moment when, as a scientist, you discover that a documentary has been made on one of your favourite topics. Science journalism is rather hit and miss. So it was when the Australian Broadcasting Corporation (ABC), our public TV network, aired a documentary about the fine-tuning of the universe for intelligent life as part of their Catalyst science series. (I’ve mentioned my fine-tuning review paper enough, haven’t I?).
The program can be watched on ABC iView. (International readers – does this work for you?). It was hosted by Dr Graham Phillips, who has a PhD in Astrophysics. The preview I saw last week was promising. All the right people’s heads were appearing – Sean Carroll, Brian Greene, Paul Davies, Leonard Susskind, Lawrence Krauss, Charley Lineweaver. John Wheeler even got a mention.
Overall – surprisingly OK. They got the basic science of fine-tuning correct. Phillips summarises fine-tuning as:
When scientists look far into the heavens or deeply down into the forces of nature, they see something deeply mysterious. If some of the laws that govern our cosmos were only slightly different, intelligent life simply couldn’t exist. It appears that the universe has been finetuned so that intelligent beings like you and me could be here.
Not bad, though I’m not sure why it needed to be accompanied by such ominous music. There is a possibility for misunderstanding, however. Fine-tuning is a technical term in physics that roughly means extreme sensitivity of some “output” to the “input”. For example, if some theory requires an unexplained coincidence between two free parameters, then the “fine-tuning” of the theory required to explain the data counts against that theory. “Fine-tuned” does not mean “chosen by an intelligent being” or “designed”. It’s a metaphor.
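To illustrate that technical sense with a cartoon (Python; the steeply-dependent “output” below is invented, not a real physical theory), one common way to quantify sensitivity is the logarithmic derivative: the percent change in the output per percent change in the input.

```python
def log_sensitivity(f, x, eps=1e-6):
    """Numerical estimate of d(ln f)/d(ln x): the fractional change
    in the output per fractional change in the input."""
    return (f(x * (1 + eps)) - f(x)) / (f(x) * eps)

# Invented toy "theory": an output depending steeply on a coupling g.
output = lambda g: g ** 40

print(round(log_sensitivity(output, 0.5)))  # 40: a 1% nudge in g
                                            # moves the output by ~40%
```

The larger this number, the more finely the input must be dialled in to reproduce a given output, regardless of who or what did the dialling.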
Ten minutes in, the only actual case of finetuning that had been mentioned was the existence of inhomogeneities in the early universe. Sean Carroll:
If the big bang had been completely smooth, it would just stay completely smooth and the history of the universe would be very, very boring. It would just get more and more dilute but you would never make stars, you would never make galaxies or clusters of galaxies. So the potential for interesting complex creatures like you and me would be there, but it would never actually come to pass. So we’re very glad that there was at least some fluctuation in the early universe.
Paul Davies then discussed the fact that there not only need to be such fluctuations, but they need to be nottoobig and nottoosmall. Here’s the scientific paper, if you’re interested.
The documentary also has a cogent discussion of the cosmological constant problem – the “mother of all fine-tunings” – and the fine-tuning of the Higgs field, which is related to the hierarchy problem. Unfortunately, Phillips calls it “The God Particle” because “it gives substance to all nature’s other particles”. Groan.
Once we move beyond the science of finetuning, however, things get a bit more sketchy.
The Multiverse
Leonard Susskind opens the section on the multiverse by stating that the multiverse is, in his opinion, the only explanation available for the fine-tuning of the universe for intelligent life. At this point, both the defence and the prosecution could have done more.
Possibilities are cheap. Sean Carroll appears on screen to say “Aliens could have created our universe” and then is cut off. We are told that if we just suppose there is a multiverse, the problems of fine-tuning are solved. This isn’t the full story on two counts – the multiverse isn’t a mere possibility, and it doesn’t automatically solve the fine-tuning problem. (more…)
Fine-Tuning and the Myth of “One variable at a time”
Posted in fine tuning on August 1, 2013
A commenter over at my post “Got a cosmology question?” asks:
Someone told me “there is not a single paper which finds fine tuning that has allowed multivariation”. Can you please refute this?
Incidentally, cosmology questions are still very welcome over there.
“Multivariation” is not a word, but in this context presumably means varying more than one variable at a time. There is an objection to fine-tuning that goes like this: all the fine-tuning cases involve varying one variable only, keeping all other variables fixed at their value in our universe, and then calculating the life-permitting range on that one variable. But, if you let more than one variable vary at a time, there turns out to be a range of life-permitting universes. So the universe is not fine-tuned for life.
This is a myth. The claim quoted by our questioner is totally wrong. The vast majority of fine-tuning/anthropic papers, from the very earliest papers in the 1970s until today, vary many parameters^{1}. I’ve addressed these issues at length in my review paper. I’ll summarise some of that article here.
The very thing that started this whole field was physicists noting coincidences between the values of a number of different constants and the requirements for life. Carter’s classic 1974 paper “Large number coincidences and the anthropic principle in cosmology” notes that in order for the universe to have both radiative and convective stars we must have (in more modern notation than his equation 15, but it’s the same equation),

α_G ≲ α^12 β^4 , (1)

where, in Planck units, α_G = m_p^2 is the gravitational coupling constant, β = m_e/m_p is the electron-to-proton mass ratio, α = e^2 is the fine-structure constant, and e is the charge on the electron. (Interestingly, Barrow and Tipler show that the same condition must hold for stars to emit photons with the right energy to power chemical reactions e.g. photosynthesis.) Similarly for cosmological cases: for the universe to live long enough for stars to live and die, we must have a relation between Ω, which is related to the curvature of space, and a quantity which is roughly the baryon-to-photon ratio.
This continues in the classic anthropic papers. Carr and Rees (1979) show that to have hydrogen to power stars left over from big bang nucleosynthesis, and to have supernovae distribute heavy elements, we must have (in Planck units, rearranging their equation 61),

α_w ~ α_G^(1/4) ,

where α_w is the weak coupling constant.
Barrow and Tipler’s “The Anthropic Cosmological Principle” shows that, for carbon and larger elements to be stable, we must have:

α_s ≳ 0.3 α^(1/2) ,

where α_s is the strong force coupling constant.
The whole point of these relations and more like them, which the early anthropic literature is entirely concerned with, is that they relate a number of different physical parameters. There are approximations in these calculations – they are order-of-magnitude – but this usually involves assuming that a dimensionless mathematical constant is approximately one. At most, a parameter may be assumed to be in a certain regime. For example, one may assume that α and β are small (much less than one) in order to make an approximation (e.g. that the nucleus is much heavier than the electron, and the electron orbits non-relativistically). These approximations are entirely justified in an anthropic calculation, because we have other anthropic limits that are known to (not merely assumed to) involve one variable – e.g. if β is large, all solids are unstable to melting, and if α is large then all atoms are unstable. See section 4.8 of my paper for more information and references.
More modern papers almost always vary many variables. Examples abound. Below is figure 2 from my paper, which shows figures from Barr and Khan and Tegmark, Aguirre, Rees and Wilczek. (Seriously, people … Wilczek is a Nobel-prize-winning particle physicist and Martin Rees is the Astronomer Royal and former president of the Royal Society. These people know what they are doing.)
The top two panels show the anthropic limits on the up-quark mass (x-axis) and down-quark mass (y-axis). 9 anthropic limits are shown. The life-permitting region is the green triangle in the top right plot. The lower two panels show cosmological limits on the cosmological constant (energy density) ρ_Λ, primordial inhomogeneity Q, and the matter density per CMB photon ξ. Tegmark et al. derive from cosmology 8 anthropic constraints on a 7-dimensional parameter space. Tegmark and Rees (1997) derive an anthropic constraint on the primordial inhomogeneity Q of roughly 10^-6 ≲ Q ≲ 10^-4.
Needless to say, there is more than one variable being investigated here. For more examples, see Figures 6, 7 (from Hogan), 8 (from Jaffe et al.) and 9 (from Tegmark) of my paper. The reason that the plots above only show two parameters at a time is because your screen is two-dimensional. The equations and calculations from which these plots are constructed take into account many more variables than can be plotted on two axes.
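The structure of such multi-parameter scans is easy to sketch. This Python toy (the three constraints are invented placeholders, not real anthropic limits) varies two “constants” jointly over a logarithmic grid and keeps the cells that satisfy every limit at once, which is how two-dimensional figures like those above are assembled, with the real physics supplying the constraint functions.

```python
# Invented stand-ins for anthropic limits on two constants (a, b).
constraints = [
    lambda a, b: a < 5 * b,    # e.g. "stars are stable"
    lambda a, b: a * b < 0.1,  # e.g. "atoms exist"
    lambda a, b: b < 0.5,      # e.g. "universe lives long enough"
]

def log_grid(lo_exp, hi_exp, n):
    """n points logarithmically spaced between 10**lo_exp and 10**hi_exp."""
    return [10 ** (lo_exp + (hi_exp - lo_exp) * i / (n - 1)) for i in range(n)]

a_vals = log_grid(-3, 1, 100)
b_vals = log_grid(-3, 1, 100)

# Jointly vary both constants; keep cells satisfying every limit at once.
hits = sum(all(c(a, b) for c in constraints)
           for a in a_vals for b in b_vals)
fraction = hits / (len(a_vals) * len(b_vals))
print(f"life-permitting fraction of the scanned grid: {fraction:.3f}")
```

Adding more constants just adds loops (or grid dimensions); the two-axis plots are slices through that higher-dimensional scan.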
This myth may have started because, when fine-tuning is presented to lay audiences, it is often illustrated using one-parameter limits. Martin Rees, for example, does this in his excellent book “Just Six Numbers”. Rees knows that the limits involve more than one parameter – he derived many of those limits. But equation (1) above would be far too intimidating in a popular-level book.
My paper lists about 200 publications relevant to the field. I can only think of a handful that only vary one parameter. The scientific literature does not simply vary one parameter at a time when investigating life-permitting universes. This is a myth, born of (at best) complete ignorance.
____________________
Postscript: The questioner’s discussion revolves around the article of Harnik, Kribs & Perez (2006) on a universe without weak interactions. It’s a very clever article. Their weakless universe requires “judicious parameter adjustment” and so is also fine-tuned. Remember that fine-tuning doesn’t claim that our universe is uniquely life-permitting, but rather that life-permitting universes are rare in the set of possible universes. Thus, the weakless universe is not a counterexample to fine-tuning. There are also concerns about galaxy formation and oxygen production. See the end of Section 4.8 of my paper for a discussion.
Footnotes:
1. Even if fine-tuning calculations varied only one parameter, it wouldn’t follow that fine-tuning is false. Opening up more parameter space in which life can form will also open up more parameter space in which life cannot form. As Richard Dawkins (1986) rightly said: “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.” For more, see section 4.2.2 of my paper.
More of my posts on fine-tuning are here.
Not So Sharp: A Fine-Tuned Critique of Richard Klee
Posted in fine tuning, tagged fine tuning on June 21, 2013
Beginning with Hugh Ross, I undertook to critique various articles on the fine-tuning of the universe for intelligent life that I deemed to be woeful, or at least in need of correction. A list of previous critiques can be found here. I generally looked for published work, as correcting every blog post, forum or YouTube comment is a sure road to insanity. I was looking to maximise prestige of publication, “magic bullet” aspirations and wrongness about fine-tuning. I may have a new record holder.
It’s an article published in the prestigious British Journal for the Philosophy of Science by a professor of philosophy who has written books like “Introduction to the Philosophy of Science”. It claims to expose the “philosophical naivete and mathematical sloppiness on the part of the astrophysicists who are smitten with [fine-tuning]”. The numbers, we are told, have been “doctored” by a practice that is “shrewdly self-advantageous to the point of being seriously misleading” in support of a “slickly-packaged argument” with an “ulterior theological agenda”. The situation is serious, as [cue dramatic music] … “the fudging is insidious”. (Take a moment to imagine the Emperor from Star Wars saying that phrase. I’ll wait.)
It will be my task in this post to demonstrate that the article “The Revenge of Pythagoras: How a Mathematical Sharp Practice Undermines the Contemporary Design Argument in Astrophysical Cosmology” (hereafter TROP, available here) by Robert Klee does not understand the first thing about the fine-tuning of the universe for intelligent life – its definition. Once a simple distinction is made regarding the role that Order of Magnitude (OoM) calculations play in fine-tuning arguments, the article will be seen to be utterly irrelevant to the topic it claims to address.
Note well: Klee’s ultimate target is the design argument for the existence of God. In critiquing Klee, I am not attempting to defend that argument. I’m interested in the science, and Klee gets the science wrong.
Warning Signs
Klee, a philosopher with one refereed publication related to physics (the one in question), is about to accuse the following physicists of a rather basic mathematical error: Arthur Eddington, Paul Dirac, Hermann Weyl, Robert Dicke, Brandon Carter, Hermann Bondi, Bernard Carr, Martin Rees, Paul Davies, John Barrow, Frank Tipler^{1}, Alan Lightman, William H. Press and Fred Hoyle. Even John Wheeler doesn’t escape Klee’s critical eye. That is quite a roll call. Eddington, Dirac, Weyl, Bondi, Rees, Hoyle and Wheeler are amongst the greatest scientists of the 20th century. The rest have had distinguished careers in their respective fields. They are not all astrophysicists, incidentally.
That fact should put us on edge when reading Klee’s article. He may, of course, be correct. But he is a philosopher up against something of a physicist dream team.
Klee’s Claim
The main claim of TROP is that fine-tuning is “infected with a mathematically sharp practice: the concepts of two numbers being of the same order of magnitude, and of being within an order of each other, have been stretched from their proper meanings so as to doctor the numbers”. The centrepiece of TROP is an examination of the calculations of Carr and Rees (1979, hereafter CR79) – “[this] is a foundational document in the area, and if the sharp practice infests this paper, then we have uncovered it right where it could be expected to have the most harmful influence”.
CR79 derives OoM equations for the levels of physical structure in the universe, from the Planck scale to nuclei to atoms to humans to planets to stars to galaxies to the whole universe. They claim that just a few physical constants determine all of these scales, to within an order of magnitude. Table 1 of TROP shows a comparison of CR79’s calculations to the “Actual Value”.
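To make this concrete, here is a worked example of the kind of OoM estimate CR79 makes. The particular derivation and constants below are my illustration, not a quotation from their paper: the characteristic stellar mass can be written in terms of just the proton mass and the gravitational fine-structure constant, alpha_G = (m_proton / m_Planck)^2, as M_* ~ alpha_G^(-3/2) m_proton.

```python
import math

# Physical constants in grams (rounded; illustrative precision only)
m_proton = 1.6726e-24   # proton mass [g]
m_planck = 2.1764e-5    # Planck mass [g]
M_sun    = 1.989e33     # solar mass [g] - the "actual value" to compare against

# Gravitational fine-structure constant: alpha_G = (m_p / m_Pl)^2 ~ 6e-39
alpha_G = (m_proton / m_planck) ** 2

# CR79-style order-of-magnitude stellar mass: M_* ~ alpha_G^(-3/2) * m_p
M_star = alpha_G ** -1.5 * m_proton

# How many orders of magnitude does the estimate miss by?
oom_error = abs(math.log10(M_star / M_sun))
print(f"alpha_G ~ {alpha_G:.2e}")
print(f"M_* ~ {M_star:.2e} g, off from a solar mass by {oom_error:.2f} orders of magnitude")
```

The estimate lands within a factor of about two of a solar mass – agreement “to an order of magnitude” in exactly the sense CR79 intend: a handful of fundamental constants fixing the scale of stars.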
Klee notes that only 8 of the 14 cases fall within a factor of 10. Hence “42.8%” of these cases are “more than 1 order of magnitude off from exact precision”. The mean of all the accuracies is “19.23328, over 1 order of magnitude to the high side”. Klee concludes that “[t]hese statistical facts reveal the exaggerated nature of the claim that the formulae Carr and Rees devise determine ‘to an order of magnitude’ the mass and length scales of every kind of stable material system in the universe”. Further examples are gleaned from Paul Davies’ 1982 book “The Accidental Universe”, and his “rudimentary” attempt to justify “the sharp practice” as useful approximations is dismissed as ignoring the fact that these numbers are still “off from exact precision – exact fine tuning”.
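Klee’s headline arithmetic is easy to check. The sketch below uses only the counts he reports (the 14 individual table entries aren’t reproduced here): “within a factor of 10” means the predicted and actual values differ by at most one order of magnitude, i.e. |log10(predicted/actual)| ≤ 1, and 6 misses out of 14 cases is where the “42.8%” comes from.

```python
import math

def within_one_order(predicted: float, actual: float) -> bool:
    """True if the two values agree to within a factor of 10."""
    return abs(math.log10(predicted / actual)) <= 1.0

# Sanity checks on the criterion itself
assert within_one_order(5.0, 1.0)        # factor of 5: same order of magnitude
assert not within_one_order(200.0, 1.0)  # factor of 200: more than an order off

# Klee's percentage: 8 of CR79's 14 cases pass, so 6 fail
misses, total = 6, 14
print(f"{100 * misses / total:.1f}% of cases miss")  # 42.9%; Klee's "42.8%" truncates
```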
And there it is …
I’ll catalogue some of Klee’s mathematical, physical and astrophysical blunders in a later section, but first let me make good on my promise from the introduction – to demonstrate that this paper doesn’t understand the definition of fine-tuning. The misunderstanding is found throughout the paper, but is most clearly seen in the passage I quoted above:
[Davies’] attempted justification [of an order of magnitude calculation] fails. 10^2 is still a factor of 100 off from exact precision – exact fine-tuning – no matter how small a fraction of some other number it may be [emphasis added].
Klee thinks that fine-tuning refers to the precision of these OoM calculations: “exact precision” = “exact fine-tuning”. Klee thinks that, by pointing out that these OoM approximations are not exact and sometimes off by more than a factor of 10, he has shown that the universe is not as fine-tuned as those “astrophysicists” claim.
Wrong. Totally wrong.