Posts Tagged ‘fine tuning’

It’s always a nervous moment when, as a scientist, you discover that a documentary has been made on one of your favourite topics. Science journalism is rather hit and miss. So it was when the Australian Broadcasting Corporation (ABC), our public TV network, aired a documentary about the fine-tuning of the universe for intelligent life as part of their Catalyst science series. (I’ve mentioned my fine-tuning review paper enough, haven’t I?).

The program can be watched on ABC iView. (International readers – does this work for you?). It was hosted by Dr Graham Phillips, who has a PhD in Astrophysics. The preview I saw last week was promising. All the right people’s heads were appearing – Sean Carroll, Brian Greene, Paul Davies, Leonard Susskind, Lawrence Krauss, Charley Lineweaver. John Wheeler even got a mention.

Overall – surprisingly OK. They got the basic science of fine-tuning correct. Phillips summarises fine-tuning as:

When scientists look far into the heavens or deeply down into the forces of nature, they see something deeply mysterious. If some of the laws that govern our cosmos were only slightly different, intelligent life simply couldn’t exist. It appears that the universe has been fine-tuned so that intelligent beings like you and me could be here.

Not bad, though I’m not sure why it needed to be accompanied by such ominous music. There is a possibility for misunderstanding, however. Fine-tuning is a technical term in physics that roughly means extreme sensitivity of some “output” to the “input”. For example, if some theory requires an unexplained coincidence between two free parameters, then the “fine-tuning” of the theory required to explain the data counts against that theory. “Fine-tuned” does not mean “chosen by an intelligent being” or “designed”. It’s a metaphor.
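To make that concrete: particle physicists often quantify fine-tuning with a logarithmic derivative, in the spirit of the Barbieri–Giudice measure. Here's a minimal sketch (my illustration, with an invented toy function – nothing from the documentary):

```python
def fine_tuning(output, x, eps=1e-6):
    """Sensitivity measure |d ln(output) / d ln(x)|, estimated numerically.

    A large value means a tiny fractional change in the input x produces a
    huge fractional change in the output -- the technical sense of
    "fine-tuned", with no reference to design.
    """
    up, down = output(x * (1 + eps)), output(x * (1 - eps))
    return abs((up - down) / output(x)) / (2 * eps)

# Invented toy output that relies on a delicate cancellation near x = 1.
delicate = lambda x: x - 1.0

print(fine_tuning(delicate, 1.0000001))        # ~1e7: severely fine-tuned
print(fine_tuning(lambda x: x**2, 1.0000001))  # ~2: not fine-tuned at all
```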

Ten minutes in, the only actual case of fine-tuning that had been mentioned was the existence of inhomogeneities in the early universe. Sean Carroll:

If the big bang had been completely smooth, it would just stay completely smooth and the history of the universe would be very, very boring. It would just get more and more dilute but you would never make stars, you would never make galaxies or clusters of galaxies. So the potential for interesting complex creatures like you and me would be there, but it would never actually come to pass. So we’re very glad that there was at least some fluctuation in the early universe.

Paul Davies then discussed the fact that there not only need to be such fluctuations, but they need to be not-too-big and not-too-small. Here’s the scientific paper, if you’re interested.
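(If the linked paper is the one I'm guessing – Tegmark & Rees on the amplitude $Q \sim 10^{-5}$ of those fluctuations – then the quantitative version of “not-too-big and not-too-small” is roughly

$$10^{-6} \lesssim Q \lesssim 10^{-4}:$$

much smaller and gas never cools and collapses into galaxies and stars; much larger and structures are so dense that close encounters would disrupt planetary orbits.)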

The documentary also has a cogent discussion of the cosmological constant problem – the “mother of all fine-tunings” – and the fine-tuning of the Higgs field, which is related to the hierarchy problem. Unfortunately, Phillips calls it “The God Particle” because “it gives substance to all nature’s other particles”. Groan.

Once we move beyond the science of fine-tuning, however, things get a bit more sketchy.

The Multiverse

Leonard Susskind opens the section on the multiverse by stating that the multiverse is, in his opinion, the only explanation available for the fine-tuning of the universe for intelligent life. At this point, both the defence and the prosecution could have done more.

Possibilities are cheap. Sean Carroll appears on screen to say “Aliens could have created our universe” and then is cut off. We are told that if we just suppose there is a multiverse, the problems of fine-tuning are solved. This isn’t the full story on two counts – the multiverse isn’t a mere possibility, and it doesn’t automatically solve the fine-tuning problem. (more…)

Read Full Post »

Beginning with Hugh Ross, I undertook to critique various articles on the fine-tuning of the universe for intelligent life that I deemed to be woeful, or at least in need of correction. A list of previous critiques can be found here. I generally looked for published work, as correcting every blog post, forum or YouTube comment is a sure road to insanity. I was looking to maximise prestige of publication, “magic bullet” aspirations and wrongness about fine-tuning. I may have a new record holder.

It’s an article published in the prestigious British Journal for the Philosophy of Science by a professor of philosophy who has written books like “Introduction to the Philosophy of Science”. It claims to expose the “philosophical naivete and mathematical sloppiness on the part of the astrophysicists who are smitten with [fine-tuning]”. The numbers, we are told, have been “doctored” by a practice that is “shrewdly self-advantageous to the point of being seriously misleading” in support of a “slickly-packaged argument” with an “ulterior theological agenda”. The situation is serious, as [cue dramatic music] … “the fudging is insidious”. (Take a moment to imagine the Emperor from Star Wars saying that phrase. I’ll wait.)

It will be my task in this post to demonstrate that the article “The Revenge of Pythagoras: How a Mathematical Sharp Practice Undermines the Contemporary Design Argument in Astrophysical Cosmology” (hereafter TROP, available here) by Robert Klee does not understand the first thing about the fine-tuning of the universe for intelligent life – its definition. Once a simple distinction is made regarding the role that Order of Magnitude (OoM) calculations play in fine-tuning arguments, the article will be seen to be utterly irrelevant to the topic it claims to address.

Note well: Klee’s ultimate target is the design argument for the existence of God. In critiquing Klee, I am not attempting to defend that argument. I’m interested in the science, and Klee gets the science wrong.

Warning Signs

Klee, a philosopher with one refereed publication related to physics (the one in question), is about to accuse the following physicists of a rather basic mathematical error: Arthur Eddington, Paul Dirac, Hermann Weyl, Robert Dicke, Brandon Carter, Hermann Bondi, Bernard Carr, Martin Rees, Paul Davies, John Barrow, Frank Tipler, Alan Lightman, William H. Press and Fred Hoyle. Even John Wheeler doesn’t escape Klee’s critical eye. That is quite a roll call. Eddington, Dirac, Weyl, Bondi, Rees, Hoyle and Wheeler are amongst the greatest scientists of the 20th century. The rest have had distinguished careers in their respective fields. They are not all astrophysicists, incidentally.

That fact should put us on edge when reading Klee’s article. He may, of course, be correct. But he is a philosopher up against something of a physicist dream team.

Klee’s Claim

The main claim of TROP is that fine-tuning is “infected with a mathematically sharp practice: the concepts of two numbers being of the same order of magnitude, and of being within an order of each other, have been stretched from their proper meanings so as to doctor the numbers”. The centrepiece of TROP is an examination of the calculations of Carr and Rees (1979, hereafter CR79) – “[this] is a foundational document in the area, and if the sharp practice infests this paper, then we have uncovered it right where it could be expected to have the most harmful influence”.

CR79 derives OoM equations for the levels of physical structure in the universe, from the Planck scale to nuclei to atoms to humans to planets to stars to galaxies to the whole universe. They claim that just a few physical constants determine all of these scales, to within an order of magnitude. Table 1 of TROP shows a comparison of CR79’s calculations to the “Actual Value”.

Klee notes that only 8 of the 14 cases fall within a factor of 10. Hence “42.8%” of these cases are “more than 1 order magnitude off from exact precision”. The mean of all the accuracies is “19.23328, over 1 order of magnitude to the high side”. Klee concludes that “[t]hese statistical facts reveal the exaggerated nature of the claim that the formulae Carr and Rees devise determine ‘to an order of magnitude’ the mass and length scales of every kind of stable material system in the universe”. Further examples are gleaned from Paul Davies’ 1982 book “The Accidental Universe”, and his “rudimentary” attempt to justify “the sharp practice” as useful approximations is dismissed as ignoring the fact that these numbers are still “off from exact precision – exact fine tuning”.
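It's worth pausing on that “mean of the accuracies”. Order-of-magnitude claims are naturally judged in log space, and a mean of raw ratios (which is what Klee's 19.2 figure appears to be) is hostage to a single outlier. A quick sketch with invented predicted-vs-actual pairs (not Klee's Table 1):

```python
import math

# Invented (predicted, actual) pairs for illustration -- not Klee's Table 1.
pairs = [(1e3, 2e3), (1e5, 8e4), (1e10, 3e11), (1e2, 90.0)]

# The natural yardstick for an order-of-magnitude claim: |log10(pred/actual)|.
# 0 means exact; 1 means a full factor of 10 off.
log_errors = [abs(math.log10(p / a)) for p, a in pairs]
print([round(e, 2) for e in log_errors])   # [0.3, 0.1, 1.48, 0.05]

# Averaging raw ratios instead lets one bad case swamp everything:
# ratios are bounded below by 1 but not above.
ratios = [max(p / a, a / p) for p, a in pairs]
print(sum(ratios) / len(ratios))           # ~8.6, dominated by the outlier
print(sum(log_errors) / len(log_errors))   # ~0.48: within an OoM on average
```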

And there it is …

I’ll catalogue some of Klee’s mathematical, physical and astrophysical blunders in a later section, but first let me make good on my promise from the introduction – to demonstrate that this paper doesn’t understand the definition of fine-tuning. The misunderstanding is found throughout the paper, but is most clearly seen in the passage I quoted above:

[Davies’] attempted justification [of an order of magnitude calculation] fails. $10^2$ is still a factor of 100 off from exact precision – exact fine-tuning – no matter how small a fraction of some other number it may be [emphasis added].

Klee thinks that fine-tuning refers to the precision of these OoM calculations: “exact precision” = “exact fine-tuning”. Klee thinks that, by pointing out that these OoM approximations are not exact and are sometimes off by more than a factor of 10, he has shown that the universe is not as fine-tuned as those “astrophysicists” claim.

Wrong. Totally wrong. (more…)

Read Full Post »

This is my second critique of the work of Ikeda and Jefferys (IJ) on the fine-tuning of the universe for intelligent life. IJ insist that we must always condition on everything that we know is true. Here, I’ll raise a few case studies in need of clarification. I should warn that I’m somewhat less certain about this part than the previous one. The fog is probably in my own head.

A. Magneto saves the day

This is a variation on John Leslie’s firing squad parable. You are sitting with your grandpa on his porch. Grandpa says, “I have a confession. I’m Magneto.” You: “What? You’re one of the X-Men? You can manipulate metals at will?” Grandpa: “Yes. That’s right.” You: “Right. Sure. Prove it.”

Grandpa pulls a set of keys from his pocket and makes them levitate two inches above his hand. “Yeah, nice magic trick, Grandpa”, you say. But then, up on the hill overlooking the porch, a freight train derails! Its carriages tumble toward the house. And, just your luck, this train happened to be loaded with TNT and samurai swords. The ensuing explosion sends several tonnes of rather pointy metal hurtling towards the porch. You instinctively flinch. A few seconds later … you’re alive! You turn in shock to see that every inch of your Grandpa’s house has shards of metal sticking out of it, except for two perfect silhouettes of you and your Grandpa. He looks at you, and smiles. “Not bad, huh?”

Now, like the nerd you are (you’re reading a science-themed blog, so there’s no point denying it), you want to formalise your conclusion. (more…)
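To foreshadow where that formalisation goes, here's the Bayes-factor bookkeeping in miniature (my own sketch; every number is invented):

```python
# H = "Grandpa really is Magneto"; E = "every shard missed, leaving two
# perfect silhouettes". All numbers are invented for illustration.
prior_H = 1e-6           # a priori, "Grandpa is Magneto" is absurdly unlikely
p_E_given_H = 0.9        # Magneto would very probably deflect the shards
p_E_given_not_H = 1e-12  # by luck alone, perfect silhouettes are miraculous

# Posterior via Bayes' theorem.
evidence = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / evidence
print(posterior_H)  # ~0.999999: the evidence overwhelms the tiny prior
```

The parallel with Leslie's firing squad: your survival is no less in need of explanation just because you couldn't have observed anything else.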

Read Full Post »

Once more unto the breach, dear friends. (Another long fine-tuning post, I’m afraid …)

An oft-cited article on the fine-tuning of the universe for intelligent life was written by Michael Ikeda and Bill Jefferys, and goes by the title “The Anthropic Principle Does Not Support Supernaturalism”. It appears online here, and has been published in “The Improbability of God“, edited by Michael Martin and Ricki Monnier (edit, 3/11/2010: when this post first appeared, I wrote that to the best of my knowledge it had not been published anywhere).

IJ’s Argument

Unless otherwise noted, quotes are from Ikeda and Jefferys (hereafter IJ). Their central argument is as follows. Let:

L = The universe exists and contains Life.
F = The conditions in the universe are ‘life-Friendly,’ that is, the conditions in our universe permit or are compatible with life existing naturalistically.
N = “The universe is governed solely by Naturalistic law.” The negation, ~N, is that it is not governed solely by naturalistic law, that is, some non-naturalistic (supernaturalistic) principle or entity is involved. N and ~N are not assumptions; they are hypotheses to be tested.

L is, of course, true of our universe. For the sake of argument, IJ assume that F is true. N and ~N are taken to have an a priori non-zero probability of being true. Now, the anthropic principle roughly states that living observers must observe conditions that permit the existence of observers. IJ formulate this as:

$$P(F|N \& L) = 1. \qquad (1)$$

N appears in the conditional to rule out the case where a supernatural agent decides to miraculously sustain life in a non-life-friendly universe; given naturalism and life, life-friendly conditions are guaranteed.

Now, after dealing with the fallacious argument $P(F|N) \ll 1 \Rightarrow P(N|F) \ll 1$, IJ reach their Bayesian climax:

$$
\begin{aligned}
P(N|F \& L) &= \frac{P(F|N \& L)\,P(N|L)}{P(F|L)} && \text{(Bayes' theorem)} \\
            &= \frac{P(N|L)}{P(F|L)} && \text{(using 1)} \\
            &\ge P(N|L) && \text{(since } P(F|L) \le 1\text{)}
\end{aligned}
$$

Thus, the fine-tuning of the universe for intelligent life is at best irrelevant to the truth of naturalism, and could actually make it more likely. The fine-tuning of the universe, even if it is true, cannot support supernaturalism. Notice that all probabilities are conditioned on L. As IJ say:

… for an inference to be valid, it is necessary to take into account all known information that may be relevant to the conclusion. In the present case, we happen to know that life exists in our universe (i.e., that L is true). Therefore, it is invalid to make inferences about N if we fail to take into account the fact that L, as well as F, are already known to be true. It follows that any inferences about N must be conditioned upon both F and L … In inferring the probability that N is true, it is entirely irrelevant whether P(F|N) is large or small. It is entirely irrelevant whether the universe is “fine-tuned” or not. Only probabilities conditioned upon L are relevant to our inquiry.
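Before responding, note that the algebra above is easy to check numerically. A minimal sketch (the joint distribution is arbitrary and made up; any normalised assignment satisfying (1) will do):

```python
from itertools import product

# Made-up joint distribution over (N, F, L); any normalised assignment works.
vals = [0.10, 0.02, 0.07, 0.01, 0.20, 0.05, 0.35, 0.20]
joint = dict(zip(product([True, False], repeat=3), vals))

# Impose IJ's anthropic constraint P(F|N&L) = 1 by moving the probability
# mass on (N, ~F, L) onto (N, F, L).
joint[(True, True, True)] += joint[(True, False, True)]
joint[(True, False, True)] = 0.0

def prob(event):
    return sum(p for (n, f, l), p in joint.items() if event(n, f, l))

p_N_given_FL = prob(lambda n, f, l: n and f and l) / prob(lambda n, f, l: f and l)
p_N_given_L = prob(lambda n, f, l: n and l) / prob(lambda n, f, l: l)
print(p_N_given_FL, p_N_given_L)  # ~0.459 >= ~0.236, as the theorem demands
```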

I have two responses. Here I will contend that IJ’s formulation of the argument is incomplete. In the second part, I’ll raise a few issues with this “conditioning on everything” idea.

My Formulation (more…)

Read Full Post »

I’ve just about finished my series of responses to various views on the fine-tuning of the universe for intelligent life that I have encountered. Here I will respond to the work of Hector Avalos, who is professor of Religious Studies at Iowa State University. In 1998, he wrote an article for Mercury Magazine entitled “Heavenly Conflicts: the Bible and Astronomy.” While most of the article pertains to the cosmology of the Bible and its (shock horror) apparent contradiction with modern cosmology, he spends five paragraphs near the end discussing the anthropic principle. He writes:

Attempts to relate the Bible to astronomy are often intertwined with the search for the meaning and purpose of human life. In particular, discussions by John A. Wheeler, John Barrow and other cosmologists concerning the so-called anthropic principle – the idea that the physical constants of the universe are finely tuned for human existence – have attracted interest. The anthropic principle would assert, for example, that if the charge of the electron were other than what it is or the weights of the proton and neutron were different, then human existence would not be. But do these precise quantities necessarily indicate that human beings were part of some intelligent purpose?

The primary assumption of the anthropic principle, which is really a new version of the older “divine design” or teleological argument, seems to be that the “quantity of intelligent purpose” for an entity is directly proportional to the quantity of physico-chemical conditions necessary to create that entity. But the same line of reasoning leads to odd conclusions about many non-human entities.

… let’s use the symbol P to designate the entire set of physico-chemical conditions necessary to produce a human being … Making a computer requires not only all the pre-existing conditions that enable humans to exist but also human beings themselves. In more symbolic terms, making a computer requires P + human beings, whereas only P is needed to make human beings. By the same logic, garbage cans and toxic pollution produced by human beings would be more purposed than human beings. So measuring the divine purpose of an entity by the number of pre-existing conditions required to make that entity is futile.

This response to the fine-tuning of the universe is confused on many levels. (more…)

Read Full Post »

[Edit, 4/2/2012: I’ve written a more complete critique of Stenger’s book The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us. It’s posted on the arXiv. In particular, the program MonkeyGod is critiqued in Appendix B; most of the points raised below remain valid.]

This post is the second critiquing Victor Stenger’s take on the fine-tuning of the universe for intelligent life. Here are some more of Stenger’s claims. (The quotes below are an amalgam of the articles on this page.)

I think it is safe to conclude that the conditions for the appearance of a universe with life are not so improbable as those authors, enamored by the anthropic principle, would have you think … [T]here could be many ways to produce a universe old enough to have some form of life.

How does Stenger reach this conclusion?

I have written a program, MonkeyGod … I have studied how the minimum lifetime of a typical star depends on three parameters: the masses of the proton and electron and the strength of the electromagnetic force. (The strong interaction strength does not enter into this calculation.) Varying these parameters by ten orders of magnitude around their present values, I find that over half of the stars will have lifetimes exceeding a billion years, allowing sufficient time for some kind of life to evolve. Long stellar lifetime is not the only requirement for life, but it certainly is not an unusual property of universes. (more…)
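Stenger's actual MonkeyGod code isn't shown in these articles, but the shape of the calculation is easy to sketch. Below is my own toy reconstruction: it uses one textbook order-of-magnitude scaling for stellar lifetime (my choice of formula, not necessarily Stenger's) and varies the three parameters log-uniformly over ten orders of magnitude, as he describes:

```python
import random

HBAR, C, G = 1.055e-34, 2.998e8, 6.674e-11       # SI units
M_P0, M_E0, ALPHA0 = 1.673e-27, 9.109e-31, 1 / 137.036
GYR = 3.156e16                                   # one billion years in seconds

def stellar_lifetime(m_p, m_e, alpha):
    """Textbook order-of-magnitude scaling (an assumption, not Stenger's code):
    t_* ~ (alpha^2 / alpha_G) * (m_p / m_e)^2 * hbar / (m_p c^2),
    where alpha_G = G * m_p^2 / (hbar * c) is the gravitational coupling."""
    alpha_G = G * m_p**2 / (HBAR * C)
    return (alpha**2 / alpha_G) * (m_p / m_e)**2 * HBAR / (m_p * C**2)

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    # Log-uniform over ten orders of magnitude centred on the observed values.
    m_p = M_P0 * 10 ** random.uniform(-5, 5)
    m_e = M_E0 * 10 ** random.uniform(-5, 5)
    alpha = ALPHA0 * 10 ** random.uniform(-5, 5)
    hits += stellar_lifetime(m_p, m_e, alpha) > GYR
print(hits / trials)  # close to one half with this particular scaling
```

On this toy scaling, roughly half the sampled universes do have billion-year stars, which reproduces the flavour of Stenger's result; whether sampling dials log-uniformly and checking a single necessary condition for life tells us anything about fine-tuning is the question at issue.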

Read Full Post »

[Edit, 4/2/2012: I’ve written a more complete critique of Stenger’s book The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us. It’s posted on the arXiv.]

This post is part of a series that responds to internet articles on the fine-tuning of the universe. Here I will respond to Prof. Victor Stenger, who is a particle physicist at the University of Hawaii, known for his defence of atheism. Stenger, according to Wikipedia, is currently writing a book on fine-tuning. Below, I respond to a point he made in a debate with Dr. William Lane Craig.

Stenger proposes the following counterexample to the claim that interesting conclusions can be drawn from the improbability of the fine-tuning of the constants/initial conditions/laws of nature:

Low probability events happen every day.  What’s the probability that my distinguished opponent exists?  You have to calculate the probability that a particular sperm united with a particular egg, then multiply that by the probability that his parents met, and then repeat that calculation for his grandparents and all his ancestors going back to the beginning of life on Earth. Even if you stop the calculation with Adam and Eve, you are going to get a fantastically small number. To use words that Dr Craig has used before, “Improbability is multiplied by improbability by improbability until our minds are reeling in incomprehensible numbers.” Well, Dr Craig has a mind-reeling, incomprehensibly low probability – a priori probability – for existing.  Yet here he is before us today. (more…)

Read Full Post »

Today, more on the work of William Lane Craig on the fine-tuning of the universe for intelligent life. The issue today is whether the laws/constants/initial conditions of the universe are, in some way, necessary. We think that the probability of a randomly chosen universe (with its laws, constants and initial conditions) being life-supporting is vanishingly small. We reach this conclusion by altering the laws/constants/initial conditions and predicting the outcome.

But perhaps when we have a deeper understanding of the laws of nature, we’ll realise that these constants couldn’t have been different. Or at least, we’ll realise that many of them are related, and thus cannot be altered independently. This would significantly increase the probability of “getting the universe right”, as there would be fewer dials to be tuned.
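A toy version of the point, with invented numbers: if each of $n$ independent dials must land in a life-permitting range of probability $p$, then

$$P(\text{get the universe right}) \sim p^{\,n},$$

so learning that ten dials with $p = 10^{-3}$ are really only three independent ones raises the estimate from $10^{-30}$ to $10^{-9}$: far less improbable, but still minuscule. Fewer dials help; they don't automatically make fine-tuning go away.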

Let’s have a look at Craig’s response to this argument: (more…)

Read Full Post »

This post is part of a series on the fine-tuning of the universe. Here I will respond to the work of Dr. William Lane Craig. Craig is Research Professor of Philosophy at Talbot School of Theology. He is known for his defence of arguments for the existence of God, both in philosophical journals and public debates. Here, I will respond to a point that Craig has made in response to the multiverse (or many-worlds hypothesis; James Sinclair makes a similar point in his essay in “Contending with Christianity’s Critics”):

The error that is made by the many worlds hypothesis is that it is basically an attempt to multiply your probabilistic resources without having any justification for doing so. It’s a way of saying that the improbable roll of the dice that we have come up with is rendered probable because there have been many throws. If you’re allowed to do that, then you could explain away anything. For example, imagine a couple of card players in a west Texas saloon. And every time one of them deals, he gets four aces, and wins the game. The other guy gets outraged and says, “Tex! You’re a dirty cheater!” And old Tex says, “Well, Slim, you shouldn’t really be surprised that every time I deals I gets four aces. After all, in this infinite universe of ours there’s an infinite number of poker games goin’ on somewhere. And so chances are in some of them I gets four aces every time I deals.” (more…)

Read Full Post »

Today I’ll be looking at a paper on the fine-tuning of the universe by Professor Fred Adams. He is professor of physics at the University of Michigan, where his main field of research is astrophysical theory focusing on star formation, background radiation fields, and the early universe.

Fred Adams published a paper in 2008 entitled “Stars In Other Universes: Stellar structure with different fundamental constants”. The paper garnered some interest from the science blogosphere and popular science magazines. Here are the relevant parts of the abstract:

Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model. [We vary] the gravitational constant G, the fine structure constant $\alpha$, and a composite parameter C that determines nuclear reaction rates. Our main finding is that a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare.
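Setting aside the stellar physics (Adams's semi-analytic model is the real content of the paper), the quoted “fraction of parameter space” is just a measure over a log-scaled parameter cube. A structural sketch of that calculation, with an invented placeholder constraint standing in for the real stellar-structure criterion:

```python
import random

def stars_possible(lg_G, lg_alpha, lg_C):
    """Invented placeholder constraint on log-scaled parameters (fiducial
    values at 0) -- Adams derives the real region from stellar structure."""
    return abs(2 * lg_G - 4 * lg_alpha + lg_C) < 5

random.seed(1)
trials = 100_000
hits = sum(
    stars_possible(*(random.uniform(-5, 5) for _ in range(3)))
    for _ in range(trials)
)
print(hits / trials)  # ~0.3 here; Adams's real answer is "roughly one fourth"
```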

(more…)

Read Full Post »
