Today I’ll be looking at a paper on the fine-tuning of the universe by Professor Fred Adams. He is professor of physics at the University of Michigan, where his main field of research is astrophysical theory focusing on star formation, background radiation fields, and the early universe.
Fred Adams published a paper in 2008 entitled “Stars In Other Universes: Stellar structure with different fundamental constants”. The paper garnered some interest from the science blogosphere and popular science magazines. Here are the relevant parts of the abstract:
Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model. [We vary] the gravitational constant G, the fine structure constant α, and a composite parameter C that determines nuclear reaction rates. Our main finding is that a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare.
The first thing to admit is that I am a galaxies-and-larger astronomer. Stars aren’t my speciality, so I can’t provide an in-depth assessment of the model.
That said, this seems to be a very nice piece of work, and the sort of careful, detailed research that the fine-tuning field needs. Section 4 of the paper, titled “Unconventional Stars”, is an interesting discussion on the possibilities for objects like radiating black holes filling the role of stars in other universes. And he is aware that he has only considered one life-permitting criterion, and thus doesn’t overstate his conclusions by passing judgement on all claims of fine-tuning.
I have one minor quibble, and one major quibble. The minor quibble is the use of the parameter C. This parameter is related to the nuclear reaction rates. We are told that C “depends in a complicated manner on the strong and weak forces, as well as the particle masses”. Actually, the expression for C (equation 18) also contains the fine structure constant, which doesn’t help. Later in the paper, Adams varies C by a factor of 100 in either direction. But we have no way of knowing what a variation in C means for the strong force, or the weak force, or the particle masses. It would have been much more instructive to vary a fundamental constant, even if it was just the strong force (keeping the weak force and particle masses constant), because of the centrality of the strong force to the properties of stars.
This is a minor quibble, because it wouldn’t surprise me if Adams’ range of C corresponds to a reasonable range of the strong force. I’m fairly sure Adams knows what he’s doing.
My major quibble regards the claim that “a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion”. I’m always sceptical of these sorts of probability estimates, and in this case it is with good reason. The basis of this claim is figure 5:
G is the gravitational constant, and α is the fine structure (electromagnetism) constant. The triangle is our universe, and the area under the line is where stars are permitted. The dashed (dotted) line is for C increased (decreased) by a factor of 100.
We see that, indeed, roughly one fourth of the plot is under the solid line. However, Adams has made two unstated assumptions that are crucial to the probability estimate. The first is the limits of the plot. Ten orders of magnitude seems like a large range, but remember that the ratio of the strong force to the gravitational force between two protons is about 10^38. If we extended the plotted range of G to cover that ratio, then the probability drops to about 9%. We could make this probability as small as we like by enlarging the parameter space, and Adams has given us no reason not to do so.
More important is Adams’ use of a logarithmic scale. This is equivalent to assuming that the Probability Density Function (PDF) for a model parameter is uniform in the logarithm of that parameter over the allowed range.
Adams has not justified this assumption, so what happens if we change it? If, instead, we were to plot in normal (not log) space (i.e. the PDF is uniform in G and α rather than in their logarithms), then the probability would drop from 25% to a tiny fraction of a percent. And if we also extend the possible range of the gravitational force to where it is as strong as the strong force, then the probability drops to something like 1 part in 10^42.
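To see how sensitive the figure is, here is a toy calculation (a sketch with made-up numbers, not Adams’ actual stellar model): assume a hypothetical star-permitting window for G, and compute the fraction of parameter space it occupies under a log-uniform prior, a uniform prior over the same range, and a uniform prior over a greatly enlarged range.

```python
import numpy as np

# Toy illustration (NOT Adams' actual stellar model): suppose stars form
# whenever G is within a hypothetical window of 10^-2.5 .. 10^2.5 times
# our value, and ask what "fraction of parameter space" that window
# occupies under different assumed priors. All ranges are invented.

G_lo, G_hi = 10**-2.5, 10**2.5      # star-permitting window (toy numbers)

def frac_log_uniform(lo, hi, range_lo, range_hi):
    """Fraction of a log-uniform prior on [range_lo, range_hi] inside [lo, hi]."""
    return (np.log10(hi) - np.log10(lo)) / (np.log10(range_hi) - np.log10(range_lo))

def frac_uniform(lo, hi, range_lo, range_hi):
    """Fraction of a uniform (linear) prior on [range_lo, range_hi] inside [lo, hi]."""
    return (hi - lo) / (range_hi - range_lo)

# (a) log-uniform over ten orders of magnitude, as in a log-log plot
a = frac_log_uniform(G_lo, G_hi, 1e-5, 1e5)   # 5/10 = 50% of the axis
# (b) same range, but uniform in G rather than in log G
b = frac_uniform(G_lo, G_hi, 1e-5, 1e5)       # ~3e-3
# (c) uniform in G, with the range extended by many more decades
c = frac_uniform(G_lo, G_hi, 1e-5, 1e38)      # ~3e-36

print(f"log-uniform, 10 decades: {a:.2f}")
print(f"uniform,     10 decades: {b:.2e}")
print(f"uniform,     38 decades: {c:.2e}")
```

The life-permitting window never changes; only the assumed prior and range do, and the resulting “probability” swings over dozens of orders of magnitude.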
The word “robust” in statistics has a very specific meaning. A statistical estimate is robust if it remains relatively unchanged by reasonable changes in the model’s assumptions. Any probability estimate that we are free to alter by 42 orders of magnitude is worse than non-robust. It is meaningless. Adams should at least acknowledge this problem – if he can’t justify his prior probability distribution for his parameters, then he can’t make a probability estimate at all. His claim that “a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for [stars]” is thus utterly baseless.
My biggest concern is that when Adams’ work is cited in the context of the fine-tuning of the universe for life, the figure of 25% is simply accepted as scientifically proven. Here are some quotes (mostly from internet articles and blogs):
Serious scientists such as Fred Adams have clearly negated the fine-tuning claim. About a QUARTER of the Adams’ universes turned out to be populated by energy-generating stars.
Adams reckons his results suggest that the “specialness” of our universe could well be an illusion. (From New Scientist)
Fred Adams has investigated the problem of the life times of stars … His conclusion was that stable, long-lived stars existed in vast regions of parameter space. (Steve Zara and Øystein Elgarøy)
Fred Adams has recently shown that all this talk of the universe being fine-tuned is based on a false, simplistic premise.
Recent studies by Fred Adams indicate that the existence of life may not even require fine tuning at all.
Adams’ work cannot support these claims. Even if the figure of 25% were robust, it still wouldn’t follow that fine-tuning has been “negated” – there are plenty more fine-tuning claims that Adams hasn’t addressed (even with regard to stars), and hasn’t claimed to address. But most importantly, he could just as easily have concluded that only 1 part in 10^42 of parameter space allows for stars.
The question of the prior PDF for the fundamental constants of nature is a very difficult one – where do we start? A multiverse proposal (see Ellis), like the string landscape, should be able to calculate it – in theory. In the absence of such a model, we would need to reason very carefully, considering the full consequences of any assumptions we make. That being the case, the best approach is to carefully separate these two questions: 1. What range of the fundamental constants is life-permitting? 2. What is the probability that a universe, chosen at random from the range of possible universes, will fall in the life-permitting range? We are on much safer ground answering the first question.
It’s the same as the difference between asking, “where on the dartboard do I need to hit to score a bullseye?”, and asking, “what is the probability that a bullseye will be hit in a single throw?” Answering the first question is necessary but not sufficient to answer the second. The answer to the second question depends crucially on how the darts are thrown.
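The dartboard analogy can be made concrete with a small Monte Carlo sketch (the throw distributions below are invented for illustration): the bullseye region is fixed, but the hit probability depends entirely on how the darts are thrown.

```python
import random

# Dartboard analogy as a toy Monte Carlo: the bullseye is fixed, but the
# probability of hitting it depends on the (assumed) throw distribution.

random.seed(0)
R_BOARD, R_BULL = 1.0, 0.05   # board and bullseye radii (arbitrary units)

def hit_fraction(throw, n=100_000):
    """Fraction of n throws whose radial distance lands inside the bullseye."""
    hits = sum(1 for _ in range(n) if throw() <= R_BULL)
    return hits / n

# Thrower A: darts land uniformly over the board's area
uniform_area = lambda: R_BOARD * random.random() ** 0.5
# Thrower B: aims at the centre, radial error is |Gaussian| with sigma = 0.1
aimed = lambda: abs(random.gauss(0.0, 0.1))

fa = hit_fraction(uniform_area)
fb = hit_fraction(aimed)
print("uniform thrower:", fa)   # ~ (0.05)^2 = 0.25%
print("aimed thrower:  ", fb)   # far higher, despite the same bullseye
```

Same bullseye, wildly different probabilities: answering “where is the bullseye?” tells you nothing about “how often is it hit?”.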
More of my posts on fine-tuning are here.
I don’t really think Adams can be faulted too much for having done one particular calculation and not another, but yes, people should not present such a result as though it didn’t depend on assumptions.
>>The question of the prior PDF for the fundamental constants of nature is a very difficult one – where do we start?<<
It's not just a very difficult question – as far as I can tell, it's pretty much a meaningless one. If something like multiverses is true then there might be a frequency distribution over fundamental constants, but that would be unknown, so we should be talking about probability distributions over a space of frequency distributions…
In my view the way forward with these questions is to apply non-indexical conditioning (insert yet another plug for Radford Neal's paper), that is, answering the question, "how does what we actually observe (life exists on Earth, etc) affect the plausibility of various hypotheses?" How much of the volume of hypothetical fundamental-constant space *may* end up being life-supporting may turn out to be quite irrelevant.
Interesting series of posts on fine tuning by the way – I'm currently reading Sean Carroll's entropy/arrow of time book, so perhaps the blog can expect a "disordered" critique of that. 🙂
I put asterisks around the wrong *may* in one sentence.
How much of the volume of hypothetical fundamental-constant space ends up being life-supporting *may* turn out to be quite irrelevant.
“people should not present such a result as though it didn’t depend on assumptions” – spot on. I’ll have some rather harsh words on that topic for Victor Stenger soon. My next target is William Lane Craig. I’ve nearly got all the fine-tuning stuff out of my system …
You might be right about the prior for the fundamental constants. Another scenario would be if they are set via some symmetry breaking; that may allow the calculation of the prior for, say, the quark masses.
There might be more general arguments that can be made in the absence of a specific physical/multiverse proposal. Here’s Anthony Aguirre in “Universe or Multiverse”:
“If the probability of the laws/constants P is to have interesting structure over the relatively small life-permitting range (parameterised by A), there must be a parameter of order A in the expression for P. But it is precisely the absence of such a parameter that motivated the anthropic approach.”
This sort of argument would provide a reasonable case for a flat probability density in the neighborhood of the life-permitting region of parameter space.
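Aguirre’s point can be checked numerically. In this sketch (with an assumed, purely illustrative broad Gaussian prior), the prior varies on a scale sigma vastly larger than the width A of a hypothetical life-permitting window, so the prior density is almost exactly constant across that window.

```python
import numpy as np

# Sketch of Aguirre's point: if the prior over a constant x varies only on
# scales much larger than the life-permitting width A, the prior is
# effectively flat across the life-permitting window. Numbers are invented.

sigma = 1.0          # scale over which the (assumed) prior varies
x0, A = 0.3, 1e-6    # centre and width of a hypothetical life-permitting window

prior = lambda x: np.exp(-0.5 * (x / sigma) ** 2)  # unnormalised broad prior

xs = np.linspace(x0 - A / 2, x0 + A / 2, 1001)
densities = prior(xs)
variation = densities.max() / densities.min() - 1.0

print(f"relative variation of prior across window: {variation:.2e}")
# The variation is ~ x0*A/sigma^2, i.e. about 3e-7 here: a flat density
# is an excellent approximation *within* the window.
```

So a flat PDF in the neighbourhood of the life-permitting region follows from the absence of any parameter of order A in the prior, without needing to know the prior globally.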
[…] Fred Adams and Luke’s critique […]
I’m at the point, and so is much of American society, where opinions on origins should be qualified by that person’s belief system. All opinions should be heard, but if they are coming from a 6000-year-old creationist or a Christian-hating atheist they should be put aside in favor of a more neutral thinker.
Anyone who has spoken out against Christianity, which most agree is the only game in town if there is a God, is essentially worthless, as their bias is as if they are fighting for their lives. These people know that if they’re wrong they are doomed, and that’s too powerful a bias to disregard.
When you hear an agnostic, or someone who considers himself on the road of life and discovery, you immediately notice the difference in reasoning compared to an outspoken atheist. Outspoken atheists went into the field to confirm their bias, and they are the ones whose fear has clouded their judgement. They are like infants with their fingers in their ears singing la la la, just as 6000-year-old creationists are.
Yes, even if Hitler says 2+2=4 it is true. But this is the most important question in life, and if Christians could be wrong they would never be around to know it, yet atheists would have made the worst mistake a man could ever make, so their bias is paralyzing and it shows up clearly in all their papers and books. I say they should be considered well after unbiased researchers.
Until science, and those doing reviews and the like, clearly identify WHO is doing the talking here, the public will continue to be suspicious of the opinion. This is the main reason why evolution is doubted when it has much good science behind it (except for its chance elements). This man Adams is clearly incoherent in his ramblings and should be identified as such before people waste their time on his personal wish that there is no Creator.
[…] and size of a star aren’t good enough for you, then repeat the calculation in more detail, as Fred Adams has done. Barr & Khan (2007), Agrawal et al. (1998a,b), Jaffe et al. (2009), Epelbaum et al. […]
[…] fourth” mentioned in the abstract is not a measure of the life-permitting range. See this post, and my comments in the review […]
You say:
“That being the case, the best approach is to carefully separate these two questions: 1. What range of the fundamental constants is life-permitting? 2. What is the probability that a universe, chosen at random from the range of possible universes, will fall in the life-permitting range? We are on much safer ground answering the first question.”
It seems to me strictly speaking impossible to give an answer to 2, given that there are an infinite number of life-permitting possible universes and an infinite number of non-life-permitting possible universes. But I don’t think this matters, as we just need to focus on the set of possible universes with physical laws/initial conditions exactly the same as our own except that the values of the constants involved in physical laws/initial conditions vary. Call that set S. Then I think we can get a modified version of your question that we can answer: What is the probability that a universe, chosen at random from S, will fall in the life-permitting range?
Actually, that’s still not good enough, as there will still be an infinite number of universes in S. So we have to find some natural grouping of possible universes within S, S*, and then ask a modified version of your question in terms of S*. I suspect this is what physicists are actually doing when they defend probability claims with reference to fine-tuning, and if so, it seems to me perfectly philosophically defensible, despite the naysaying of certain philosophers.
Philip Goff
[…] for responses to two of the main proponents of the “not so fine-tuned” argument. See this for a response to Fred Adams’ research. And again, see this for a response to Victor […]