
## Fine-Tuning and the Myth of “One variable at a time”

A commenter over at my post “Got a cosmology question?” asks:

Someone told me “there is not a single paper which finds fine tuning that has allowed multivariation”. Can you please refute this?

Incidentally, cosmology questions are still very welcome over there.

“Multivariation” is not a word, but in this context presumably means varying more than one variable at a time. There is an objection to fine-tuning that goes like this: all the fine-tuning cases involve varying one variable only, keeping all other variables fixed at their value in our universe, and then calculating the life-permitting range on that one variable. But, if you let more than one variable vary at a time, there turns out to be a range of life-permitting universes. So the universe is not fine-tuned for life.

This is a myth. The claim quoted by our questioner is totally wrong. The vast majority of fine-tuning/anthropic papers, from the very earliest papers in the 1970s until today, vary many parameters [1]. I’ve addressed these issues at length in my review paper. I’ll summarise some of that article here.

The very thing that started this whole field was physicists noting coincidences between the values of a number of different constants and the requirements for life. Carter’s classic 1974 paper “Large number coincidences and the anthropic principle in cosmology” notes that in order for the universe to have both radiative and convective stars we must have (in more modern notation than his equation 15, but it’s the same equation),

$\alpha_G^{1/2} \approx \alpha^6 \beta^2$

where, in Planck units, $\alpha_G = m_{proton}^2$, $\alpha = e^2$, $\beta = m_{electron}/m_{proton}$, and $e$ is the charge on the electron. (Interestingly, Barrow and Tipler show that the same condition must hold for stars to emit photons with the right energy to power chemical reactions, e.g. photosynthesis.) Similarly for cosmological cases: for the universe to live long enough for stars to live and die, we must have,

$|\kappa| \lesssim \left( \frac{\eta^2}{m_{proton}} \right)^{1/3} m_{proton}^3$

where $\kappa$ is related to the curvature of space and $\eta$ is roughly the baryon to photon ratio.
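As a sanity check, Carter’s stellar condition above can be evaluated with present-day measured constants. This is a rough sketch: the constant values are standard CODATA numbers, and the relation is only meant to hold to order of magnitude, not exactly.

```python
# Order-of-magnitude check of Carter's stellar condition alpha_G^(1/2) ~ alpha^6 beta^2.
m_proton_kg = 1.67262e-27    # proton mass
m_electron_kg = 9.10938e-31  # electron mass
m_planck_kg = 2.17643e-8     # Planck mass

alpha = 1 / 137.036                         # fine-structure constant
alpha_G = (m_proton_kg / m_planck_kg) ** 2  # gravitational coupling, (m_p / m_Planck)^2
beta = m_electron_kg / m_proton_kg          # electron-to-proton mass ratio

lhs = alpha_G ** 0.5
rhs = alpha ** 6 * beta ** 2
print(f"alpha_G^(1/2)  = {lhs:.2e}")   # ~ 7.7e-20
print(f"alpha^6 beta^2 = {rhs:.2e}")   # ~ 4.5e-20 -- same order of magnitude
```

Both sides come out around $10^{-19}$–$10^{-20}$, which is the coincidence Carter highlighted: three physically unrelated dimensionless numbers conspire to a common tiny value.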

This continues in the classic anthropic papers. Carr and Rees (1977) show that to have hydrogen left over from big bang nucleosynthesis to power stars, and to have supernovae distribute heavy elements, we must have (in Planck units, rearranging their equation 61),

$m_{electron}^{-3/2} \sim g_w$

where $g_w$ is the weak coupling constant.
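Plugging in numbers again, as a sketch. One caveat: the convention for $g_w$ here is my assumption, namely the Fermi constant expressed in Planck units ($G_F m_{Planck}^2$); with that reading, both sides of the Carr–Rees relation land within a factor of a few of each other.

```python
# Rough check of the Carr-Rees relation m_e^(-3/2) ~ g_w in Planck units.
# Assumption: g_w is taken to be the Fermi constant times the Planck mass squared.
G_F = 1.16638e-5          # Fermi constant, GeV^-2
m_planck_GeV = 1.22089e19  # Planck mass, GeV
m_electron_GeV = 0.511e-3  # electron mass, GeV

g_w = G_F * m_planck_GeV ** 2              # dimensionless weak coupling
m_e_planck = m_electron_GeV / m_planck_GeV  # electron mass in Planck units

print(f"m_e^(-3/2) = {m_e_planck ** -1.5:.2e}")  # ~ 3.7e33
print(f"g_w        = {g_w:.2e}")                 # ~ 1.7e33
```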

Barrow and Tipler’s “The Anthropic Cosmological Principle” shows that, for carbon and larger elements to be stable, we must have:

$\alpha_s \lesssim 0.3 \alpha ^{1/2}$

where $\alpha_s$ is the strong force coupling constant (evaluated at $m_Z$, if you’re interested).

The whole point of these relations and more like them, which the early anthropic literature is entirely concerned with, is that they relate a number of different physical parameters. There are approximations in these calculations – they are order-of-magnitude – but this usually involves assuming that a dimensionless mathematical constant is approximately one. At most, a parameter may be assumed to be in a certain regime. For example, one may assume that $\alpha$ and $\beta$ are small (much less than one) in order to make an approximation (e.g. that the nucleus is much heavier than the electron, and the electron orbits non-relativistically). These approximations are entirely justified in an anthropic calculation, because we have other anthropic limits that are known to (not merely assumed to) involve one variable – e.g. if $\beta$ is large, all solids are unstable to melting, and if $\alpha$ is large then all atoms are unstable. See section 4.8 of my paper for more information and references.

More modern papers almost always vary many variables. Examples abound. Below is figure 2 from my paper, which shows figures from Barr and Khan, and from Tegmark, Aguirre, Rees and Wilczek. (Seriously, people … Wilczek is a Nobel-prize-winning particle physicist, and Martin Rees is the Astronomer Royal and a former president of the Royal Society. These people know what they are doing.)

The top two panels show the anthropic limits on the up-quark mass (x axis) and down-quark mass (y axis). Nine anthropic limits are shown. The life-permitting region is the green triangle in the top-right plot. The lower two panels show cosmological limits on the cosmological constant (energy density) $\rho_\Lambda$, the primordial inhomogeneity $Q$, and the matter density per CMB photon. Tegmark et al. derive from cosmology eight anthropic constraints on the seven-dimensional parameter space $(\alpha, \beta, m_{proton}, \rho_\Lambda, Q, \xi, \xi_{baryon})$. Tegmark and Rees (1997) derive the following anthropic constraint on the primordial inhomogeneity $Q$:

(1)

Needless to say, there is more than one variable being investigated here. For more examples, see Figures 6, 7 (from Hogan), 8 (from Jaffe et al.) and 9 (from Tegmark) of my paper. The reason the plots above show only two parameters at a time is that your screen is two-dimensional. The equations and calculations from which these plots are constructed take into account many more variables than can be plotted on two axes.
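The way such multi-parameter plots are built can be sketched generically: impose every anthropic inequality simultaneously on a grid over the parameter space, and keep only the points where all of them hold at once. The constraints below are invented stand-ins, not the actual Barr–Khan or Tegmark et al. limits; the point is the method, in which each constraint couples several parameters.

```python
import numpy as np

# Toy illustration of intersecting multi-parameter constraints on a 2D grid.
x = np.linspace(0.01, 10, 400)  # stand-in for one parameter (e.g. an up-quark-like mass)
y = np.linspace(0.01, 10, 400)  # stand-in for another (e.g. a down-quark-like mass)
X, Y = np.meshgrid(x, y)

# Hypothetical inequality constraints -- each involves BOTH parameters.
constraints = [
    X + Y > 1.0,   # e.g. "some bound state must exist"
    Y - X < 3.0,   # e.g. "some particle must be stable"
    X * Y < 8.0,   # e.g. "some reaction must proceed"
]

# The "life-permitting" region is where every constraint holds simultaneously.
allowed = np.logical_and.reduce(constraints)
fraction = allowed.mean()
print(f"fraction of the sampled plane satisfying all constraints: {fraction:.2f}")
```

A real calculation differs only in that the inequalities come from stellar, nuclear and cosmological physics, and the parameter space has more than two dimensions.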

This myth may have started because, when fine-tuning is presented to lay audiences, it is often illustrated using one-parameter limits. Martin Rees, for example, does this in his excellent book “Just Six Numbers”. Rees knows that the limits involve more than one parameter – he derived many of those limits. But equation (1) above would be far too intimidating in a popular-level book.

My paper lists about 200 publications relevant to the field. I can only think of a handful that vary just one parameter. The scientific literature does not simply vary one parameter at a time when investigating life-permitting universes. This is a myth, born (at best) of complete ignorance.

____________________

Postscript: The questioner’s discussion revolves around the article of Harnik, Kribs & Perez (2006) on a universe without weak interactions. It’s a very clever article. Their weakless universe requires “judicious parameter adjustment”, and so is also fine-tuned. Remember that fine-tuning doesn’t claim that our universe is uniquely life-permitting, but rather that life-permitting universes are rare in the set of possible universes. Thus, the weakless universe is not a counterexample to fine-tuning. There are also concerns about galaxy formation and oxygen production. See the end of Section 4.8 of my paper for a discussion.

Footnotes:

1. Even if fine-tuning calculations varied only one parameter, it wouldn’t follow that fine-tuning is false. Opening up more parameter space in which life can form will also open up more parameter space in which life cannot form. As Richard Dawkins (1986) rightly said: “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.” For more, see section 4.2.2 of my paper.

More of my posts on fine-tuning are here.

### 17 Responses

1. If this model can help you http://arnaudantoineandrieu.files.wordpress.com/2013/07/static-density-mono-particle-arnaud-antoine-andrieu.png ; you can use it, modify it. It represents the electron ‘for example’. In my mind it follows by: arnaudantoineandrieu.files.wordpress.com/2013/06/fermionic_model_arnaudantoineandrieu.png ; Thanks

2. Thanks for sharing such a pleasant thought; the paragraph is pleasant, and that’s why I have read it fully.

3. What do you think of the following critiques of fine-tuning:

Critique 1:

FT is calculated by looking at the set of physically possible universes and noticing that only a tiny fraction of that space permits life. Then it is concluded that fine-tuning is improbable.

This is like tossing a coin and concluding that its landing on heads requires multiverse/divine/necessary kinds of explanation, because out of the vast set of possible ways it could have landed (e.g. on one angle, a slightly different angle, another angle, etc.), its landing that particular way is terribly improbable.

The basis we have for declaring something improbable is frequency on repeated trials. This is not possible with the universe. Therefore, the fine-tuning argument is false.

Critique 2:

This is actually provided by a couple of theistic philosophers in the paper ‘Probabilities and the Fine-Tuning Argument: A Skeptical View’:

What is your assessment of these criticisms?

4. 1. I don’t agree that: “The basis we have for declaring something improbable is frequency on repeated trials”. This view of probability, known as finite frequentism (see http://plato.stanford.edu/entries/probability-interpret/), would make all cosmology impossible, since there is only one universe. It also misunderstands the type of argument that fine-tuning needs. Fine-tuning needs claims like “if the laws/constants of nature were so and so, then the universe would behave like such and such”. This is theoretical physics. Exactly theoretical physics, not just analogous to theoretical physics. The arguments are *about* other hypothetical universes. No claim is being made about actual universes, so complaining that there have been no trials misses the point.

I have a post about finite frequentism mostly written. I think it fails. Repeated trials are evidence for probability assignments, but not the basis for them.

2. The McGrew et al. criticism is well worth careful thought. I’d want to say a few things.
* I could just bite the bullet and argue that the coarse tuning argument is just as valid.
* I could argue that, since the relevant probabilities are epistemic, the relevant bounds on parameter space are also epistemic. We just ask: what are the limits to where I can predict what a universe would be like? If these are plausibly finite, then I have a bounded space. For example, I think that some of the best arguments for fine-tuning regard particle masses, and the masses can be plausibly bounded above by the Planck mass.

I read the paper a while ago. I should probably read it again.

5. Tayyib Chowdhry (on August 6, 2013 at 6:41 pm):

Hi Meh and Luke, I think there are a few papers that deal with the issues raised by Meh.
1. Finite frequentism: http://www.joelvelasco.net/teaching/3865/hajek%20-%20mises%20redux%20redux.pdf

2. Coarse tuning: http://home.messiah.edu/~rcollins/Fine-tuning/chapter%203%20how%20to%20rigorously%20define%20fine-tuning.doc (sections 5.2 and 5.3 are the relevant ones)

6. Luke: “…finite frequentism would make all cosmology impossible, since there is only one universe. ”

One person’s modus ponens is another’s modus tollens. The determined proponent of such an argument could declare that this proves that something is fundamentally wrong with modern cosmology.

“It also misunderstands the type of argument that fine-tuning needs. Fine-tuning needs claims like “if the laws/constants of nature were so and so, then the universe would behave like such and such”. This is theoretical physics. ”

OK, but I’m not sure I follow you. You seem to be saying that in order to run fine-tuning arguments you need counterfactuals like ‘if x were the case, y would occur’ to be true.

But why couldn’t someone say ‘if the wind blows this way, this coin will land like this’, or ‘if I flip the coin like this, it will land that way’, etc.? There is a range of factors that affect which way a coin lands. This shows that a multiverse is needed to explain why the coin landed here rather than there, and this way rather than that way, because for it to land the way it did, this counterfactual would have to obtain out of a very large set of possibilities.

As for the criticisms of finite frequentism, I skimmed through the paper Tayyib linked to (still working my way through the SEP entry). But it isn’t enough to show that frequentism fails. You also need to show that there is a way of calculating these probabilities that does not also lead to absurdities (like supposing that, instead of chance, we need infinitely many universes to explain why a coin lands on heads instead of tails).

At this point I should probably clarify that I don’t think I endorse above arguments. I’ve just come across them before and couldn’t quite articulate what I found wrong with them so I thought I’d better ask an expert.

7. “One person’s …”. Agreed. I’m assuming we can do cosmology. Exhibit A: http://astrobites.org/wp-content/uploads/2013/03/cmb_power.png

“This shows that a multiverse is needed” .. I don’t get it. Why? A multiverse is a set of actual universes, not possible ones.

8. Please find two more references re the nature of Reality

http://www.dabase.org/up-1-7.htm

http://sacredcamelgardens.com/wordpress/the-unique-potential-of-man

And on the mommy-daddy nature of conventional “creator-‘God'” religiosity

9. […] “He can only get his “narrow range” by varying one single constant”. Wrong. The very thing that got this field started was physicists noting coincidences between a number of constants and the requirements of life. Only a handful of the 200+ scientific papers in this field vary only one variable. Read this. […]

10. […] Where are the peer-reviewed scientific publications that “only get [a] “narrow range” by varying one single constant“? […]

11. Interesting article.

12. revolutionary concepts

13. This might be interesting for some people: Jeff Lowder (an atheistic philosopher who also knows a thing or two about probability and Bayes’ theorem, and is one of the most fair-minded philosophers I’ve read) has weighed in on the discussion between Barnes and Carrier. See the comment section of the following post:

http://www.patheos.com/blogs/secularoutpost/2014/04/28/how-hugh-ross-calculates-the-improbability-of-life-on-earth-due-to-chance-alone/

It seems he agrees with many of your points, Barnes (though not all). I think he has interesting points to add himself, like:

” * In his essay, Carrier writes: “Probability measures frequency (whether of things happening or of things being true).” Not exactly. The frequentist interpretation of probability measures relative frequency, but the frequentist interpretation of probability isn’t the only interpretation of probability. There are “many other games in town” besides that one; there is also the epistemic interpretation of probability (aka “subjective” aka “personal” aka “Bayesian”), which measures degree of belief. Thus, to say that probability just is relative frequency is to beg the question against all the rival interpretations of probability. (And, for the record, I’m actually a pluralist when it comes to probability; following Gillies, I think different interpretations can be used in different situations.)”

And something I strongly agree with:
” Reading the exchange between Carrier and Barnes reminds me of one of my wishes for people who use Bayes’ Theorem in this way: I really wish people would explicitly state the propositions they are including in their background knowledge. It avoids misunderstandings and misinterpretations.”

14. […] to GGDFan777 for the tip-off: Jeffery Jay Lowder has weighed in on my posts (one, two, three, four) about Richard Carrier. […]