
Gabriel Popkin has written a nice overview of some recent work on the fine-tuning of the universe for intelligent life at insidescience.org, titled “A More Finely Tuned Universe”. It’s well worth a read, and features a few quotes from yours truly.

It details the work of Ulf Meissner and colleagues on the dependence of the Hoyle resonance in carbon on the masses of the up and down quarks. The quark masses are fundamental parameters of the standard model, meaning that we can measure them, but the model itself can’t predict them. They are just arbitrary constants, so far as the equations are concerned. Their work shows that a change of ~3 percent in the quark masses, with respect to their values in this universe, will not result in the universe producing substantially less carbon or oxygen, so this is something of a safe zone. As the article quotes me as saying, I hope that they continue to push things further, to see if and where the universe really starts to change.

I have a problem, however, with the following quote:

David Kaplan, a particle physicist at Johns Hopkins University in Baltimore, said two to three percent gives the quark mass a lot of wiggle room compared to other much more finely tuned parameters within physics, including the cosmological constant.

(Just to note: I was quoted accurately in the article, so probably the other scientists were too. This isn’t always the case in science journalism, so I’m responding here to the quote, not necessarily to the scientist.)

The three percent change in the quark masses is with respect to their values in this universe. This is a useful way to describe the carbon-based-life-permitting range, but it gives a misleading impression of its size. For fine-tuning, we need to compare this range to the set of possible values of the quark masses. This set of possible values – before you ask again, Jeff Shallit – is defined by the mathematical model. It is part of our ideas about how the universe works. If you’ve got a better idea, a natural, simple idea for why constants like the quark masses must have the values they do, then write it down, derive the constants, and collect your Nobel Prize. The standard model of particle physics gives no reason why the constants should take any particular value within their possible range, that is, the range in which the model is well-defined and we can calculate its predictions. Moreover, in testing our ideas in a Bayesian framework, we cannot cheat by arbitrarily confining our free parameters to the neighbourhood of their known values. The prior is broad. Fine-tuned free parameters make their theories improbable.
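To make that concrete, here is a toy calculation (the Python and the numbers are mine, chosen purely for illustration, not taken from any paper): with a broad uniform prior, the probability that a parameter lands in the life-permitting range is just the width of that range divided by the width of the possible range.

```python
# Toy illustration of fine-tuning under a uniform prior over the possible range.
# All numbers are assumptions chosen for illustration, not results from any paper.

def fine_tuning_probability(life_permitting_width, possible_range_width):
    """Probability that a parameter lands in the life-permitting range,
    assuming a uniform prior over its possible range."""
    return life_permitting_width / possible_range_width

observed_value = 1.0                              # parameter, arbitrary units
life_permitting_width = 0.06 * observed_value     # +/- 3% around the observed value
possible_range_width = 1e23 * observed_value      # range allowed by the theory

print(fine_tuning_probability(life_permitting_width, possible_range_width))
# ~ 6e-25: "3 percent wiggle room" is not roomy at all on this scale.
```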

The smallest possible mass is zero; the photon, for example, is massless. The largest mass that a particle can have in the standard model is the Planck mass. Heavier particles are predicted to become their own black hole, so we would need a quantum theory of gravity to describe them. Alas, we’re still working on that.

3% of the quark masses’ value in our universe is one part in $latex 10^{23}$ (one followed by 23 zeros) of the Planck mass; a quick numerical check of this figure follows the quote below. Technically, the down quark mass is (roughly) the product of the “Higgs vev” and a dimensionless parameter called the Yukawa parameter. The possible range of the Higgs vev extends to the Planck mass; why it is so much smaller than the Planck mass is known as the Hierarchy problem. The quark Yukawa parameters are about $latex 3 \times 10^{-5}$, which leads Leonard Susskind to comment (in The Cosmic Landscape),

.. the up- and down-quarks … are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one.
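And here, for the curious, is the rough arithmetic behind that “one part in $latex 10^{23}$” figure (round numbers of my own, not figures from the article):

```python
# Back-of-the-envelope check of "3% of the quark mass is one part in 10^23 of the
# Planck mass". Rough values, assumed by me rather than taken from the article.

down_quark_mass = 4.8      # MeV, roughly; the light quark masses are a few MeV
planck_mass = 1.22e22      # MeV, i.e. ~1.22 x 10^19 GeV

window = 0.03 * down_quark_mass         # a 3% change in the quark mass
fraction = window / planck_mass

print(f"{fraction:.1e}")                # ~ 1.2e-23, i.e. about one part in 10^23
```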

In my paper on fine-tuning, I discuss the “cheap binoculars fallacy”: you can make anything look big, if you just zoom in enough. Actually, the fine-tuning of the cosmological constant is a good example of avoiding this fallacy. Relative to its value in our universe, the cosmological constant doesn’t seem very fine-tuned at all. Forget 3%; it can increase by a factor of ten, or take on a similar but negative value, and the universe would still contain galaxies and stars. No one thinks that this is the answer to the cosmological constant problem, because comparing the life-permitting range with the value in our universe is irrelevant. When we compare to the range the constant could take in our models, we see fine-tuning on the order of one part in $latex 10^{120}$.
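The same back-of-the-envelope approach illustrates where that famous $latex 10^{120}$ comes from. The numbers below are round values I have assumed for illustration, and the exact exponent depends on conventions:

```python
# Rough, order-of-magnitude illustration of the "one part in 10^120" figure for
# the cosmological constant. Values below are my round-number assumptions.

observed_vacuum_energy = 1e-47    # GeV^4: roughly the observed dark energy density
planck_energy_density = 2e76      # GeV^4: roughly (Planck mass ~ 1.2e19 GeV)**4

ratio = observed_vacuum_energy / planck_energy_density
print(f"{ratio:.0e}")
# ~ 5e-124 with these round numbers; the commonly quoted figure is one part
# in 10^120, with the precise exponent depending on conventions.
```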

Later in the article, Kaplan states:

“Maybe if you change the quark masses not by three percent but by 50 percent you could end up with a situation where life as we know it couldn’t exist, but life as we don’t know it could exist,”

I agree with that sentence, so long as it starts with “Maybe”. But the state of understanding of our models is such that the burden of proof now rests firmly on the “life as we don’t know it” claim. There is zero evidence for it, and piles of evidence against it. For example, one doesn’t have to change the quark masses by very much to obliterate nuclear binding. No nuclei. No atoms. No chemistry. No periodic table. No stars. No planets. Just hydrogen gas. These calculations have been done; see, for example, “Constraints on the variability of quark masses from nuclear binding” by Damour and Donoghue. If they are wrong, then write a paper about it and send it to Physical Review D. Possibilities are cheap.

Of course, when Geraint Lewis and I publish our fine-tuning book, all this will be sorted out once and for all, bringing fame and fortune and a movie deal. Editing continues, so stay tuned.
