Just in time for Christmas, I’ve had a paper accepted by the Journal of Cosmology and Astroparticle Physics. It’s called “Binding the Diproton in Stars: Anthropic Limits on the Strength of Gravity”. Here’s the short version.
Diproton Disaster?
In 1971, Freeman Dyson discussed a seemingly fortunate fact about nuclear physics in our universe. Because two protons won’t stick to each other, nothing much happens when they collide inside stars. Very rarely, however, in the course of a collision the weak nuclear force will turn one of the protons into a neutron, and the resulting deuterium nucleus (proton + neutron) is stable. The star can then combine these deuterium nuclei into helium, releasing energy.
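For the record, here is the textbook pp-I chain that this describes (a sketch for orientation, not an excerpt from the paper); the first, weak-interaction step is the bottleneck:

```latex
% Standard pp-I chain; the first step proceeds via the weak force and is very slow.
\begin{align*}
  p + p &\to d + e^{+} + \nu_e && \text{(weak interaction: the bottleneck)} \\
  d + p &\to {}^{3}\mathrm{He} + \gamma \\
  {}^{3}\mathrm{He} + {}^{3}\mathrm{He} &\to {}^{4}\mathrm{He} + 2p
\end{align*}
% With a bound diproton, the first step would instead be the strong/electromagnetic
% reaction p + p -> 2He + gamma, with no weak-force bottleneck.
```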
If a super-villain boasted of a device that could bind the diproton (proton + proton) in the Sun, then we’d better listen. The Sun, subject to such a change in nuclear physics, would burn through the entirety of its fuel in about a second. Ouch.
A very small change in the strength of the strong force or the masses of the fundamental particles would bind the diproton. This looks like an outstanding case of fine-tuning for life: a very small change in the fundamental constants of nature would produce a decidedly life-destroying outcome.
Asking the Right Question
However, this is not the right conclusion. The question of fine-tuning is this: how would the universe have been different if the constants of nature had different values? In the example above, we took our universe and abruptly changed the constants half-way through its life. The Sun would explode, but would a bound-diproton universe create stars that explode?
In my review paper, I reported a few reasons to suspect that the diproton disaster isn’t as clear-cut as we think, but noted that detailed calculations had not been performed. I’m a cosmology/galaxy-formation kind of astrophysicist, and so I hoped that someone else would do them! However, a talk by Mark Krumholz (UC Santa Cruz) showed the way forward. Stars in our universe have an initial deuterium-burning phase, in which they burn deuterium left over from the big bang. They only have a very small amount, but this reaction is very similar to diproton burning in alternative universes.
Making Stable Stars
So I investigated stars that are initially 50% protons and 50% deuterium, and so are primed to burn via the strong force. The result: as expected, such stars don’t explode. They simply burn at a lower temperature, and with less dense cores. In particular, for stars with the same total mass, there is only a factor of three difference in the total energy output per unit time (the luminosity). This means that their lifetimes are also similar.
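Why similar luminosity implies similar lifetime is a one-line scaling (my back-of-envelope gloss, not the paper’s detailed calculation):

```latex
% Stellar lifetime ~ available nuclear energy / luminosity,
% where epsilon is the fraction of rest-mass energy released by burning:
t_\ast \sim \frac{\epsilon\, M c^{2}}{L}
% At fixed mass M, a factor-of-three change in L changes t_* by only
% the same factor of three: comparable lifetimes, not an explosion.
```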
Looking over all the stars available in parameter space – weak burning and strong burning – the most interesting constraint for a life-permitting universe is the maximum stellar lifetime. The figure below shows the strength of electromagnetism (horizontal axis) and the strength of gravity (vertical axis).
Below the dashed lines, hydrogen-burning stars are stable. Below the thick black line, deuterium/diproton-burning stars are stable – this is a much larger region. Our universe is the black square. Note the logarithmic scale! The contour lines show the lifetime of the longest-lived (and hence smallest) stable star in a given universe. The line labelled “6” shows where the longest-lived star burns out in a million (10^6) years – too short for planets and life and such. Binding the diproton does not affect chemistry, or indeed any of the physics upon which living things directly rely.
So, if the strength of gravity ($\alpha_G = G m_p^2 / \hbar c$) were not very small ($\alpha_G \lesssim 10^{-30}$), all stars would burn out too quickly. This is a conservative but very robust anthropic constraint. Actually, the “strength of gravity” here is the squared ratio of the proton mass to the Planck mass, so the relevant fine-tuning is the fact that the fundamental particles of nature are “absurdly light”, in the words of Leonard Susskind. These are some of the most important fine-tuning examples around.
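To make the numbers concrete, here is a quick back-of-envelope script (mine, not from the paper, using rounded constants) that evaluates the gravitational coupling constant under both common definitions and compares it with the bound above:

```python
# Back-of-envelope evaluation of the gravitational coupling constant alpha_G.
# Rounded SI values; the precision is irrelevant at these orders of magnitude.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
m_p = 1.673e-27    # proton mass, kg
m_e = 9.109e-31    # electron mass, kg

def alpha_G(m):
    """Dimensionless gravitational coupling G m^2 / (hbar c) for a particle of mass m."""
    return G * m**2 / (hbar * c)

print(f"alpha_G (proton definition):   {alpha_G(m_p):.1e}")  # ~ 5.9e-39
print(f"alpha_G (electron definition): {alpha_G(m_e):.1e}")  # ~ 1.8e-45
# Our universe sits a factor of ~2e8 below the anthropic bound of 1e-30:
print(f"margin below 1e-30 bound: ~{1e-30 / alpha_G(m_p):.0e}x (proton definition)")
```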
Thanks for sharing this. I did not really get the math part of it, but the key message is interesting and worth pondering.
Wonderful write-up Luke! I’m looking forward to reading the paper (soon as I get some time…). One bug alert… your link to Freeman Dyson (2nd paragraph) appears to href to the text (“freeman dyson”) rather than the URL. Easy fix… 🙂 Thanks, and keep up the great work!
Fixed. Cheers!
Hi Luke,
I’ve made comment on your paper here – http://neophilosophical.blogspot.com/2015/12/is-luke-barnes-even-trying-anymore.html – but in brief:
Your argument, perhaps taken from Martin Rees, is that αG is unnaturally small, making α/αG unnaturally large. However, this argument resolves down to a question of (in your definition of αG) the relative values of the proton mass and the Planck mass. Thus it’s fundamentally a comment on the fact that the Planck mass is rather large, much much larger than the proton mass (and also the electron mass, which is more commonly used to produce αG).
Therefore, what you have overlooked is what the Planck mass is, because what the Planck mass is explains why it is so (relatively) huge. Unlike the Planck length and the Planck time, which both appear to be close to if not beyond the limits of observational measurement, the Planck mass is the mass of a black hole with a Schwarzschild radius on the order of a Planck length (for which the Compton wavelength and the Schwarzschild radius are equal).
If the Planck mass were supposed to have some relation to a quantum mass (ie being close to if not beyond the limits of observational measurement), then you’d have an argument for fine-tuning, but it’s not and you don’t.
And in any event, 9 orders of magnitude (between 10^-30 and 10^-39) is not a fine-tuned margin. And that’s only if you use the proton-mass variant of αG. If you use the more common definition of the gravitational coupling constant, with the electron mass, there are 15 orders of magnitude (between 10^-30 and 10^-45).
I note that you didn’t post my last comment (against a different blog post). I’m giving you the benefit of the doubt and assuming that this was an error or oversight on your part. I posted that here – http://neophilosophical.blogspot.com/2015/09/another-open-letter-to-luke-barnes.html.
-neopolitan-
The proton mass is more commonly used in astrophysics to define αG because that is the parameter most relevant to stars. Since stars set the relevant anthropic conditions, that is the definition used in the literature: Barrow and Tipler (1986), Carr and Rees (1979), Press and Lightman (1983), Padmanabhan (2000, Chapter 1), Davies (1983, page 48).
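To spell out why stars pick out the proton mass (a textbook Carr–Rees-type scaling, not a quote from any of the references above): the characteristic mass of a star is set by αG and the proton mass,

```latex
% Characteristic stellar mass scale (Chandrasekhar-type argument):
M_\ast \sim \alpha_G^{-3/2}\, m_p
       = \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{m_p^{2}}
       \approx 4 \times 10^{30}\ \mathrm{kg},
% within a factor of a few of a solar mass.
```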
“If the Planck mass were supposed to have some relation to a quantum mass (ie being close to if not beyond the limits of observational measurement), then you’d have an argument for fine-tuning, but it’s not and you don’t.”
False. The smallness of the masses of the fundamental particles with respect to the Planck mass is known in particle physics as the “hierarchy problem”. Quantum field theory predicts corrections to the particle masses up to the mass scale at which the theory can be trusted, which is (at most) the Planck scale.
Arkani-Hamed et al.: “There are at least two seemingly fundamental energy scales in nature, the electroweak scale and the Planck scale, where gravity becomes as strong as the gauge interactions. Over the last two decades, explaining the smallness and radiative stability of the hierarchy has been one of the greatest driving forces behind the construction of theories beyond the Standard Model (SM)”.
See also “On Absolute Units, II: Challenges and Responses” by Frank Wilczek.
“And in any event, 9 orders of magnitude (between 10^-30 and 10^-39) is not a fine-tuned margin.” Cheap binoculars fallacy. Read my paper.
My blog’s approval policy for comments is seemingly random. Post the comment again and I’ll approve it. I only reject comments if they are obscene or offensive, and yours aren’t. You can expect a longer discussion of my views on matters beyond science in my forthcoming book.
If you are suggesting no more than the hierarchy problem requires fine tuning of the Standard model to get results that accord with experimental findings, then I would have no problem. My problem is that you are apparently suggesting that the fine tuning exists in reality, rather than the model (hence your link to William Lane Craig, who has still failed to run over my dog).
My argument is that these “problems” are indicative of what we already know (and what Arkani-Hamed is pointing towards) – we need better theories if we are to model reality more accurately … the hierarchy problem is merely an indicator of this need. Fine-tuning of the existing theories will push us towards a Ptolemaic-like version of physics, with an ad hoc correction here and an ad hoc correction there to make it all work. Once we get beneath the skin of that, however, if we ever do, it is entirely likely, if not inevitable, that a simpler, more elegant and powerful theory will let the experimental findings fall into place. Such a theory will not require apologists – and may well cut some existing apologetic arguments off at the knees.
What we should *NOT* be doing with examples of “fine tuning” is running off to summer seminars at theological seminaries to imply some cosmological equivalent of irreducible complexity.
You’re going to have to explain the application of the cheap binoculars fallacy. Your argument has it that αG must be less than (or on the order of) 10^-30. The value 10^-39 adequately meets that criterion, and it could be a billion times larger and still meet that criterion, or a billion times (or more) smaller and still meet that criterion. If there is something else that prevents αG being any smaller, then you should be talking about that, rather than the diproton disaster, because that would be the critical factor. For the purposes of driving towards better theories, of course, not for claiming or helping others claim that “god did it”.
It is not actually necessary that “fine-tuning” of certain parameters exist in reality to prove the existence of God. I think light, with its very peculiar properties, is sufficient for that purpose.
Light originates within space and time but it goes beyond space and time. A photon coming from a star one billion light-years from earth will take one billion years of earth’s time to reach the surface of the earth. During those one billion years of earth’s time it will be in a spaceless and timeless condition, because for it the distance between the star and the earth has become zero and time has stopped. So it will be neither in space nor in time during the total period of its existence. Then it will cease to be, by being absorbed by something or someone on earth.
So light originating within space and time goes beyond space and time, because space and time become non-existent for it. And we cannot claim that this is without any cause. As light is not a conscious entity, so we cannot claim here that light has the capability of deciding its own fate that it will go beyond space and time. So this must have been caused by something else. But whatever may be the cause of it, this cause cannot lie within space and time; it is impossible. Let us suppose that this cause is A and that it lies within space and time. We can now ask two questions about A:
1) Are space and time non-existent for A also?
2) Or, are they not non-existent for A?
If 2), then how can A cause space and time becoming non-existent for light when they are not non-existent for A itself? But if 1), then we will have to ask the same question about A that we were earlier asking about light: what causes space and time becoming non-existent for A, when we know very well that A lies within space and time? So we see that A cannot be the ultimate cause that makes space and time non-existent for light, because here we will have to find out again the cause that makes space and time non-existent for A itself. In this way it can be shown that there will be an infinite regress, and that there is nothing within space and time that can be this cause. So ultimately we will have to go beyond space and time in search of this cause. A cause that lies within space and time is a natural cause, but a cause that lies beyond space and time is not a natural cause; it is a supernatural cause. So the cause that makes space and time non-existent for light is a supernatural cause.
About light one can also read the article “The Fundamental Nature of Light” by Dr. Sascha Vongehr in Science 2.0 (February 3rd, 2011) [1].
Ref:
[1] http://www.science20.com/alpha_meme/fundamental_nature_light-75861
Light does not exist in nature. Light is created inside our vision. The entire universe is dark as coal.