
Archive for the ‘Uncategorized’ Category

Gabriel Popkin has written a nice overview of some recent work on the fine-tuning of the universe for intelligent life at insidescience.org, titled “A More Finely Tuned Universe”. It’s well worth a read, and features a few quotes from yours truly.

It details the work of Ulf Meissner and colleagues on the dependence of the Hoyle resonance in carbon on the masses of the up and down quarks. The quark masses are fundamental parameters of the standard model, meaning that we can measure them, but the model itself can’t predict them. They are just arbitrary constants, so far as the equations are concerned. Their work shows that changing the quark masses by up to ~3 percent of their values in this universe would not substantially reduce the amount of carbon or oxygen the universe produces, so this is something of a safe zone. As the article quotes me as saying, I hope that they continue to push things further, to see if and where the universe really starts to change.

I have a problem, however, with the following quote:

David Kaplan, a particle physicist at Johns Hopkins University in Baltimore, said two to three percent gives the quark mass a lot of wiggle room compared to other much more finely tuned parameters within physics, including the cosmological constant.

(Just to note: I was quoted accurately in the article, so probably the other scientists were too. This isn’t always the case in science journalism, so I’m responding here to the quote, not necessarily to the scientist.)

The three percent change in the quark masses is with respect to their values in this universe. This is a useful way to describe the carbon-based-life-permitting range, but gives a misleading impression of its size. For fine-tuning, we need to compare this range to the set of possible values of the quark masses. This set of possible values – before you ask again, Jeff Shallit – is defined by the mathematical model. It is part of our ideas about how the universe works. If you’ve got a better idea, a natural, simple idea for why constants like the quark masses must have the values they do, then write it down, derive the constants, and collect your Nobel Prize. The standard model of particle physics has no idea why the constants take any value over their possible range, that is, the range in which the model is well-defined and we can calculate its predictions. Moreover, in testing our ideas in a Bayesian framework, we cannot cheat by arbitrarily confining our free parameters to the neighbourhood of their known values. The prior is broad. Fine-tuned free parameters make their theories improbable.

The smallest possible mass is zero; the photon, for example, is massless. The largest mass that a particle can have in the standard model is the Planck mass. A particle more massive than that is predicted to collapse into a black hole of its own, so we would need a quantum theory of gravity to describe it. Alas, we’re still working on that.

3% of the quark masses’ values in our universe is about one part in $10^{23}$ (one followed by 23 zeros) of the Planck mass. Technically, the down quark mass is (roughly) the product of the “Higgs vev” and a dimensionless parameter called the Yukawa parameter. The possible range of the Higgs vev extends to the Planck mass; why it is so much smaller than the Planck mass is known as the hierarchy problem. The quark Yukawa parameters are about $3 \times 10^{-5}$, which leads Leonard Susskind to comment (in The Cosmic Landscape),

.. the up- and down-quarks … are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one.
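A quick back-of-the-envelope check of the figures above. This is only a sketch: the down-quark mass, Planck mass and Higgs vev below are standard approximate values, not numbers taken from the article, and the factor of sqrt(2) in the Yukawa relation is the usual convention that the rough statement above omits.

```python
# Rough check of the numbers quoted above (standard approximate values, assumed here).
m_down = 4.7e-3      # down-quark mass in GeV (approx.)
m_planck = 1.22e19   # Planck mass in GeV (approx.)
higgs_vev = 246.0    # Higgs vacuum expectation value in GeV (approx.)

# A 3% shift in the down-quark mass, as a fraction of the Planck mass:
shift = 0.03 * m_down
print(f"3% of m_down / m_Planck ~ {shift / m_planck:.0e}")  # ~ 1e-23

# Down-quark Yukawa parameter, using the usual convention m = y * v / sqrt(2):
y_down = (2 ** 0.5) * m_down / higgs_vev
print(f"down-quark Yukawa ~ {y_down:.0e}")  # ~ 3e-5
```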

In my paper on fine-tuning, I discuss the “cheap binoculars fallacy”: you can make anything look big, if you just zoom in enough. Actually, the fine-tuning of the cosmological constant is a good example of avoiding this fallacy. Relative to its value in our universe, the cosmological constant doesn’t seem very fine-tuned at all. Forget 3%; it could increase by a factor of ten, or take on a similar but negative value, and the universe would still contain galaxies and stars. No one thinks that this is the answer to the cosmological constant problem, because comparing the life-permitting range with the value in our universe is irrelevant. When we compare to the range the constant could take in our models, we see fine-tuning on the order of one part in $10^{120}$.
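Here is a rough numerical illustration of the two comparisons. Again, only a sketch: the vacuum energy density and Planck density below are standard approximate values, and the exact exponent depends on conventions, but either way the window is a vanishingly small fraction of the possible range.

```python
# Cheap binoculars fallacy, illustrated for the cosmological constant
# (standard approximate values, assumed here for illustration).
rho_obs = (2.3e-12) ** 4     # observed vacuum energy density, ~(2.3 meV)^4, in GeV^4
rho_planck = (1.22e19) ** 4  # Planck density, m_Planck^4, in GeV^4

# Relative to the observed value, the life-permitting window looks roomy:
# roughly an order of magnitude either side.
window = 10 * rho_obs

# Relative to the range the constant could take in our models (~ the Planck
# density), the same window is a minuscule fraction:
print(f"window / possible range ~ {window / rho_planck:.0e}")  # roughly 1e-122
```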

Later in the article, Alejandro Jenkins states:

“Maybe if you change the quark masses not by three percent but by 50 percent you could end up with a situation where life as we know it couldn’t exist, but life as we don’t know it could exist,”

I agree with that sentence, so long as it starts with “Maybe”. But the state of understanding of our models is such that the burden of proof is now firmly on the “life as we don’t know it” claim. There is zero evidence for it, and piles of evidence against it. For example, one doesn’t have to change the quark masses by very much to obliterate nuclear binding. No nuclei. No atoms. No chemistry. No periodic table. No stars. No planets. Just hydrogen gas. These calculations have been done; see, for example, “Constraints on the variability of quark masses from nuclear binding” by Damour and Donoghue. If they are wrong, then write a paper about it and send it to Physical Review D. Possibilities are cheap.

Of course, when Geraint Lewis and I publish our fine-tuning book, all this will be sorted out once and for all, bringing fame and fortune and a movie deal. Editing continues, so stay tuned.

Read Full Post »

Apologies for the blogging drought. More soon. I couldn’t help but comment on something in the news recently.

Doing the rounds this week is a Wall Street Journal article by Eric Metaxas titled “Science Increasingly Makes the Case for God”. A few thoughts.

“Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart.”

I’m really hoping that his reference for the “200” parameters isn’t Hugh Ross, whom I’ve commented on before. The fine-tuning of the universe for intelligent life is about the fundamental parameters of the laws of nature as we know them, and there are only about 30 of those. Also, exactly zero fine-tuning cases require a parameter to be “perfectly” anything. There is always a non-zero (if sometimes very small) life-permitting window.

The fine-tuning for planets is a bit of a non-starter. How many planets are there in the universe? We don’t know, because we don’t know how large the universe is. There is no reason to believe that the size of the observable universe is any indication of the size of the whole universe.

Without a massive planet like Jupiter nearby, whose gravity will draw away asteroids, a thousand times as many would hit Earth’s surface.

This turns out to be a bit of a myth, however widely reported. Jonathan Horner and Barrie Jones used a set of simulations to test this idea, and their results suggest that the opposite may be true. (more…)

Read Full Post »

Victor Stenger (1935 – 2014)

My fine-tuning interlocutor Prof Victor Stenger died a few weeks ago, at age 79.

There’s a tradition in cricket, especially in Australia, that whatever happens on the field and whatever is said during the battle, you can always sit down at the end of the day and have a beer. I never met Prof Stenger, but I’d have liked to buy him a beer. We’d chat about fine-tuning eventually, of course, but first I’d love to hear the story of how he came to be sued by Uri Geller. Anyone who’s annoyed that charlatan enough to end up in court has clearly done something very right. Then I’d ask about Super-Kamiokande. And then what perspective his electrical engineering training gave him on modern physics. Then about the future of big experiments in particle physics and “big science” in general. Then about the time he met Einstein. Maybe we’d get around to fine-tuning.

While searching for news about his death, I found his final Huffpo article, “Myths of Physics: 2. Gravity Is Much Weaker Than Electromagnetism”. It’s about the gravitational fine-structure constant, which is (usually) defined to be the square of the ratio of the proton mass to the Planck mass. Its value is about $6 \times 10^{-39}$. The article states that “It is proportional to the square of the proton mass and has a value 23 orders of magnitude less than alpha.” Actually, it’s 36 orders of magnitude. I assume that’s a typo. (Someone tell Huffpo.)
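A quick check of those two figures (a sketch with standard approximate values, not numbers from the article):

```python
import math

# Standard approximate values, assumed here for illustration.
alpha = 1 / 137.036    # fine-structure constant
m_proton = 1.6726e-27  # proton mass in kg
m_planck = 2.176e-8    # Planck mass in kg

# Gravitational fine-structure constant: (proton mass / Planck mass)^2
alpha_G = (m_proton / m_planck) ** 2
print(f"alpha_G ~ {alpha_G:.1e}")  # ~ 5.9e-39

# How many orders of magnitude weaker than electromagnetism?
orders = math.log10(alpha / alpha_G)
print(f"alpha / alpha_G ~ 10^{orders:.0f}")  # ~ 10^36, i.e. 36 orders of magnitude
```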

More interesting is Stenger’s final comments. In the article, he points out that what is often called the “weakness of gravity” is really the smallness of the masses of fundamental particles compared to the Planck mass. In his book “The Fallacy of Fine-Tuning”, he states:

All these masses [of fundamental particles] are orders of magnitude less than the Planck mass, and no fine-tuning was necessary to make gravity much weaker than electromagnetism. This happened naturally and would have occurred for a wide range of mass values, which, after all, are just small corrections to their intrinsically zero masses.

In reply, my paper said:

The [hierarchy] problem (as ably explained by Martin, 1998) is that the Higgs mass (squared) receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous – their natural scale is the Planck scale, so that these contributions must be fine-tuned to mutually cancel to one part in (m_Pl/m_Higgs)^2 = 10^32. …

It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the “natural” mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them “small” doesn’t explain anything.

Interestingly, Stenger’s Huffpo article states that:

… a good question is: Why are the masses of elementary particles so small compared to the Planck mass? This is a major puzzle called the hierarchy problem that physicists have still not solved. However, it is to be noted that, in the standard model, all elementary particle masses are intrinsically zero and their masses are small corrections resulting from the Higgs mechanism and other processes. The hierarchy problem can be recast to ask why the corrections are not on the order of the Planck mass.

Now, unless I’m seeing things (always a possibility), that last sentence sounds a lot more like what I said than what he said in his book. Of course, I don’t think he’s conceding a solid case of fine-tuning. But he is at least acknowledging that physics as we know it hasn’t solved the fine-tuning of the masses of fundamental particles. I wonder whether he thought that the solution would come from particle physics (e.g. supersymmetry) or the multiverse + anthropic selection.

In any case, a bunch of people who knew him have left comments over at Friendly Atheist. Seems like a nice bloke.

Read Full Post »

Subtitle: how a modern physicist is liable to misunderstand Aristotle. This post was inspired by a very interesting post by Edward Feser here.

I have tried. What follows is my attempt to give full expression to my own ignorance. One of the conclusions I have drawn from my forays into Ancient and Medieval philosophy is that these were great thinkers.

Here is the standard illustration for Aristotle’s four causes. Consider a marble statue. The statue has four causes. The material cause is the marble, the material out of which the thing is made. The formal cause is the arrangement of the statue, its geometrical shape. The efficient cause is the “doer”, the sculptor, who arranges the material into the desired shape. The final cause of the statue is the purpose for which the sculptor has created the statue, e.g. to look beautiful in the garden.

Aristotle and Newton

Right, I think. My physicist training naturally has me try to cast my Newtonian (occasionally Einsteinian, sometimes quantum) view of the world in these categories. (This might be a bad idea, I think, given the discontinuity between Aristotle and Newton. Still, I’ll give it a go. Also, I’ll worry about relativity and quantum mechanics later, if at all.) So,

  • Material cause – the particles of matter out of which physical things are made.
  • Formal cause – the arrangement of those particles. Mathematically, a list of the position and velocity of each particle at some time, $(x_i(t), v_i(t))$.
  • Efficient cause – Newtonian forces, which move particles around.
  • Final causes – an emergent, higher-level property of minds, who can make and execute plans.

(This is not the correct way to understand Aristotle, so stay tuned.) So far, so good, I think. The “Newtonian” material, formal and efficient causes give all the information one needs to solve Newton’s laws of motion. But now the confusion starts.
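Before getting to that confusion, here is a toy sketch of the tidy Newtonian picture just described. This is my own illustration, not anything from Aristotle or Feser: the masses stand in for the material cause, the positions and velocities at some time for the formal cause, and the force law for the efficient cause; given all three, Newton’s laws fix the subsequent motion.

```python
# Toy sketch: "material cause" = the particles and their masses,
# "formal cause" = their positions and velocities at some time,
# "efficient cause" = the force law that pushes the particles around.
# Given all three, Newton's second law determines the subsequent motion.

def step(masses, positions, velocities, force, dt):
    """Advance every particle by one (semi-implicit Euler) step of Newton's second law."""
    forces = force(masses, positions)
    velocities = [v + (f / m) * dt for m, v, f in zip(masses, velocities, forces)]
    positions = [x + v * dt for x, v in zip(positions, velocities)]
    return positions, velocities

# An example "efficient cause": uniform gravity near the Earth's surface (1D, vertical).
def uniform_gravity(masses, positions, g=-9.8):
    return [m * g for m in masses]

masses = [1.0, 2.0]        # material cause
positions = [10.0, 20.0]   # formal cause: heights in metres at t = 0 ...
velocities = [0.0, 0.0]    # ... and velocities at t = 0

for _ in range(100):       # integrate for one second
    positions, velocities = step(masses, positions, velocities, uniform_gravity, dt=0.01)

print(positions)           # both particles have fallen roughly 5 m, regardless of mass
```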

A lecturer giving an introduction to the history of science talks about Plato’s theory of forms as a realm of abstract but still “really” existing ideas. He later seems to suggest that the formal cause of a chalk circle drawn on the board is the idea of the circle in the mind of the lecturer. I ask, “For the circle on the board: is the formal cause the idea in the mind, or the idea of a circle floating out there somewhere in Plato’s realm?”. “Uh … have you been reading the Scholastics?”, he replies. “Nope”. I can’t remember the rest of his answer – it was rather vague. I’ve had a chance to ask a few other philosophers about formal causes since, and their reply usually starts with a grimace.

Enter Feser

So it was that I came to Edward Feser’s Aquinas (A Beginner’s Guide). His exposition is admirably clear, and it is obvious that I must change my understanding of the four causes. In particular, final causes are more than just intentions of minds. There are natural final causes. When a match is struck and fire is created, the efficient cause of the fire is the match. At the same time, the final cause of the match is the fire. The properties of the match “point to” or “are directed at” the creation of fire. Fire is what matches do. The match isn’t just a generic efficient cause that could cause any old thing but just happens to cause fire every time. Its ability to do something is controlled by fire as its final cause. I picture this as the efficient cause being the engine, and the final cause as the steering wheel. The efficient cause does the causing, and the final cause directs the efficient cause towards the production of its effect.

The shocking thing about this, as Feser points out, is that the scientific revolution, despite its PR, didn’t get rid of final causes. Final causes are how Aristotelian metaphysics explains the orderliness of nature. The fact that things keep doing the same kinds of things – trees grow, the sun shines, fire burns, dropped stones fall – is because the efficient causes in the world are conjoined (is that the right word?) to final causes, ensuring that they produce consistent effects. The “laws of nature”, to use a slight anachronism, are more about final causes than efficient ones. But that is a topic for another day.

Despite Feser’s clarity, formal causes now get even more confusing. Feser argues that things can be imperfect instantiations of their form. Their form isn’t just how their parts are arranged. It is, in some sense, how they should be arranged, what their essential arrangement is. For example, when a person loses a leg, they don’t take the form of a one-legged person. Their true, two-legged human nature is still there in the person, but it is instantiated imperfectly.

Note that Aristotle differs from Plato in locating the form of a thing in the thing itself, not in some ideal external realm of forms. It isn’t just that the person fails to replicate the ideal of a two-legged Platonic person “up there”. The one-legged person still has the form of a two-legged human. Two-legged-ness is still in there, somewhere. (more…)

Read Full Post »

Thanks to GGDFan777 for the tip-off: Jeffery Jay Lowder has weighed in on my posts (one, two, three, four) about Richard Carrier. It’s in the comments of this post over at The Secular Outpost. Keith Parsons even drops in with a few comments. [Edit:] More details here: The Carrier-Barnes Exchange on Fine-Tuning.

Read Full Post »

I’ve invited cosmology questions before, but I wanted to renew the call. I’ve got a Q&A article on cosmology coming out soon, so ask away!

Read Full Post »

I’ve spent a lot of time critiquing articles on the fine-tuning of the universe for intelligent life. I should really give the other side of the story. Below are some of the good ones, ranging from popular level books to technical articles. I’ve given my recommendations for popular cosmology books here.

Books – Popular-level

  • Just Six Numbers, Martin Rees – Highly recommended, with a strong focus on cosmology and astrophysics, as you’d expect from the Astronomer Royal. Rees gives a clear exposition of modern cosmology, including inflation, and ends up giving a cogent defence of the multiverse.
  • The Goldilocks Enigma, Paul Davies – Davies is an excellent writer and has long been an important contributor to this field. His discussion of the physics is very good, and includes a description of the Higgs mechanism. When he strays into metaphysics, he is thorough and thoughtful, even when he is defending conclusions that I don’t agree with.
  • The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, Leonard Susskind – I’ve reviewed this book in detail in previous blog posts. Highly recommended. I can also recommend his many lectures on YouTube.
  • The Constants of Nature, John Barrow – A discussion of the physics behind the constants of nature. An excellent presentation of modern physics, cosmology and their relationship to mathematics, which includes a chapter on the anthropic principle and a discussion of the multiverse.
  • Cosmology: The Science of the Universe, Edward Harrison – My favourite cosmology introduction. The entire book is worth reading, not least the sections on life in the universe and the multiverse.
  • At Home in the Universe, John Wheeler – A thoughtful and wonderfully written collection of essays, some of which touch on matters anthropic.

I haven’t read Brian Greene’s book on the multiverse, but I’ve read his other books and they’re excellent. Stephen Hawking discusses fine-tuning in A Brief History of Time and The Grand Design. As usual, read anything by Sean Carroll, Frank Wilczek, and Alex Vilenkin.

Books – Advanced

  • The Anthropic Cosmological Principle, Barrow and Tipler – still the standard in the field. Even if you can’t follow the equations in the middle chapters, it’s still worth a read, as the discussion is quite clear. Gets a bit speculative in the final chapters, but it’s fairly obvious where to apply your grain of salt.
  • Universe or Multiverse? (edited by Bernard Carr) – the new standard. A great collection of papers by most of the experts in the field. Special mention goes to the papers by Weinberg, Wilczek, Aguirre, and Hogan.

Scientific Review Articles

The field of fine-tuning grew out of the so-called “Large numbers hypothesis” of Paul Dirac, which owes a lot to Weyl and was further discussed by Eddington, Gamow and others. These discussions evolved into fine-tuning when Dicke explained them using the anthropic principle. Dicke’s method is examined and expanded in these classic papers of the field: (more…)

Read Full Post »

