Gabriel Popkin has written a nice overview of some recent work on the fine-tuning of the universe for intelligent life at insidescience.org, titled “A More Finely Tuned Universe”. It’s well worth a read, and features a few quotes from yours truly.

It details the work of Ulf Meissner and colleagues on the dependence of the Hoyle resonance in Carbon on the masses of the up and down quarks. The quark masses are fundamental parameters of the standard model, meaning that we can measure them, but the model itself can’t predict them. They are just arbitrary constants, so far as the equations are concerned. Their work shows that a change in the quark masses of ~3 percent with respect to their values in this universe will not result in the universe producing substantially less carbon or oxygen, so this is something of a safe zone. As the article quotes me as saying, I hope that they continue to push things further, to see if and where the universe really starts to change.

I have a problem, however, with the following quote:

David Kaplan, a particle physicist at Johns Hopkins University in Baltimore, said two to three percent gives the quark mass a lot of wiggle room compared to other much more finely tuned parameters within physics, including the cosmological constant.

(Just to note: I was quoted accurately in the article, so probably the other scientists were too. This isn’t always the case in science journalism, so I’m responding here to the quote, not necessarily to the scientist.)

The three percent change in the quark masses is with respect to their values in this universe. This is a useful way to describe the carbon-based-life-permitting range, but gives a misleading impression of its size. For fine-tuning, we need to compare this range to the set of possible values of the quark masses. This set of possible values – before you ask again, Jeff Shallit – is defined by the mathematical model. It is part of our ideas about how the universe works. If you’ve got a better idea, a natural, simple idea for why constants like the quark masses must have the values they do, then write it down, derive the constants, and collect your Nobel Prize. The standard model of particle physics has no idea why the constants take any value over their possible range, that is, the range in which the model is well-defined and we can calculate its predictions. Moreover, in testing our ideas in a Bayesian framework, we cannot cheat by arbitrarily confining our free parameters to the neighbourhood of their known values. The prior is broad. Fine-tuned free parameters make their theories improbable.
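To make that last point concrete, here is a toy calculation (my own illustration, with placeholder numbers, not real physics values): under a broad uniform prior, the probability that a free parameter lands in a life-permitting window is just the width of the window divided by the width of the prior range.

```python
# Toy illustration (not from any fine-tuning paper): how a broad prior
# penalises a fine-tuned theory. All numbers are placeholders.

def window_probability(width, prior_range):
    """P(parameter falls in the life-permitting window | uniform prior)."""
    return width / prior_range

# A theory whose free parameter could lie anywhere in a broad range:
p_broad = window_probability(width=1e-23, prior_range=1.0)
print(p_broad)  # 1e-23

# A hypothetical theory that naturally confines the parameter to a narrow
# range fares much better, even with the same life-permitting window:
p_confined = window_probability(width=1e-23, prior_range=1e-20)
print(p_confined)  # ~1e-03
```

The point of the toy model: the life-permitting window is fixed by the physics, but the probability assigned to it depends on the prior range, which is why arbitrarily shrinking the prior to hug the observed value is cheating.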

The smallest possible mass is zero; the photon, for example, is massless. The largest mass that a particle can have in the standard model is the Planck mass. Larger particles are predicted to become their own black hole, so we would need a quantum theory of gravity to describe them. Alas, we’re still working on that.

3% of the quark masses’ values in our universe is one part in 10^{23} (a one followed by 23 zeros) of the Planck mass. Technically, the down quark mass is (roughly) the product of the “Higgs vev” and a dimensionless parameter called the Yukawa parameter. The possible range of the Higgs vev extends to the Planck mass; why it is so much smaller than the Planck mass is known as the hierarchy problem. The quark Yukawa parameters are about 3 × 10^{-5}, which leads Leonard Susskind to comment (in The Cosmic Landscape),

… the up- and down-quarks … are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one.
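The one-part-in-10^{23} figure is easy to check on the back of an envelope. The sketch below uses rough values (down quark mass ~4.7 MeV, Planck mass ~1.22 × 10^19 GeV); the precise numbers aren’t the point, the orders of magnitude are.

```python
# Back-of-envelope check: 3% of the down quark mass, expressed as a
# fraction of the Planck mass. Masses are rough, approximate values.
import math

m_down_MeV = 4.7                  # down quark mass, approximate
m_planck_MeV = 1.22e19 * 1e3      # Planck mass ~1.22e19 GeV, converted to MeV

window = 0.03 * m_down_MeV        # a 3% change in the down quark mass
fraction = window / m_planck_MeV  # ...as a fraction of the Planck mass

print(f"{fraction:.1e}")             # ~1.2e-23
print(round(-math.log10(fraction)))  # 23: one part in 10^23
```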

In my paper on fine-tuning, I discuss the “cheap binoculars fallacy”: you can make anything look big, if you just zoom in enough. Actually, the fine-tuning of the cosmological constant is a good example of avoiding this fallacy. Relative to its value in our universe, the cosmological constant doesn’t seem very fine-tuned at all. Forget 3%; it could increase by a factor of ten, or take on a similar but negative value, and the universe would still contain galaxies and stars. No one thinks that this is the answer to the cosmological constant problem, because comparing the life-permitting range with the value in our universe is irrelevant. When we compare to the range the constant could take in our models, we see fine-tuning on the order of one part in 10^{120}.
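A quick numerical sketch of the binoculars point for the cosmological constant. These are illustrative round numbers (the observed vacuum energy density is commonly quoted as ~10^-120 in Planck units), not precise figures:

```python
# Zoomed-in vs zoomed-out views of the cosmological constant.
# Illustrative round numbers: observed vacuum energy density ~1e-120 in
# Planck units; the natural (possible) scale is of order 1.

observed = 1e-120            # rough observed value, Planck units
life_window = 10 * observed  # ~10x larger would still permit galaxies

# Zoomed in (relative to the observed value), the window looks roomy:
print(life_window / observed)  # a factor of ~10

# Zoomed out (relative to the possible range), it is absurdly narrow:
print(life_window / 1.0)       # ~1e-119: fine-tuning of order one part in 10^120
```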

Later in the article, Alejandro Jenkins states:

“Maybe if you change the quark masses not by three percent but by 50 percent you could end up with a situation where life as we know it couldn’t exist, but life as we don’t know it could exist,”

I agree with that sentence, so long as it starts with “Maybe”. But the state of understanding of our models is such that the burden of proof is now firmly on the “life as we don’t know it” claim. There is zero evidence for it, and piles of evidence against it. For example, one doesn’t have to change the quark masses by very much to obliterate nuclear binding. No nuclei. No atoms. No chemistry. No periodic table. No stars. No planets. Just hydrogen gas. These calculations have been done; see, for example, “Constraints on the variability of quark masses from nuclear binding” by Damour and Donoghue. If they are wrong, then write a paper about it and send it to Physical Review D. Possibilities are cheap.

Of course, when Geraint Lewis and I publish our fine-tuning book, all this will be sorted out once and for all, bringing fame and fortune and a movie deal. Editing continues, so stay tuned.

Apologies for the blogging drought. More soon. I couldn’t help but comment on something in the news recently.

Doing the rounds this week is a Wall Street Journal article by Eric Metaxas titled “Science Increasingly Makes the Case for God”. A few thoughts.

“Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart.”

I’m really hoping that his reference for the “200” parameters isn’t Hugh Ross, whom I’ve commented on before. The fine-tuning of the universe for intelligent life is about the fundamental parameters of the laws of nature as we know them, and there are only about 30 of those. Also, exactly zero fine-tuning cases require a parameter to be “perfectly” anything. There is always a non-zero (if sometimes very small) life-permitting window.

The fine-tuning for planets is a bit of a non-starter. How many planets are there in the universe? We don’t know, because we don’t know how large the universe is. There is no reason to believe that the size of the observable universe is any indication of the size of the whole universe.

Without a massive planet like Jupiter nearby, whose gravity will draw away asteroids, a thousand times as many would hit Earth’s surface.

This turns out to be a bit of a myth, however widely reported. Jonathan Horner and Barrie Jones used a set of simulations to test this idea, but their results tended to show that the opposite might be true.

Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.

I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)

On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:

  1. A list of the fundamental constituents of physical reality and their properties.
  2. A set of mathematical equations describing how these entities change, interact and rearrange.
  3. A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).

In short, what is there, what does it do, and in what state did it start?

Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.

(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)

Non-informative Probabilities

In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.

In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:

“if A then B”, A -> B, and p(B|A) = 1

are the same statement. Similarly,

“if A then not B”, A -> ~B and p(B|A) = 0

are the same statement.

Between these extremes of logical implication, probability provides degrees of plausibility.

It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.

In Chapter 9 of his textbook “Probability Theory” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labeled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)

When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is not to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.

At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
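That difference between states of ignorance can be quantified. A minimal sketch (my own, in the spirit of Jaynes’s discussion, not code from his book): indifference assigns 1/N to each outcome, and the entropy of that assignment measures how much information the robot is missing, cleanly separating N = 7 from N = 10,000.

```python
# The poorly informed robot: knowing only that there are N outcomes,
# indifference assigns probability 1/N to each. The entropy (in bits)
# of that assignment distinguishes the two states of knowledge.
import math

def poorly_informed(n):
    """Probabilities assigned by the principle of indifference."""
    return [1.0 / n] * n

def entropy_bits(probs):
    """Shannon entropy: the missing information, in bits."""
    return -sum(p * math.log2(p) for p in probs)

for n in (7, 10_000):
    print(n, round(entropy_bits(poorly_informed(n)), 2))
# 7 possibilities  -> ~2.81 bits missing
# 10,000           -> ~13.29 bits missing
```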

A particularly thorny case is characterising ignorance over a continuous parameter, since then there are an infinite number of possibilities. When a probability distribution for a certain parameter is not informed by data but only “prior” information, it is called a “non-informative prior”. Researchers continue the search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.

My fine-tuning interlocutor Prof Victor Stenger died a few weeks ago, at age 79.

There’s a tradition in cricket, especially in Australia, that whatever happens on the field and whatever is said during the battle, you can always sit down at the end of the day and have a beer. I never met Prof Stenger, but I’d have liked to buy him a beer. We’d chat about fine-tuning eventually, of course, but first I’d love to hear the story about how he came to be sued by Uri Geller. Anyone who’s annoyed that charlatan enough to end up in court has clearly done something very right. Then I’d ask about Super-Kamiokande. And then what perspective his electrical engineering training gave him on modern physics. Then about the future of big experiments in particle physics and “big science” in general. Then about the time he met Einstein. Maybe we’d get around to fine-tuning.

While searching for news about his death, I found his final Huffpo article, “Myths of Physics: 2. Gravity Is Much Weaker Than Electromagnetism”. It’s about the gravitational fine-structure constant, which is (usually) defined to be the square of the ratio of the proton mass to the Planck mass. Its value is about 6 x 10^-39. The article states that “It is proportional to the square of the proton mass and has a value 23 orders of magnitude less than alpha.” Actually, it’s 36 orders of magnitude. I assume that’s a typo. (Someone tell Huffpo.)
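The “36 orders of magnitude” correction is a one-liner to verify (approximate values: α ≈ 1/137, proton mass ≈ 0.938 GeV, Planck mass ≈ 1.22 × 10^19 GeV):

```python
# Checking that alpha_G = (m_proton / m_planck)^2 sits ~36 orders of
# magnitude below the fine-structure constant alpha. Rough masses used.
import math

alpha = 1 / 137.036          # fine-structure constant
m_proton_GeV = 0.938
m_planck_GeV = 1.22e19

alpha_G = (m_proton_GeV / m_planck_GeV) ** 2
print(f"{alpha_G:.1e}")                    # ~5.9e-39, i.e. about 6 x 10^-39
print(round(math.log10(alpha / alpha_G)))  # 36 orders of magnitude, not 23
```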

More interesting is Stenger’s final comments. In the article, he points out that what is often called the “weakness of gravity” is really the smallness of the masses of fundamental particles compared to the Planck mass. In his book “The Fallacy of Fine-Tuning”, he states:

All these masses [of fundamental particles] are orders of magnitude less than the Planck mass, and no fine-tuning was necessary to make gravity much weaker than electromagnetism. This happened naturally and would have occurred for a wide range of mass values, which, after all, are just small corrections to their intrinsically zero masses.

In reply, my paper said:

The [hierarchy] problem (as ably explained by Martin, 1998) is that the Higgs mass (squared) receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous – their natural scale is the Planck scale, so that these contributions must be fine-tuned to mutually cancel to one part in (m_Pl/m_Higgs)^2 = 10^32. …

It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the “natural” mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them “small” doesn’t explain anything.
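The size of the required cancellation is easy to estimate. A rough sketch follows; conventions vary, and I’ve assumed the reduced Planck mass (~2.4 × 10^18 GeV) and a Higgs mass of ~125 GeV, which gives a number of order 10^32:

```python
# Rough size of the mutual cancellation demanded by the hierarchy problem:
# (m_Pl / m_Higgs)^2. Reduced Planck mass assumed; conventions vary.
import math

m_planck_GeV = 2.4e18  # reduced Planck mass, approximate
m_higgs_GeV = 125.0    # Higgs boson mass, approximate

tuning = (m_planck_GeV / m_higgs_GeV) ** 2
print(f"{tuning:.1e}")                 # ~3.7e+32
print(math.floor(math.log10(tuning)))  # 32: cancellation to one part in ~10^32
```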

Interestingly, Stenger’s Huffpo article states that:

… a good question is: Why are the masses of elementary particles so small compared to the Planck mass? This is a major puzzle called the hierarchy problem that physicists have still not solved. However, it is to be noted that, in the standard model, all elementary particle masses are intrinsically zero and their masses are small corrections resulting from the Higgs mechanism and other processes. The hierarchy problem can be recast to ask why the corrections are not on the order of the Planck mass.

Now, unless I’m seeing things (always a possibility), that last sentence sounds a lot more like what I said than what he said in his book. Of course, I don’t think he’s conceding a solid case of fine-tuning. But he is at least acknowledging that physics as we know it hasn’t solved the fine-tuning of the masses of fundamental particles. I wonder whether he thought that the solution would come from particle physics (e.g. supersymmetry) or the multiverse + anthropic selection.

In any case, a bunch of people who knew him have left comments over at Friendly Atheist. Seems like a nice bloke.

Subtitle: how a modern physicist is liable to misunderstand Aristotle. This post was inspired by a very interesting post by Edward Feser here.

I have tried. What follows is my attempt to give full expression to my own ignorance. One of the conclusions I have drawn from my forays into Ancient and Medieval philosophy is that these philosophers were great thinkers.

Here is the standard illustration for Aristotle’s four causes. Consider a marble statue. The statue has four causes. The material cause is the marble, the material out of which the thing is made. The formal cause is the arrangement of the statue, its geometrical shape. The efficient cause is the “doer”, the sculptor, who arranges the material into the desired shape. The final cause of the statue is the purpose for which the sculptor has created the statue, e.g. to look beautiful in the garden.

Aristotle and Newton

Right, I think. My physicist training naturally has me try to cast my Newtonian (occasionally Einsteinian, sometimes quantum) view of the world in these categories. (This might be a bad idea, I think, given the discontinuity between Aristotle and Newton. Still, I’ll give it a go. Also, I’ll worry about relativity and quantum mechanics later, if at all.) So,

  • Material cause – the particles of matter out of which physical things are made.
  • Formal cause – the arrangement of those particles. Mathematically, a list of the position and velocity of each particle at some time (x_i(t), v_i(t))
  • Efficient cause – Newtonian forces, which move particles around.
  • Final causes – an emergent, higher-level property of minds, who can make and execute plans.

(This is not the correct way to understand Aristotle, so stay tuned.) So far, so good, I think. The “Newtonian” material, formal and efficient causes give all the information one needs to solve Newton’s laws of motion. But now the confusion starts.

A lecturer giving an introduction to the history of science talks about Plato’s theory of forms as a realm of abstract but still “really” existing ideas. He later seems to suggest that the formal cause of a chalk circle drawn on the board is the idea of the circle in the mind of the lecturer. I ask, “For the circle on the board: is the formal cause the idea in the mind, or the idea of a circle floating out there somewhere in Plato’s realm?”. “Uh … have you been reading the Scholastics?”, he replies. “Nope”. I can’t remember the rest of his answer – it was rather vague. I’ve had a chance to ask a few other philosophers about formal causes since, and their reply usually starts with a grimace.

Enter Feser

So it was that I came to Edward Feser’s Aquinas (A Beginner’s Guide). His exposition is admirably clear, and it is obvious that I must change my understanding of the four causes. In particular, final causes are more than just intentions of minds. There are natural final causes. When a match is struck and fire is created, the efficient cause of the fire is the match. At the same time, the final cause of the match is the fire. The properties of the match “point to” or “are directed at” the creation of fire. Fire is what matches do. The match isn’t just a generic efficient cause that could cause any old thing but just happens to cause fire every time. Its ability to do something is controlled by fire as its final cause. I picture this as the efficient cause being the engine, and the final cause as the steering wheel. The efficient cause does the causing, and the final cause directs the efficient cause towards the production of its effect.

The shocking thing about this, as Feser points out, is that the scientific revolution, despite its PR, didn’t get rid of final causes. Final causes are how Aristotelian metaphysics explains the orderliness of nature. The fact that things keep doing the same kind of things – trees grow, the sun shines, fire burns, dropped stones fall – is because the efficient causes in the world are conjoined (is that the right word?) to final causes, ensuring that they produce consistent effects. The “laws of nature”, to use a slight anachronism, are more about final causes than efficient ones. But that is a topic for another day.

Despite Feser’s clarity, formal causes now get even more confusing. Feser argues that things can be imperfect instantiations of their form. Their form isn’t just how their parts are arranged. It is, in some sense, how they should be arranged, what their essential arrangement is. For example, when a person loses a leg, they don’t take the form of a one-legged person. Their true, two-legged human nature is still there in the person, but it is instantiated imperfectly.

Note that Aristotle differs from Plato in locating the form of a thing in the thing itself, not in some ideal external realm of forms. It isn’t just that the person fails to replicate the ideal of a two-legged Platonic person “up there”. The one-legged person still has the form of a two-legged human. Two-legged-ness is still in there, somewhere.

Last time, I started a review of the Carroll vs. Craig debate with a (mostly historical) overview of the back-and-forth about the beginning of the universe for the last 90 years of modern cosmology. Here, I’ll have a look at fine-tuning. I should start by saying how much I enjoyed the debate. They should do it again some time.

In his speeches, Sean Carroll raised five points (transcript) against the fine-tuning of the universe for intelligent life as an argument for the existence of God. I want to have a look at those five. Carroll (here) and Craig (here, here and here) had a few points to make post-debate, too.

Here is fine-tuning reply number one:

First, I am by no means convinced that there is a fine-tuning problem and, again, Dr. Craig offered no evidence for it. It is certainly true that if you change the parameters of nature our local conditions that we observe around us would change by a lot. I grant that quickly. I do not grant therefore life could not exist. I will start granting that once someone tells me the conditions under which life can exist. What is the definition of life, for example? If it’s just information processing, thinking or something like that, there’s a huge panoply of possibilities. They sound very “science fiction-y” but then again you’re the one who is changing the parameters of the universe. The results are going to sound like they come from a science fiction novel. Sadly, we just don’t know whether life could exist if the conditions of our universe were very different because we only see the universe that we see.

“Interesting” Games

Is the debate over the definition of life a problem for fine-tuning? Sean and I had a brief discussion on this point during my talk at the UCSC Summer School on Philosophy of Cosmology. My response was (roughly) as follows.

Consider chess. In particular, I’m wondering whether minor changes to the laws of chess would result in a similarly interesting game. Wait a minute, you say, you haven’t defined “interesting”. In fact, different people are going to come up with different definitions of interesting. So how can we know whether a game is interesting or not?

It’s a good point, but instead of considering this question in the abstract, consider this particular example. Change one word in the rules of chess: instead of “Knights may jump over other pieces”, we propose that “Bishops may jump over other pieces”. If we were to rewrite the 530-page “Silman’s Complete Endgame Course”, we would need just one page, one paragraph, two sentences: “White bishop moves from f1 to b5. Checkmate.”


My claim is that this particular case is so clear that by any definition of interesting, this is not an interesting game. The game is no more interesting than tossing a coin to see who goes first. It is too simple, too easy.

It’s been a while, but I’ve finally gotten around to jotting down a few thoughts about the Sean Carroll vs. William Lane Craig debate. I previewed the debate here (part one, two, three, four). I thoroughly enjoyed the debate. Future posts will discuss a few of the philosophical questions raised by the debate, but I’ll briefly discuss some of the science in this post. (I didn’t manage to record my talk a few weeks ago, but this post summarises it.)

Firstly, I want to refer you to the much greater expertise of Aron Wall of UC Santa Barbara; his posts on the debate are all worth reading.

(I’m on the “astrophysics” end of cosmology. The beginning of the universe probes the “particle and plasma and quantum gravity and beyond” end of cosmology. I know the field, but not as well as someone like Wall or Carroll.)

No one expects the beginning of the universe!

Regarding the scientific question of the beginning of the universe, here is how I see the state of play. Cosmologists don’t try to put a beginning into their models. For the longest time, even theists who believed that the universe had a beginning acknowledged that the universe shows no sign of such a beginning. We see cycles in nature – the stars go round, the sun goes round, the planets go round, the seasons go around, generations come and go. “There is nothing new under the sun”, says the Teacher in Ecclesiastes. Aristotle argued that the universe is eternal. Aquinas argued that we cannot know that the world had a beginning from the appearance of the universe, but only by revelation.

So when a cosmic beginning first raised its head in cosmology, it was a shock to the system. Interestingly, theists didn’t immediately jump on the beginning as an argument for God. Lemaître, one of the fathers of the Big Bang theory and a priest, said:

“As far as I can see, such a theory [big bang] remains entirely outside any metaphysical or religious question.”

In 1951, Pope Pius XII declared that Lemaître’s theory provided a scientific validation for the existence of God and Catholicism. However, Lemaître resented the Pope’s proclamation. He persuaded the Pope to stop making proclamations about cosmology.

The philosophical defence of the argument from the beginning of the universe to God (the Kalam cosmological argument) starts essentially with Craig himself in 1979, half a century after the Big Bang theory was born.

In fact, the more immediate response came from atheist cosmologists, who were keen to remove the beginning. Fred Hoyle devised the steady state theory to try to remove the beginning from cosmology, noting that:

“… big bang theory requires a recent origin of the Universe that openly invites the concept of creation”. His steady-state theory was attacked “because we were touching on issues that threatened the theological culture on which western civilisation was founded.” (quoted in Holder).

Tipping the Scales

But what of the beginning in the Big Bang model? Singularities in general relativity weren’t taken seriously at first. Einstein never believed in the singularities in black holes. Singularities were believed to be the result of an unphysical assumption of perfect spherical symmetry. In Newtonian gravity, a perfectly spherical, pressure-free static sphere will collapse to a singularity of infinite density. However, this is avoided by the slightest perturbation of the sphere, or by the presence of pressure. A realistic Newtonian ball of gas won’t form a singularity, and the same was assumed of Einstein’s theory of gravity (General Relativity).

The next 80 years of cosmology sees the scales tipping back and forth, for and against the beginning.
