
Research question: Do telepathic powers exist? Such powers would be in high demand, so much so that telepaths might become paranoid and keep their abilities secret. Here, I propose a method to identify hidden telepaths.

Another video of one of my talks. The goal is to take Bayesian probability theory as it is used in the physical sciences and see if it can make sense of postulating and testing a multiverse theory.

As part of a project called Establishing the Philosophy of Cosmology, I attended a conference in Tenerife, Spain in September last year. The line-up of fellow attendees was, frankly, intimidating. Nevertheless, I had a wonderful time, learned a lot and presented some of my own ideas towards the end of the conference.

The videos are now available on YouTube here; talk slides are here. Just about all the talks are worth a listen – I’ve been enjoying listening to them again. Here are a few highlights.

Joel Primack – Cosmological Structure Formation. A nice introduction to how the universe made its galaxies.

Barry Loewer – Metaphysics of Laws & Time in Cosmology. A very helpful talk on how to think about the laws of nature, and the place of probabilities therein.

George Ellis – Observability and Testability in cosmology and Cosmology: what are the Limits of Science. He made an important distinction between “big-C” Cosmology, whose purview is all of reality, and “little-c” cosmology, the branch of science concerned with what physics and physical observations can say about the universe as a whole.

Sean Carroll – What Happens Inside the Wave Function? (I’ll let Sean explain here.)

The talks by Don Page, Bob Wald, Jim Hartle, Joe Silk, David Wallace, David Albert, Chris Smeenk, Brian Pitts, Tom Banks, and Jean-Philippe Uzan were very interesting, as were the discussion panels of Dean Zimmerman, Jennan Ismael & Tim Maudlin, and Janna Levin, Priya Natarajan, Claus Beisbart & Pedro Ferreira.

Here’s mine.  Enjoy.

(My sister is a TV journalist. I’m going to have to get some tips about not fidgeting, what to do with my hands, and not flubbing my words. I say “quantise” instead of “quantify” at one point. *cringe* My good wife has seen me give public lectures, and has commented that I appear to be on speed.)

I’ve started a new project at the University of Sydney. I’m still at the same desk, but I’ll be doing something a bit different. More details soon, but basically I’ll be using cosmological simulations of galaxy formation to try to make precise the connection between the fundamental parameters of cosmology – like the density of matter, the lumpiness of the early universe and the cosmological constant – and the conditions required by stars, and hence anything that requires stars.

For a brief overview of why anyone would do this, here’s a short presentation I gave at the Australian Academy of Science’s “Australian Frontiers of Science – The edges of astronomy” meeting in December 2014. My talk starts at 25:09. I think it’s cued up below. The other talks are also well worth your time:

The edge of the Universe—a fundamental limit on how much we can know? – Associate Professor Tamara Davis
The small-scale spatial limits to the Universe – Dr Alessandro Fedrizzi
The edges of knowledge—the ‘physics is done’ syndrome – Associate Professor Michael Murphy

Gabriel Popkin has written a nice overview of some recent work on the fine-tuning of the universe for intelligent life at insidescience.org, titled “A More Finely Tuned Universe“. It’s well worth a read, and features a few quotes from yours truly.

It details the work of Ulf Meissner and colleagues on the dependence of the Hoyle resonance in carbon on the masses of the up and down quarks. The quark masses are fundamental parameters of the standard model, meaning that we can measure them, but the model itself can’t predict them. They are just arbitrary constants, so far as the equations are concerned. Their work shows that a change in the quark masses of ~3 percent with respect to their values in this universe will not result in the universe producing substantially less carbon or oxygen, so this is something of a safe zone. As the article quotes me as saying, I hope that they continue to push things further, to see if and where the universe really starts to change.

I have a problem, however, with the following quote:

David Kaplan, a particle physicist at Johns Hopkins University in Baltimore, said two to three percent gives the quark mass a lot of wiggle room compared to other much more finely tuned parameters within physics, including the cosmological constant.

(Just to note: I was quoted accurately in the article, so probably the other scientists were too. This isn’t always the case in science journalism, so I’m responding here to the quote, not necessarily to the scientist.)

The three percent change in the quark masses is with respect to their values in this universe. This is a useful way to describe the carbon-based-life-permitting range, but gives a misleading impression of its size. For fine-tuning, we need to compare this range to the set of possible values of the quark masses. This set of possible values – before you ask again, Jeff Shallit – is defined by the mathematical model. It is part of our ideas about how the universe works. If you’ve got a better idea, a natural, simple idea for why constants like the quark masses must have the values they do, then write it down, derive the constants, and collect your Nobel Prize. The standard model of particle physics gives no reason why the constants take any particular value over their possible range, that is, the range in which the model is well-defined and we can calculate its predictions. Moreover, in testing our ideas in a Bayesian framework, we cannot cheat by arbitrarily confining our free parameters to the neighbourhood of their known value. The prior is broad. Fine-tuned free parameters make their theories improbable.

The smallest possible mass is zero; the photon, for example, is massless. The largest mass that a particle can have in the standard model is the Planck mass. Larger particles are predicted to become their own black hole, so we would need a quantum theory of gravity to describe them. Alas, we’re still working on that.

3% of the quark masses’ value in our universe is one part in 10^23 (one followed by 23 zeros) of the Planck mass. Technically, the down quark mass is (roughly) the product of the “Higgs vev” and a dimensionless parameter called the Yukawa parameter. The possible range of the Higgs vev extends to the Planck mass; why it is so much smaller than the Planck mass is known as the hierarchy problem. The quark Yukawa parameters are about 3 × 10^-5, which leads Leonard Susskind to comment (in The Cosmic Landscape),

… the up- and down-quarks … are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one.
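To make the “one part in 10^23” figure above concrete, here is a rough back-of-the-envelope check. The approximate input values (down-quark mass, Planck mass, Higgs vev) are my own commonly quoted figures, not taken from the article:

```python
import math

# Back-of-the-envelope check of the numbers quoted above. The inputs are
# rough, commonly quoted values (my assumption, not from the article):
# down-quark mass ~4.7 MeV, Planck mass ~1.22e19 GeV, Higgs vev ~246 GeV.

m_down_MeV = 4.7                 # approximate down-quark mass
m_planck_MeV = 1.22e19 * 1e3     # Planck mass, converted from GeV to MeV
higgs_vev_MeV = 246e3            # Higgs vacuum expectation value, in MeV

# A 3% change in the down-quark mass, expressed as a fraction of the Planck mass
window = 0.03 * m_down_MeV
print(window / m_planck_MeV)     # ~1.2e-23, i.e. roughly one part in 10^23

# Rough Yukawa parameter: m = y * vev / sqrt(2), so y = m * sqrt(2) / vev
y_down = m_down_MeV * math.sqrt(2) / higgs_vev_MeV
print(y_down)                    # ~2.7e-5, i.e. about 3 x 10^-5, as quoted above
```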

In my paper on fine-tuning, I discuss the “cheap binoculars fallacy”: you can make anything look big if you just zoom in enough. Actually, the fine-tuning of the cosmological constant is a good example of avoiding this fallacy. Relative to its value in our universe, the cosmological constant doesn’t seem very fine-tuned at all. Forget 3%; it can increase by a factor of ten, or take on a similar but negative value, and the universe would still contain galaxies and stars. No one thinks that this is the answer to the cosmological constant problem, because comparing the life-permitting range with the value in our universe is irrelevant. When we compare to the range the constant could take in our models, we see fine-tuning on the order of one part in 10^120.
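For a rough sense of where that 10^120 figure comes from, here is a sketch comparing the observed vacuum energy density to the Planck-scale expectation. The input values and conventions are mine, not from the post, and different conventions shift the exponent by a couple of orders of magnitude:

```python
# Rough sketch of the comparison behind the ~10^120 figure (my own inputs
# and conventions, not from the post). The observed dark-energy density is
# roughly (2.3e-3 eV)^4; the "natural" Planck-scale expectation is the
# Planck mass to the fourth power.

rho_observed_GeV4 = (2.3e-12) ** 4    # ~2.8e-47 GeV^4 (2.3e-3 eV in GeV, to the 4th power)
rho_planck_GeV4 = (1.22e19) ** 4      # ~2.2e76 GeV^4

print(rho_observed_GeV4 / rho_planck_GeV4)   # ~1.3e-123: roughly 120 orders of magnitude
```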

Later in the article, Kaplan states:

“Maybe if you change the quark masses not by three percent but by 50 percent you could end up with a situation where life as we know it couldn’t exist, but life as we don’t know it could exist,”

I agree with that sentence, so long as it starts with “Maybe”. But the state of understanding of our models is such that the burden of proof is now firmly on the “life as we don’t know it” claim. There is zero evidence for it, and piles of evidence against it. For example, one doesn’t have to change the quark masses by very much to obliterate nuclear binding. No nuclei. No atoms. No chemistry. No periodic table. No stars. No planets. Just hydrogen gas. These calculations have been done; see, for example, “Constraints on the variability of quark masses from nuclear binding” by Damour and Donoghue. If they are wrong, then write a paper about it and send it to Physical Review D. Possibilities are cheap.

Of course, when Geraint Lewis and I publish our fine-tuning book, all this will be sorted out once and for all, bringing fame and fortune and a movie deal. Editing continues, so stay tuned.

Apologies for the blogging drought. More soon. I couldn’t help but comment on something in the news recently.

Doing the rounds this week is a Wall Street Journal article by Eric Metaxas titled “Science Increasingly Makes the Case for God“. A few thoughts.

“Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart.”

I’m really hoping that his reference for the “200” parameters isn’t Hugh Ross, whom I’ve commented on before. The fine-tuning of the universe for intelligent life is about the fundamental parameters of the laws of nature as we know them, and there are only about 30 of those. Also, exactly zero fine-tuning cases require a parameter to be “perfectly” anything. There is always a non-zero (if sometimes very small) life-permitting window.

The fine-tuning for planets is a bit of a non-starter. How many planets are there in the universe? We don’t know, because we don’t know how large the universe is. There is no reason to believe that the size of the observable universe is any indication of the size of the whole universe.

Without a massive planet like Jupiter nearby, whose gravity will draw away asteroids, a thousand times as many would hit Earth’s surface.

This turns out to be a bit of a myth, however widely reported. Jonathan Horner and Barrie Jones used a set of simulations to test this idea, and their results suggest that the opposite may be true.

Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.

I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)

On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:

  1. A list of the fundamental constituents of physical reality and their properties.
  2. A set of mathematical equations describing how these entities change, interact and rearrange.
  3. A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).

In short, what is there, what does it do, and in what state did it start?

Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.

(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)

Non-informative Probabilities

In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.

In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:

“if A then B”, A -> B, and p(B|A) = 1

are the same statement. Similarly,

“if A then not B”, A -> ~B, and p(B|A) = 0

are the same statement.

Between these extremes of logical implication, probability provides degrees of plausibility.
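As a toy illustration (my own example, not from the post), conditional probabilities computed by simple enumeration reproduce both logical extremes and the graded cases in between:

```python
from fractions import Fraction

# Toy illustration: a fair six-sided die, with p(B|A) computed by counting outcomes.
outcomes = range(1, 7)

def p(B, A):
    """Conditional probability p(B|A) by enumeration over the outcomes satisfying A."""
    given = [x for x in outcomes if A(x)]
    return Fraction(sum(1 for x in given if B(x)), len(given))

is_even = lambda x: x % 2 == 0
is_two  = lambda x: x == 2
is_odd  = lambda x: x % 2 == 1

print(p(is_even, is_two))   # 1   -- "the die shows 2" implies "the die shows an even number"
print(p(is_two, is_odd))    # 0   -- an odd roll rules out a 2
print(p(is_two, is_even))   # 1/3 -- a degree of plausibility between the extremes
```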

It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.

In Chapter 9 of his textbook “Probability Theory” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labeled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)

When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is not to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.

At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
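Here is a minimal sketch of the poorly informed robot’s assignment (my own illustration of the principle of indifference): with nothing to go on but the number of outcomes N, every outcome gets probability 1/N, so two poorly informed robots differ only through N.

```python
# Minimal sketch of the "poorly informed robot": given only the number of
# possible outcomes N, indifference assigns probability 1/N to each outcome.

def poorly_informed_assignment(n_outcomes):
    """Equal probabilities over n_outcomes arbitrarily labelled outcomes."""
    return [1.0 / n_outcomes] * n_outcomes

print(poorly_informed_assignment(7)[0])       # ~0.143
print(poorly_informed_assignment(10_000)[0])  # 0.0001 -- a different state of knowledge
```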

A particularly thorny case is characterising ignorance over a continuous parameter, since then there are infinitely many possibilities. When a probability distribution for a certain parameter is not informed by data but only “prior” information, it is called a “non-informative prior”. Researchers continue the search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.
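For continuous parameters, even ignorance has to be expressed with respect to some measure. As a standard illustration (my example, not drawn from the post), a prior that is flat in a positive parameter and a prior that is flat in its logarithm encode different states of ignorance, and so give different answers to the same question:

```python
import math

# Standard illustration (not from the post): two candidate "non-informative"
# priors for a positive continuous parameter x in [1, 1000], asking
# "what is the probability that x < 10?".

lo, hi = 1.0, 1000.0

# Prior flat in x: p(x) = 1 / (hi - lo)
p_flat = (10.0 - lo) / (hi - lo)

# Prior flat in log(x), i.e. p(x) proportional to 1/x (a log-uniform prior,
# often proposed as non-informative for scale parameters)
p_log = (math.log(10.0) - math.log(lo)) / (math.log(hi) - math.log(lo))

print(p_flat)  # ~0.009
print(p_log)   # ~0.333 -- same "ignorance", very different probabilities
```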

My fine-tuning interlocutor Prof Victor Stenger died a few weeks ago, at age 79.

There’s a tradition in cricket, especially in Australia, that whatever happens on the field and whatever is said during the battle, you can always sit down at the end of the day and have a beer. I never met Prof Stenger, but I’d have liked to buy him a beer. We’d chat about fine-tuning eventually, of course, but first I’d love to hear the story of how he came to be sued by Uri Geller. Anyone who’s annoyed that charlatan enough to end up in court has clearly done something very right. Then I’d ask about Super-Kamiokande. And then what perspective his electrical engineering training gave him on modern physics. Then about the future of big experiments in particle physics and “big science” in general. Then about the time he met Einstein. Maybe we’d get around to fine-tuning.

While searching for news about his death, I found his final Huffpo article, “Myths of Physics: 2. Gravity Is Much Weaker Than Electromagnetism“. It’s about the gravitational fine-structure constant, which is (usually) defined to be the square of the ratio of the proton mass to the Planck mass. Its value is about 6 x 10^-39. The article states that “It is proportional to the square of the proton mass and has a value 23 orders of magnitude less than alpha.” Actually, it’s 36 orders of magnitude. I assume that’s a typo. (Someone tell Huffpo.)
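A quick numerical check of those figures; the approximate input values are mine, not Stenger’s:

```python
# Quick check of the figures above. Approximate inputs (my values):
# proton mass ~0.938 GeV, Planck mass ~1.22e19 GeV, alpha ~1/137.

m_proton_GeV = 0.938
m_planck_GeV = 1.22e19
alpha = 1 / 137.036

alpha_G = (m_proton_GeV / m_planck_GeV) ** 2
print(alpha_G)            # ~5.9e-39, i.e. about 6 x 10^-39
print(alpha / alpha_G)    # ~1.2e36 -- 36 orders of magnitude below alpha, not 23
```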

More interesting is Stenger’s final comments. In the article, he points out that what is often called the “weakness of gravity” is really the smallness of the masses of fundamental particles compared to the Planck mass. In his book “The Fallacy of Fine-Tuning”, he states:

All these masses [of fundamental particles] are orders of magnitude less than the Planck mass, and no fine-tuning was necessary to make gravity much weaker than electromagnetism. This happened naturally and would have occurred for a wide range of mass values, which, after all, are just small corrections to their intrinsically zero masses.

In reply, my paper said:

The [hierarchy] problem (as ably explained by Martin, 1998) is that the Higgs mass (squared) receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous – their natural scale is the Planck scale, so that these contributions must be fine-tuned to mutually cancel to one part in (m_Pl/m_Higgs)^2 = 10^32. …

It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the “natural” mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them “small” doesn’t explain anything.

Interestingly, Stenger’s Huffpo article states that:

… a good question is: Why are the masses of elementary particles so small compared to the Planck mass? This is a major puzzle called the hierarchy problem that physicists have still not solved. However, it is to be noted that, in the standard model, all elementary particle masses are intrinsically zero and their masses are small corrections resulting from the Higgs mechanism and other processes. The hierarchy problem can be recast to ask why the corrections are not on the order of the Planck mass.

Now, unless I’m seeing things (always a possibility), that last sentence sounds a lot more like what I said than what he said in his book. Of course, I don’t think he’s conceding a solid case of fine-tuning. But he is at least acknowledging that physics as we know it hasn’t solved the fine-tuning of the masses of fundamental particles. I wonder whether he thought that the solution would come from particle physics (e.g. supersymmetry) or from the multiverse + anthropic selection.

In any case, a bunch of people who knew him have left comments over at Friendly Atheist. Seems like a nice bloke.
