Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.

I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)

On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:

  1. A list of the fundamental constituents of physical reality and their properties.
  2. A set of mathematical equations describing how these entities change, interact and rearrange.
  3. A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).

In short, what is there, what does it do, and in what state did it start?

Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.

(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)

Non-informative Probabilities

In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.

In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:

“if A then B”, A -> B, and p(B|A) = 1

are the same statement. Similarly,

“if A then not B”, A -> ~B, and p(B|A) = 0

are the same statement.

Between these extremes of logical implication, probability provides degrees of plausibility.
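To make the logic-to-probability correspondence concrete, here’s a toy sketch in Python (my illustration; the numbers are arbitrary). We encode a joint distribution over two propositions in which “if A then B” holds, and the product rule then recovers the deductive extreme:

```python
# A joint distribution over propositions A and B, chosen so that
# "if A then B" holds: the case (A true, B false) has probability zero.
# The other entries are arbitrary.
joint = {(True, True): 0.3, (True, False): 0.0,
         (False, True): 0.2, (False, False): 0.5}

def p_b_given_a(b_val, a_val):
    """p(B = b_val | A = a_val), from the product rule p(A and B) = p(B|A) p(A)."""
    p_a = sum(p for (a, _), p in joint.items() if a == a_val)
    return joint[(a_val, b_val)] / p_a

print(p_b_given_a(True, True))   # 1.0 -- the deductive extreme: A implies B
print(p_b_given_a(True, False))  # ~0.29 -- in between: a mere degree of plausibility
```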

It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.

In Chapter 9 of his textbook “Probability Theory” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labeled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)

When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is not to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.

At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
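To illustrate (my toy example, not Jaynes’s): both robots assign uniform probabilities, but the entropy of the assignment, log N, quantifies just how ignorant each state of knowledge is.

```python
import math

def poorly_informed(n):
    """The principle of indifference: equal probability for each of n outcomes."""
    probs = [1.0 / n] * n
    entropy = -sum(p * math.log(p) for p in probs)  # equals log(n), in nats
    return probs, entropy

for n in (7, 10_000):
    _, h = poorly_informed(n)
    print(f"N = {n:>6}: p(each) = {1/n:.2e}, entropy = {h:.2f} nats")
# Both assignments are uniform, but N = 7 is a more informed state of
# knowledge than N = 10,000: its entropy (missing information) is smaller.
```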

A particularly thorny case is characterising ignorance over a continuous parameter, since then there are an infinite number of possibilities. When a probability distribution for a certain parameter is not informed by data but only “prior” information, it is called a “non-informative prior”. Researchers continue the search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.
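A standard example (my addition; see any discussion of the Jeffreys prior): for a scale parameter sigma, the usual non-informative choice is p(sigma) ∝ 1/sigma, which puts equal probability mass in every decade and so encodes ignorance of the scale itself.

```python
import math

def jeffreys_mass(a, b):
    """Unnormalised mass of the Jeffreys prior p(sigma) ~ 1/sigma between a and b."""
    return math.log(b / a)  # the integral of 1/sigma from a to b

print(jeffreys_mass(1, 10))      # 2.303...
print(jeffreys_mass(100, 1000))  # 2.303... -- the same mass in every decade,
# so the prior is invariant under rescaling sigma -> c * sigma: it does not
# smuggle in a preferred scale for the parameter.
```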

My fine-tuning interlocutor Prof Victor Stenger died a few weeks ago, at age 79.

There’s a tradition in cricket, especially in Australia, that whatever happens on the field and whatever is said during the battle, you can always sit down at the end of the day and have a beer. I never met Prof Stenger, but I’d have liked to buy him a beer. We’d chat about fine-tuning eventually, of course, but first I’d love to hear the story about how he came to be sued by Uri Geller. Anyone who’s annoyed that charlatan enough to end up in court has clearly done something very right. Then I’d ask about Super-Kamiokande. And then what perspective his electrical engineering training gave him on modern physics. Then about the future of big experiments in particle physics and “big science” in general. Then about the time he met Einstein. Maybe we’d get around to fine-tuning.

While searching for news about his death, I found his final Huffpo article, “Myths of Physics: 2. Gravity Is Much Weaker Than Electromagnetism”. It’s about the gravitational fine-structure constant, which is (usually) defined to be the square of the ratio of the proton mass to the Planck mass. Its value is about 6 x 10^-39. The article states that “It is proportional to the square of the proton mass and has a value 23 orders of magnitude less than alpha.” Actually, it’s 36 orders of magnitude. I assume that’s a typo. (Someone tell Huffpo.)
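A quick back-of-the-envelope check with standard values (my own sketch, not from the article) confirms both numbers:

```python
import math

m_proton = 1.6726e-27  # proton mass, kg
m_planck = 2.1764e-8   # Planck mass, kg
alpha    = 7.2974e-3   # fine-structure constant, ~1/137

alpha_G = (m_proton / m_planck) ** 2
print(f"alpha_G = {alpha_G:.1e}")  # ~ 5.9e-39, as quoted
print(f"alpha / alpha_G = 10^{math.log10(alpha / alpha_G):.1f}")  # ~ 10^36.1
# i.e. 36 orders of magnitude below alpha, not 23.
```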

More interesting are Stenger’s final comments. In the article, he points out that what is often called the “weakness of gravity” is really the smallness of the masses of fundamental particles compared to the Planck mass. In his book “The Fallacy of Fine-Tuning”, he states:

All these masses [of fundamental particles] are orders of magnitude less than the Planck mass, and no fine-tuning was necessary to make gravity much weaker than electromagnetism. This happened naturally and would have occurred for a wide range of mass values, which, after all, are just small corrections to their intrinsically zero masses.

In reply, my paper said:

The [hierarchy] problem (as ably explained by Martin, 1998) is that the Higgs mass (squared) receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous – their natural scale is the Planck scale, so that these contributions must be fine-tuned to mutually cancel to one part in (m_Pl/m_Higgs)^2 = 10^32. …

It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the “natural” mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them “small” doesn’t explain anything.
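As a rough check on the size of that cancellation (my sketch, not from the paper; the exact power of ten depends on whether one uses the Planck mass or the reduced Planck mass, and on the Higgs mass assumed):

```python
m_planck         = 1.22e19  # Planck mass, GeV
m_planck_reduced = 2.44e18  # reduced Planck mass, GeV
m_higgs          = 125.0    # Higgs boson mass, GeV

print(f"(m_Pl / m_Higgs)^2     ~ {(m_planck / m_higgs) ** 2:.0e}")          # ~ 1e34
print(f"(m_Pl,red / m_Higgs)^2 ~ {(m_planck_reduced / m_higgs) ** 2:.0e}")  # ~ 4e32
# Either way, the corrections must cancel to one part in ~10^32 - 10^34.
```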

Interestingly, Stenger’s Huffpo article states that:

… a good question is: Why are the masses of elementary particles so small compared to the Planck mass? This is a major puzzle called the hierarchy problem that physicists have still not solved. However, it is to be noted that, in the standard model, all elementary particle masses are intrinsically zero and their masses are small corrections resulting from the Higgs mechanism and other processes. The hierarchy problem can be recast to ask why the corrections are not on the order of the Planck mass.

Now, unless I’m seeing things (always a possibility), that last sentence sounds a lot more like what I said than what he said in his book. Of course, I don’t think he’s conceding a solid case of fine-tuning. But he is at least acknowledging that physics as we know it hasn’t solved the fine-tuning of the masses of fundamental particles. I wonder whether he thought that the solution would come from particle physics (e.g. supersymmetry) or from the multiverse + anthropic selection.

In any case, a bunch of people who knew him have left comments over at Friendly Atheist. Seems like a nice bloke.

Subtitle: how a modern physicist is liable to misunderstand Aristotle. This post was inspired by a very interesting post by Edward Feser here.

I have tried to understand Aristotle. What follows is my attempt to give full expression to my own ignorance. One of the conclusions I have drawn from my forays into Ancient and Medieval philosophy is that these are great thinkers.

Here is the standard illustration for Aristotle’s four causes. Consider a marble statue. The statue has four causes. The material cause is the marble, the material out of which the thing is made. The formal cause is the arrangement of the statue, its geometrical shape. The efficient cause is the “doer”, the sculptor, who arranges the material into the desired shape. The final cause of the statue is the purpose for which the sculptor has created the statue, e.g. to look beautiful in the garden.

Aristotle and Newton

Right, I think. My physicist training naturally has me try to cast my Newtonian (occasionally Einsteinian, sometimes quantum) view of the world in these categories. (This might be a bad idea, I think, given the discontinuity between Aristotle and Newton. Still, I’ll give it a go. Also, I’ll worry about relativity and quantum mechanics later, if at all.) So,

  • Material cause – the particles of matter out of which physical things are made.
  • Formal cause – the arrangement of those particles. Mathematically, a list of the position and velocity of each particle at some time (x_i(t), v_i(t))
  • Efficient cause – Newtonian forces, which move particles around.
  • Final cause – an emergent, higher-level property of minds, which can make and execute plans.

(This is not the correct way to understand Aristotle, so stay tuned.) So far, so good, I think. The “Newtonian” material, formal and efficient causes give all the information one needs to solve Newton’s laws of motion. But now the confusion starts.

A lecturer giving an introduction to the history of science talks about Plato’s theory of forms as a realm of abstract but still “really” existing ideas. He later seems to suggest that the formal cause of a chalk circle drawn on the board is the idea of the circle in the mind of the lecturer. I ask, “For the circle on the board: is the formal cause the idea in the mind, or the idea of a circle floating out there somewhere in Plato’s realm?”. “Uh … have you been reading the Scholastics?”, he replies. “Nope”. I can’t remember the rest of his answer – it was rather vague. I’ve had a chance to ask a few other philosophers about formal causes since, and their reply usually starts with a grimace.

Enter Feser

So it was that I came to Edward Feser’s Aquinas (A Beginner’s Guide). His exposition is admirably clear, and it is obvious that I must change my understanding of the four causes. In particular, final causes are more than just intentions of minds. There are natural final causes. When a match is struck and fire is created, the efficient cause of the fire is the match. At the same time, the final cause of the match is the fire. The properties of the match “point to” or “are directed at” the creation of fire. Fire is what matches do. The match isn’t just a generic efficient cause that could cause any old thing but just happens to cause fire every time. Its ability to do something is controlled by fire as its final cause. I picture the efficient cause as the engine, and the final cause as the steering wheel. The efficient cause does the causing, and the final cause directs the efficient cause towards the production of its effect.

The shocking thing about this, as Feser points out, is that the scientific revolution, despite its PR, didn’t get rid of final causes. Final causes are how Aristotelian metaphysics explains the orderliness of nature. The fact that things keep doing the same kind of things – trees grow, the sun shines, fire burns, dropped stones fall – is because the efficient causes in the world are conjoined (is that the right word?) to final causes, ensuring that they produce consistent effects. The “laws of nature”, to use a slight anachronism, are more about final causes than efficient ones. But that is a topic for another day.

Despite Feser’s clarity, formal causes now get even more confusing. Feser argues that things can be imperfect instantiations of their form. Their form isn’t just how their parts are arranged. It is, in some sense, how they should be arranged, what their essential arrangement is. For example, when a person loses a leg, they don’t take the form of a one-legged person. Their true, two-legged human nature is still there in the person, but it is instantiated imperfectly.

Note that Aristotle differs from Plato in locating the form of a thing in the thing itself, not in some ideal external realm of forms. It isn’t just that the person fails to replicate the ideal of a two-legged Platonic person “up there”. The one-legged person still has the form of a two-legged human. Two-legged-ness is still in there, somewhere.

Last time, I started a review of the Carroll vs. Craig debate with a (mostly historical) overview of the back-and-forth about the beginning of the universe for the last 90 years of modern cosmology. Here, I’ll have a look at fine-tuning. I should start by saying how much I enjoyed the debate. They should do it again some time.

In his speeches, Sean Carroll raised five points (transcript) against the fine-tuning of the universe for intelligent life as an argument for the existence of God. I want to have a look at those five. Carroll (here) and Craig (here, here and here) had a few points to make post-debate, too.

Here is fine-tuning reply number one:

First, I am by no means convinced that there is a fine-tuning problem and, again, Dr. Craig offered no evidence for it. It is certainly true that if you change the parameters of nature our local conditions that we observe around us would change by a lot. I grant that quickly. I do not grant therefore life could not exist. I will start granting that once someone tells me the conditions under which life can exist. What is the definition of life, for example? If it’s just information processing, thinking or something like that, there’s a huge panoply of possibilities. They sound very “science fiction-y” but then again you’re the one who is changing the parameters of the universe. The results are going to sound like they come from a science fiction novel. Sadly, we just don’t know whether life could exist if the conditions of our universe were very different because we only see the universe that we see.

“Interesting” Games

Is the debate over the definition of life a problem for fine-tuning? Sean and I had a brief discussion on this point during my talk at the UCSC Summer School on Philosophy of Cosmology. My response was (roughly) as follows.

Consider chess. In particular, I’m wondering whether minor changes to the laws of chess would result in a similarly interesting game. Wait a minute, you say, you haven’t defined “interesting”. In fact, different people are going to come up with different definitions of interesting. So how can we know whether a game is interesting or not?

It’s a good point, but instead of considering this question in the abstract, consider this particular example. Change one word in the rules of chess: instead of “Knights may jump over other pieces”, we propose that “Bishops may jump over other pieces”. If we were to rewrite the 530-page “Silman’s Complete Endgame Course”, we would need just one page, one paragraph, two sentences: “White bishop moves from f1 to b5. Checkmate.”


My claim is that this particular case is so clear that by any definition of interesting, this is not an interesting game. The game is no more interesting than tossing a coin to see who goes first. It is too simple, too easy.

It’s been a while, but I’ve finally gotten around to jotting down a few thoughts about the Sean Carroll vs. William Lane Craig debate. I previewed the debate here (part one, two, three, four). I thoroughly enjoyed the debate. Future posts will discuss a few of the philosophical questions raised by the debate, but I’ll briefly discuss some of the science in this post. (I didn’t manage to record my talk a few weeks ago, but this post summarises it.)

Firstly, I want to refer you to the much greater expertise of Aron Wall of UC Santa Barbara, who has written a series of posts on the debate. I’ll list them all because they’re great.

(I’m on the “astrophysics” end of cosmology. The beginning of the universe probes the “particle and plasma and quantum gravity and beyond” end of cosmology. I know the field, but not as well as someone like Wall or Carroll.)

No one expects the beginning of the universe!

Regarding the scientific question of the beginning of the universe, here is how I see the state of play. Cosmologists don’t try to put a beginning into their models. For the longest time, even theists who believed that the universe had a beginning acknowledged that the universe shows no sign of such a beginning. We see cycles in nature – the stars go round, the sun goes round, the planets go round, the seasons go around, generations come and go. “There is nothing new under the sun”, says the Teacher in Ecclesiastes. Aristotle argued that the universe is eternal. Aquinas argued that we cannot know that the world had a beginning from the appearance of the universe, but only by revelation.

So when a cosmic beginning first raised its head in cosmology, it was a shock to the system. Interestingly, theists didn’t immediately jump on the beginning as an argument for God. Lemaître, one of the fathers of the Big Bang theory and a priest, said:

“As far as I can see, such a theory [big bang] remains entirely outside any metaphysical or religious question.”

In 1951, Pope Pius XII declared that Lemaître’s theory provided a scientific validation for the existence of God and Catholicism. However, Lemaître resented the Pope’s proclamation, and persuaded him to stop making pronouncements about cosmology.

The philosophical defence of the argument from the beginning of the universe to God (the Kalam cosmological argument) starts essentially with Craig himself in 1979, half a century after the Big Bang theory was born.

In fact, the more immediate response came from atheist cosmologists, who were keen to remove the beginning. Fred Hoyle devised the steady state theory to try to remove the beginning from cosmology, noting that:

“… big bang theory requires a recent origin of the Universe that openly invites the concept of creation”. His steady-state theory was attacked “because we were touching on issues that threatened the theological culture on which western civilisation was founded.” (quoted in Holder).

Tipping the Scales

But what of the beginning in the Big Bang model? Singularities in general relativity weren’t taken seriously at first. Einstein never believed in the singularities in black holes. Singularities were believed to be the result of an unphysical assumption of perfect spherical symmetry. In Newtonian gravity, a perfectly spherical, pressure-free static sphere will collapse to a singularity of infinite density. However, this is avoided by the slightest perturbation of the sphere, or by the presence of pressure. A realistic Newtonian ball of gas won’t form a singularity, and the same was assumed of Einstein’s theory of gravity (General Relativity).
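For concreteness, the collapse time of such an idealised sphere is the standard Newtonian free-fall time, t_ff = sqrt(3 pi / (32 G rho)). A minimal sketch (mine, with an illustrative density):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def free_fall_time(rho):
    """Collapse time (s) of a static, pressure-free, uniform sphere of density rho (kg/m^3)."""
    return math.sqrt(3 * math.pi / (32 * G * rho))

# Illustration: a sphere of water-like density collapses in about 35 minutes.
print(f"{free_fall_time(1000.0) / 60:.0f} minutes")
# Note that the radius drops out: the free-fall time depends only on density.
```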

The next 80 years of cosmology saw the scales tipping back and forth, for and against the beginning.

I’ve been a bit quiet around here, lately. Travel is my excuse. I’m currently in Cambridge, collaborating with a few colleagues on a project. I’ll be back in Sydney next week, so if you’re near Epping on Friday 4th July 2014, why not come along to hear me speak at the Astronomical Society of NSW:

“What Happened at the Big Bang?”

Friday 4th July 2014 – 8:00pm
Topic: What happened at the Big Bang?
Speaker: Dr Luke Barnes, University of Sydney
Venue: Epping Creative Centre – 26 Stanley Road, Epping

Abstract:
Was the big bang the beginning of the universe? Does the big bang represent the beginning of time itself? These are age-old questions, and they have been remarkably informed by modern cosmology.

I will answer these questions once and for all.

I will follow the theorems, evidence and hints that lead us back in time. In particular, I will discuss the expansion of space, the physics of the very early universe, the recent BICEP2 results and cosmic inflation, the effect of quantum physics, and the reason (or one of them) why Stephen Hawking is famous.

Biography:
Dr Luke A. Barnes is a postdoctoral researcher at the Sydney Institute for Astronomy. After undergraduate studies at the University of Sydney, Dr. Barnes earned a scholarship to complete a PhD at the University of Cambridge. He worked as a researcher at the Swiss Federal Institute of Technology (ETH), before returning to Sydney in 2011. He has published papers on galaxy formation and cosmology, and recently has taken an interest in the fine-tuning of the universe for intelligent life. He blogs at letterstonature.wordpress.com.

The Conversation has published an article of mine, co-authored with Geraint Lewis, titled “Have cosmologists lost their minds in the multiverse?”. It’s a quick introduction to the multiverse in light of the recent BICEP2 results. Comments welcome!
