
## Feser on Krauss

Having had my appetite for the Middle Ages whetted by Edward Grant’s excellent book A History of Natural Philosophy: From the Ancient World to the Nineteenth Century, I recently read Edward Feser’s Aquinas (A Beginner’s Guide). And, on the back of that, his book The Last Superstition. If I ever work out what a formal cause is, I might post a review.

In the meantime, I’ve quite enjoyed some of his blog posts about the philosophical claims of Lawrence Krauss. This is something I’ve blogged about a few times. His most recent post on Krauss contains this marvellous passage.

Krauss asserts:

“[N]othing is a physical concept because it’s the absence of something, and something is a physical concept.”

The trouble with this, of course, is that “something” is not a physical concept. “Something” is what Scholastic philosophers call a transcendental, a notion that applies to every kind of being whatsoever, whether physical or non-physical — to tables and chairs, rocks and trees, animals and people, substances and accidents, numbers, universals, and other abstract objects, souls, angels, and God. Of course, Krauss doesn’t believe in some of these things, but that’s not to the point. Whether or not numbers, universals, souls, angels or God actually exist, none of them would be physical if they existed. But each would still be a “something” if it existed. So the concept of “something” is broader than the concept “physical,” and would remain so even if it turned out that the only things that actually exist are physical.

No atheist philosopher would disagree with me about that much, because it’s really just an obvious conceptual point. But since Krauss and his fans have an extremely tenuous grasp of philosophy — or, indeed, of the obvious — I suppose it is worth adding that even if it were a matter of controversy whether “something” is a physical concept, Krauss’s “argument” here would simply have begged the question against one side of that controversy, rather than refuted it. For obviously, Krauss’s critics would not agree that “something is a physical concept.” Hence, confidently to assert this as a premise intended to convince someone who doesn’t already agree with him is just to commit a textbook fallacy of circular reasoning.

The wood floor guy analogy is pretty awesome, so be sure to have a read.

## Fine-Tuning on the TV: A Review of ABC’s Catalyst

It’s always a nervous moment when, as a scientist, you discover that a documentary has been made on one of your favourite topics. Science journalism is rather hit and miss. So it was when the Australian Broadcasting Corporation (ABC), our public TV network, aired a documentary about the fine-tuning of the universe for intelligent life as part of their Catalyst science series. (I’ve mentioned my fine-tuning review paper enough, haven’t I?).

The program can be watched on ABC iView. (International readers – does this work for you?). It was hosted by Dr Graham Phillips, who has a PhD in Astrophysics. The preview I saw last week was promising. All the right people’s heads were appearing – Sean Carroll, Brian Greene, Paul Davies, Leonard Susskind, Lawrence Krauss, Charley Lineweaver. John Wheeler even got a mention.

Overall – surprisingly OK. They got the basic science of fine-tuning correct. Phillips summarises fine-tuning as:

When scientists look far into the heavens or deeply down into the forces of nature, they see something deeply mysterious. If some of the laws that govern our cosmos were only slightly different, intelligent life simply couldn’t exist. It appears that the universe has been fine-tuned so that intelligent beings like you and me could be here.

Not bad, though I’m not sure why it needed to be accompanied by such ominous music. There is a possibility for misunderstanding, however. Fine-tuning is a technical term in physics that roughly means extreme sensitivity of some “output” to the “input”. For example, if some theory requires an unexplained coincidence between two free parameters, then the “fine-tuning” of the theory required to explain the data counts against that theory. “Fine-tuned” does not mean “chosen by an intelligent being” or “designed”. It’s a metaphor.
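To make that sensitivity concrete, here is a toy calculation (my own illustration, not anything from the documentary): if some output requires two independent inputs to cancel almost exactly, only a tiny sliver of the input space will do.

```python
import random

random.seed(0)

def fine_tuned_fraction(tolerance, trials=100_000):
    """Fraction of random input pairs (a, b) whose difference -- the
    'output' -- lands within the required tolerance of zero."""
    hits = 0
    for _ in range(trials):
        a = random.uniform(0, 1)
        b = random.uniform(0, 1)
        if abs(a - b) < tolerance:
            hits += 1
    return hits / trials

# Demanding a cancellation to one part in a thousand leaves roughly 0.2%
# of the input space viable: small changes in input destroy the output.
print(fine_tuned_fraction(1e-3))
```

A theory whose data-matching predictions survive only in such a sliver is fine-tuned in exactly this technical sense, whatever the ultimate explanation for why our universe sits inside the sliver.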

Ten minutes in, the only actual case of fine-tuning that had been mentioned was the existence of inhomogeneities in the early universe. Sean Carroll:

If the big bang had been completely smooth, it would just stay completely smooth and the history of the universe would be very, very boring. It would just get more and more dilute but you would never make stars, you would never make galaxies or clusters of galaxies. So the potential for interesting complex creatures like you and me would be there, but it would never actually come to pass. So we’re very glad that there was at least some fluctuation in the early universe.

Paul Davies then discussed the fact that not only must there be such fluctuations, but they must be neither too big nor too small. Here’s the scientific paper, if you’re interested.

The documentary also has a cogent discussion of the cosmological constant problem – the “mother of all fine-tunings” – and the fine-tuning of the Higgs field, which is related to the hierarchy problem. Unfortunately, Phillips calls it “The God Particle” because “it gives substance to all nature’s other particles”. Groan.

Once we move beyond the science of fine-tuning, however, things get a bit more sketchy.

### The Multiverse

Leonard Susskind opens the section on the multiverse by stating that the multiverse is, in his opinion, the only explanation available for the fine-tuning of the universe for intelligent life. At this point, both the defence and the prosecution could have done more.

Possibilities are cheap. Sean Carroll appears on screen to say “Aliens could have created our universe” and then is cut off. We are told that if we just suppose there is a multiverse, the problems of fine-tuning are solved. This isn’t the full story on two counts – the multiverse isn’t a mere possibility, and it doesn’t automatically solve the fine-tuning problem.

## A universe from nothing? What you should know before you hear the Krauss-Craig debate

The ABC’s opinion pages have posted my introduction to the debate between Lawrence Krauss and William Lane Craig, happening this evening at the Sydney Town Hall. The debate topic is “Why is there something rather than nothing?”. Can science answer the question? Can God? Can anyone? Read on.

## Fine-Tuning and the Myth of “One variable at a time”

A commenter over at my post “Got a cosmology question?” asks:

Someone told me “there is not a single paper which finds fine tuning that has allowed multivariation”. Can you please refute this?

Incidentally, cosmology questions are still very welcome over there.

“Multivariation” is not a word, but in this context presumably means varying more than one variable at a time. There is an objection to fine-tuning that goes like this: all the fine-tuning cases involve varying one variable only, keeping all other variables fixed at their value in our universe, and then calculating the life-permitting range on that one variable. But, if you let more than one variable vary at a time, there turns out to be a range of life-permitting universes. So the universe is not fine-tuned for life.

This is a myth. The claim quoted by our questioner is totally wrong. The vast majority of fine-tuning/anthropic papers, from the very earliest papers in the 1970s until today, vary many parameters¹. I’ve addressed these issues at length in my review paper. I’ll summarise some of that article here.

The very thing that started this whole field was physicists noting coincidences between the values of a number of different constants and the requirements for life. Carter’s classic 1974 paper “Large number coincidences and the anthropic principle in cosmology” notes that in order for the universe to have both radiative and convective stars we must have (his equation 15, in more modern notation),

$\alpha_G^{1/2} \approx \alpha^6 \beta^2$

where, in Planck units, $\alpha_G = m_{proton}^2$, $\alpha = e^2$, $\beta = m_{electron}/m_{proton}$, and $e$ is the charge on the electron. (Interestingly, Barrow and Tipler show that the same condition must hold for stars to emit photons with the right energy to power chemical reactions, e.g. photosynthesis.) Similarly for cosmological cases: for the universe to live long enough for stars to live and die, we must have,

$|\kappa| \lesssim \left( \frac{\eta^2}{m_{proton}} \right)^{1/3} m_{proton}^3$

where $\kappa$ is related to the curvature of space and $\eta$ is roughly the baryon to photon ratio.
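As a quick sanity check on Carter’s stellar relation above (my own back-of-envelope arithmetic with standard values of the constants, not a calculation from his paper), the two sides do agree to within a factor of a few:

```python
# All quantities in Planck units.
m_proton = 1.6726e-27 / 2.1764e-8  # proton mass over the Planck mass
alpha_G = m_proton**2              # gravitational coupling, ~ 5.9e-39
alpha = 1 / 137.036                # fine-structure constant
beta = 1 / 1836.15                 # electron-to-proton mass ratio

lhs = alpha_G**0.5        # ~ 7.7e-20
rhs = alpha**6 * beta**2  # ~ 4.5e-20

# The coincidence: two sides built from completely different constants
# agree to well within an order of magnitude.
print(lhs, rhs, lhs / rhs)
```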

This continues in the classic anthropic papers. Carr and Rees (1977) show that to have hydrogen to power stars left over from big bang nucleosynthesis, and to have supernovae distribute heavy elements, we must have (in Planck units, rearranging their equation 61),

$m_{electron}^{-3/2} \sim g_w$

where $g_w$ is the weak coupling constant.

Barrow and Tipler’s “The Anthropic Cosmological Principle” shows that, for carbon and larger elements to be stable, we must have:

$\alpha_s \lesssim 0.3 \alpha ^{1/2}$

where $\alpha_s$ is the strong force coupling constant (evaluated at $m_Z$, if you’re interested).

The whole point of these relations and more like them, which the early anthropic literature is entirely concerned with, is that they relate a number of different physical parameters. There are approximations in these calculations – they are order-of-magnitude – but this usually involves assuming that a dimensionless mathematical constant is approximately one. At most, a parameter may be assumed to be in a certain regime. For example, one may assume that $\alpha$ and $\beta$ are small (much less than one) in order to make an approximation (e.g. that the nucleus is much heavier than the electron, and the electron orbits non-relativistically). These approximations are entirely justified in an anthropic calculation, because we have other anthropic limits that are known to (not merely assumed to) involve one variable – e.g. if $\beta$ is large, all solids are unstable to melting, and if $\alpha$ is large then all atoms are unstable. See section 4.8 of my paper for more information and references.

More modern papers almost always vary many variables. Examples abound. Below is figure 2 from my paper, which shows figures from Barr & Khan and from Tegmark, Aguirre, Rees & Wilczek. (Seriously, people … Wilczek is a Nobel-prize-winning particle physicist and Martin Rees is the Astronomer Royal and a former president of the Royal Society. These people know what they are doing.)

The top two panels show the anthropic limits on the up-quark mass (x-axis) and down-quark mass (y-axis). Nine anthropic limits are shown. The life-permitting region is the green triangle in the top right plot. The lower two panels show cosmological limits on the cosmological constant (energy density) $\rho_\Lambda$, primordial inhomogeneity Q, and the matter density per CMB photon. Tegmark et al. derive from cosmology eight anthropic constraints on the seven-dimensional parameter space $(\alpha, \beta, m_{proton}, \rho_\Lambda, Q, \xi,\xi_{baryon})$. Tegmark and Rees (1997) derive the following anthropic constraint on the primordial inhomogeneity Q:

(1)

Needless to say, there is more than one variable being investigated here. For more examples, see Figures 6, 7 (from Hogan), 8 (from Jaffe et al.) and 9 (from Tegmark) of my paper. The reason that the plots above only show two parameters at a time is that your screen is two-dimensional. The equations and calculations from which these plots are constructed take into account many more variables than can be plotted on two axes.
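For a feel for what such a scan involves computationally, here is a toy sketch. The three constraints are invented for illustration (they are not the actual limits from Barr & Khan or Tegmark et al.), but the structure is the same: every grid point varies all the parameters at once, and counts as life-permitting only if it passes every limit simultaneously.

```python
import itertools

def life_permitting(x, y):
    """Toy anthropic limits on two made-up parameters: each inequality
    stands in for a separate physical requirement, and a 'universe'
    counts only if it satisfies all of them at once."""
    return (x + y < 1.5      # e.g. some bound state remains stable
            and y < 2 * x    # e.g. some decay channel stays closed
            and x * y > 0.1) # e.g. some reaction proceeds fast enough

grid = [i / 100 for i in range(1, 101)]
total = 0
habitable = 0
for x, y in itertools.product(grid, grid):
    total += 1
    habitable += life_permitting(x, y)

print(f"{habitable} of {total} toy universes are life-permitting")
```

Note that nothing here holds one parameter fixed: the life-permitting region is a patch in the full parameter space, which is precisely what the published limits describe.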

This myth may have started because, when fine-tuning is presented to lay audiences, it is often illustrated using one-parameter limits. Martin Rees, for example, does this in his excellent book “Just Six Numbers”. Rees knows that the limits involve more than one parameter – he derived many of those limits. But equation (1) above would be far too intimidating in a popular-level book.

My paper lists about 200 publications relevant to the field. I can only think of a handful that only vary one parameter. The scientific literature does not simply vary one parameter at a time when investigating life-permitting universes. This is a myth, born of (at best) complete ignorance.

____________________

Postscript: The questioner’s discussion revolves around the article of Harnik, Kribs & Perez (2006) on a universe without weak interactions. It’s a very clever article. Their weakless universe requires “judicious parameter adjustment” and so is also fine-tuned. Remember that fine-tuning doesn’t claim that our universe is uniquely life-permitting, but rather that life-permitting universes are rare in the set of possible universes. Thus, the weakless universe is not a counterexample to fine-tuning. There are also concerns about galaxy formation and oxygen production. See the end of Section 4.8 of my paper for a discussion.

Footnotes:

1. Even if fine-tuning calculations varied only one parameter, it wouldn’t follow that fine-tuning is false. Opening up more parameter space in which life can form will also open up more parameter space in which life cannot form. As Richard Dawkins (1986) rightly said: “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.” For more, see section 4.2.2 of my paper.

More of my posts on fine-tuning are here.

## Fine-Tuning for Life: UCSC Summer School on Philosophy of Cosmology

I’ve given my talk on the Fine-Tuning of the Universe for Intelligent Life at the UCSC Summer School on Philosophy of Cosmology. The talk is already up on YouTube – see below. The quality isn’t great, but put some headphones on, play with the bass and treble and enjoy.

I’ve given versions of that talk plenty of times, but never with so many of the people whose work I’m discussing in the audience. The questions are always the best part, and this talk was no different.

The other talks I’ve seen have been very good. Fred Adams was engaging and wide-ranging, and Sean Carroll was his usual combination of clarity and insight. Check them out here.

Edit [11/7/2013]: The link to my slides is broken, so while I try to get that fixed, I’ve uploaded the slides on SpeakerDeck here. (WordPress doesn’t seem to allow me to embed the slides in this post.)

## Philosophy of Cosmology Summer School

I’m currently at the Philosophy of Cosmology Summer School at the University of California, Santa Cruz. I’ve been invited to speak for an afternoon on the fine-tuning of the universe for intelligent life. I’ve given such talks a number of times, but never with so many of the people whose work I am discussing actually sitting in the room. The line-up is very impressive:

Anthony Aguirre (UCSC), Craig Callender (UCSD), Sean Carroll (Caltech), Shelly Goldstein (Rutgers), Anna Ijjas (Harvard/Rutgers), Tim Maudlin (NYU), Priya Natarajan (Yale), Ward Struyve (Rutgers), Tiziana Vistarini (Rutgers), David Wallace (Oxford), Alex Pruss, Chris Smeenk, Fred Adams, Leonard Susskind, Matt Johnson …

At the moment, Sean Carroll is holding forth on cosmology, time, initial conditions and such. The talks are being placed on YouTube fairly quickly, and I encourage you to have a look through the list of talks.

I’ll try to tweet some highlights – so follow me or watch the hashtag #PhilosophyCosmology.

## Classify or Measure?

It’s always useful to know a statistics junkie or two. Brendon is our resident Bayesian. Another colleague of mine from Zurich, Ewan Cameron, has recently started Another Astrostatistics Blog. It’s well worth a look.

I’m not a statistics expert, but I’ve had this rant in mind for a while. I’m currently at the “Feeding, Feedback, and Fireworks” conference on Hamilton Island (thanks Astropixie!). There has been some discussion of the problem of reification. In particular, Ray Norris warned that, once a phenomenon is named, we have put it in a box and it is difficult to think outside that box. For example, what was discovered in 1998 was the acceleration of the expansion of the universe. We often call it the discovery of dark energy, but this is perhaps a premature leap from observation to explanation – the acceleration could be caused by something other than some exotic new form of matter.

There is a broader message here, which I’ll motivate with this very interesting passage from Alfred North Whitehead’s book “Science and the Modern World” (1925):

In a sense, Plato and Pythagoras stand nearer to modern physical science than does Aristotle. The former two were mathematicians, whereas Aristotle was the son of a doctor, though of course he was not thereby ignorant of mathematics. The practical counsel to be derived from Pythagoras is to measure, and thus to express quality in terms of numerically determined quantity. But the biological sciences, then and till our own time, have been overwhelmingly classificatory. Accordingly, Aristotle by his Logic throws the emphasis on classification. The popularity of Aristotelian Logic retarded the advance of physical science throughout the Middle Ages. If only the schoolmen had measured instead of classifying, how much they might have learnt!

… Classification is necessary. But unless you can progress from classification to mathematics, your reasoning will not take you very far.

A similar idea is championed by the biologist and palaeontologist Stephen Jay Gould in the essay “Why We Should Not Name Human Races – A Biological View”, which can be found in his book “Ever Since Darwin” (highly recommended). Gould first makes the point that “species” is a good classification in the animal kingdom. It represents a clear division in nature: same species = able to breed fertile offspring. However, the temptation to further divide into subspecies – or races, when the species is humans – should be resisted, since it involves classification where we should be measuring. Species show (mostly) continuous geographic variation, and so Gould asks:

Shall we artificially partition such a dynamic and continuous pattern into distinct units with formal names? Would it not be better to map this variation objectively without imposing upon it the subjective criteria for formal subdivision that any taxonomist must use in naming subspecies?

Gould gives the example of the English sparrow, introduced to North America in the 1850s. The plot below shows the distribution of the size of male sparrows – dark regions show larger sparrows. Gould notes:

The strong relationship between large size and cold winter climates is obvious. But would we have seen it so clearly if variation had been expressed instead by a set of formal Latin names artificially dividing the continuum?