
## Reply to Maudlin: The Calibrated Cosmos

I recently read philosopher of science Tim Maudlin’s book Philosophy of Physics: Space and Time and thought it was marvellous, so I was expecting good things when I came to read Maudlin’s article for Aeon Magazine titled “The calibrated cosmos: Is our universe fine-tuned for the existence of life – or does it just look that way from where we’re sitting?“. I’ve got a few comments. Indented quotes below are from Maudlin’s article unless otherwise noted.

### In a weekend?

Theories now suggest that the most general structural elements of the universe — the stars and planets, and the galaxies that contain them — are the products of finely calibrated laws and conditions that seem too good to be true. … The details of these sorts of calculations should be taken with a grain of salt. No one could sit down and rigorously work out an entirely new physics in a weekend.

A few quick things. “Theories” has a ring of “some tentative, fringe ideas” to the lay reader, I suspect. The theories on which one bases fine-tuning calculations are precisely the reigning theories of modern physics. These are not “entirely new physics” but the same equations (general relativity, the standard model of particle physics, stellar structure equations etc.) that have time and again predicted the results of observations, now applied to different scenarios. I think Maudlin has underestimated both the power of order-of-magnitude calculations in physics, and the effort that theoretical physicists have put into fine-tuning calculations. For example, Epelbaum and his collaborators, having developed the theory and tools to use supercomputer lattice simulations to investigate the structure of the C12 nucleus, write a few papers (2011, 2012) to describe their methods and show how their cutting-edge model successfully reproduces observations. They then use the same methods to investigate fine-tuning (2013). My review article cites upwards of a hundred papers like this. This is not a back-of-the-envelope operation, not starting from scratch, not entirely new physics, not a weekend hobby. This is theoretical physics.

It can be unsettling to contemplate the unlikely nature of your own existence … Even if your parents made a deliberate decision to have a child, the odds of your particular sperm finding your particular egg are one in several billion. … after just two generations, we are up to one chance in 10^27. Carrying on in this way, your chance of existing, given the general state of the universe even a few centuries ago, was almost infinitesimally small. You and I and every other human being are the products of chance, and came into existence against very long odds.

The slogan I want to invoke here is “don’t treat a likelihood as if it were a posterior”. That’s a bit jargon-y, so let me unpack it. The likelihood is the probability of what we know, assuming that some theory is true. The posterior is the reverse – the probability of the theory, given what we know. It is the posterior that we really want, since it reflects our situation: the theory is uncertain, the data is known. The likelihood can help us calculate the posterior (using Bayes’ theorem), but in and of itself, a small likelihood doesn’t mean anything. The calculation Maudlin alludes to above is a likelihood: what is the probability that I would exist, given that the events that led to my existence came about by chance? This small likelihood doesn’t imply that the posterior – the probability of my existence by chance, given my existence – is small, because the chance hypothesis has no comparable rivals. Brendon has explained this point elsewhere. (more…)
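To make the point concrete, here is a toy Bayesian calculation. All the numbers are purely illustrative (mine, not Maudlin’s): a minuscule likelihood does no damage to a theory’s posterior unless some rival theory makes the data more probable.

```python
# Bayes' theorem for two rival theories. Numbers are illustrative only.
def posterior(prior, likelihood, rival_prior, rival_likelihood):
    """Posterior probability of theory 1, given the data."""
    evidence = prior * likelihood + rival_prior * rival_likelihood
    return prior * likelihood / evidence

# P(my existence | chance) is astronomically small...
# ...but any rival theory must also account for my existence.
# If the rival's likelihood is just as small (or smaller), "chance"
# keeps a high posterior despite its tiny likelihood.
p = posterior(prior=0.5, likelihood=1e-27,
              rival_prior=0.5, rival_likelihood=1e-29)
print(p)  # ~0.99
```

The small likelihood only matters if the rival’s likelihood is comparatively large; with no serious rival on the table, the posterior barely moves.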

## Feser on Krauss

Having had my appetite for the Middle Ages whetted by Edward Grant’s excellent book A History of Natural Philosophy: From the Ancient World to the Nineteenth Century, I recently read Edward Feser’s Aquinas (A Beginner’s Guide). And, on the back of that, his book The Last Superstition. If I ever work out what a formal cause is, I might post a review.

In the meantime, I’ve quite enjoyed some of his blog posts about the philosophical claims of Lawrence Krauss. This is something I’ve blogged about a few times. His most recent post on Krauss contains this marvellous passage.

Krauss asserts:

“[N]othing is a physical concept because it’s the absence of something, and something is a physical concept.”

The trouble with this, of course, is that “something” is not a physical concept. “Something” is what Scholastic philosophers call a transcendental, a notion that applies to every kind of being whatsoever, whether physical or non-physical — to tables and chairs, rocks and trees, animals and people, substances and accidents, numbers, universals, and other abstract objects, souls, angels, and God. Of course, Krauss doesn’t believe in some of these things, but that’s not to the point. Whether or not numbers, universals, souls, angels or God actually exist, none of them would be physical if they existed. But each would still be a “something” if it existed. So the concept of “something” is broader than the concept “physical,” and would remain so even if it turned out that the only things that actually exist are physical.

No atheist philosopher would disagree with me about that much, because it’s really just an obvious conceptual point. But since Krauss and his fans have an extremely tenuous grasp of philosophy — or, indeed, of the obvious — I suppose it is worth adding that even if it were a matter of controversy whether “something” is a physical concept, Krauss’s “argument” here would simply have begged the question against one side of that controversy, rather than refuted it. For obviously, Krauss’s critics would not agree that “something is a physical concept.” Hence, confidently to assert this as a premise intended to convince someone who doesn’t already agree with him is just to commit a textbook fallacy of circular reasoning.

The wood floor guy analogy is pretty awesome, so be sure to have a read.

## A universe from nothing? What you should know before you hear the Krauss-Craig debate

The ABC’s opinion pages have posted my introduction to the debate between Lawrence Krauss and William Lane Craig, happening this evening at the Sydney Town Hall. The debate topic is “Why is there something rather than nothing?”. Can science answer the question? Can God? Can anyone? Read on.

## Classify or Measure?

It’s always useful to know a statistics junkie or two. Brendon is our resident Bayesian. Another colleague of mine from Zurich, Ewan Cameron, has recently started Another Astrostatistics Blog. It’s well worth a look.

I’m not a statistics expert, but I’ve had this rant in mind for a while. I’m currently at the “Feeding, Feedback, and Fireworks” conference on Hamilton Island (thanks Astropixie!). There has been some discussion of the problem of reification. In particular, Ray Norris warned that, once a phenomenon is named, we have put it in a box and it is difficult to think outside that box. For example, what was discovered in 1998 was the acceleration of the expansion of the universe. We often call it the discovery of dark energy, but this is perhaps a premature leap from observation to explanation – the acceleration could be caused by something other than some exotic new form of matter.

There is a broader message here, which I’ll motivate with this very interesting passage from Alfred North Whitehead’s book “Science and the Modern World” (1925):

In a sense, Plato and Pythagoras stand nearer to modern physical science than does Aristotle. The former two were mathematicians, whereas Aristotle was the son of a doctor, though of course he was not thereby ignorant of mathematics. The practical counsel to be derived from Pythagoras is to measure, and thus to express quality in terms of numerically determined quantity. But the biological sciences, then and till our own time, have been overwhelmingly classificatory. Accordingly, Aristotle by his Logic throws the emphasis on classification. The popularity of Aristotelian Logic retarded the advance of physical science throughout the Middle Ages. If only the schoolmen had measured instead of classifying, how much they might have learnt!

… Classification is necessary. But unless you can progress from classification to mathematics, your reasoning will not take you very far.

A similar idea is championed by the biologist and palaeontologist Stephen Jay Gould in the essay “Why We Should Not Name Human Races – A Biological View”, which can be found in his book “Ever Since Darwin” (highly recommended). Gould first makes the point that “species” is a good classification in the animal kingdom. It represents a clear division in nature: same species = able to breed fertile offspring. However, the temptation to further divide into subspecies – or races, when the species is humans – should be resisted, since it involves classification where we should be measuring. Species have a (mostly) continuous geographic variability, and so Gould asks:

Shall we artificially partition such a dynamic and continuous pattern into distinct units with formal names? Would it not be better to map this variation objectively without imposing upon it the subjective criteria for formal subdivision that any taxonomist must use in naming subspecies?

Gould gives the example of the English sparrow, introduced to North America in the 1850s. The plot below shows the distribution of the size of male sparrows – dark regions show larger sparrows. Gould notes:

The strong relationship between large size and cold winter climates is obvious. But would we have seen it so clearly if variation had been expressed instead by a set of formal Latin names artificially dividing the continuum?

## How to overhype a science press release

From the Sydney Morning Herald (here):

## Melbourne researchers rewrite Big Bang theory

Melbourne researchers believe they may be on the brink of rewriting the history of the universe.

A paper being published in a US physics journal suggests it may be possible to view “cracks” in the universe that would support the theory of quantum graphity – considered to be the holy grail of physics.

The team of researchers from the University of Melbourne and RMIT say that, instead of thinking of the start of the universe as being a big bang, we should imagine it as a cooling of water into ice.

… Their research rests on a school of thought that has emerged recently to suggest space is made of indivisible building blocks, such as atoms, that can be thought of as similar to pixels that make up images on a computer screen.

Mr Quach said the standing model for the origins of the universe, the big bang, needed to be rewritten. He hoped experimentalists would be able to find evidence to support the theory put forward by the Melbourne team of researchers, that would replace it. ”The biggest problem with the big bang model is the bang itself,” Mr Quach said. …

Mr Quach and his fellow researchers theorise that if quantum graphity “cracks” do exist, they will bend or reflect light, which, if observed through a telescope would support their predictions.

“If they prove my predictions that’s really good evidence for the condensed matter model of quantum graphity in which case you can throw out all the other attempts.”

Here’s a few pointers for the layman trying to decipher this article.

• Note how the claim of the title changes. “They’ve rewritten the big bang theory” becomes “they believe they’re about to rewrite the big bang theory” becomes “it may be possible to observe the consequences of a theory that might provide a model for the big bang”.
• The name “quantum graphity” is a pun on the terms quantum gravity and graph theory [edit: 1/9/2012]. Quantum gravity is the “holy grail” of physics (to some). Quantum graphity is not. The journalist evidently didn’t get the pun.
• Note that the article quotes Mr Quach. Not Dr or Professor. I love grad students, but claims that they are about to rewrite everything we know about the fundamental laws of nature and the entire history of the universe should be taken with a grain of salt.
• The paper that the article refers to contains no cosmology. It doesn’t claim to. None of Mr Quach’s papers do. What the paper shows is that, if spacetime consists of these building blocks, and the blocks get put together imperfectly, then light will scatter off the imperfection. The paper concludes: “they produce intriguing scattering, double imaging, and gravitational lensing-like effects. Importantly this serves as a framework in which observable consequences of the QG model may allow it to be tested.”
• It is difficult to express just how astronomically huge the “if” is in the sentence “if observed through a telescope”.  What observational signature should we be looking for? There are an awful lot of things in the universe that bend and deflect light. How would we distinguish between the observation of a graphity imperfection and other gravitational lenses? What unique predictions does the model provide? How many imperfections should we expect in the universe?  What astronomical targets should we aim at?
• This idea isn’t new. The further we look in the universe, the more likely we are to see something funky along the way, so distant quasars have been used to test theories about interesting spacetime phenomena. So far: nothing. No evidence for quantum foam. No evidence for cosmic strings. No topological defects. Why would graphity defects be any different? (more…)

## The Traps of WAP and SAP

Let’s begin by quoting from Radford Neal:

There is a large literature on the Anthropic Principle, much of it too confused to address.

I’ve previously quoted John Leslie:

The ways in which ‘anthropic’ reasoning can be misunderstood form a long and dreary list.

My goal in this post is to go back to the original sources to try to understand the anthropic principle.

### Carter’s WAP

Let’s start with the definitions given by Brandon Carter in the original anthropic principle paper:

Weak Anthropic Principle (WAP): We must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers.

Carter’s illustration of WAP is the key to understanding what he means. Carter considers the following coincidence: (more…)

## James Jeans and our finite universe(?)

“Leave only three wasps alive in the whole of Europe and the air of Europe will still be more crowded with wasps than space is with stars, at any rate in those parts of the universe with which we are acquainted.”

I love a good illustration.

For whatever reason, I’m drawn to old popular-level science books. I just finished reading “The Stars in Their Courses” by James Jeans, first published in 1931. Jeans is best known in my field for the “Jeans length”. Suppose a cloud of gas is trying to collapse under its own gravity, but is being held back by gas pressure. Jeans showed that there is a critical length scale, such that if the object is smaller than the Jeans length then pressure wins and the cloud is stable, but if it is larger then gravity wins and collapse ensues.
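For the curious, the Jeans length is easy to compute: $\lambda_J = c_s \sqrt{\pi / (G \rho)}$, where $c_s$ is the sound speed and $\rho$ the density of the cloud. Here is a minimal sketch; the cloud numbers are my own illustrative choices (roughly a cold molecular cloud), not from Jeans’ book.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def jeans_length(sound_speed, density):
    """Critical scale (metres): clouds larger than this collapse
    under gravity; smaller clouds are stabilised by pressure."""
    return sound_speed * math.sqrt(math.pi / (G * density))

# Illustrative cold cloud: sound speed ~0.2 km/s, density ~1e-17 kg/m^3
lam = jeans_length(2e2, 1e-17)
print(f"{lam / 9.46e15:.1f} light-years")
```

With these numbers the critical scale comes out at roughly a light-year, comfortably in the realm of star-forming clouds.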

Jeans gives an overview of all of the astronomy of his day. It’s mostly familiar material, of course; the interesting bit is the glimpse inside the mind of the great scientist. Here’s a neat illustration:

“If we could take an ordinary shilling out of our pocket, and heat it up to the temperature of the sun’s centre [40 million kelvin], its heat would shrivel up every living thing within thousands of miles of it.”

Repeating this calculation, I think Jeans is reasoning as follows. A shilling is about 5 grams of copper (specific heat capacity 0.385 J/gram/kelvin), and so at 40,000,000 K we have about $8 \times 10^7$ J of energy. This is ‘only’ 20 kg of TNT – most bombs are at least a tonne of TNT equivalent, and they don’t do miles of damage. That much energy could raise the temperature of the surrounding air to boiling point for about a 10 metre radius. Not too promising. However, the coin will be emitting thermal radiation at x-ray wavelengths. A lethal dose of x-rays is about 5 J/kg, so our coin has enough energy to kill about 100,000 people. One must factor in the fraction of energy emitted horizontally, the fraction absorbed by biological material, the cooling of the coin, etc, but certainly it’s a very dangerous coin.
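Here is that arithmetic in script form. This is my reconstruction of the estimate above; the 70 kg per person used to convert the lethal dose into a body count is my added assumption.

```python
# Back-of-the-envelope reconstruction of Jeans' dangerous shilling.
mass = 5.0      # grams of copper in a shilling (approx)
c_p = 0.385     # specific heat of copper, J/g/K
T = 4.0e7       # sun's central temperature, K

energy = mass * c_p * T                     # thermal energy, ~8e7 J
tnt_kg = energy / 4.184e6                   # TNT equivalent, ~20 kg
lethal_dose = 5.0                           # lethal x-ray dose, J/kg
people = energy / (lethal_dose * 70)        # assuming ~70 kg per person

print(f"{energy:.1e} J, {tnt_kg:.0f} kg TNT, ~{people:.0e} lethal doses")
```

The energy comes out at about $8 \times 10^7$ J and the body count at a couple of hundred thousand lethal doses, consistent (to an order of magnitude) with the figures above.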

Jeans’ views on cosmology are very revealing. He is writing within 5 years of the discovery of the expansion of the universe by Lemaitre (first!) and Hubble. Jeans says: (more…)

## Book Review: The Cosmic Landscape by Leonard Susskind (Part 1)

I’m a great fan of popular science books, particularly when the topic is cosmology or fundamental physics. Susskind’s “The Cosmic Landscape” was particularly enjoyable, though I will take issue with a few things in later posts. For now, here are a few highlights:

I love a good illustration:

A rocket-propelled lemon moving away from you might have the color of an orange or even a tomato if it were going fast enough. While it’s moving toward you, you might mistake it for a lime.

This is simply the Doppler effect, which we’ve all observed for sound as an ambulance drives past. It works for light as well, but you have to be going close to the speed of light. Using the right formula from Einstein’s special relativity, we find that you must fire a lemon at a tenth of the speed of light to make it look red. About the same speed, but moving toward you, will make it look green.
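A quick check of that claim: the relativistic Doppler factor for a receding source is $\sqrt{(1+\beta)/(1-\beta)}$, where $\beta = v/c$. The wavelengths below are my own rough choices for “lemon yellow” and “red”.

```python
import math

def doppler_factor(beta):
    """Wavelength ratio (observed/emitted) for a source receding
    at speed beta = v/c. Values < 1 would mean blueshift."""
    return math.sqrt((1 + beta) / (1 - beta))

# Lemon yellow is roughly 570 nm; at a tenth of the speed of light:
shifted = 570 * doppler_factor(0.1)
print(f"{shifted:.0f} nm")  # ~630 nm, well into the red
```

Running the same factor in reverse (an approaching lemon) drops 570 nm to about 516 nm, which is green enough to pass for a lime.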

Susskind gives an excellent account of the fine-tuning of the universe for intelligent life.

[T]he Laws of Physics may not only be variable but are almost always deadly. In a sense the laws of nature are like East Coast weather: tremendously variable, almost always awful, but on rare occasions, perfectly lovely. … One theme has threaded its way through our long and winding tour from Feynman diagrams to bubbling universes: our own universe is an extraordinary place that appears to be fantastically well designed for our own existence. This specialness is not something that we can attribute to lucky accidents, which is far too unlikely. The apparent coincidences cry out for an explanation.

In particular, he takes the discussion to the cutting edge of particle physics, discussing the gauge hierarchy problem:

Physicists puzzled for some time about why the top-quark is so heavy, but recently we have come to understand that it’s not the top-quark that is abnormal: it’s the up- and down-quarks that are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one. Thus, we can ask what the world would be like if the up- and down-quarks were much heavier than they are. Once again – disaster!

… the cosmological constant problem:

Throughout the years many people, including some of the illustrious names in physics, have tried to explain why the cosmological constant is small or zero. The overwhelming consensus is that these attempts have not been successful.

… fine-tuning of cosmic inflation needed to give the universe the right amount of lumpiness:

A lumpiness of about 10^-5 is essential for life to get a start. But is it easy to arrange this amount of density contrast? The answer is most decidedly no! The various parameters governing the inflating universe must be chosen with great care in order to get the desired result.

… and even supersymmetry:

The biggest threat to life in an exactly supersymmetric universe [has to do] with chemistry. In a supersymmetric universe every fermion has a boson partner with exactly the same mass, and therein lies the trouble. The culprits are the supersymmetric partners of the electron and the photon. These two particles, called the selectron (ugh!) and the photino, conspire to destroy all ordinary atoms. … in a supersymmetric world, an outer electron can emit a photino and turn into a selectron. … That’s a big problem: the selectron, being a boson, is not blocked (by the Pauli exclusion principle) from dropping down to lower energy orbits near the nucleus. … Goodbye to the chemical properties of carbon – and every other molecule needed by life.

Susskind is also clear to distinguish between the landscape of string theory and a multiverse (or megaverse):

The two concepts – Landscape and megaverse [a.k.a. multiverse] – should not be confused. The Landscape is not a real place. Think of it as a list of all the possible designs of hypothetical universes. Each valley represents one such design. … The megaverse, by contrast, is quite real. The pocket universes that fill it are actual existing places, not hypothetical possibilities.

All in all, Susskind’s book is highly recommended.

Part 2 of my review is here.

## “If you don’t want to explain things, then you’re a moron”

I love David Mitchell. I love everything he’s done – Peep Show, Would I Lie to You, That Mitchell and Webb Look, Sound and Book, his work on QI, Mock the Week and any other panel show, Soapbox, and various articles. I was listening to a conversation with Mr Mitchell on CarPool with Robert Llewellyn of ‘Red Dwarf’ fame. He started talking about his time spent studying history at Cambridge, and why it interested him:

If you don’t want to explain things, then you’re a moron. As far as I’m concerned, trying to explain things through what the molecules people and things are made up of, or the chemicals and how they react to each other, is an incredibly roundabout way. You know, I don’t want to know that. I assume that will keep going whether or not I understand it. I want to know why we are in a country called Britain, why are these people in charge. That seems to me to be the direct way of generally explaining things. Obviously, I’ve got a lot of time for the scientific urge to explain. But for me, that’s always been a bit secondary to specifically explaining “what’s this stuff, and don’t tell me what it is at a subatomic level!” (more…)

## Dear Peter Coles …

I was just re-reading this post over at Cosmic Variance about a paper by Sean Carroll, which he summarises as:

Our observed universe is highly non-generic, and in the past it was even more non-generic, or “finely tuned.” One way of describing this state of affairs is to say that the early universe had a very low entropy. … The basic argument is an old one, going back to Roger Penrose in the late 1970′s. The advent of inflation in the early 1980′s seemed to change things — it showed how to get a universe just like ours starting from a tiny region of space dominated by “false vacuum energy.” But a more careful analysis shows that inflation doesn’t really change the underlying problem — sure, you can get our universe if you start in the right state, but that state is even more finely-tuned than the conventional Big Bang beginning. We find that inflation is very unlikely, in the sense that a negligibly small fraction of possible universes experience a period of inflation. On the other hand, our universe is unlikely, by exactly the same criterion. So the observable universe didn’t “just happen”; it is either picked out by some general principle, perhaps something to do with the wave function of the universe, or it’s generated dynamically by some process within a larger multiverse. And inflation might end up playing a crucial role in the story. We don’t know yet, but it’s important to lay out the options to help us find our way.

It’s a very nice paper and Sean’s post is also worth a read. What I didn’t notice before was this comment from Peter Coles:

I remember having a lot of discussions with George Ellis way back in the 90s about this issue. I strongly agree that what inflation does is merely to push the fine-tuning problems back to an earlier epoch where they are effectively under the carpet (or beyond the horizon, if you prefer a different metaphor). In fact we were planning to write a sort of spoof of Galileo’s “Dialogue concerning the Two Chief World Systems” featuring characters with names like “Inflatio” and “Anthropicus” …. but never got around to it.

Dear Peter Coles, Please write that paper!!! I’ve been looking through the inflation literature lately and there seems to be an uncomfortably large portion of it devoted to propaganda, arguing that inflation is inevitable and the only possible solution to the problems of the standard hot big bang. A good example is this exchange of papers (one, two and three), where Hollands and Wald face off against Kofman, Linde, and Mukhanov on the issue of whether inflation can explain the low entropy of our universe. The question of whether inflation can be the last word in cosmology (and initial conditions) is in need of clarification.