
Archive for the ‘fine tuning’ Category

Last time, I started a review of the Carroll vs. Craig debate with a (mostly historical) overview of the back-and-forth about the beginning of the universe for the last 90 years of modern cosmology. Here, I’ll have a look at fine-tuning. I should start by saying how much I enjoyed the debate. They should do it again some time.

In his speeches, Sean Carroll raised five points (transcript) against the fine-tuning of the universe for intelligent life as an argument for the existence of God. I want to have a look at those five. Carroll (here) and Craig (here, here and here) had a few points to make post-debate, too.

Here is fine-tuning reply number one:

First, I am by no means convinced that there is a fine-tuning problem and, again, Dr. Craig offered no evidence for it. It is certainly true that if you change the parameters of nature our local conditions that we observe around us would change by a lot. I grant that quickly. I do not grant therefore life could not exist. I will start granting that once someone tells me the conditions under which life can exist. What is the definition of life, for example? If it’s just information processing, thinking or something like that, there’s a huge panoply of possibilities. They sound very “science fiction-y” but then again you’re the one who is changing the parameters of the universe. The results are going to sound like they come from a science fiction novel. Sadly, we just don’t know whether life could exist if the conditions of our universe were very different because we only see the universe that we see.

“Interesting” Games

Is the debate over the definition of life a problem for fine-tuning? Sean and I had a brief discussion on this point during my talk at the UCSC Summer School on Philosophy of Cosmology. My response was (roughly) as follows.

Consider chess. In particular, I’m wondering whether minor changes to the laws of chess would result in a similarly interesting game. Wait a minute, you say, you haven’t defined “interesting”. In fact, different people are going to come up with different definitions of interesting. So how can we know whether a game is interesting or not?

It’s a good point, but instead of considering this question in the abstract, consider this particular example. Change one word in the rules of chess: instead of “Knights may jump over other pieces”, we propose that “Bishops may jump over other pieces”. If we were to rewrite the 530-page “Silman’s Complete Endgame Course”, we would need just one page, one paragraph, two sentences: “White bishop moves from f1 to b5. Checkmate.”


My claim is that this particular case is so clear that by any definition of interesting, this is not an interesting game. The game is no more interesting than tossing a coin to see who goes first. It is too simple, too easy.


The Conversation has published an article of mine, co-authored with Geraint Lewis, titled “Have cosmologists lost their minds in the multiverse?“. It’s a quick introduction to the multiverse in light of the recent BICEP2 results. Comments welcome!


Following my three critiques (one, two, three) of Richard Carrier’s view on the fine-tuning of the universe for intelligent life, we had a back-and-forth in the comments section of his blog. Just as things were getting interesting, Carrier took his ball and went home, saying that any further conversation would be “a waste of anyone’s time”. Sorry, anyone.

I still have questions. Before I forget, I’ll post them here. (I posted them as a comment on his blog but they’re still “awaiting moderation”. I guess he’ll delete them.)

The Main Attraction

What is Carrier’s main argument in response to fine-tuning, in his article “Neither Life nor the Universe Appear Intelligently Designed”? He kept accusing me of misrepresenting him, but never clarified his argument. I’ll have another go. Let,

o = intelligent observers exist,
f = a finely-tuned universe exists,
b = background information,
NID = a Non-terrestrial Intelligent Designer caused the universe.

We want to calculate the posterior: the probability of NID given what we know. From Carrier’s footnote 29, introduced as the “probability that NID caused the universe”, we can derive (using the odds form of Bayes’ theorem),

\frac{p(NID|f.b)}{p(\sim NID|f.b)}=\frac{p(f|NID.b)}{p(f|\sim NID.b)}\frac{p(NID|b)}{p(\sim NID|b)}   (1)

Carrier argues in footnotes 22 and 23 that,

p(f|o)=1 implies p(f|\sim NID.b)=1,   (2)

because o is part of “established background knowledge” and so part of b. Thus,

\frac{p(NID|f.b)}{p(\sim NID|f.b)}=\frac{p(NID|b)}{p(\sim NID|b)}   (3)

Conclusion: the posterior is equal to the prior (as seen in footnote 29). Learning f has not changed the probability that NID is true. Fine-tuning is irrelevant to the existence of God.
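For concreteness, here is a minimal numerical sketch of equations (1) to (3) as formalised above. The prior odds value is a placeholder of my own choosing, not a number Carrier assigns:

```python
# Sketch of the odds form of Bayes' theorem, equation (1). The prior
# odds are a made-up placeholder, not a value from Carrier's essay.

def posterior_odds(prior_odds, p_f_given_nid, p_f_given_not_nid):
    """Equation (1): O(NID|f.b) = [p(f|NID.b)/p(f|~NID.b)] * O(NID|b)."""
    bayes_factor = p_f_given_nid / p_f_given_not_nid
    return bayes_factor * prior_odds

prior = 0.25  # hypothetical prior odds p(NID|b)/p(~NID|b)

# Premise (2): p(f|~NID.b) = 1. With p(f|NID.b) = 1 as well, the Bayes
# factor is 1, so the posterior odds equal the prior odds (equation (3)).
assert posterior_odds(prior, 1.0, 1.0) == prior

# If instead f were, say, ten times more likely given NID, learning f
# would raise the odds by that factor.
assert posterior_odds(prior, 1.0, 0.1) == 10 * prior
```

With a Bayes factor of 1, learning f cannot move the odds at all; that is the entire force of the argument, which is why premise (2) is the step to scrutinise.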

Question 1: Is the above a correct formalisation of Carrier’s argument? (If anyone has read his essay, comment!)


I thought I was done with Richard Carrier’s views on the fine-tuning of the universe for intelligent life (Part 1, Part 2). And then someone pointed me to this. It comes in response to an article by William Lane Craig. I’ve critiqued Craig’s views on fine-tuning here and here. The quotes below are from Carrier unless otherwise noted.

[H]e claims “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range,” but that claim has been refuted–by scientists–again and again. We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe. As has been pointed out to Craig by several theoretical physicists (from Krauss to Stenger), he can only get his “narrow range” by varying one single constant and holding all the others fixed, which is simply not how a universe would be randomly selected. When you allow all the constants to vary freely, the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4: see Victor Stenger’s The Fallacy of Fine-Tuning).

I’ve said an awful lot in response to that paragraph, so let’s just run through the highlights.

  • “Refuted by scientists again and again”. What, in the peer-reviewed scientific literature? I’ve published a review of the scientific literature, 200+ papers, and I can only think of a handful that oppose this conclusion, and piles and piles that support it. Here are some quotes from non-theist scientists. For example, Andrei Linde says: “The existence of an amazingly strong correlation between our own properties and the values of many parameters of our world, such as the masses and charges of electron and proton, the value of the gravitational constant, the amplitude of spontaneous symmetry breaking in the electroweak theory, the value of the vacuum energy, and the dimensionality of our world, is an experimental fact requiring an explanation.” [emphasis added.]

  • “By several theoretical physicists (from Krauss to Stenger)”. I’ve replied to Stenger. I had a chance to talk to Krauss briefly about fine-tuning but I’m still not sure what he thinks. His published work on anthropic matters doesn’t address the more general fine-tuning claim. Also, by saying “from” and “to”, Carrier is trying to give the impression that a great multitude stands with his claim. I’m not even sure that Krauss is with him. I’ve read loads on this subject, and only Stenger defends Carrier’s point, and only in a popular(ish)-level book. On the other hand, Craig can cite Barrow, Carr, Carter, Davies, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, and Wilczek. (See here). With regard to the claim that “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range”, the weight of the peer-reviewed scientific literature is overwhelmingly with Craig. (If you disagree, start citing papers.)

  • “He can only get his “narrow range” by varying one single constant”. Wrong. The very thing that got this field started was physicists noting coincidences between a number of constants and the requirements of life. Only a handful of the 200+ scientific papers in this field vary only one variable. Read this.

  • “1 in 8 and 1 in 4: see Victor Stenger”. If Carrier is referring to Stenger’s program MonkeyGod, then he’s kidding himself. That “model” has 8 high-school-level equations, 6 of which are wrong. It fails to understand the difference between an experimental range and a possible range, which is fatal to any discussion of fine-tuning. Assumptions are cherry-picked. Crucial constraints and constants are missing. Carrier has previously called MonkeyGod “a serious research product, defended at length in a technical article”. It was published in a philosophical journal of a humanist society and in a popular-level book, and would be laughed out of any scientific journal. MonkeyGod is a bad joke.

And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables.

In all the possible universes we have explored, we have found that only a tiny fraction would permit the existence of intelligent life. There are other possible universes that we haven’t explored. This is only relevant if we have some reason to believe that the trend we have observed until now will be miraculously reversed just beyond the horizon of what we have explored. In the absence of such evidence, we are justified in concluding that the possible universes we have explored are typical of all the possible universes. In fact, by beginning in our universe, known to be life-permitting, we have biased our search in favour of finding life-permitting universes.


Last time, we looked at historian Richard Carrier’s article, “Neither Life nor the Universe Appear Intelligently Designed”. We found someone who preaches Bayes’ theorem but thinks that probabilities are frequencies, says that likelihoods are irrelevant to posteriors, and jettisons his probability principles at his leisure. In this post, we’ll look at his comments on the fine-tuning of the universe for intelligent life. Don’t get your hopes up.

Simulating universes

Here’s Carrier.

Suppose in a thousand years we develop computers capable of simulating the outcome of every possible universe, with every possible arrangement of physical constants, and these simulations tell us which of those universes will produce arrangements that make conscious observers (as an inevitable undesigned by-product). It follows that in none of those universes are the conscious observers intelligently designed (they are merely inevitable by-products), and none of those universes are intelligently designed (they are all of them constructed merely at random). Suppose we then see that conscious observers arise only in one out of every 10^{1,000,000} universes. … Would any of those conscious observers be right in concluding that their universe was intelligently designed to produce them? No. Not even one of them would be.

To see why this argument fails, replace “universe” with “arrangement of metal and plastic” and “conscious observers” with “driveable cars”. Suppose we could simulate the outcome of every possible arrangement of metal and plastic, and these simulations tell us which arrangements produce driveable cars. Does it follow that none of those arrangements could have been designed? Obviously not. This simulation tells us nothing about how actual cars are produced. The fact that we can imagine every possible arrangement of metal and plastic does not mean that every actual car is constructed merely at random. This wouldn’t even follow if cars were in fact constructed by a machine that produced every possible arrangement of metal and plastic, since the machine itself would need to be designed. The driveable cars it inevitably made would be the product of design, albeit via an unusual method.

Note a few leaps that Carrier makes. He leaps from bits in a computer to actual universes that contain conscious observers. He leaps from simulating every possible universe to producing universes “merely at random”. As a cosmological simulator myself, I can safely say that a computer program able to simulate every possible universe would require an awful lot of intelligent design. Carrier also seems to assume that a random process is undesigned. Tell that to these guys. Random number generators are a common feature of intelligently designed computer programs. This argument is an abysmal failure.
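On that last point: a random process embedded in a program is entirely compatible with the program being designed. A trivial sketch, a deliberately written Monte Carlo estimate of pi that relies on a random number generator:

```python
import random

# A random process inside an intelligently designed program: the
# randomness does the sampling; the design is everywhere else.

random.seed(42)  # the designer even chooses the seed

def monte_carlo_pi(n):
    """Estimate pi from the fraction of random points in the unit
    square that land inside the quarter circle (area pi/4)."""
    inside = sum(
        1 for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / n

estimate = monte_carlo_pi(100_000)
assert 3.1 < estimate < 3.2  # lands close to pi, by design
```

The output is generated "merely at random", yet nothing about that licenses the conclusion that the program was undesigned.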

How to Fail Logic 101

Carrier goes on …


After a brief back and forth in a comments section, I was encouraged by Dr Carrier to read his essay “Neither Life nor the Universe Appear Intelligently Designed”. I am assured that the title of this essay will be proven “with such logical certainty” that all opposing views should be wiped off the face of Earth.

Dr Richard Carrier is a “world-renowned author and speaker”. That quote comes from none other than the world-renowned author and speaker, Dr Richard Carrier. Fellow atheist Massimo Pigliucci says,

The guy writes too much, is too long winded, far too obnoxious for me to be able to withstand reading him for more than a few minutes at a time.

I know the feeling. When Carrier’s essay comes to address evolution, he recommends that we “consider only actual scholars with PhD’s in some relevant field”. One wonders why, when we come to consider the particular intersection of physics, cosmology and philosophy wherein we find fine-tuning, we should consider the musings of someone with a PhD in ancient history. (A couple of articles on philosophy does not a philosopher make). Especially when Carrier has stated that there are six fundamental constants of nature, but can’t say what they are, can’t cite any physicist who believes that laughable claim, and refers to the constants of the standard model of particle physics (which every physicist counts as fundamental constants of nature) as “trivia”.

In this post, we will consider Carrier’s account of probability theory. In the next post, we will consider Carrier’s discussion of fine-tuning. The mathematical background and notation of probability theory were given in a previous post, and follow the discussion of Jaynes. (Note: probabilities can be either p or P, and both an overbar \bar{A} and tilde \sim A denote negation.)

Probability theory, a la Carrier

I’ll quote Carrier at length.

Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those numbers], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.

We’re off to a dubious start. Bayes’ theorem, as the name suggests, is a theorem, not an argument, and certainly not a definition. Also, Carrier seems to be saying that P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b) are the premises from which one formally proves Bayes’ theorem. This fails to understand the difference between the derivation of a theorem and the terms in an equation. Bayes’ theorem is derived from the axioms of probability theory – Kolmogorov’s axioms or Cox’s theorem are popular starting points. Any necessity in Bayes’ theorem comes from those axioms, not from the four numbers P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b). (more…)
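To make the distinction concrete: the four numbers Carrier lists are the inputs one plugs into Bayes’ theorem, not premises from which the theorem is proved. A minimal sketch, with illustrative values of my own choosing:

```python
# The four terms P(h|b), P(~h|b), P(e|h.b), P(e|~h.b) are inputs to the
# formula, not premises of its proof. Values below are illustrative.

def bayes_posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(h|e.b) via Bayes' theorem. P(~h|b) = 1 - P(h|b), so three of
    Carrier's four numbers determine the fourth."""
    p_not_h = 1.0 - p_h
    p_e = p_e_given_h * p_h + p_e_given_not_h * p_not_h  # P(e|b)
    return p_e_given_h * p_h / p_e

# A 10% prior on h, with evidence e five times more likely if h is true:
post = bayes_posterior(p_h=0.1, p_e_given_h=0.5, p_e_given_not_h=0.1)
assert abs(post - 0.05 / 0.14) < 1e-12  # roughly 0.357
```

The theorem itself is proved once and for all from the axioms of probability theory; changing these four numbers changes the answer, not the validity of the formula.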


I recently read philosopher of science Tim Maudlin’s book Philosophy of Physics: Space and Time and thought it was marvellous, so I was expecting good things when I came to read Maudlin’s article for Aeon Magazine titled “The calibrated cosmos: Is our universe fine-tuned for the existence of life – or does it just look that way from where we’re sitting?“. I’ve got a few comments. Indented quotes below are from Maudlin’s article unless otherwise noted.

In a weekend?

Theories now suggest that the most general structural elements of the universe — the stars and planets, and the galaxies that contain them — are the products of finely calibrated laws and conditions that seem too good to be true. … The details of these sorts of calculations should be taken with a grain of salt. No one could sit down and rigorously work out an entirely new physics in a weekend.

Two quick things. “Theories” has a ring of “some tentative, fringe ideas” to the lay reader, I suspect. The theories on which one bases fine-tuning calculations are precisely the reigning theories of modern physics. These are not “entirely new physics” but the same equations (general relativity, the standard model of particle physics, stellar structure equations, etc.) that have time and again predicted the results of observations, now applied to different scenarios. I think Maudlin has underestimated both the power of order-of-magnitude calculations in physics and the effort that theoretical physicists have put into fine-tuning calculations. For example, Epelbaum and his collaborators, having developed the theory and tools to use supercomputer lattice simulations to investigate the structure of the C12 nucleus, wrote a few papers (2011, 2012) to describe their methods and show how their cutting-edge model successfully reproduces observations. They then used the same methods to investigate fine-tuning (2013). My review article cites upwards of a hundred papers like this. This is not a back-of-the-envelope operation, not starting from scratch, not entirely new physics, not a weekend hobby. This is theoretical physics.

Telling your likelihood from your posterior

It can be unsettling to contemplate the unlikely nature of your own existence … Even if your parents made a deliberate decision to have a child, the odds of your particular sperm finding your particular egg are one in several billion. … after just two generations, we are up to one chance in 10^27. Carrying on in this way, your chance of existing, given the general state of the universe even a few centuries ago, was almost infinitesimally small. You and I and every other human being are the products of chance, and came into existence against very long odds.

The slogan I want to invoke here is “don’t treat a likelihood as if it were a posterior”. That’s a bit too jargon-y. The likelihood is the probability of what we know, assuming that some theory is true. The posterior is the reverse: the probability of the theory, given what we know. It is the posterior that we really want, since it reflects our situation: the theory is uncertain, the data is known. The likelihood can help us calculate the posterior (using Bayes’ theorem), but in and of itself, a small likelihood doesn’t mean anything. The calculation Maudlin alludes to above is a likelihood: what is the probability that I would exist, given that the events that led to my existence came about by chance? The reason this small likelihood doesn’t imply that the posterior (the probability of my existence by chance, given my existence) is small is that the theory has no comparable rivals. Brendon has explained this point elsewhere.
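A toy calculation makes the slogan concrete. All numbers below are illustrative placeholders, not estimates of anything:

```python
# A tiny likelihood need not mean a tiny posterior: what matters is
# whether any rival theory explains the data better.

def posterior(prior, likelihood, rival_prior, rival_likelihood):
    """P(theory | data) for two exhaustive rival theories,
    via Bayes' theorem."""
    evidence = likelihood * prior + rival_likelihood * rival_prior
    return likelihood * prior / evidence

# p(I exist | chance) is minuscule...
p = posterior(prior=0.999, likelihood=1e-27,
              rival_prior=0.001, rival_likelihood=1e-27)

# ...but the rival explains my existence no better, so the posterior
# p(chance | I exist) simply tracks the prior.
assert abs(p - 0.999) < 1e-9
```

The likelihoods cancel when the rivals fare equally; a small likelihood only counts against a theory when some competitor assigns the data a larger one.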


