Archive for December, 2013

I’ve read two of Daniel Dennett’s books, and while I enjoyed them at the time I find myself unable to remember what they were about, what their arguments were, or indeed any memorable passages. Maybe it’s just me, but I remember almost nothing from “Freedom Evolves”.

I’ve just watched one of Dennett’s TED talks, having been pointed there by 3quarksdaily. The title of the talk is “The Illusion of Consciousness”. Maybe I’m being thick, but after 20 minutes I’m left with this question: what does any of this have to do with consciousness at all, let alone with showing it to be an illusion? Before I move on, I should stress that I’m no kind of philosopher of mind or neuroscientist. I’m not even particularly well-read in the popular literature of these fields. Comments, please!

What I’m going to try to do today is to shake your confidence … that you know your own, inner-most mind, that you are, yourselves, authoritative about your own consciousness. …

Somehow we have to explain how, when you put together teams, armies, battalions, of hundreds of millions of little robotic unconscious cells … the result is colour, content, ideas, memories, history. And somehow all that concept [content?] of consciousness is accomplished by the busy activity of those hordes of neurons.

So we’re off to a good start. The hard problem of consciousness is to explain why certain collections of cells become conscious at all. Dennett particularly wants to question whether we really know our own conscious selves. Good. What is his method?

How many of you here, if some smart alec starts telling you how a particular magic trick is done, want to block your ears and say, “I don’t want to know. Don’t take the thrill of it away. I’d rather be mystified. Don’t tell me the answer.” A lot of people feel that way about consciousness, I’ve discovered. I’m sorry if I impose some clarity, some understanding on you. You better leave now if you don’t want to know these tricks.

Method: condescension. He’s going to smug those illusions right out of us.

The example is wrong. I don’t want you to tell me how a magic trick is done for the same reason I don’t want the stranger on the train to lean over and give me crossword answers. It’s a puzzle. The fun is thinking about it yourself. No one says “I don’t want the crossword answers. I just want the mystery of the empty squares.”

Note the implicit ad hominem. Anyone who disagrees with Dennett is weak-minded, a blissful ignoramus. Actually, those who criticised books such as Dennett’s “Consciousness Explained” usually complained that it failed to explain consciousness.

I’m not going to explain it all to you. … You know the sawing the lady in half trick? The philosopher says “I’m going to explain to you how that’s done. You see  – the magician doesn’t really saw the lady in half. He merely makes you think that he does.” How does he do that? “Oh, that’s not my department”.

This is all very amusing, and delivered with a twinkle in the eye. But the message of the metaphor is this: brace yourself for some bald assertion. I’ll tell you what follows from my assumptions, but don’t expect any evidence.


I thought I was done with Richard Carrier’s views on the fine-tuning of the universe for intelligent life (Part 1, Part 2). And then someone pointed me to this. It comes in response to an article by William Lane Craig. I’ve critiqued Craig’s views on fine-tuning here and here. The quotes below are from Carrier unless otherwise noted.

[H]e claims “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range,” but that claim has been refuted–by scientists–again and again. We actually do not know that there is only a narrow life-permitting range of possible configurations of the universe. As has been pointed out to Craig by several theoretical physicists (from Krauss to Stenger), he can only get his “narrow range” by varying one single constant and holding all the others fixed, which is simply not how a universe would be randomly selected. When you allow all the constants to vary freely, the number of configurations that are life permitting actually ends up respectably high (between 1 in 8 and 1 in 4: see Victor Stenger’s The Fallacy of Fine-Tuning).

I’ve said an awful lot in response to that paragraph, so let’s just run through the highlights.

  • “Refuted by scientists again and again”. What, in the peer-reviewed scientific literature? I’ve published a review of the scientific literature, 200+ papers, and I can only think of a handful that oppose this conclusion, and piles and piles that support it. Here are some quotes from non-theist scientists. For example, Andrei Linde says: “The existence of an amazingly strong correlation between our own properties and the values of many parameters of our world, such as the masses and charges of electron and proton, the value of the gravitational constant, the amplitude of spontaneous symmetry breaking in the electroweak theory, the value of the vacuum energy, and the dimensionality of our world, is an experimental fact requiring an explanation.” [emphasis added.]

  • “By several theoretical physicists (from Krauss to Stenger)”. I’ve replied to Stenger. I had a chance to talk to Krauss briefly about fine-tuning but I’m still not sure what he thinks. His published work on anthropic matters doesn’t address the more general fine-tuning claim. Also, by saying “from” and “to”, Carrier is trying to give the impression that a great multitude stands with his claim. I’m not even sure if Krauss is with him. I’ve read loads on this subject and only Stenger defends Carrier’s point, and in a popular (ish) level book. On the other hand, Craig can cite Barrow, Carr, Carter, Davies, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, and Wilczek. (See here). With regards to the claim that “the fundamental constants and quantities of nature must fall into an incomprehensibly narrow life-permitting range”, the weight of the peer-reviewed scientific literature is overwhelmingly with Craig. (If you disagree, start citing papers).

  • “He can only get his “narrow range” by varying one single constant”. Wrong. The very thing that got this field started was physicists noting coincidences between a number of constants and the requirements of life. Only a handful of the 200+ scientific papers in this field vary only one variable. Read this.

  • “1 in 8 and 1 in 4: see Victor Stenger”. If Carrier is referring to Stenger’s program MonkeyGod, then he’s kidding himself. That “model” has 8 high school-level equations, 6 of which are wrong. It fails to understand the difference between an experimental range and a possible range, which is fatal to any discussion of fine-tuning. Assumptions are cherry-picked. Crucial constraints and constants are missing. Carrier has previously called MonkeyGod “a serious research product, defended at length in a technical article”. It was published in a philosophical journal of a humanist society and in a popular-level book; it would be laughed out of any scientific journal. MonkeyGod is a bad joke.

And even those models are artificially limiting the constants that vary to the constants in our universe, when in fact there can be any number of other constants and variables.

In all the possible universes we have explored, we have found that only a tiny fraction would permit the existence of intelligent life. There are other possible universes that we haven’t explored. This is only relevant if we have some reason to believe that the trend we have observed until now will be miraculously reversed just beyond the horizon of what we have explored. In the absence of such evidence, we are justified in concluding that the possible universes we have explored are typical of all the possible universes. In fact, by beginning in our universe, known to be life-permitting, we have biased our search in favour of finding life-permitting universes.


Last time, we looked at historian Richard Carrier’s article, “Neither Life nor the Universe Appear Intelligently Designed”. We found someone who preaches Bayes’ theorem but thinks that probabilities are frequencies, says that likelihoods are irrelevant to posteriors, and jettisons his probability principles at his leisure. In this post, we’ll look at his comments on the fine-tuning of the universe for intelligent life. Don’t get your hopes up.

Simulating universes

Here’s Carrier.

Suppose in a thousand years we develop computers capable of simulating the outcome of every possible universe, with every possible arrangement of physical constants, and these simulations tell us which of those universes will produce arrangements that make conscious observers (as an inevitable undesigned by-product). It follows that in none of those universes are the conscious observers intelligently designed (they are merely inevitable by-products), and none of those universes are intelligently designed (they are all of them constructed merely at random). Suppose we then see that conscious observers arise only in one out of every 10^{1,000,000} universes. … Would any of those conscious observers be right in concluding that their universe was intelligently designed to produce them? No. Not even one of them would be.

To see why this argument fails, replace “universe” with “arrangement of metal and plastic” and “conscious observers” with “driveable cars”. Suppose we could simulate the outcome of every possible arrangement of metal and plastic, and these simulations tell us which arrangements produce driveable cars. Does it follow that none of those arrangements could have been designed? Obviously not. This simulation tells us nothing about how actual cars are produced. The fact that we can imagine every possible arrangement of metal and plastic does not mean that every actual car is constructed merely at random. This wouldn’t even follow if cars were in fact constructed by a machine that produced every possible arrangement of metal and plastic, since the machine itself would need to be designed. The driveable cars it inevitably made would be the product of design, albeit via an unusual method.

Note a few leaps that Carrier makes. He leaps from bits in a computer to actual universes that contain conscious observers. He leaps from simulating every possible universe to producing universes “merely at random”. As a cosmological simulator myself, I can safely say that a computer program able to simulate every possible universe would require an awful lot of intelligent design. Carrier also seems to assume that a random process is undesigned. Tell that to these guys. Random number generators are a common feature of intelligently designed computer programs. This argument is an abysmal failure.
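The point that randomness and design happily coexist is familiar to anyone who writes software. A minimal sketch of the idea (the code and its criterion are my own illustration, not anything Carrier or I have published): a deliberately designed program that uses random sampling internally to find "working" arrangements.

```python
import random

# A designed search procedure that uses randomness internally.
# Even the randomness is reproducible by design: we seed the generator.
random.seed(42)

def driveable(arrangement):
    """Stand-in design criterion: an arrangement 'works' if its parts
    sum to more than 2.5 (purely illustrative)."""
    return sum(arrangement) > 2.5

# Randomly generate 1000 candidate "arrangements" of 5 parts each,
# then keep only those that satisfy the criterion.
samples = [[random.random() for _ in range(5)] for _ in range(1000)]
working = [a for a in samples if driveable(a)]

print(len(working) > 0)  # True: the designed random search finds working designs
```

The outputs of the random sampling are the product of design, because the sampler, the criterion, and the filter all are; randomness in the method does not make the result undesigned.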

How to Fail Logic 101

Carrier goes on …


After a brief back and forth in a comments section, I was encouraged by Dr Carrier to read his essay “Neither Life nor the Universe Appear Intelligently Designed”. I am assured that the title of this essay will be proven “with such logical certainty” that all opposing views should be wiped off the face of the Earth.

Dr Richard Carrier is a “world-renowned author and speaker”. That quote comes from none other than the world-renowned author and speaker, Dr Richard Carrier. Fellow atheist Massimo Pigliucci says,

The guy writes too much, is too long winded, far too obnoxious for me to be able to withstand reading him for more than a few minutes at a time.

I know the feeling. When Carrier’s essay comes to address evolution, he recommends that we “consider only actual scholars with PhD’s in some relevant field”. One wonders why, when we come to consider the particular intersection of physics, cosmology and philosophy wherein we find fine-tuning, we should consider the musings of someone with a PhD in ancient history. (A couple of articles on philosophy does not a philosopher make). Especially when Carrier has stated that there are six fundamental constants of nature, but can’t say what they are, can’t cite any physicist who believes that laughable claim, and refers to the constants of the standard model of particle physics (which every physicist counts as fundamental constants of nature) as “trivia”.

In this post, we will consider Carrier’s account of probability theory. In the next post, we will consider Carrier’s discussion of fine-tuning. The mathematical background and notation of probability theory were given in a previous post, and follow the discussion of Jaynes. (Note: probabilities can be either p or P, and both an overbar \bar{A} and tilde \sim A denote negation.)

Probability theory, à la Carrier

I’ll quote Carrier at length.

Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those numbers], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.

We’re off to a dubious start. Bayes’ theorem, as the name suggests, is a theorem, not an argument, and certainly not a definition. Also, Carrier seems to be saying that P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b) are the premises from which one formally proves Bayes’ theorem. This fails to understand the difference between the derivation of a theorem and the terms in an equation. Bayes’ theorem is derived from the axioms of probability theory – Kolmogorov’s axioms or Cox’s theorem are popular starting points. Any necessity in Bayes’ theorem comes from those axioms, not from the four numbers P(h|b), P(~h|b), P(e|h.b), and P(e|~h.b).
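For what it’s worth, the four quantities Carrier lists do combine into a posterior once you apply the theorem. A minimal numeric sketch (the numbers are illustrative assumptions of mine, not from Carrier’s essay):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(h|e.b) via Bayes' theorem, taking P(~h|b) = 1 - P(h|b)."""
    p_not_h = 1.0 - p_h
    numerator = p_e_given_h * p_h                     # P(e|h.b) P(h|b)
    evidence = numerator + p_e_given_not_h * p_not_h  # P(e|b), law of total probability
    return numerator / evidence

# Illustrative numbers: an even prior, evidence 9 times likelier under h than ~h.
print(posterior(0.5, 0.9, 0.1))  # 0.9
```

Note that the four numbers are inputs to the formula; the formula itself stands or falls with the axioms it is derived from.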


More about Bayes’ theorem; an introduction was given here. Once again, I’m not claiming any originality.

You can’t save a theory by stapling some data to it, even though this will improve its likelihood. Let’s consider an example.

Suppose, having walked into my kitchen, I know a few things.

D_1 = There is a cake in my kitchen.

D_2 = The cake has “Happy Birthday Luke!” on it, written in icing.

B = My name is Luke + Today is my birthday + whatever else I knew before walking to the kitchen.

Obviously, D_2 \Rightarrow D_1, i.e. D_2 presupposes D_1. Now, consider two theories of how the cake got there.

W = my Wife made me a birthday cake.

A = a cake was Accidentally delivered to my house.

Consider the likelihood of these two theories. Using the product rule, we can write:

p(D_1D_2 | WB) = p(D_2 | D_1 WB) p(D_1 | WB)

p(D_1D_2 | AB) = p(D_2 | D_1 AB) p(D_1 | AB)

Both theories are equally able to place a cake in my kitchen, so p(D_1 | WB) \approx p(D_1 | AB). However, a cake made by my wife on my birthday is likely to have “Happy Birthday Luke!” on it, while a cake chosen essentially at random could have anything or nothing at all written on it. Thus, p(D_2 | D_1 WB) \gg p(D_2 | D_1 AB). This implies that p(D_1D_2 | WB) \gg p(D_1D_2 | AB) and the probability of W has increased relative to A since learning D_1 and D_2.
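To make the comparison concrete, here is the product rule with some made-up numbers (the specific values are my assumptions, for illustration only; only the ratios matter):

```python
# Illustrative probabilities for the birthday-cake example.
p_D1_given_WB = 0.3      # wife leaves a birthday cake in the kitchen
p_D1_given_AB = 0.3      # assumed comparable: both theories can supply a cake
p_D2_given_D1WB = 0.8    # her cake says "Happy Birthday Luke!"
p_D2_given_D1AB = 0.001  # a randomly delivered cake happens to say exactly that

# Product rule: p(D1 D2 | theory, B) = p(D2 | D1, theory, B) * p(D1 | theory, B)
likelihood_W = p_D2_given_D1WB * p_D1_given_WB
likelihood_A = p_D2_given_D1AB * p_D1_given_AB

print(likelihood_W / likelihood_A)  # ≈ 800: the wife hypothesis wins comfortably
```

Because p(D_1 | WB) \approx p(D_1 | AB), those factors cancel, and the ratio of likelihoods is driven by the inscription term.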

So far, so good, and hopefully rather obvious. Let’s look at two ways to try to derail the Bayesian account.

Details Details

Before some ad hoc-ery, consider the following objection. We know more than D_1 and D_2, one might say. We also know,

D_3 = there is a swirly border of piped icing on the cake, with a precisely measured pattern and width.

Now, there is no reason to expect my wife to make me a cake with that exact pattern, so our likelihood takes a hit:

p(D_3 | D_1 D_2 WB) \ll 1 ~ \Rightarrow ~ p(D_1 D_2 D_3 | WB) \ll p(D_1D_2 | WB)

Alas! Does the theory that my wife made the cake become less and less likely, the closer I look at the cake? No, because there is no reason for an accidentally delivered cake to have that pattern, either. Thus,

p(D_3 | D_1 D_2 WB) \approx p(D_3 | D_1 D_2 AB)

And so it remains true that,

p(D_1 D_2 D_3 | WB) \gg p(D_1 D_2 D_3 | AB)

and the wife hypothesis remains the preferred theory. This is point 5 from my “10 nice things about Bayes’ Theorem” – ambiguous information doesn’t change anything. Additional information that lowers the likelihood of a theory doesn’t necessarily make the theory less likely to be true. It depends on its effect on the rival theories.

Ad Hoc Theories

What if we crafted another hypothesis, one that could better handle the data? Consider this theory.

A_D = a cake with “Happy Birthday Luke!” on it was accidentally delivered to my house.

Unlike A, A_D can explain both D_1 and D_2. Thus, the likelihoods of A_D and W are about equal: p(D_1D_2 | WB) \approx p(D_1D_2 | A_DB). Does the fact that I can modify my theory to give it a near perfect likelihood sabotage the Bayesian approach?

Intuitively, we would think that however unlikely it is that a cake would be accidentally delivered to my house, it is much less likely that it would be delivered to my house and have “Happy Birthday Luke!” on it. We can show this more formally, since A_D is a conjunction of propositions A_D = A A', where

A' = The cake has “Happy Birthday Luke!” on it, written in icing.

But the statement A' is simply the statement D_2. Thus A_D = A D_2. Recall that, for Bayes’ Theorem, what matters is the product of the likelihood and the prior. Thus,

p(D_1 D_2 | A_D B) ~ p(A_D | B)

= p(D_1 D_2 | A D_2 B) ~ p(A D_2 | B)

= p(D_1|A D_2B) ~ p(D_2|AB) ~ p(A|B)

= p(D_1 D_2 | A B) ~ p(A | B)

Thus, the product of the likelihood and the prior is the same for the ad hoc theory A_D and the original theory A. You can’t win the Bayesian game by stapling the data to your theory. Ad hoc theories, by purchasing a better likelihood at the expense of a worse prior, get you nowhere in Bayes’ theorem. It’s the postulates that matter. Bayes’ Theorem is not distracted by data smuggled into the hypothesis.
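The cancellation above can be checked numerically. The following sketch uses hypothetical numbers (my assumptions, chosen only to illustrate the algebra):

```python
# Hypothetical probabilities, for illustration only.
p_A = 0.01             # p(A|B): prior for an accidental delivery
p_D2_given_AB = 0.001  # p(D2|AB): a random cake bears that exact inscription
p_D1_given_AD2B = 1.0  # p(D1|A D2 B): given the inscribed cake, a cake is present

# Original theory A: likelihood * prior.
lik_prior_A = (p_D1_given_AD2B * p_D2_given_AB) * p_A

# Ad hoc theory A_D = A & D2: near-perfect likelihood, but a deflated prior.
lik_AD = p_D1_given_AD2B          # p(D1 D2 | A_D B): A_D entails D2
prior_AD = p_D2_given_AB * p_A    # p(A_D|B) = p(D2|AB) p(A|B)
lik_prior_AD = lik_AD * prior_AD

print(abs(lik_prior_A - lik_prior_AD) < 1e-15)  # True: the products agree
```

Whatever likelihood A_D gains by absorbing D_2, its prior loses by exactly the same factor.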

Too strong?

While all this is nice, it does assume rather strong conditions. It requires that the theory in question explicitly includes the evidence: if we look closely at the statements that make up the theory T, we will find the data D amongst them, i.e. we can write the theory as T = T' D. A theory can be jerry-rigged without being this obvious. I’ll have a closer look at this in a later post.


Continuing on my series on Bayes’ Theorem, recall that the question of any rational investigation is this: what is the probability of the theory of interest T, given everything that I know K? Thanks to Bayes’ theorem, we can take this probability p(T | K) and break it into manageable pieces. In particular, we can divide K into background information B and data D. Remember that this is just convenience, and in particular that B and D are both assumed to be known.

Suppose one calculates p(T | DB) for some theory, data and background information. Think of it as a practice problem in a textbook. This calculation, in and of itself, knows nothing of the real world. So what follows? We can think of the probability as a conditional if-then statement:

1. If DB, then the probability of T is p(T | DB).

To draw a conclusion from this, we must add the premise.

2. DB.

Only then can we conclude,

3. The probability of T is p(T | DB).

But wait a minute … the whole point of this exercise was to reason in the face of uncertainty. Where do we get the nerve to simply assert 2, that DB is true? Where is the inevitable uncertainty of measurement? Isn’t treating the data as certain hopelessly idealized? Shouldn’t we take into account how probable DB is? But there are no raw probabilities, so with respect to what should we calculate the probability of DB? We’re headed for an infinite regress if we keep asking for probabilities. How do we get premise 2? Are probabilities all merely hypothetical?

