[Edit, 4/2/2012: I’ve written a more complete critique of Stenger’s book The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us. It’s posted on the arXiv.]
This post is part of a series that responds to internet articles on the fine tuning of the universe. Here I will respond to Prof. Victor Stenger, who is a particle physicist at the University of Hawaii and known for his defence of atheism. Stenger, according to Wikipedia, is currently writing a book on fine-tuning. Here I will respond to a point he made in a debate with Dr. William Lane Craig.
Stenger proposes the following counterexample to the claim that interesting conclusions can be drawn from the improbability of the fine-tuning of the constants/initial conditions/laws of nature:
Low probability events happen every day. What’s the probability that my distinguished opponent exists? You have to calculate the probability that a particular sperm united with a particular egg, then multiply that by the probability that his parents met, and then repeat that calculation for his grandparents and all his ancestors going back to the beginning of life on Earth. Even if you stop the calculation with Adam and Eve, you are going to get a fantastically small number. To use words that Dr Craig has used before, “Improbability is multiplied by improbability by improbability until our minds are reeling in incomprehensible numbers.” Well, Dr Craig has a mind-reeling, incomprehensibly low probability – a priori probability – for existing. Yet here he is before us today.
Stenger’s argument is that sometimes we cannot draw interesting conclusions from low probabilities. The most obvious problem with Stenger’s argument is that sometimes we do, in fact, draw interesting conclusions from low probabilities. For example, British illusionist Derren Brown claimed that he could predict the lottery, and then appeared to do so on national television. From the extremely small probability that he would predict the correct numbers by chance alone, we rightly infer that he didn’t just guess and get lucky.
So what’s the difference between your existence and a lotto draw? The difference is the existence of an independently specified target or pattern.
Let each of the 14 million possible lotto predictions be represented by a ping-pong ball. Place all 14 million balls in a big bag, and shake well. Brown (apparently) reaches blindly into the bag and pulls out the one winning ball. Why is this amazing? It is not just that his ball is unlikely – any ball is unlikely. It is this low probability coupled with the fact that the winning ball is specified independently of Brown’s choice. While the balls are all still in the bag, one is a winner (independent of Brown’s choice) and the rest are losers. He didn’t just pick an unlikely ball; he picked the winning ball. He can’t pull out just any ball and proclaim: “I win”.
Now fill the bag with balls representing the vast number of possible outcomes of different egg-sperm combinations. The hand of fate goes into the bag and out you come. Why isn’t this anything special? Because there is nothing to single out this ball, improbable though it is, while it is still in the bag. We only know who “you” are after you come out of the bag. “You” are not specified independently of the choice of ball. Whatever ball comes out of the bag, the corresponding person can proclaim: “I win”. (In this game, you win by existing.) You can’t lose!
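The contrast between the two bags can be put in runnable form. Below is a toy Python simulation (the 14 million figure is from the lottery example; the function names and trial counts are illustrative):

```python
import random

N = 14_000_000  # roughly the number of possible lotto outcomes

def prespecified_hit_rate(target: int, trials: int = 100_000) -> float:
    """Estimate P(draw == target) when the target is fixed in advance."""
    hits = sum(random.randrange(N) == target for _ in range(trials))
    return hits / trials

def posthoc_hit_rate(trials: int = 100_000) -> float:
    """Estimate P(draw == target) when the 'target' is read off the draw."""
    hits = 0
    for _ in range(trials):
        drawn = random.randrange(N)
        target = drawn  # the specification is written after the event
        hits += (drawn == target)
    return hits / trials
```

With a prespecified target the estimated hit rate is essentially zero (about 1 in 14,000,000 per draw); with the post-hoc “target” it is exactly 1, which is why “you win by existing” carries no evidential weight.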
Let me illustrate the difference another way. I shoot an arrow at a huge wall, 100 metres away. When the impact zone is inspected, we find that the arrow has hit the centre of a small red spot. The probability of hitting this point on the wall is tiny. Am I a talented archer? It depends. If I proclaimed: “watch me hit that red spot” before firing the arrow, then I’m the new Robin Hood. However, if I shot the arrow and then took some red paint and painted the spot around my arrow’s impact point, then you can’t reach any conclusion about my archery skills.
So which of these cases does the fine-tuning of the universe resemble? Potential universes can be marked “intelligent life can/cannot live here” independently of the properties of the actual universe. This universe is not special because it is ours. It is special because it can support intelligent life. When we consider the fine-tuning of the universe, we are not considering the probability of this universe. We are considering the probability of a universe that supports intelligent life. Choose a different sperm, you get a different person. Choose a different universe, and you almost certainly do not get a different form of intelligent life. You get no intelligent life at all. The fine-tuning of the universe involves a low probability event and an independently specified target, and thus cannot be dismissed as just another low probability event. Stenger’s counterexample misses the target.
More of my posts on fine-tuning are here.
What, precisely, do you mean by “independently-specified”? Dembski has attempted to formalize this, but in my paper with Elsberry we show in detail why his attempt fails.
Then there is also the problem of what exactly the specification should be. Is the proper specification for William Lane Craig a description of William Lane Craig, or an American, or a male human, or a primate, or a biological entity? A posteriori, you can make a specification as detailed as you like to make the probability as small as you like.
Actual statisticians do not speak about “independent specification”.
How are you defining “intelligent life”? Is it synonymous with Homo sapiens, or are you aiming for a broader-based approach to intelligence?
It seems to me that what is lacking is a clear definition of the intelligence that one is fine-tuning the universe for.
Oarobin: I’m not defining “intelligent life” synonymously with Homo sapiens. I don’t think that the definition of what really constitutes intelligent life is all that relevant. Just take a range of definitions, and then define the range of “intelligent-life-permitting” universes as the union of them all. This set will still be very small when compared to the range of possible universes.
Jeffrey: Greetings, Professor Shallit. I’ve just printed your article. I’ll try to read it sometime soon. Just out of curiosity, how would you analyse the difference between the cases I presented in my article? Why is the low probability relevant for Derren Brown but not for the sperm+egg?
It’s common practice to do probabilistic reasoning on specifications announced ahead of time, as in the case of Derren Brown. It’s less obvious that this kind of reasoning on specifications announced after the fact is justifiable.
What kinds of specifications do you allow? If you don’t allow me to say “the sperm that creates Luke Barnes”, would you allow “the sperm that creates an astrophysicist”? What, precisely, differentiates these two cases?
Mathematicians have considered such issues for 50 years. The universally-accepted solution is that it makes no sense to have a binary decision between “acceptable pattern” and “phony pattern”, the way Dembski does. The universally-accepted solution ranks patterns by the number of bits it takes to completely specify the thing being specified, and the shorter, the better. This is called Kolmogorov complexity.
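Kolmogorov complexity itself is uncomputable, but the ranking idea can be sketched with an off-the-shelf compressor as a crude upper bound (a Python sketch; the 50-symbol strings are illustrative):

```python
import os
import zlib

def approx_description_length(s: bytes) -> int:
    """Rough upper bound on the bits needed to specify s: its
    compressed size. (True Kolmogorov complexity is uncomputable;
    a general-purpose compressor is only a crude stand-in.)"""
    return 8 * len(zlib.compress(s, 9))

patterned = b"H" * 50     # "50 heads": matches a very short pattern
typical = os.urandom(50)  # a typical string: no short pattern
```

The patterned string compresses far below the typical one, i.e. it has a much shorter (approximate) specification, and that shortness is what makes matching it noteworthy.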
There’s also another problem with your reasoning in your CommonSense Atheism interview. You seem to imply that the two alternatives are “design” and “uniform probability”. But this is not the case. We could have a wildly non-uniform distribution; or we could have an underlying causality chain that is not apparent.
Take the case of Derren Brown. One could propose the hypothesis that he knew the program to choose the numbers had a bug in it that on a certain day would provide the numbers he “predicted”. This would not be cheating in the same sense implied, yet how would you evaluate the probability of this hypothesis?
In the same way, getting 12 royal flushes in a row is not proof of cheating. I can think of many additional hypotheses that could account for this, such as someone else set up the decks that way, or the decks come that way automatically from the factory, etc.
>>Is the proper specification for William Lane Craig a description of William Lane Craig, or an American, or a male human, or a primate, or a biological entity? A posteriori, you can make a specification as detailed as you like to make the probability as small as you like.<<
All of those. Everything that you know, that might be relevant. Insert nth repetition of me recommending Radford Neal's paper. What matters is not the tiny probability of what happened given some hypothesis, but the probability of hypotheses that we're interested in, given what we know. I still haven't overcome the suspicion that all sides on fine tuning arguments are asking the wrong questions.
All of those… What matters is not the tiny probability of what happened given some hypothesis…
I am not talking about the probability of “what happened given some hypothesis”; rather, I am trying to show why Dembski’s account of specification is incoherent. In Dembski’s account, choosing the specification can dramatically affect the probability he computes. I apologize if I was unclear.
I’m sorry if I came in late to the discussion, but I don’t know which paper of Radford Neal you are referring to.
>>Actual statisticians do not speak about “independent specification”.<<
>>I’m sorry if I came in late to the discussion, but I don’t know which paper of Radford Neal you are referring to.<<
http://arxiv.org/abs/math/0608592
“It’s less obvious that this kind of reasoning on specifications announced after the fact is justifiable.” – surely the important thing is that the specification is independent of the event. Giving it before the event is a sure-fire way to accomplish this, but it is not the only way.
I’m not really following Dembski’s account, so your criticisms of him are irrelevant. (Curiously, however, he does spend a lot of his “Specification: The Pattern That Signifies Intelligence” paper discussing Kolmogorov complexity.) I believe that Brendon is correct in considering broad classes of events.
‘You seem to imply that the two alternatives are “design” and “uniform probability”.’ I have no idea how I gave that impression. There are plenty of other hypotheses worth considering – deeper laws, multiverse etc. 12 royal flushes might not be “proof” of cheating, but it’s a pretty strong hint that something is amiss, i.e. the hypothesis “these cards were dealt at random from a fair deck” is probably false. I submit that the fine-tuning of the universe for intelligent life is a strong indication that something is up.
“Actual statisticians do not speak about ‘independent specification’.”
Actually they do, just without those words. For example a model selection problem where you’re interested in whether a parameter x is zero or nonzero. You would do this if there was some REASON, independent of the data, for x=0 being singled out for special attention. “Independent specification” means the difference between having a prior p(x) with a delta function in it, or without a delta function. And that can make a big difference.
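This point can be sketched with a toy model-selection computation. The sketch below is hypothetical (one observation y ~ N(x, 1), H0: x = 0 against a flat prior on [-10, 10]); it only illustrates how a delta-function prior changes the answer:

```python
from math import exp, pi, sqrt

def normal_pdf(y: float, mu: float, sigma: float = 1.0) -> float:
    return exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def bayes_factor(y: float, half_width: float = 10.0, n: int = 20_000) -> float:
    """Bayes factor for H0: x = 0 (a delta-function prior) against
    H1: x uniform on [-W, W], with one observation y ~ N(x, 1).
    The marginal likelihood under H1 is a midpoint-rule integral."""
    m0 = normal_pdf(y, 0.0)
    step = 2 * half_width / n
    m1 = sum(normal_pdf(y, -half_width + (i + 0.5) * step)
             for i in range(n)) / n
    return m0 / m1
```

Data near the singled-out value give a Bayes factor well above 1 in favour of x = 0; data far from it give a factor far below 1. Without some reason, independent of the data, to single out x = 0, the delta-function hypothesis would never be on the table at all.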
“I’m sorry if I came in late to the discussion, but I don’t know which paper of Radford Neal you are referring to.”
http://arxiv.org/abs/math/0608592
Also I think the quality of this blog will be inversely proportional to the amount of attention we pay to William Dembski.
surely the important thing is that the specification is independent of the event.
Ok, give us your formal definition of “independent”.
Firstly, we can usefully apply the concept of an independent specification to the scenario in the original post even in the absence of a general, formal definition.
Let’s think about prespecification. Why does this guarantee that we are not simply choosing our pattern (T) to fit the data/event/outcome (E)? Because the event hasn’t happened yet. There is no way that knowledge of E has prejudiced our specification of T. We must choose T without any knowledge of how things actually turned out.
Thus we come to our definition:
* T is an independent specification if it’s formulation does not take E into account.
In the case in the original post:
* the specification “you” is dependent on (in fact, identical to) the event “you”.
* the specification “the winning lotto numbers” is independent of the event “Brown’s prediction of the lotto numbers”.
* T is an independent specification if it’s [sic] formulation does not take E into account.
That doesn’t work, as we show in our paper, for the following simple reason: if T does “not take E into account”, then there is no way to determine whether E matches T.
To make it concrete, suppose I witness E, a run of 50 heads. Now I try to create a specification T for E that “does not take E into account”. What is it? Is it “all heads”? Then it took E into account, because otherwise how could I know to create T? Is it “all the same”? Etc.
Computer scientists considered and long ago rejected vague formulations in complexity theory like “compute A without using B”. For example, consider the recent methods to compute the i’th digit of Pi “without using previous digits”. Computer scientists reject this kind of assertion because there is no way to make it precise; instead we would speak of computing the i’th digit of Pi using space bounded by some function f(n).
oh no … the apostrophe police! Bad grammar makes me [sic]!
The requirement is that E matches a specification T that can be given independently of E. The specification “50 heads in a row” can be given independently of the observation of 50 heads in a row.
But, I hear you say, given any sequence of flips, it is possible to specify a pattern that is the same as the exact sequence.
Hmmm … perhaps this is where we need a bit of Kolmogorov. The vast majority of 50-bit sequences will not be algorithmically compressible. These sequences could be considered to be specified independently of any particular coin tossing event. It seems like this idea would use algorithmic compressibility as an all-purpose (pre-)specification.
We could further specify that any sequence that is not algorithmically compressible would need to be prespecified before it can be used as evidence against a chance hypothesis. The classic example would be if, 50 times in a row, I predict the outcome of the next coin toss.
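The claim that the vast majority of 50-bit sequences are incompressible rests on a standard counting argument, which can be checked directly (a sketch; n and k are arbitrary):

```python
def max_fraction_compressible(n: int, k: int) -> float:
    """Upper bound on the fraction of n-bit strings that have some
    description shorter than n - k bits: there are at most
    2**(n - k) - 1 such descriptions, against 2**n strings."""
    return (2 ** (n - k) - 1) / 2 ** n
```

For n = 50, fewer than one string in a thousand can be compressed by even 10 bits, so a typical sequence really does have no specification much shorter than itself.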
I appreciate your comments on all this!
So, your definition of “independent specification” is: “The requirement is that E matches a specification T that can be given independently of E.”
That clears things up, and thanks for that penetrating insight!
perhaps this is where we need a bit of Kolmogorov.
Well, now you’re getting somewhere, but you’re not there yet. The point is that it is pointless to try to determine if specifications are “independent” or “dependent”. For any string there are lots of specifications, and some are shorter than others. We rate a specification based on how short it is. There are very few strings with short specifications, so if a long string matches a short specification, we can be skeptical of a uniform probability interpretation. All this has been known for a long time; see the work of my colleagues Ming Li and Paul Vitanyi.
Now we can see why “life-friendly universe” is a lousy specification. It doesn’t specify the universe uniquely, and it’s not clear how to make this specification concrete in a small number of bits.
‘It is pointless to try to determine if specifications are “independent” or “dependent”’.
I disagree, but this is probably just semantics. I’d say that algorithmic complexity (“if a long string matches a short specification” – nicely phrased!) is a precise and practical definition of independence – an all-purpose prespecification, if you will. It is precisely the independence of a prespecification that we are trying to achieve, but after the fact, so more care is required. I’ll have to track down the work of Li and Vitanyi – any more names I should be aware of?
I disagree that “life-friendly universe” is a lousy specification. You say that it is not unique – but I don’t see how that’s relevant. Why does a specification have to be unique? “10 royal flushes in a row” is not unique, neither is “highly algorithmically compressible”.
“It’s not clear how to make this specification concrete in a small number of bits.” That may be so, but surely the specification that a universe be able to evolve entities capable of rational thought is a very specific requirement to place on the space of possible universes. Think of it this way: suppose that human beings themselves were capable of universe creation. If we set some undergraduates the task of creating a universe capable of evolving beings who could discover the laws of the universe that they created, this would be a challenging task. This isn’t a vague requirement. And even if we expanded the “pass criteria” to any form of life (with any of the common definitions of life), we can still safely say that a student who simply guessed at random would have an extremely low chance of passing.
I’d say that algorithmic complexity (“if a long string matches a short specification” – nicely phrased!) is a precise and practical definition of independence – an all purpose prespecification, if you will.
Yes, and this idea (as I said) is old: look up Vitanyi’s paper on the universal distribution. But to really make it precise, you have to quantify what you mean by “short” and “long”.
You say that it is not unique – but I don’t see how that’s relevant.
Well, that’s one of the requirements of Kolmogorov complexity: the specification must uniquely determine the string. For example, “string of length 10” is a specification for 0110101000, but it doesn’t specify it uniquely. Otherwise you could use “string of length n” as a specification for any length-n string.
“10 royal flushes in a row” is not unique
No, but if you have a string of 50 cards representing 10 royal flushes in a row, it will be compressible by giving “royal flush” and the 10 starting suits.
This is an example of a more general phenomenon. If you have a description that specifies a subset S of length n-strings, then you can extend it to a unique specification of a particular element of S by adding at most log |S| bits. So a vague specification will need more bits to extend to a unique specification than a precise one.
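The bookkeeping here is easy to make concrete (a sketch using the royal-flush numbers above):

```python
from math import ceil, log2

def extension_bits(subset_size: int) -> int:
    """Extra bits needed to extend a specification that picks out a
    subset S of candidate strings into a specification of one
    particular element of S: at most ceil(log2(|S|))."""
    return ceil(log2(subset_size))
```

Each of the 10 flushes leaves only its suit open, so |S| = 4**10 and the vague description “10 royal flushes in a row” extends to a unique one with `extension_bits(4 ** 10) == 20` extra bits.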
we can still safely say that a student who simply guessed at random would have an extremely low chance of passing.
Seems like an unjustified assertion to me. Care to make it more rigorous, or are we just supposed to accept it?
I suppose the question here is about which facts should be considered surprising and which therefore cry out for an explanation, and which don’t. Specifically, should we consider the apparent fine-tuning of the fundamental constants surprising or not?
First of all, it seems clear that the answer depends on the context. So, for example, in the scientific context the apparent fine-tuning of the fundamental constants does not require any explanation whatsoever. Science discovers and mathematically models the order present in the physical phenomena it studies, and the fundamental constants simply represent part of the order discovered. Why that order is as it is and not otherwise, is not a scientific but a metaphysical question, for it concerns not the order present in phenomena but the nature of reality.
Coming back to what one should consider surprising, one idea is that facts with a low Kolmogorov complexity (or “complexity” henceforth) require explanation. (Incidentally, even though the Kolmogorov complexity is well defined it is not always feasible to estimate its value. So, for example, the values of the fundamental constants appear to be of high complexity, but if somebody should find a way to reduce them to one constant, or even to the properties of the number 2, then their complexity will be seen to be lower.) Now, in many cases the idea that low complexity is surprising and cries out for an explanation works well. So for example, the sequence 6666666666 of die throws has lower complexity than the sequence 3162234254, and therefore is more surprising and requires an explanation. Similarly a life form such as a mouse has lower complexity than a stone and is hence more surprising and requires more explanation. But there are counterexamples to this apparent rule. For example, a crystal has less complexity than a life form, but one would not say that its presence is more surprising than that of a mouse. If some SETI radio telescope received the signal 1011101111101111011000110011… (which is the concatenation of the first primes) it would be very surprising, whereas the sequence 1010101010101010101010101010… would not be, even though it has a lower complexity. Another counterexample: if future science were to reduce all fundamental constants to some properties of the number 2, then we would all consider this new state of affairs to be less surprising.
Philosopher George N. Schlesinger suggests that we should find surprising not the thing that is improbable but the kind of thing that is improbable. When we try to decide whether a fact is surprising, we should consider the set of all facts of the same kind. If some element of this set must obtain (or would obtain with high probability) then the original fact is not surprising. If it’s unlikely that any member of the set will obtain then the original fact is surprising. He mentions a nice example: If John Smith wins a lottery of one billion tickets, then it’s not surprising. If John Smith wins three lotteries of one thousand tickets each, then it’s surprising. I find this method works well in all previous examples, and also comports with our intuition that the apparent fine-tuning of the fundamental constants is surprising. Further, if we consider the set of all intelligence producing universes then it’s clearly not the case that one of them must obtain, so the fact that one did obtain is surprising and requires explanation.
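Schlesinger’s lottery numbers can be checked with a few lines (a sketch; it assumes each of the 1000 players holds one ticket in each of the three lotteries, and uses a union bound):

```python
def p_john_wins(tickets: int) -> float:
    """John's chance of winning one lottery with the given ticket count."""
    return 1 / tickets

def p_anyone_wins_three(players: int = 1000, tickets: int = 1000) -> float:
    """Union bound on the chance that ANY of the players sweeps
    all three 1000-ticket lotteries."""
    return min(1.0, players * (1 / tickets) ** 3)
```

John’s own chance is 10^-9 in both scenarios, but in the single big lottery some winner of the relevant kind must obtain, while even the whole reference class of 1000 players has only about a one-in-a-million chance of containing a triple winner.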
Finally, perhaps one could think that the problem of what should be considered surprising is one that does not allow for a mechanistic answer. We know a surprise when we see one. A fact is surprising simply when it is unexpected given one’s background beliefs. Confronted with a surprise one must either find an explanation, or else modify one’s background beliefs. Given naturalism, which entails that there is no metaphysical principle that favors intelligence, the fact that our universe’s constants appear to be fine-tuned for intelligence is quite surprising. Incidentally the universe favors intelligence in a way that goes beyond the apparent fine-tuning of its fundamental constants: The universe is ordered in an intelligible manner (a necessary condition for scientific knowledge to be possible) and is moreover ordered in a manner that allows the evolution of intelligence capable of discovering that very order (another necessary condition for scientific knowledge to be possible).
Well, I think the preceding analysis is quite confused. Witnessing a long event with low Kolmogorov complexity means two things: (a) it is unlikely to have been generated by a random series of coin flips with uniform probability and (b) there is potentially a simple explanation for it, as it could have been produced by a relatively simple program (or if you prefer, “computational process”) acting on a short input (or, if you prefer, “initial conditions”).
(a) is not widely applicable, as most events that we witness are not, even potentially, the result of a random series of coin flips with uniform probability.
(b) I hardly find surprising, as there are all sorts of tools available in our universe to create simple computational processes.
So when we witness 666666666666666 as a series of die tosses, it is not the low Kolmogorov complexity per se that causes us to find it surprising, but because it is in the context of contrasting it with our default assumption; namely, that it was obtained by a random series of tosses, each with probability 1/6.
Indeed, it is precisely those events with low Kolmogorov complexity that are likely to be witnessed if one accepts the “universal distribution” (see the paper by Li and Vitanyi on this subject) as valid.
If some SETI radio telescope received the signal 1011101111101111011000110011… (which is the concatenation of the first primes) it would be very surprising
I see this kind of claim made all the time, but I see no real justification for it. For example, there is a 2001 paper by Goles, Schulz, and Markus that show how the prime numbers can occur naturally in a predator-prey model. Given this, I don’t find it implausible that some relatively simple natural process could generate signals encoding the prime numbers.
There is a theory of “computational depth” put forward by Bennett. Perhaps witnessing a deep string is something requiring more explanation than a non-deep string.
We know a surprise when we see one.
That’s scarcely a basis for scientific or philosophical inquiry. It sounds like Stephen Wolfram’s definition of complexity, which basically amounts to “something is complex if the human visual system finds it so”, for which Wolfram was justly derided.
It’s kind of interesting that unintelligent naturalistic processes can produce phenomena that display prime numbers, but it’s still a fact that receiving a radio signal with the concatenation of the binary representation of prime numbers would make SETI hunters very excited in a way that receiving a periodic loop of 0s and 1s would not. That was my point. Incidentally, I too disagree with the idea that natural data of low Kolmogorov complexity should be considered surprising; that’s why I gave several counterexamples.
In my judgment George Schlesinger’s idea comes closest to modelling what it is we call “surprising”. I’d like to know if other posters here see a problem with his idea.
Jeffrey Shallit writes: “There is a theory of “computational depth” put forward by Bennett. Perhaps witnessing a deep string is something requiring more explanation than a non-deep string.”
Well, that’s an idea, but I don’t see how one can estimate the computational depth of the *fundamental* physical constants.
Finally, Jeffrey Shallit objects to my statement “We know a surprise when we see one”, but I qualified what I meant in the next sentence “A fact is surprising simply when it is unexpected given one’s background beliefs”. It seems to me that’s exactly what Jeffrey Shallit means when he writes “So when we witness 666666666666666 as a series of die tosses [we find it surprising because we] contrast it with our default assumptions [of a fair die]”. So we find that something is surprising when it does not comport with our previous assumptions. The relevant question then is this: Doesn’t the apparent fine-tuning of the fundamental constants contrast with naturalism’s default assumption that the physical order that science discovers has not been purposefully designed?
“We can still safely say that a student who simply guessed at random would have an extremely low chance of passing.”
Some examples:
* The matter density at the Planck time must be right to 1 part in 10^55. Too much: universe recollapses in a big crunch. Too little: universe expands too fast for galaxies/stars/planets to form. The alternative is to have your universe inflate, but this process will need to be fine-tuned as well, to ensure that it ends “gracefully”.
* Cosmological constant (L): given the natural range for L (from particle physics), the life-permitting range has a width of at most 10^-53 of that range.
* Q (“lumpiness”): must be between 0.0001 and 0.000001. Too large: only black holes. Too small: no galaxies/stars/planets. Natural range is probably from 0 to at least 0.1.
* Strong force:
– increase by 50%: no protons left over from the big bang to make long-lived stars
– decrease by 50%: all elements used by life are unstable to nuclear fission
– decrease by 9%: stars are unable to synthesise anything larger than deuterium
– change of 0.4% either way: stars create either carbon or oxygen, but not both
* Weak force (can’t remember the numbers for these)
– too weak: too little hydrogen left over from big bang to make long lived stars.
– too weak: supernovae unable to blow the large elements they have made out into the universe to make planets.
– too strong: elements are unstable to beta decay
* gravity – suppose the possible range of gravity is from 0 to at least the strength of the strong force. Then
– stronger by 1 part in 10^34 – Stars burn out too fast to permit biological evolution.
– stronger by 1 part in 10^36 – stars are unstable to either support by degeneracy pressure (if too small) or radiative pressure blowing the outer parts of the star off (if too large).
– stronger by 1 part in 10^30 – a planet which allowed large-brained life would have to have a radius of less than 50 metres, which isn’t large enough to support an ecosystem. Any larger, and the organism gets crushed by gravity.
– too weak: galaxies/stars/planets (which all form via gravitational collapse) could not form, and planets could not orbit stars.
* neutron mass:
– 0.1% heavier: stars cannot create large elements. Stellar nucleosynthesis fails.
– 0.2% lighter: Big bang turns all protons into neutrons. No stars.
* electron/proton mass ratio (1/1836 in our universe):
– if too large (of order one): solids and molecules are not stable.
– if too large: no energy gap between chemical and nuclear reactions. Chemical reactions would not be reliable.
* number of (large) space dimensions
– if not three: planetary orbits are unstable
– if not three: atoms are unstable
– if not three: wave propagation is dispersive, so waves arrive distorted. This may not make life impossible, but it certainly makes it more difficult.
These are just the parameters. If she doesn’t make the laws that govern her atoms quantum, then:
* Atoms will be unstable – electrons will lose energy to EM radiation, and spiral into the nucleus.
* Atoms won’t have stable chemical properties – electrons could orbit wherever they like (like planets in a solar system).
She also needs to make sure that her strongest forces have regimes in which they become weaker than the other forces. E.g.:
– strong force only has a limited range. If it had an infinite range (like EM) then there would be no chemical reactions and “interesting” molecules, only nuclear reactions and spherical agglomerations of protons and neutrons.
– EM charge comes in positive and negative forms, so they can cancel out. Thus EM is negligible a long way from neutral lumps of matter, allowing gravity to dominate and make them collapse into galaxies/stars/planets.
You’ll notice that I quoted the parameter ranges as that: ranges, not probabilities. With no further information, our student (who is ignorant of the requirements of life) would have no reason to suspect that one part of parameter space is preferable to any other. Thus, she would be faced with a uniform probability distribution over the ranges of the variables.
But we can make the case much more robust than just assuming a uniform prior. What would it take for the statement (F) “given the distributions on the parameters, the probability of a life-permitting universe is *not* small” to be true? Here’s Anthony Aguirre in “Universe or Multiverse” putting things quite well:
“If the probability of the laws/constants P is to have interesting structure over the relatively small life-permitting range (parameterised by A), there must be a parameter of order A in the expression for P. But it is precisely the absence of such a parameter that motivated the anthropic approach.”
The life-permitting range is so small compared to the possible range that only a distribution which had a narrow peak in the life-permitting range would lead to F being true. This would require two parameters in the expression for the distribution: one to place a peak in the life-permitting range, and another to make the width of the peak of order of the width of the life-permitting range. But these parameters themselves would have to be fine-tuned – just more chances to get the universe wrong!
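A quick numerical sketch can make this concrete (the numbers below are purely illustrative, not real physical parameters): a narrow life-permitting window only receives appreciable probability mass when both the location and the width of the distribution’s peak are matched to it.

```python
import math

def normal_mass(lo, hi, mu, sigma):
    """P(lo < X < hi) for X ~ Normal(mu, sigma), built from the error function."""
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Illustrative numbers only: a "life-permitting" window of width 1
# sitting inside a possible range of width ~10^6.
tuned = normal_mass(0.0, 1.0, mu=0.5, sigma=0.5)        # peak placed AND sized to fit the window
untuned = normal_mass(0.0, 1.0, mu=5.0e5, sigma=2.0e5)  # peak placed without regard to the window

print(tuned)    # roughly 0.68
print(untuned)  # down at the ~10^-7 level
```

The point of the sketch: to get the first answer rather than the second, two parameters of the distribution (its centre and its width) had to be dialled to the window – just more chances to get the universe wrong.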
Some examples:
Yes, we’re all familiar with the litany of fine-tuning claims.
What I was asking for was some evidence that these parameters can actually vary in the ranges you claim. Haven’t got any? Didn’t think so. Talk about “natural range” is just speculation.
Thus, she would be faced with a uniform probability distribution over the ranges of the variables.
Uniform probability assumptions: the refuge of scoundrels.
The life-permitting range is so small compared to the possible range…
Pure assertion. You, and nobody else, has any real idea what kinds of universes could support life. Heck, even Conway’s game of Life could be possible, but I don’t see that listed as a possibility in your space.
it’s still a fact that receiving a radio signal with the concatenation of the binary representation of prime numbers would make SETI hunters very excited in a way that receiving a periodic loop of 0s and 1s would not.
Well, SETI hunters don’t actually look for prime numbers or anything similar; they look for narrow-band signals. So in fact receiving a narrow band signal that is periodic would be very exciting.
As for whether the prime numbers would make them more excited than a periodic one, the real questions are (1) would this claimed excitement be justified and (2) is there a fundamental mathematical reason to think so?
My answer to (2) is no. The excitement, such as it is, would be based on reasoning like “Humans find prime numbers interesting. Therefore extraterrestrials would too. And we find it more likely that such a signal would be generated by creatures like us than it would be generated by some other natural process.” But the last statement is just vague intuition, and not supported by any mathematical reason.
Doesn’t the apparent fine-tuning of the fundamental constants contrast with naturalism’s default assumption that the physical order that science discovers has not been purposefully designed?
It doesn’t surprise me particularly because (1) I think fine-tuning has been exaggerated (2) we really have no idea at all currently what the distribution of universes is (3) we have no idea at all how many universes there are.
Jeffrey Shallit wrote: “ The excitement, such as it is, would be based on reasoning like “Humans find prime numbers interesting. Therefore extraterrestrials would too.”
I’d say any intelligent being would find prime numbers interesting. This is not the main issue under discussion, but here’s an interesting question: If you wanted to advertise your existence as an intelligent race by beaming a signal, and if for some reason you had only 30 binary bits at your disposal, which 30 bits would you choose? Here is a possible answer: Beam the first 30 bits of the binary expansion of pi. On the face of it, this does not look like a good answer because this sequence will look random to anyone who is not suspecting that it may be an intelligent signal in the first place. On the other hand, somebody looking for an intelligent signal will think about what signal another intelligent race might beam, and the binary expansion of some famous constant will certainly make it to a list for signals to look for.
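Incidentally, those 30 bits are easy to compute. A quick sketch in Python (exact here, since a double-precision float carries pi to far more than 30 significant bits):

```python
import math

def pi_bits(n):
    """First n bits of the binary expansion of pi, integer part included.
    A double carries ~52 significant bits, so this is exact for n <= 50."""
    scaled = int(math.pi * 2 ** (n - 2))  # pi has 2 integer bits ("11")
    return format(scaled, "b")

print(pi_bits(30))  # → 110010010000111111011010101000
```

As the comment says, the result looks random unless you are already suspecting a signal – which is exactly the point at issue.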
Jeffrey Shallit wrote: “ is there a fundamental mathematical reason to think so?”
There is the saying that to a hammer everything looks like a nail. So biologists tend to see everything through the prism of Darwinian processes, physicists through the prism of the scientific method, and information theorists tend to see everything as information produced by an algorithm. Well, I think the latter may have a point. All mechanistic knowledge about the universe can be described as the result of an algorithmic process – indeed, of a probabilistic algorithm. I sometimes wonder whether the structure of the physical laws is anything but the description of some high-level properties of an underlying algorithmic process – a process which may be much easier to describe. To explain what I mean, let me suggest an analogy between the structure of the physical phenomena that science studies and the morphology of the Mandelbrot set. People confronted with the Mandelbrot set may spend a lot of time discovering complicated high-level mathematical properties in it without realizing the simple algorithmic means by which the set comes about.
Jeffrey Shallit wrote: “It doesn’t surprise me particularly because (1) I think fine-tuning has been exaggerated (2) we really have no idea at all currently what the distribution of universes is (3) we have no idea at all how many universes there are.”
As for (1), I just note the fact that most (if not virtually all) physicists and astronomers who have occupied themselves with this question think that the fundamental constants are fine-tuned for the evolution of complex life forms, and indeed fine-tuned to a very high degree (of the order of 10^-100 at least). When philosophers found out about this, they argued that the apparent fine-tuning of the fundamental constants is not even the main problem. The main problem is that out of all possible physical universes a vanishingly small proportion is such that complex life forms will evolve in them, and if our universe is the only one that exists (as everybody previously thought) then it’s highly surprising that our universe should belong to this very small set. To suggest that, just as a matter of brute fact, reality happens to favor the evolution of intelligence, is terribly ad-hoc.
As for (2) and (3), I note the fact that no naturalist was suggesting the existence of parallel universes with different constants, or discussing their properties such as their distribution or their number, before finding out about the apparent fine-tuning of the fundamental constants in the universe. To do this they had to abandon the venerable principle that one should not posit the existence of invisible and unfalsifiable entities for which no scientific rationale exists (not to mention violate the spirit of Occam’s razor, which warns against multiplying entities). So it’s clear that the discovery of the apparent fine-tuning of the universe did highly surprise many naturalists.
Luke,
As you have obviously invested quite some thought in this issue, could you please apply the so-called principle of charity and explain the best arguments that the other side may put forward? In other words, what is, in your judgment, the best case for the thesis that the fundamental constants may seem but are not fine-tuned for the evolution of complex life?
“To suggest that, just as a matter of brute fact, reality happens to favor the evolution of intelligence, is terribly ad-hoc.”
To suggest that it’s ad hoc to suggest that reality happens to be reality is what’s terribly ad hoc, I’d say. Tautologies often are.
“What I was asking for was some evidence that these parameters can actually vary in the ranges you claim.”
Theoretical predictions about these other universes are exactly the same process as theoretical predictions about this universe. Theoretical physics can investigate a wide range of possible universes, and then it is up to observations to discover which one describes our universe. How do we know that these parameters can actually vary? Because we can do theoretical physics.
All claims (that I can think of – corrections welcome) that something is possible (in the absence of an example of that something having been actual) boil down to: I know what it would be like in theory and I am not aware of any reason why it is not possible. Is a 217 sided die possible? Is it possible that there are galaxies beyond the range of our telescopes? Is it possible for Australia to win the soccer world cup? Are wormholes possible? Is it possible that there is life elsewhere in the universe? In the absence of any reason to believe that these scenarios are not possible, we are justified in concluding that they are possible. Remember that “possibly X” is an extremely weak claim. Possibilities are cheap, as they say.
Let’s consider more closely the opposite claim: universes in which the strength of gravity, say, differs from its value in our universe are not possible. This could mean one of a few things:
Logically possible: this is surely false. Our current theories are mathematically consistent independently of the values of the parameters that are put in. The logical/mathematical consistency of general relativity doesn’t depend on G having precisely the value that it has in our universe. So these other universes are provably mathematically self-consistent, and thus logically possible.
Physically possible: maybe there are deeper physical laws that show why these other universes are not physically possible. But there is certainly nothing in current physics that constrains these parameters. We have to go and measure them.
It follows that if there is some as yet unknown fact that constrains these parameters to be what they are, then this fact is itself contingent.
“Talk about “natural range” is just speculation.”
False. Penrose’s discussion of the entropy of the early universe is an application of statistical mechanics to the possible microstates of the early universe. Unless you believe that statistical mechanics or cosmology is just speculation. Further, particle physics gives us a natural scale for the cosmological constant, as well as a natural mass scale (the Planck scale) and a mechanism for assigning masses (Higgs). String theory, if successful, looks as if it will give us a landscape of possibilities for the laws of nature, upon which a multiverse could be constructed.
“Uniform probability assumptions: the refuge of scoundrels”. An interesting slogan. What about: “the principle of indifference: an indispensable part of statistical physics”? Replace principle of indifference with maximum entropy if so desired.
“You, and nobody else, has any real idea what kinds of universes could support life”.
Bollocks. Conway’s game of life is actually a perfect example. Conway “choose the rules carefully after trying many other possibilities, some of which caused the cells to die too fast and others which caused too many cells to be born.” (www.math.com/students/wonders/life/life.html) Conway deliberately searched for laws which created a balance between stability and complexity, along the way finding plenty of laws that failed this requirement.
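Conway’s rule search is easy to reproduce in miniature. Below is a minimal sketch (in Python, with the rule sets as parameters) of a “Life-like” cellular automaton: under Conway’s B3/S23 rules a simple “blinker” oscillates forever, while a nearby rule choice – same births, no survivals – kills the very same pattern within two generations.

```python
from collections import Counter

def step(cells, birth=frozenset({3}), survive=frozenset({2, 3})):
    """One generation of a Life-like cellular automaton on an unbounded grid.
    cells: set of (x, y) live coordinates. birth/survive are the neighbour
    counts that create or preserve a live cell; B3/S23 is Conway's Life."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if (n in survive if c in cells else n in birth)}

blinker = {(0, 0), (1, 0), (2, 0)}      # three cells in a row
assert step(step(blinker)) == blinker   # Conway's rules: a period-2 oscillator
dead = step(step(blinker, survive=frozenset()), survive=frozenset())
assert dead == set()                    # tweak the rules: everything dies out
```

Varying `birth` and `survive` over the other possibilities is exactly the search Conway performed by hand – and most choices kill the cells too fast or blow them up.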
We can reasonably conclude which universes can and cannot support intelligent life because we can do theoretical physics. I propose that a universe which lasts one second, in which the temperature never drops below 10^10 K, in which neutral atoms never form, which is so clumpy that any matter that could collapse would collapse straight into black holes, and in which no elements heavier than helium are stable, will almost certainly not form structures complex enough to have the information-processing abilities necessary for any entity deserving the name “intelligent life”. Is this an unreasonable conclusion?
Matt,
I do agree that “Reality happens to be reality” is a tautological truth. What I was saying is that the claim “Reality happens to be such that 1) it is ordered in a mathematically elegant way, 2) it is moreover fine-tuned in a way such that complex life will evolve with sufficient intelligence for discovering that very order, and 3) properties (1) and (2) obtain not as the result of purposeful design but just happen to be brute facts of reality” – that claim I’d say is terribly ad-hoc.
I think most naturalists who have considered this issue agree, and conclude that it is not tenable to assume that reality consists of the visible universe. That’s why they propose the multiverse hypothesis, no matter the cost to traditional naturalistic epistemology. Only, arguably, this cost is so great that the cure is worse than the disease.
Incidentally, apart from the two properties mentioned above, there are other highly curious properties that reality, according to naturalism, just happens to have. For example, that reality is ordered in such a way that the complex life forms that evolve will happen to instantiate information processing structures of matter which, somehow, become conscious. – I can’t help but think that the degree of “ad-hocness” that naturalism entails is really something to behold.
How do we know that these parameters can actually vary? Because we can do theoretical physics.
In theoretical computer science, we sometimes discuss models of computation that no one can currently realize, such as Zeno machines, but we never assert that these models are achievable solely because we can think about them. Really, the arrogance of theoretical physics is astonishing!
In any event, I think it was clear from the context that I was asking for empirical evidence that, say, it is possible to alter the fine structure constant. But you haven’t provided any. Heck, I can think of a universe populated by invisible 5-dimensional unicorns that speak Klingon, but it doesn’t mean it exists.
What about: “the principle of indifference: an indispensable part of statistical physics”?
The trouble with the principle of indifference, as I’m sure you know, is that it leads to silly conclusions. Consider
1. Roger Penrose is a Methodist.
2. Roger Penrose is not a Methodist.
Two possibilities, so by the principle of indifference we should conclude, a priori, that the probability that Roger Penrose is a Methodist is 1/2. But the same reasoning works for any denomination – a contradiction.
I have no problem people using the principle of indifference, provided they are at least somewhat modest about its applicability – something which is not true in the present circumstance.
As for Conway, surely you realize you have completely missed my point? Where, precisely, in your calculation of life-sustaining universes do I find Conway’s model? If it is not enumerated, what other ones have you missed?
complex life forms evolved will happen to instantiate information processing structures of matter which, somehow, become conscious
Mark me down as someone who doesn’t see consciousness as particularly mysterious.
Organisms develop models of their environment through natural selection. The green color of chlorophyll, for example, models the peak emission spectrum of the sun. The fauna of the Miocene bears remarkable resemblance to today’s savannah-dwellers, testifying to similar evolutionary pressures.
More complicated models, allowing better predictions of the environment, may dramatically improve survival, if they are not too costly. Consciousness is just the name we give to the fact that an organism’s model of the environment has become so elaborate that it includes a model of the organism itself.
Jeffrey,
You write: “ Consciousness is just the name we give to the fact that an organism’s model of the environment has become so elaborate that it includes a model of the organism itself.”
This strikes me as completely false. After all, newborn babies do not thus model their environment and even so we say that they are conscious beings. Conversely, simple robots (say, the 1997 Sojourner rover) do model their environment to include a model of themselves, but nobody says that they are therefore conscious beings.
We all know what consciousness is, and there is no question that it is a very mysterious thing. For example we don’t need the consciousness hypothesis in the natural sciences, yet we all know that consciousness exists.
After all, newborn babies do not thus model their environment and even so we say that they are conscious beings.
Actually, I think you’re confusing consciousness with awareness. And surely you must be aware that there has actually been extensive debate about whether babies are conscious – in both your sense and mine.
Conversely, simple robots (say, the 1997 Sojourner rover) do model their environment to include a model of themselves, but nobody says that they are therefore conscious beings.
False dichotomy. Nobody is saying that something is either conscious or not. To the extent that the model becomes more elaborate and capable of prediction, something becomes more conscious. I have no problem at all calling a very sophisticated robot conscious.
and there is no question that it is a very mysterious thing.
Repetition is not a particularly enlightening form of debate.
And surely you must be aware that there has actually been extensive debate about whether babies are conscious – in both your sense and mine.
I know that Daniel Dennett has claimed that animals as well as pre-linguistic children are not conscious beings. I am personally as certain that babies are conscious beings as I am certain that other people besides me are conscious beings, so for me debating whether babies are conscious or not is like debating whether solipsism is true or not. And I wonder if any of those debating would actually agree to have their own baby operated upon without anesthesia.
Nobody is saying that something is either conscious or not.
I think that virtually everybody is saying that. Either a system is capable of experiencing or a system is not capable of experiencing; in the former case we say that that system possesses consciousness, in the latter case that it doesn’t.
Repetition is not a particularly enlightening form of debate.
OK, so let me justify my claim: Some philosophers believe that babies are not conscious, and other philosophers believe that thermostats are conscious. A well known philosopher publishes a book in 1992 with the title “Consciousness Explained”, and another well known philosopher publishes a book eight years later with the title “The Mysterious Flame”, claiming that the problem of consciousness is beyond the capacity of human intelligence to solve. There is no scientific instrument that detects the presence of consciousness, nor is the consciousness hypothesis necessary to explain any objective phenomenon, but some scientists claim that they are nonetheless studying consciousness. The founders of quantum mechanics thought that the only way to interpret objective facts was by positing that consciousness defines physical reality, in the sense that the moon is not actually there when nobody is looking. In response some other physicists came up with the idea that, rather, the universe is all the time splitting into copies, so that each one of us is aware of a huge number of different worlds (in some of which, by the way, we shall all be raised three days after dying). – Considering the above facts, I think I am justified in claiming that consciousness is a very mysterious thing.
“I can think of a universe populated by invisible 5-dimensional unicorns that speak Klingon, but it doesn’t mean it exists.”
Hang on a minute … who said anything about actually existing? I’m not claiming that these other universes actually exist because I say so … that, I admit, would be arrogant. I’m only claiming that they are possible.
“We never assert that these models [Zeno machines] are achievable solely because we can think about them.” There are a few issues here:
* If Zeno machines are logically contradictory (Hilbert believed that an actual infinite could not exist in reality, which would make Zeno machines logically impossible), then we cannot really “think about them”, just as we cannot really think about a square circle or a married bachelor.
* If Zeno machines are logically possible, they could still turn out to be physically impossible. In fact, it seems that quantum mechanics and/or relativity would prohibit the actual construction of a Zeno machine.
* If Zeno machines are physically possible, they could still be currently technologically impossible, like a solar-system sized telescope.
You talk about whether Zeno machines are “achievable”. Which of these senses do you mean?
In any case, let’s apply these categories to fine-tuning. Are these other universes logically possible? Well, the mathematical consistency of, say, general relativity is independent of the actual value of the strength of gravity, which is precisely why we have to actually measure G. Thus, we seem to have very good reasons to believe that universes with a different value of G are logically possible.
Physically possible? If there are deeper laws that prohibit these other universes, then the fine-tuning of the universe is a relevant clue to these deeper laws. That is sufficient to make the fine-tuning of the universe an interesting fact that physicists should (and increasingly are) taking seriously.
Technologically achievable: not really relevant.
“The principle of indifference”. I’d say that the principle needs to be applied carefully. Let’s consider your example, and especially the background information …
Let:
A = all persons can be described as either “is” or “is not” a Methodist.
B = there are a large number (N) of “denominations” (include “none of the above” as an option), and all persons can be placed into one of these categories.
M = Roger Penrose is a Methodist.
If we are only given A, then I would say that P(M|A) = 0.5 is in fact correct. To see this, replace A with A’:
A’ = all entities can be described as either C or not C.
M’ = a particular entity E is C.
Then, have some madman with a gun shout at you: “is this entity E C or not C?!?! Make your choice!”. You have no reason to prefer C or not C, so you must assign P(M’|A’) = 0.5.
Where does the contradiction come? It comes when you add the extra information B, namely that “not Methodist” can be written as the union of a number of other denominations. This is true for denominations, but is not true, e.g., for coins. Not-heads cannot be written as the union of 2 or more outcomes, each as likely as heads. Not-heads is tails. Thus, P(heads) = 0.5. In short, the fact that P(M|A) does not equal P(M|B) is not a contradiction.
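The partition-dependence at issue can be stated in a few lines of code (a toy sketch with made-up category names): the “indifference” prior depends entirely on which partition of the possibilities you feed it, which is exactly why the background information matters.

```python
def indifference_prior(outcomes):
    """Uniform prior over a given partition of the possibilities."""
    return {o: 1.0 / len(outcomes) for o in outcomes}

# Given only A (Methodist vs not-Methodist), indifference says 1/2:
p_given_a = indifference_prior(["Methodist", "not Methodist"])["Methodist"]

# Given B (a finer partition into N denominations, names made up here), it says 1/N:
denominations = ["Methodist", "Baptist", "Anglican", "Catholic", "none of the above"]
p_given_b = indifference_prior(denominations)["Methodist"]

print(p_given_a, p_given_b)  # → 0.5 0.2
```

The two answers differ because they condition on different information – not because the principle contradicts itself.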
Your example does show the dangers of recklessly applying the principle of indifference. Is the fine-tuning of the universe a reckless application? I would contend that we have taken into account all the information we have, and thus the conclusion that “a universe chosen at random from the space of possible universes is extremely unlikely to support intelligent life” is a reasonable conclusion, even if not indubitable.
Re: Conway – I wasn’t very clear. My point was: sure, go ahead … include these models. To be fair, you should include in the list of possible universes at least a representative class of cellular automata – don’t just cherry-pick the interesting ones like Conway’s life. And, if Conway’s experience in choosing the rules of life is a fair indication, adding cellular automata will add many, many more ways of making a universe that cannot support intelligent life, and only a very small number of ways that can.
P.S. What programming language do you use if you want a) speed? b) something quick and easy? c) plotting/graphics? I’ve always wondered what the pros use day in and day out. I use a) Fortran b) Matlab c) Matlab, though I’ve heard good things about Python for b).
Also another objection:
“I know what it would be like in theory and I am not aware of any reason why it is not possible.”
“I can conceive of a possibility therefore it is a valid possibility”
This does not follow.
Using your logic I could assert:
“I can conceive of a universe where God exists and the universe is completely habitable without the use of natural laws”
Is there a logical contradiction here?
If not, then, of course, your argument against PZ Myers would be irrelevant…
I’m not sure if I follow your argument.
“I can conceive of a possibility therefore it is a valid possibility.” This does not follow.
What doesn’t follow? The second statement in quotation marks (“I can conceive of a possibility therefore it is a valid possibility”) isn’t a quotation from me.
Regarding the statement: “I can conceive of a universe where God exists and the universe is completely habitable without the use of natural laws” … a few comments:
1. If God acted in a regular way to make the universe habitable, then this action would simply appear to be a natural law. In a theistic universe, natural laws simply are the way that God has chosen the universe to operate regularly. It makes no sense to say that, in some universe, God always intervenes to change the laws of nature to make the universe habitable – the intervention itself would, by definition, be a natural law. So, if the universe involves God acting in regular ways without the use of natural laws, then this is a contradiction.
2. If these “habitable-ensuring” interventions were not regular, then we have a scenario that I addressed in the Myers critique:
“Perhaps Myers expects “someone” to suspend the laws of nature to preserve his utopia. Science would then be impossible, so it’s a bit odd that Myers would prefer this kind of universe, where rational inquiry isn’t rewarded with knowledge.”
I’m not denying that Myers’ lake-front universe is a possible world. Myers claims that, if an intelligent agent made our universe, then he would choose to make a human zoo enclosure. If Myers wants such a universe to behave according to natural laws, then those natural laws predict that such a universe will meet a fiery end. And if God unpredictably intervenes to save the universe, then we have a universe in which you can do all the science you like, but you won’t learn anything. In short, there seem to be reasons why a creator would choose our universe over Myers’.
Luke –
““I can conceive of a possibility therefore it is a valid possibility.” This does not follow.
What doesn’t follow? The second statement in quotation marks (“I can conceive of a possibility therefore it is a valid possibility”) isn’t a quotation from me.”
I forgot to put one line in my last post. The quotation I used was a summary of what I take to be the logic you employ for assuming it is a valid possibility that the universal constants could be any other value than they are. The quotation I gave above that was a line you wrote which, among others, contributed to this understanding of your argument.
I guess I mean to say that I don’t see how it follows. How does it follow? At the very least, “conceivability does not equal possibility”. So it’s not obvious. However, I will present to you a problem with this way of thinking: if you were to assume God exists, then you must assume God is necessary (else you run into other problems). Because of this, you can assume that it is possible for something within the scope of reality to be necessary. Is it not conceivable that the universe exists with some or all of the conditions that happen to sustain life due to its very nature (in other words, that it is necessary for it to be this way)? If conceivability warrants valid possibility, then this will be a possibility. So now it is both a possibility that the universe could exist in a way that does not allow intelligent life, and a possibility that the universe must exist in a way that happens to allow for intelligent life. Why would one possibility require explanation while the other doesn’t? If you run by the rule of conceivability justifying valid possibility, then you will run into at least this dilemma. Although there would be many more…
Some problems:
Luke –
“If God acted in a regular way to make the universe habitable, then this action would simply appear to be a natural law. In a theistic universe, natural laws simply are the way that God has chosen the universe to operate regularly.”
Luke –
“I’m not denying that Myers lake-front universe is a possible world. Myers claims that, if an intelligent agent made our universe, then he would choose to make a human zoo enclosure. If Myers wants such a universe to behave according to natural laws, then those natural laws predict that such a universe will meet a fiery end.”
It seems as though you are using “those natural laws” that we know to exist. But can you not conceive of the possibility that other laws could exist? My assumption is that, if you think merely conceiving of other universal constants makes them valid possibilities, then you can do the same with natural laws. I can conceive of other natural laws. Is God limited in the natural laws he makes, other than by logical absolutes? As you said, natural laws are relegated to how God regularly acts in the universe. What is your justification for limiting the way God can act in a regular way?
Also:
“If these “habitable-ensuring” interventions were not regular, then we have a scenario that I addressed in the Myers critique:
“Perhaps Myers expects “someone” to suspend the laws of nature to preserve his utopia. Science would then be impossible, so it’s a bit odd that Myers would prefer this kind of universe, where rational inquiry isn’t rewarded with knowledge.””
What does it matter what Myers would prefer? Is it impossible that God would prefer this? Is it impossible God would want to make the entire universe habitable for his most valued creation?
Here is the formal argument:
1) It is conceivable that God could prefer a universe completely habitable to intelligent life (PZ Myers’ universe)
2) IF God prefers a universe THEN it is conceivable he will create that universe.
3) If something is conceivable THEN it is a valid possibility
Now let’s say you take #3 to be true…
THEREFORE PZ Myers’ universe is a valid possibility
Interestingly enough, given a paradise universe like that, would PZ Myers really care that much about science?
PS: I wrote another comment and tried to post it twice. Yet it has not posted. Do you know why this is?
I have managed to get two comments posted since trying this one, so I will try it one more time
Luke –
“So which of these cases does the fine-tuning of the universe resemble? Potential universes can be marked “intelligent life can/cannot live here” independently of the properties of the actual universe. This universe is not special because it is ours. It is special because it can support intelligent life. When we consider the fine-tuning of the universe, we are not considering the probability of this universe. We are considering the probability of a universe that supports intelligent life. Choose a different sperm, you get a different person. Choose a different universe, and you almost certainly do not get a different form of intelligent life. You get no intelligent life at all. The fine-tuning of the universe involves a low probability event and an independently specified target, and thus cannot be dismissed as just another low probability event. Stenger’s counterexample misses the target.”
With your lottery analogy, there is something utterly unique about this ball when no other balls are unique. The winning ball has a value to the person who drew it. In your archery analogy, the same goes for the bullseye. However, this is not the case with the universe. Every phenomenon in the universe is unique in its own way. The value we give to intelligent life does not necessarily exist outside ourselves. It is not, as you say, “independently specified.” It’s like the lottery ball giving value to itself, or the target giving value to itself. Without that external value (e.g. whoever decided that particular number meant you won the lottery, or the archer who wants to hit the bullseye), intelligent life is not unique in the way you describe. What makes intelligent life so special outside our own human values? Just as Victor’s opponent may have assigned a value to himself, we have assigned a value to ourselves as intelligent life. This is not independent of ourselves, just as the person is not independent of himself. Victor’s argument seems to stand (I gave you a similar one on commonsenseatheism).
Luke, what do you consider justification for something to be a valid possibility?
To elaborate a bit on the last comment
Luke –
“Choose a different sperm, you get a different person. Choose a different universe, and you almost certainly do not get a different form of intelligent life. You get no intelligent life at all.”
You may not get intelligent life, but you'll get some phenomenon. Choose a different universe, you get a different phenomenon. Your choosing the parameters for intelligent life is like Victor Stenger choosing the parameters of a certain person; one is just on a different scale. You could say the parameters are a person with a specific DNA sequence, blood type, first and last name, religion, favorite hobby, etc. Choose a different sperm and a different egg and you almost certainly do not get the same person. Getting the same person from a different sperm and a different egg is no more probable than getting the same phenomenon, known as intelligent life, in a different universe. Just how improbable is irrelevant, as Stenger could always extend the calculation back through the generations to the start.
“I can think of a universe populated by invisible 5-dimensional unicorns that speak Klingon, but it doesn’t mean it exists.”
This would entail a successful defence of actualism, which is very hard to reconcile with quantified modal logic due to the provability of the Barcan formula in it (and Kripke semantics are of dubious help). How quickly our nice, clean mathematics sinks into the philosophical mire!
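For reference, the Barcan formula this comment appeals to is, in standard notation:

```latex
% Barcan formula (BF), in its diamond and box forms:
\Diamond \exists x\, \varphi(x) \rightarrow \exists x\, \Diamond \varphi(x)
\qquad \text{equivalently} \qquad
\forall x\, \Box \varphi(x) \rightarrow \Box \forall x\, \varphi(x)
```

Roughly: if there could have been something satisfying φ, then there is something that could have satisfied φ. Its provability in the simplest quantified modal systems corresponds to constant-domain (possibilist) semantics, which is the tension with actualism the comment alludes to.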
lukebarnes
Interesting post, but I agree with Jeffrey Shallit in the second comment down. Craig attempts to scare us all with fantastical improbabilities, when in fact asserting a highly specific event a priori and crunching the numbers back to the beginning of time will invariably produce a fantastically low probability.
Atheist scientists like Richard Dawkins speak of probabilities relating to certain types of events, such as the probability that a genetic mutation will be beneficial, rather than the probability that a specific phenotype could have evolved. As Shallit alludes to, the existence of William Lane Craig is highly improbable, but of course the existence of a male Homo sapiens is highly probable.
Perhaps our specific kind of intelligent life is highly improbable, but if we reshuffle the universe’s deck of physical constants, we could well end up with some other kind of intelligent life. As Stenger went on to point out in his rebuttal (which you really ought to have quoted in your main post), we simply do not have the knowledge to rule that out, but that is no reason to give up and allow The Thing That Made The Things For Which There Is No Known Maker to occupy the gap forever.
You have also ignored the second strand of Stenger’s argument in the 2003 Craig debate, namely, that we must compare the probability of the natural explanation against that of the supernatural explanation. What is the probability of the alternative to a naturalistic explanation, i.e. that an all-good, all-powerful, yet undetectable supernatural being is behind the universe? What data do we have to make the calculation? None, I would say, since there is no evidence of a designer, natural or supernatural.
The Fine Tuning and Intelligent Design arguments are just modern variations of Aquinas’ argument from design – “It’s too complicated and improbable for my tiny mind to understand how it could have happened naturally”, except this time with a load of zeros after the decimal point – and both suffer from the same fatal flaw: the assumption that natural explanations can be ruled out by some arbitrary notion of low probability in favour of an unprovable supernatural alternative.
A good try, lukebarnes, but Stenger’s objection to Craig’s reasoning is still valid by my reckoning.
MSP
“Perhaps our specific kind of intelligent life is highly improbable, but if we reshuffle the universe’s deck of physical constants, we could well end up with some other kind of intelligent life … we simply do not have the knowledge to rule that out.”
This is not Stenger’s point in the passage I quoted. Stenger’s point is that low probability events happen all the time. My simple response is that this alone is not sufficient grounds for concluding that nothing can be inferred from the low probability associated with the fine-tuning of the universe.
Yes, Stenger went on to say other things, but I chose not to consider those in my reply for the sake of brevity. I have responded to the claim that there could be other forms of life in my Cambridge talk (http://www.phys.ethz.ch/~barnesl/video/). I think that we do have very good reasons (though not conclusive proofs) to think that no form of intelligent life would evolve in the vast majority of universes.
“… we must compare the probabilities of the natural explanation against the supernatural explanation”. Agreed. Once again, such a case was beyond the scope of this particular blog post. Because it’s a blog post.
Robin Collins makes such an argument in the “Blackwell companion to natural theology”. It is simply a fallacy to say that the fine-tuning of the universe cannot be evidence for a designer because we have no other evidence of a designer. If that kind of reasoning were true, then we couldn’t discover anything new.
I’m not even defending a design argument. I’m more interested in using the fine-tuning as evidence for the multiverse. If Stenger’s point were valid, then the fine-tuning could not be used as evidence for the existence of the multiverse.
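The multiverse point can be made concrete with a toy Bayes-factor calculation. All numbers below are my own illustrative assumptions, not figures from the debate or the post; the point is only the structure of the inference:

```python
# Toy comparison of two hypotheses given the observation E =
# "our universe permits life". Numbers are illustrative assumptions.
p_life_single = 1e-10      # assumed chance a lone universe permits life
n_universes = 10 ** 12     # assumed number of universes in the multiverse

# Under a multiverse, at least one life-permitting universe is
# near-certain, and observers necessarily find themselves in one
# (the observer-selection effect).
p_life_multiverse = 1 - (1 - p_life_single) ** n_universes

bayes_factor = p_life_multiverse / p_life_single
print(f"P(E | single universe) = {p_life_single:.1e}")
print(f"P(E | multiverse)      = {p_life_multiverse:.4f}")
print(f"Bayes factor (multiverse/single) = {bayes_factor:.2e}")
```

On these made-up numbers the observation favours the multiverse by a factor of about 10^10, which is exactly the kind of inference that would be blocked if "low probability events happen every day" licensed no conclusions at all.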
“I’m not even defending a design argument. I’m more interested in using the fine-tuning as evidence for the multiverse. If Stenger’s point were valid, then the fine-tuning could not be used as evidence for the existence of the multiverse.”
Out of curiosity, who said that fine-tuning is evidence for a multiverse?
Carter, Carr, Hawking, Penrose, Rees, Wilczek, Wheeler, Tegmark, Linde, Vilenkin, Smolin, Weinberg, Deutsch …
[…] Stenger, that also brings out some of the subtleties of the cosmological fine tuning design case: 1, 2. (HT: Mung. Also cf Luke Barnes’ links to his series on fine tuning critiques on both […]
Dear Luke, you are on the right path until the last sentences:
Quoting you: “Choose a different universe, and you almost certainly do not get a different form of intelligent life. You get no intelligent life at all. The fine-tuning of the universe involves a low probability event and an independently specified target, and thus cannot be dismissed as just another low probability event. Stenger’s counterexample misses the target.”
The problem is, the low probability event is being specified by you, and you are depending on the universe supporting your being alive. So the low probability event is NOT independently specified, since you specified it. Hope that clears things up. Nate.
Nate,
It is an independently specified low-probability event based upon the material requirements for life as we understand them. Organisms require *organization* — the types which would be prohibited in the vast majority of logically possible universes.
Brian
Brian:
The notion of “specification” is incoherent and unusable, as demonstrated in my Synthese paper with Elsberry. Your claim about organization is simply an assertion which you haven’t supported.