Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.

I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)

On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:

- A list of the fundamental constituents of physical reality and their properties.
- A set of mathematical equations describing how these entities change, interact and rearrange.
- A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).

In short, what is there, what does it do, and in what state did it start?

Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.

(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)

### Non-informative Probabilities

In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.

In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:

“if A then B”, A -> B, and p(B|A) = 1

are the same statement. Similarly,

“if A then not B”, A -> ~B, and p(B|A) = 0

are the same statement.

Between these extremes of logical implication, probability provides degrees of plausibility.
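For a finite sample space with equally weighted outcomes, this correspondence can be checked directly. Here is a minimal sketch in Python; the die-roll example and the `cond_prob` helper are mine, purely for illustration:

```python
from fractions import Fraction

def cond_prob(b, a, space):
    """p(B|A) over a finite sample space of equally weighted points."""
    a_points = [x for x in space if a(x)]
    return Fraction(sum(1 for x in a_points if b(x)), len(a_points))

# Sample space: die rolls 1..6. Let A = "roll is even", B = "roll is > 1".
space = range(1, 7)
A = lambda x: x % 2 == 0
B = lambda x: x > 1

# A entails B here (every even roll exceeds 1), so p(B|A) = 1.
print(cond_prob(B, A, space))   # → 1

# A entails not-C, where C = "roll is odd", so p(C|A) = 0.
C = lambda x: x % 2 == 1
print(cond_prob(C, A, space))   # → 0
```

Between those two extremes, `cond_prob` returns intermediate fractions — the degrees of plausibility that probability theory supplies.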

It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.

In Chapter 9 of his textbook “Probability Theory” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labelled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)

When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is *not* to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.

At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
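One standard way to make that difference precise — my gloss, not a claim about what the robot needs — is the Shannon entropy of the uniform assignment, which grows with the number of possibilities. A sketch, assuming the entropy-in-bits convention:

```python
import math

def poorly_informed(n):
    """Assign equal probability to each of n outcomes
    (the principle of indifference)."""
    return [1.0 / n] * n

def entropy_bits(probs):
    """Shannon entropy in bits: a measure of how spread out
    (how uninformed) a probability assignment is."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For a uniform assignment over N outcomes the entropy is log2(N),
# so the two robots occupy measurably different states of knowledge.
print(entropy_bits(poorly_informed(7)))       # log2(7)     ≈ 2.81 bits
print(entropy_bits(poorly_informed(10_000)))  # log2(10000) ≈ 13.29 bits
```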

A particularly thorny case is characterising ignorance over a continuous parameter, since then there are an infinite number of possibilities. When a probability distribution for a certain parameter is not informed by data but only “prior” information, it is called a “non-informative prior”. Researchers continue the search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.
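To give one concrete taste of the continuous case: a commonly proposed non-informative prior for a scale parameter is the log-uniform (Jeffreys) prior, p(σ) ∝ 1/σ. Its appeal is that it doesn’t privilege any choice of units. The example below is my own illustration, not something from the catalogue:

```python
import math

def log_uniform_mass(a, b):
    """Unnormalised weight that p(sigma) ∝ 1/sigma assigns to the
    interval [a, b]: the integral of 1/sigma from a to b is ln(b/a)."""
    return math.log(b / a)

# Scale invariance: rescaling an interval by any factor (i.e. changing
# units) leaves its assigned weight unchanged.
print(log_uniform_mass(1, 2))    # ln 2 ≈ 0.693
print(log_uniform_mass(10, 20))  # ln 2 ≈ 0.693
```

The price of this invariance is that the prior is improper (it can’t be normalised over (0, ∞)), which is one reason the search for good non-informative priors remains an active topic.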