Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.
I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)
On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:
- A list of the fundamental constituents of physical reality and their properties.
- A set of mathematical equations describing how these entities change, interact and rearrange.
- A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).
In short, what is there, what does it do, and in what state did it start?
Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.
(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)
In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.
In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:
“if A then B”, A -> B, and p(B|A) = 1
are the same statement. Similarly,
“if A then not B”, A -> ~B, and p(B|A) = 0
are the same statement.
Between these extremes of logical implication, probability provides degrees of plausibility.
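This correspondence between implication and extreme probabilities can be checked directly. Below is a minimal sketch: a toy possibility space of weighted “worlds” in which A entails B (no world makes A true and B false), with conditional probability computed by restricting to the worlds where the condition holds. The specific worlds and weights are invented for illustration.

```python
# Toy possibility space: worlds are (a, b) truth-value pairs with weights.
# The weights are chosen so that A entails B: no world has A true and B false.
worlds = {
    (True,  True):  0.3,   # A and B
    (False, True):  0.2,   # not A, B
    (False, False): 0.5,   # not A, not B
    # (True, False) gets weight 0: this encodes "if A then B"
}

def p(event, given=lambda w: True):
    """Conditional probability: weight of worlds satisfying both the
    event and the condition, divided by the weight of the condition."""
    num = sum(wt for w, wt in worlds.items() if event(w) and given(w))
    den = sum(wt for w, wt in worlds.items() if given(w))
    return num / den

A = lambda w: w[0]
B = lambda w: w[1]

print(p(B, given=A))   # 1.0: "A -> B" corresponds to p(B|A) = 1
```

Zeroing the weight on the (True, False) world is exactly what the implication “if A then B” says; the conditional probability then comes out to 1 automatically.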
It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.
In Chapter 9 of his textbook “Probability Theory” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labelled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)
When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is not to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.
At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
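The poorly informed robot’s assignment, and the difference between its two states of knowledge, can be made concrete. The sketch below assigns the indifference prior and uses Shannon entropy as one standard way of quantifying how much ignorance a distribution represents; the function names are mine, not Jaynes’s.

```python
import math

def poorly_informed_prior(n_outcomes):
    """Principle of indifference: with nothing to distinguish the N
    outcomes, assign each the same probability 1/N."""
    return [1.0 / n_outcomes] * n_outcomes

def entropy_bits(probs):
    """Shannon entropy in bits: one conventional measure of how
    ignorant a state of knowledge is."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A robot facing 7 possibilities is in a different state of knowledge
# to one facing 10,000 possibilities, even though both can only say
# "I don't know which outcome will occur".
for n in (7, 10_000):
    prior = poorly_informed_prior(n)
    print(f"N = {n:>6}: p(each outcome) = {prior[0]:.6f}, "
          f"entropy = {entropy_bits(prior):.2f} bits")
```

The uniform distribution maximises the entropy, which is one way of saying that any other assignment would smuggle in information the robot does not have.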
A particularly thorny case is characterising ignorance over a continuous parameter, since then there are an infinite number of possibilities. When a probability distribution for a certain parameter is not informed by data but only “prior” information, it is called a “non-informative prior”. Researchers continue the search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.
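One reason the continuous case is thorny: the same verbal statement of ignorance gives different probabilities under different parameterisations. The sketch below, with an invented scale parameter s on an invented range, compares a prior uniform in s with one uniform in log(s) (a common choice for scale parameters).

```python
import math

# "Complete ignorance" about a scale parameter s in [0.01, 100] gives
# different answers depending on parameterisation.
lo, hi = 0.01, 100.0

def p_below_one_uniform():
    """Probability that s < 1 under a prior uniform in s itself."""
    return (1.0 - lo) / (hi - lo)

def p_below_one_loguniform():
    """Probability that s < 1 under a prior uniform in log(s)."""
    return (math.log(1.0) - math.log(lo)) / (math.log(hi) - math.log(lo))

print(f"p(s < 1), uniform in s:     {p_below_one_uniform():.3f}")
print(f"p(s < 1), uniform in log s: {p_below_one_loguniform():.3f}")
```

The first prior makes s < 1 a near-certainty of not happening (about 1%), while the second makes it a coin flip. Choosing between such priors is precisely the problem the catalogue addresses.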
Challenging Non-informative Hypotheses
Non-informative hypotheses give no specific expectations about which possible outcome will be actual (or, to be consistent with our framing of probability, which possibly true proposition turns out to be actually true). They are at the mercy of the set of possibilities: the larger the set of possibilities, the smaller the likelihood of the actual outcome.
As I’ve noted before, a small likelihood represents an opportunity. We rank hypotheses by their likelihood (the probability of the data given the theory and background) times their prior (the probability of the theory given the background alone). A non-informative hypothesis will often be simple, and so will have a non-negligible prior. So our scorecard reads: likelihood bad, prior good. If there is an alternative theory which is similarly simple and yet explains the data better, then it will win over the non-informative hypothesis.
In particular, a non-informative hypothesis usually represents the “likelihood baseline”. If your proposed theory can’t manage a larger likelihood than the non-informative hypothesis, then it might as well go home.
The point: non-informative hypotheses in large possibility spaces are vulnerable. They represent the likelihood baseline, the lowest likelihood of any theory that we should consider.
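This scorecard arithmetic is simple enough to write down. Below is a minimal sketch with hypothetical numbers: a non-informative hypothesis whose likelihood is 1/N over a large possibility space, against a similarly simple informed theory that actually expects the data. All the specific values are illustrative assumptions.

```python
def posterior_odds(prior_a, like_a, prior_b, like_b):
    """Posterior odds of theory A over theory B: ratio of
    prior times likelihood."""
    return (prior_a * like_a) / (prior_b * like_b)

# Hypothetical numbers for illustration.
N = 1_000_000                 # size of the possibility space
like_noninf = 1.0 / N         # non-informative: the likelihood baseline
like_informed = 0.5           # an informed theory that expects the data
prior_noninf = 0.5            # both theories are simple, so give them
prior_informed = 0.5          # comparable priors

odds = posterior_odds(prior_informed, like_informed,
                      prior_noninf, like_noninf)
print(f"odds for the informed theory: {odds:,.0f} to 1")
# The informed theory wins by a factor of N/2: with equal priors,
# the whole contest is decided by the likelihood ratio.
```

Note what happens if the informed theory’s likelihood dropped below 1/N: the odds would flip, which is the “might as well go home” condition above.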
Naturalism is Non-informative
What is the relationship between naturalism and the ultimate laws of nature, probabilistically? When it comes to our expectations of the ultimate laws of nature, naturalism is non-informative.
The reason for this was noted previously: naturalism claims that the ultimate laws of nature are brute facts. There are no true facts which can inform our expectations for any particular set of possible ultimate laws. (Of course, we’re assuming that they are logically consistent, but this seems like a weak constraint.) In fact, on naturalism, our expectations are not merely uninformed; they are not informable. It is not just ignorance. There are no deeper reasons.
Note well: This is neither criticism nor endorsement. “Ignorance” is not meant pejoratively. As noted above, Bayesian theory testing is very interested in characterising states of ignorance using probabilities. Ignorance is just a fact of life. There are a lot of possible states of knowledge that I could be in regarding the outcome of rolling a pair of dice. I could know about all manner of loaded dice or expert die rollers. Ignorance is one of those states, and a very important one because it is so often our actual state of knowledge.
I am simply trying to clarify the claims to which naturalism is committed. Naturalism is non-informative with respect to the ultimate laws of nature. It is at the mercy of the set of possible ultimate laws of nature.
Brute Facts Are Not Immune from Probabilities
One final question: how can we talk about the probability of brute facts? They just are, with no deeper explanation. The thought is that we can calculate probabilities with them, but not of them.
This objection confuses Bayesian degrees of plausibility with objective chances. An objective chance describes a stochastic property of a physical system: if the experiment were repeated many times, the relative frequency of a particular outcome would approach its objective chance. A good example is the probability that a radioactive nucleus decays in the next 30 seconds.
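For the radioactive case, the objective chance follows from the exponential decay law. A minimal sketch, assuming a hypothetical nucleus with a 60-second half-life:

```python
import math

def decay_probability(t_seconds, half_life_seconds):
    """Objective chance that a nucleus decays within t seconds, from
    the exponential decay law: p = 1 - exp(-ln(2) * t / t_half)."""
    return 1.0 - math.exp(-math.log(2) * t_seconds / half_life_seconds)

# Hypothetical nucleus with a 60-second half-life:
p = decay_probability(30, 60)
print(f"p(decay within 30 s) = {p:.4f}")   # about 0.29, i.e. 1 - 1/sqrt(2)
```

Here the probability is grounded in a repeatable physical process: prepare many such nuclei, and about 29% will decay within 30 seconds. It is exactly this repeatable grounding that a brute fact lacks.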
There cannot be an objective chance of a brute fact being true, because there is no experiment, no system, nothing repeatable. The brute fact is not the outcome of a process. There is nothing chancy.
However, we can still ask whether the brute fact is true, and in particular whether it really is a brute fact. We are not compelled to accept the truth of a brute fact simply because there is no associated objective chance. Proposed brute facts must run the same Bayesian gauntlet as any other hypothesis.