Before I get onto Carroll’s other replies to the fine-tuning argument, I need to discuss a feature of naturalism that will be relevant to what follows.
I take naturalism to be the claim that physical stuff is the only stuff. That is, the only things that exist concretely are physical things. (I say “concretely” in order to avoid the question of whether abstract things like numbers exist. Frankly, I don’t know.)
On naturalism, the ultimate laws of nature are the ultimate brute facts of reality. I’ve discussed this previously (here and here): the study of physics at any particular time can be summarised by three statements:
- A list of the fundamental constituents of physical reality and their properties.
- A set of mathematical equations describing how these entities change, interact and rearrange.
- A statement about how the universe began (or some other boundary condition, if the universe has no beginning point).
In short, what is there, what does it do, and in what state did it start?
Naturalism is the claim that there is some set of statements of this kind which forms the ultimate brute fact foundation of all concrete reality. There is some scientific theory of the physical contents of the universe, and once we’ve discovered that, we’re done. All deeper questions – such as where that stuff came from, why it is that type of stuff, why it obeys laws, why those laws, or why there is anything at all – are not answerable in terms of the ultimate laws of nature, and so are simply unanswerable. They are not just in need of more research; there are literally no true facts which shed any light whatsoever on these questions. There is no logical contradiction in asserting that the universe could have obeyed a different set of laws, but nevertheless there is no reason why our laws are the ones attached to reality and the others remain mere possibilities.
(Note: if there is a multiverse, then the laws that govern our cosmic neighbourhood are not the ultimate laws of nature. The ultimate laws would govern the multiverse, too.)
Non-informative Probabilities
In probability theory, we’ve seen hypotheses like naturalism before. They are known as “non-informative”.
In Bayesian probability theory, probabilities quantify facts about certain states of knowledge. The quantity p(A|B) represents the plausibility of the statement A, given only the information in the state of knowledge B. Probability aims to be an extension of deductive logic, such that:
“if A then B”, A -> B, and p(B|A) = 1
are the same statement. Similarly,
“if A then not B”, A -> ~B and p(B|A) = 0
are the same statement.
Between these extremes of logical implication, probability provides degrees of plausibility.
It is sometimes the case that the proposition of interest A is very well informed by B. For example, what is the probability that it will rain in the next 10 minutes, given that I am outside and can see blue skies in all directions? On other occasions, we are ignorant of some relevant information. For example, what is the probability that it will rain in the next 10 minutes, given that I’ve just woken up and can’t open the shutters in this room? Because probability describes states of knowledge, it is not necessarily derailed by a lack of information. Ignorance is just another state of knowledge, to be quantified by probabilities.
In Chapter 9 of his textbook “Probability Theory: The Logic of Science” (highly recommended), Edwin Jaynes considers a reasoning robot that is “poorly informed” about the experiment that it has been asked to analyse. The robot has been informed only that there are N possibilities for the outcome of the experiment. The poorly informed robot, with no other information to go on, should assign an equal probability to each outcome, as any other assignment would show unjustified favouritism to an arbitrarily labelled outcome. (See Jaynes Chapter 2 for a discussion of the principle of indifference.)
When no information is given about any particular outcome, all that is left is to quantify some measure of the size of the set of possible outcomes. This is not to assume some randomising selection mechanism. This is not a frequency, nor the objective chance associated with some experiment. It is simply a mathematical translation of the statement: “I don’t know which of these N outcomes will occur”. We are simply reporting our ignorance.
At the same time, the poorly informed robot can say more than just “I don’t know”, since it does know the number of possible outcomes. A poorly informed robot faced with 7 possibilities is in a different state of knowledge to one faced with 10,000 possibilities.
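To make this concrete, here is a minimal sketch (my own toy code, not Jaynes's) of the poorly informed robot's assignment. The only thing distinguishing the two robots above is the size of the possibility space, which we can quantify with the entropy of the uniform distribution:

```python
import math

def poorly_informed_robot(n_outcomes):
    """Assign probabilities given only the number of possible outcomes.

    With nothing to distinguish the outcomes, the principle of
    indifference assigns each one probability 1/N.
    """
    return [1.0 / n_outcomes] * n_outcomes

def entropy_bits(probs):
    """Shannon entropy in bits: a measure of how spread out a state of knowledge is."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two poorly informed robots, in different states of knowledge.
print(entropy_bits(poorly_informed_robot(7)))       # ~2.8 bits of ignorance
print(entropy_bits(poorly_informed_robot(10_000)))  # ~13.3 bits of ignorance
```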
A particularly thorny case is characterising ignorance over a continuous parameter, since then there are infinitely many possibilities. When a probability distribution for a parameter is informed not by data but only by “prior” information, it is called a “non-informative prior”. Researchers continue to search for appropriate non-informative priors for various situations; the interested reader is referred to the “Catalogue of Non-informative Priors”.
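As a toy illustration of why the continuous case is thorny (this example is my own, not taken from the catalogue): two standard candidates for “knowing nothing” about a positive scale parameter disagree substantially about how plausible any given range is.

```python
import math

# A positive "scale" parameter restricted to [low, high] so the priors can be normalised.
low, high = 0.1, 1000.0

def flat_prior_prob(a, b):
    """Probability that the parameter lies in [a, b] under a prior flat in x."""
    return (b - a) / (high - low)

def log_flat_prior_prob(a, b):
    """Probability that the parameter lies in [a, b] under a prior flat in log(x)
    (a common non-informative choice for a scale parameter)."""
    return (math.log(b) - math.log(a)) / (math.log(high) - math.log(low))

# The two "ignorant" priors disagree markedly about the same interval.
print(flat_prior_prob(1.0, 10.0))      # ~0.009
print(log_flat_prior_prob(1.0, 10.0))  # ~0.25
```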
Challenging Non-informative Hypotheses
Non-informative hypotheses give no specific expectations about which possible outcome will be actual (or, to be consistent with our framing of probability, which possibly true proposition turns out to be actually true). They are at the mercy of the set of possibilities. The larger the set of possibilities, the smaller the likelihood of the actual outcome.
As I’ve noted before, a small likelihood represents an opportunity. We rank hypotheses by their likelihood (probability of the data given the theory and background) times their prior (probability of the theory given the background alone). A non-informative hypothesis will often be simple, and so will have a non-negligible prior. So our scorecard reads: likelihood bad, prior good. If there is an alternative theory which is similarly simple and yet explains the data better, then it will win over the non-informative hypothesis.
In particular, a non-informative hypothesis usually represents the “likelihood baseline”. If your proposed theory can’t manage a larger likelihood than the non-informative hypothesis, then it might as well go home.
The point: non-informative hypotheses in large possibility spaces are vulnerable. They represent the likelihood baseline, the lowest likelihood of any theory that we should consider.
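Here is a minimal sketch of that scorecard with invented numbers, just to show the mechanics: posterior plausibility is proportional to prior times likelihood, and the non-informative hypothesis's likelihood of 1/N is the bar any rival must clear.

```python
# Toy Bayesian scorecard with invented numbers.
# The data single out one outcome from a possibility space of N outcomes.

N = 1_000_000

hypotheses = {
    # (likelihood of the observed outcome, prior plausibility)
    "non-informative": (1.0 / N, 0.5),  # the likelihood baseline
    "rival":           (0.1,     0.5),  # similarly simple, but concentrates
                                        # probability near the observed outcome
}

# Bayes' theorem: posterior is proportional to likelihood * prior
# (the common normalising constant cancels when comparing hypotheses).
posterior = {name: like * prior for name, (like, prior) in hypotheses.items()}

odds = posterior["rival"] / posterior["non-informative"]
print(f"Posterior odds for the rival over the baseline: {odds:,.0f} to 1")  # 100,000 to 1
```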
Naturalism is Non-Informative
What is the relationship between naturalism and the ultimate laws of nature, probabilistically? When it comes to our expectations of the ultimate laws of nature, naturalism is non-informative.
The reason for this was noted previously: naturalism claims that the ultimate laws of nature are brute facts. There are no true facts which can inform our expectations for any particular set of possible ultimate laws. (Of course, we’re assuming that they are logically consistent, but this seems like a weak constraint.) In fact, on naturalism, our expectations are not merely uninformed; they are not informable. It is not just ignorance. There are no deeper reasons.
Note well: This is neither criticism nor endorsement. “Ignorance” is not meant pejoratively. As noted above, Bayesian theory testing is very interested in characterising states of ignorance using probabilities. Ignorance is just a fact of life. There are a lot of possible states of knowledge that I could be in regarding the outcome of rolling a pair of dice. I could know about all manner of loaded dice or expert die rollers. Ignorance is one of those states, and a very important one because it is so often our actual state of knowledge.
I am simply trying to clarify the claims to which naturalism is committed. Naturalism is non-informative with respect to the ultimate laws of nature. It is at the mercy of the set of possible ultimate laws of nature.
Brute facts are not immune from probabilities
One final question: how can we talk about the probability of brute facts? They just are, with no deeper explanation. We can calculate probabilities with them, but not of them.
This objection confuses Bayesian degrees of plausibility with objective chances. Objective chances describe a stochastic property of a physical system: if the experiment were repeated many times, the relative frequency of a particular outcome would approach its objective chance. A good example is the probability of a radioactive nucleus decaying in the next 30 seconds.
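For concreteness, here is that example worked out (the mean lifetime is an illustrative number, not tied to any real isotope): the exponential decay law gives the chance of a decay within the next 30 seconds, and repeated trials on identical nuclei would show this relative frequency.

```python
import math

# Objective chance of a radioactive nucleus decaying within time t:
#   P(decay within t) = 1 - exp(-t / tau),
# where tau is the mean lifetime. Repeat the experiment on many identical
# nuclei and the relative frequency of decays approaches this number.

tau = 60.0  # mean lifetime in seconds (an illustrative value, not a real isotope)
t = 30.0    # "the next 30 seconds"

p_decay = 1.0 - math.exp(-t / tau)
print(f"P(decay within {t:.0f} s) = {p_decay:.3f}")  # ~0.393
```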
There cannot be an objective chance of a brute fact being true, because there is no experiment, no system, nothing repeatable. The brute fact is not the outcome of a process. There is nothing chancy.
However, we can still ask whether the brute fact is true, and in particular whether it really is a brute fact. We are not compelled to accept the truth of a brute fact simply because there is no associated objective chance. Proposed brute facts must run the same Bayesian gauntlet as any other hypothesis.
I think I agree with everything you write. I would simply add that naturalism can be informative (in your sense) with respect to other data, i.e., data other than the ultimate laws of nature.
Surely, to the naturalist, everything (in principle!) follows from the ultimate laws of nature. What other data did you have in mind?
So where does a self-caused universe fit into this? It seems to be an amalgam of “brute fact” and “caused”. And would you exclude something like this from the category of naturalism?
I’m inclined to think that “self-caused” is an impossibility. The universe, on naturalism, would simply be uncaused. Thoughts?
Self-causation is really a pretty old heresy, cf. “causa sui”.
It implies that X existed before X to bring X into existence, in an ontological order. I’ve never seen any convincing response to this seeming breach of logic.
Seems like it’s what Hawking and Mlodinow attempted in The Grand Design though…
Speaking of which: Feser’s latest posts on the Principle of Sufficient Reason have some really interesting discussion of brute facts, alternatives and ultimate explanations. They may interest both of you as well! 🙂
Given the difficulty comprehending causation outside of time, I’m not sure any explanation should be deemed impossible. Can you identify any particular reason why we should exclude “self-caused” from the set of possible explanations for the existence of the universe?
“X is self caused” would seem to imply that “X exists because X exists”.
“A because A” is surely a non-explanation. “The sky is blue because the sky is blue” is false.
@travis
Since the order is ontological, I don’t see why the issue of time should be relevant.
I guess you could doubt the principle of causation altogether, but as a Thomist Realist I think there are a number of independent reasons to hold to it, and not slide into total skepticism. In short, we’re not really impressed by Hume and his empiricist disciples. And aren’t we really talking not only about “causation outside of time”, but about the causation of even created time itself?
I would also question whether it’s even possible to talk of the “universe” as an entity that could be substantially endowed with inherent creative power to cause itself; isn’t it really just a cumulative term for its contents?
Please correct me if I seem mistaken! 🙂
@luke
But is that completely comparable? We’re not only implying a (loose) term of identification, such as “X because X”, but “X causes X”. As in a blue sky bringing itself into existence. Still ontologically speaking, that is.
@Daniel & Luke
I understand that self-cause is paradoxical, but I’m not sure that it’s any less bizarre than the alternatives that we encounter when we go looking for the last turtle, namely infinite regress or necessary existence. These are all so foreign to my understanding that I can’t find reason to discard any of them or to favor one over another. I’m not advocating for self-causation except as a possibility. I mostly just wanted to see what happened when I threw the wrench. If you haven’t already checked out the link in my first comment, it’s worth a glance.
Totally missed the link. I’ll have a look.
Self-causation perhaps seems more obviously impossible (to me) than the other options, although they all seem pretty baffling. I think the way forward here is a robust account of causation, from which such questions could be answered, rather than relying on intuition.
All theories are non-informative about their brute facts.
That’s a good way to put it. Question: Do all theories have brute facts?
I think they all do, one way or another.
What do you think about Lincoln Cannon’s new God argument? He speculates that God and naturalism may be compatible. It is a transhumanist argument; what do you think about it?
http://www.new-god-argument.com/p/god-argument.html
http://hpics.li/4ae54b6
Everything works as one particle in motion.
Please, please, please see how I can explain it with my graphs at that link –> http://welovewords.com/documents/hologram-by-arnaud-antoine-andrieu
I only have a layman’s understanding of physics and philosophy. This is my initial impression of the fine-tuning argument. Personally, I don’t like the fine-tuning argument the way Dr. Craig phrases it, whether or not a multiverse exists. I believe that neither the multiverse nor the Anthropic Principle explains why life, and even intelligent life, occurs in those universes where it actually occurs, so I am not impressed by either idea. What I don’t like about how Dr. Craig phrases this argument is that the evidence given in support of it is an appeal to the various constants that we observe, i.e. the gravitational constant, the cosmological constant, the masses of the electron and proton, etc. Here are my issues with that line of argumentation:
Firstly, these constants are based on our current understanding of physics. The argument assumes that our current physics fully matches reality. But we might not have plugged every possible factor into the equations. Those values are calculated from what we observe on both sides of an equation and the various factors that we think might be contributing, with whatever difference remains designated as a constant. Hence these constants are part of the equations that we use to describe reality, not part of reality itself. Certain physicists might think that the equations of current physics accurately represent what is going on in reality, but smart physicists will know that these are just human representations of reality, not what goes on in reality itself.
Secondly, I still have not fully understood why Dr. Craig dismisses physical necessity. Many of the values, like the masses of electrons and other particles, might be due to deeper physical laws that we have not yet fully understood, like spontaneous symmetry breaking. For example, a deeper theory similar to String Theory might shed light on many of those puzzles. Another possibility is a balance exerted by each of the four fundamental forces on the others, resulting in a given value for a constant. The dark energy that drives the expansion of spacetime is balanced by the gravitational energy that resists it. Whether it is always like this, or whether it is particular to the spacetime we now inhabit, is an interesting question to ponder.
Thirdly, we don’t know for sure whether the physical conditions at the origin of the current universe will necessarily create life. They might only provide sufficient conditions in some pockets where life could possibly occur. We also don’t know whether life would emerge if the constants were different from the current ones. Maybe it would; it may not be the same kind of life that we observe now, who knows.
But on a positive note for theists, the only thing the fine-tuning argument alludes to, in my view, is a more relevant metaphysical question: why are the laws of nature such that there is a possibility that life, and even intelligent life, would occur given sufficient conditions, and why is the cosmos intelligible to us? That points towards an intelligence or rational principle (Nous for Neoplatonists; Logos for Stoics) and a life or animating principle (World Soul or Anima Mundi for Neoplatonists; Pneuma for Stoics) behind the workings of the cosmos, as originally postulated by ancient Greek philosophers. So in that sense it is a good argument against the materialistic naturalism held by many current atheists, but not against the naturalism of the Stoics or Neoplatonist philosophers. But I take this more as a metaphysical conclusion than a scientific one. Just my random ramblings 🙂
Hello Luke Barnes,
I’m wondering about one thing:
How come scientists always consider the specific conditions for statistical significance in their hypothesis tests, because of the possibility of false positives and false negatives occurring, yet philosophers never do that?
Philosophers only consider which specific hypothesis might explain specific data better than another hypothesis. But where is their consideration, and yours, Dr. Barnes, of the possibility that the data supposedly “best” explained by a specific hypothesis, according to our current “best” knowledge, was actually caused by something other than that hypothesis? That is, the possibility of either a false positive (a hypothesis falsely attributed as being the best explanation for the data in question) or a false negative (a hypothesis falsely not attributed as being the best explanation for the data in question)?
Best regards,
Zsolt Nagy
It’s very difficult to generalise about philosophers. There are plenty of philosophers who apply probabilistic thinking just fine. Discussions of fine-tuning, by philosophers and scientists alike, routinely consider all the offered solutions.
“Difficult” doesn’t mean that it is “impossible”.
On the contrary, saying that generalising about philosophers is “difficult” means, rather, that it is possible, just not easy to do.
Besides, I’m not much concerned about all philosophers; I’m concerned specifically about you, and about your disregarding the possibility of the occurrence of false positives (a hypothesis falsely attributed as being the best explanation for the specific data considered) and the possibility of the occurrence of false negatives (a hypothesis falsely not attributed as being the best explanation for the specific data considered).
From your blog post and previous blank reply it is not apparent that you consider those possibilities at all. How come?
Did you know, that once upon a time Thor the god of thunder was falsely attributed being the best explanation for the natural phenomenon of lightning and thunder?
Or do you know anything about the luminiferous aether hypothesis?
Since you are a physicist, you probably know about that hypothesis. But it might also be the case that you don’t know about it, since today it is very much not the best explanation of light as a physical wave phenomenon.
So how come you are not that concerned with the possibility of the occurrence of false positives and false negatives?
Hi Zsolt,
Out of curiosity, which specific false positives and/or false negatives do you think Luke has not considered? I tend to think his work on fine-tuning, while focusing more on establishing that certain fundamental constants are in fact fine-tuned than on what conclusions we should draw from fine-tuning, is pretty thorough in considering what possible hypotheses might account for fine-tuning. But if you think there are specific hypotheses he has neglected, I think it would be helpful to be explicit about which ones.
Alternatively, do you think the issue is not so much taking into account known hypotheses, but not taking into account hypotheses no one has thought of yet? If so, is this really reasonable? And is this second idea really so routine in physics (or science more generally) anyway?
Best, Andrew.
Hello Andrew,
This video explains quite nicely what false positives and false negatives, or type I and type II errors, are with regard to our beliefs about some truth of our universe (or truth in general), up to the 17:52 mark:
“The Fatal Flaw of New Atheism (understanding the cult of confidence)”
by The Distributist ( https://www.youtube.com/watch?v=CltwD0Ek9Kk&t=0s ).
After that time mark there is also an interesting hypothetical scenario. But the interpretation and the analysis and implications of that hypothetical scenario are quite questionable in my opinion.
It’s not that Barnes has neglected some specific hypotheses. But in his analysis of results he neglects the occurrence of false positives and false negatives.
Generally speaking, how probable or frequent are those false positives (believing in something that is false) and false negatives (not believing in something that is true), and especially, how frequent are the false positives?
Over millennia, and basically everywhere, humans have believed in things that are false: the different mythologies and religions of ancient times, very questionable medical practices, doctrines like geocentrism in the Middle Ages, not to speak of the belief in the existence of witches and the practice of burning “witches” in the Middle Ages, and so on.
Those false beliefs have gradually been replaced by natural explanations, which are for the most part correct in my opinion. Maybe the picture is not complete, but so what? For the most part those explanations are correct, and it would take quite the effort to complete or “correct” them.
Barnes doesn’t consider naturalism as incomplete, but how did he put it in his work and blogpost here? Hm, “non-informative”. Naturalism is “non-informative” about the fine-tuning of our universe. So then what?
That only implies that naturalism is incomplete. Time will tell whether it gets completed on that point, or is proven never to be completable.
As for a theistic or superstitious explanation of a phenomenon:
Good luck with that, since there has never been a naturalistic explanation replaced by a superstitious one. You might try to fill in the gap in naturalism’s incomplete knowledge (my a** is “non-informative” about fine-tuning), but then don’t be amazed and perplexed if, one day, that gap is filled with a naturalistic explanation. If time and history imply anything about the future, they imply the replacement of superstitious explanations with naturalistic ones.
That’s my opinion on this subject matter and on Barnes’s work and “analysis”.
Best regards, Zsolt