Another video of one of my talks. The goal is to take Bayesian probability theory as it is used in the physical sciences and see if it can make sense of postulating and testing a multiverse theory.

As part of a project called Establishing the Philosophy of Cosmology, I attended a conference in Tenerife, Spain in September last year. The line-up of fellow attendees was, frankly, intimidating. Nevertheless, I had a wonderful time, learned a lot and presented some of my own ideas towards the end of the conference.

The videos are now available on YouTube here; talk slides are here. Just about all the talks are worth a listen – I’ve been enjoying listening to them again. Here are a few highlights.

Joel Primack – Cosmological Structure Formation. A nice introduction to how the universe made its galaxies.

Barry Loewer – Metaphysics of Laws & Time in Cosmology. A very helpful talk on how to think about the laws of nature, and the place of probabilities therein.

George Ellis – Observability and Testability in Cosmology and Cosmology: What are the Limits of Science? Ellis made an important distinction between “big-C” Cosmology, whose purview is all of reality, and “little-c” cosmology, the branch of science concerned with what physics and physical observations can say about the universe as a whole.

Sean Carroll – What Happens Inside the Wave Function? (I’ll let Sean explain here.)

The talks by Don Page, Bob Wald, Jim Hartle, Joe Silk, David Wallace, David Albert, Chris Smeenk, Brian Pitts, Tom Banks, and Jean-Philippe Uzan were very interesting, as were the discussion panels of Dean Zimmerman, Jennan Ismael & Tim Maudlin, and Janna Levin, Priya Natarajan, Claus Beisbart & Pedro Ferreira.

Here’s mine. Enjoy.

(My sister is a TV journalist. I’m going to have to get some tips about not fidgeting, what to do with my hands, and not flubbing my words. I say “quantise” instead of “quantify” at one point. *cringe* My good wife has seen me give public lectures, and has commented that I appear to be on speed.)

Guillaume Belanger (March 12, 2015 at 5:30 pm):
Thanks for sharing this and for highlighting those talks most worth the time to watch.

Paul Nunez (March 13, 2015 at 1:52 pm):
Hi Luke,

I enjoyed your recent Bayesian lecture. I do wonder about the irony of your example involving the patient with a positive test result: given the probability of false positives, what is the probability that this particular patient actually has the disease in question?

The irony of this example is that in order to use the Bayesian method effectively, we must have some means of finding the probability of false positives. How do we do this in practice? Well, we just count them; that is, we become “frequentists.” So on the right side of Bayes’ theorem we have a frequentist estimate yielding a Bayesian result for probability on the left side.

My suspicion is that this operation, although very useful, tends to give many users false confidence in Bayesian probabilities. I would favor calling the left side of Bayes’ theorem the “estimated probability,” to avoid the trap of sweeping our ignorance under some rug. In this case the actual probability would be an abstract idea, as in the classical theory of stochastic systems.

Brendon J. Brewer (March 22, 2015 at 5:26 am):
“So on the right side of Bayes’ theorem we have a frequentist estimate yielding a Bayesian result for probability on the left side.”

All the probabilities in a Bayesian calculation are plausibilities. In this case the plausibility happens to be equal to a frequency: this is equivalent to using the principle of indifference about the identity of the particular person.
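[Editor's note: the diagnostic-test calculation discussed in this thread can be sketched in a few lines of Python. The prevalence, sensitivity, and false-positive rate below are made-up numbers chosen only to show the mechanics; the false-positive rate is exactly the quantity that, as Paul notes, is estimated in practice by counting.]

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem.

    prior               -- P(disease), the base rate in the population
    sensitivity         -- P(positive | disease)
    false_positive_rate -- P(positive | no disease)
    """
    # Total probability of a positive test, summed over both hypotheses.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    # Bayes' theorem: posterior = likelihood * prior / evidence.
    return sensitivity * prior / p_positive

# Hypothetical numbers: 1% prevalence, 99% sensitivity, 5% false-positive rate.
p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # -> 0.167
```

Even with a highly accurate test, the low base rate keeps the posterior probability of disease well below 50%, which is the counterintuitive point the lecture example turns on.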

Wallace Marshall (April 20, 2015 at 8:51 pm):
Luke, thanks for these references. I'm looking forward to watching. When you have a chance, I think your readers would appreciate your commenting on Guth’s recent suggestion (https://edge.org/response-detail/25538) that it may no longer be necessary to assume that the universe began in a state of extraordinarily low entropy.

Guth says physicists have had zero success solving the arrow-of-time mystery. Is he not aware of the recent work by Popescu, Short, Linden, Winter and Reimann arguing that a phenomenon known as “quantum entanglement” causes the arrow of time? See this April 2014 article in Quanta Magazine: https://www.quantamagazine.org/20140416-times-arrow-traced-to-quantum-source/ (Sometimes I get the feeling that cosmologists interact with each other’s ideas much less than the general public assumes.)

Guth says he and Carroll and Chien-Yao Tseng are working on a new “two-arrow” model that will proceed under the assumption that the maximum possible entropy of the universe is infinite, thus eliminating the need for an extraordinarily low entropy in the initial state.

Three questions in particular:

(1) What exactly does infinite entropy mean or entail?

(2) Is there some deficiency in the original Carroll-Chen two-arrow model that requires a revision?

(3) Are there philosophical issues of time that need to be considered here, or are those irrelevant for two- vs. one-arrow models?