
Archive for February, 2016

Gravitational Waves!

It’s been a big 24 hours for science. As I’m sure you know by now, LIGO announced the direct detection of gravitational waves from a pair of merging black holes. The signal showed the characteristic upwardly-pitched “chirp”. More details here.
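For anyone wondering why the signal rises in pitch: a black hole binary orbits faster and faster as it loses energy to gravitational waves, so the wave frequency climbs right up until the merger. Below is a minimal Python sketch of an idealised chirp using the leading-order Newtonian relation f(t) ∝ (t_c − t)^(−3/8). This is purely my own toy illustration – the merger time, starting frequency and sample rate are made-up numbers, and it has nothing to do with LIGO’s actual analysis pipeline.

```python
import numpy as np

# Toy chirp (illustration only, not LIGO's pipeline): at leading (Newtonian)
# order the gravitational-wave frequency of an inspiral grows as
# f(t) = f0 * ((t_c - t)/t_c)**(-3/8), so pitch and amplitude both rise
# toward the merger time t_c.

t_c = 1.0                                   # assumed merger time (s)
fs = 4096                                   # assumed sample rate (Hz)
t = np.arange(0.0, 0.98, 1.0 / fs)

f0 = 35.0                                   # assumed starting frequency (Hz)
f = f0 * ((t_c - t) / t_c) ** (-3.0 / 8.0)  # instantaneous frequency sweeps up

phase = 2.0 * np.pi * np.cumsum(f) / fs     # integrate frequency to get phase
amplitude = f ** (2.0 / 3.0)                # strain amplitude also grows (~ f^(2/3))
strain = amplitude * np.sin(phase)
strain /= np.abs(strain).max()              # normalise, e.g. to play as audio

print(f"frequency sweeps from {f[0]:.0f} Hz to {f[-1]:.0f} Hz in {t[-1]:.2f} s")
```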

When I was a postdoc at ETH Zurich in 2011, Kip Thorne gave a wonderful set of lectures to scientists and laypersons on gravitational-wave astronomy. He was good enough to have lunch with the students and postdocs as well, where he regaled us with stories of working with the Russians in the 1970s and a movie he was working on with Steven Spielberg. Given his decades of remarkable work in the field, I remember thinking “I really hope he sees gravitational waves detected in his lifetime”. So it was great to see him sharing the stage at the LIGO press conference.

It’s also been a big 24 hours for me turning up in unusual places. The New York Times reported the trend, kicked off by Katie Mack, of anticipating the announcement by mimicking the LIGO chirp. I was at Monash University for the 10th conference-workshop of the Australian National Institute for Theoretical Astrophysics (ANITA), and joined an enthusiastic bunch of students and staff (including Katie) in staying up until 2:30 am to hear the announcement. We made our own chirping video, complete with background noise. And so, somehow, I ended up on the New York Times website.

[Screenshot: the New York Times website, 12 February 2016]

Also today, my post about the effect of altitude on cricket ball trajectories was linked by ESPN’s cricinfo.com, previewing a game at the Wanderers Stadium in Johannesburg:

At 1633m above sea level, the Wanderers Stadium is at an unusually high altitude. Scientific models have worked out that a shot that would just reach the boundary at the Wanderers (approx. 65m) would fall some four metres short at lower-altitude venues.
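Out of curiosity, here is a back-of-the-envelope version of that kind of calculation: a lofted shot with quadratic air drag, launched in the thinner air of Johannesburg versus at sea level. To be clear, all of the numbers below (launch speed and angle, drag coefficient, air densities) are my own illustrative guesses, not the figures behind the model cricinfo is quoting.

```python
import numpy as np

# Toy model of the altitude effect on a lofted cricket shot (illustrative only).
# Quadratic air drag, simple Euler time-stepping.

def carry(speed, angle_deg, rho, m=0.156, r=0.036, cd=0.5, dt=1e-3):
    """Horizontal distance (m) travelled before the ball returns to launch height."""
    area = np.pi * r ** 2
    k = 0.5 * cd * rho * area / m                 # drag acceleration = k * |v| * v
    vx = speed * np.cos(np.radians(angle_deg))
    vy = speed * np.sin(np.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0:
        v = np.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (9.81 + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

rho_sea = 1.225     # air density at sea level (kg/m^3)
rho_jhb = 1.04      # rough density at ~1633 m altitude (assumed)

speed, angle = 33.0, 40.0        # assumed launch speed (m/s) and angle (deg)
d_jhb = carry(speed, angle, rho_jhb)
d_sea = carry(speed, angle, rho_sea)
print(f"Wanderers: {d_jhb:.1f} m, sea level: {d_sea:.1f} m, "
      f"difference: {d_jhb - d_sea:.1f} m")
```

The thinner air drags the ball less, so the same shot carries a few metres further – the same effect the quote describes, though the exact numbers depend on the launch conditions assumed.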

Beer bottle performance art and sports science aren’t really my research focus at the moment, but I’m happy to branch out.

Read Full Post »

Abstract

In what follows, I’ll consider Carrier’s claims about the mathematical foundations of probability theory. What Carrier says about probability is at odds with every probability textbook (or set of lecture notes) I can find. He rejects the foundations of probability laid by frequentists (e.g. Kolmogorov’s axioms) and Bayesians (e.g. Cox’s theorem). He is neither, because we’re all wrong – only Carrier knows how to do probability correctly. That’s why he has consistently refused my repeated requests to provide scholarly references – they do not exist. As such, Carrier cannot borrow the results and standing of modern probability theory. Until he has completed his revolution and published a rigorous mathematical account of Carrierian probability theory, all of his claims about probability are meaningless.

Carrier’s version of Probability Theory

I intend to demonstrate these claims, so we’ll start by quoting Carrier at length. I won’t be relying on previous posts. In TEC, Carrier says:

Bayes’ theorem is an argument in formal logic that derives the probability that a claim is true from certain other probabilities about that theory and the evidence. It’s been formally proven, so no one who accepts its premises can rationally deny its conclusion. It has four premises … [namely P(h|b), P(~h|b), P(e|h.b), P(e|~h.b)]. … Once we have [those], the conclusion necessarily follows according to a fixed formula. That conclusion is then by definition the probability that our claim h is true given all our evidence e and our background knowledge b.

In OBR, he says:

[E]ver since the Principia Mathematica it has been an established fact that nearly all mathematics reduces to formal logic … The relevant probability theory can be deduced from Willard Arithmetic … anyone familiar with both Bayes’ Theorem (hereafter BT) and conditional logic (i.e. syllogisms constructed of if/then propositions) can see from what I show there [in Proving History] that BT indeed is reducible to a syllogism in conditional logic, where the statements of each probability-variable within the formula is a premise in formal logic, and the conclusion of the equation becomes the conclusion of the syllogism. In the simplest terms, “if P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y, then P(h|e.b) is z,” which is a logically necessary truth, becomes the concluding major premise, and “P(h|b) is w and P(e|h.b) is x and P(e|~h.b) is y” are the minor premises. And one can prove the major premise true by building syllogisms all the way down to the formal proof of BT, again by symbolic logic (which one can again replace with old-fashioned propositional logic if one were so inclined).

More specifically it is a form of argument, that is, a logical formula that describes a particular kind of argument. The form of this argument is logically valid. That is, its conclusion is necessarily true when its premises are true. Which means, if the three variables in BT are true (each representing a proposition about a probability, hence a premise in an argument), the epistemic probability that results is then a logically necessary truth. So, yes, Bayes’ Theorem is an argument.

He links to, and later shows, the following “Proof of Bayes Theorem … by symbolic logic”, saying that “the derivation of the theorem is this.”

[Image: Carrier’s “Proof of Bayes Theorem … by symbolic logic”]

For future reference, we’ll call this “The Proof”. Of his mathematical notation, Carrier says:

P(h|b) is symbolic notation for the proposition “the probability that a designated hypothesis is true given all available background knowledge but not the evidence to be examined is x,” where x is an assigned probability in the argument.

Like nothing we’ve ever seen

I have 13 probability textbooks/lecture notes open in front of me: Bain and Engelhardt; Jaynes (PDF); Wall and Jenkins; MacKay (PDF); Grinstead and Snell; Ash; Bertsekas and Tsitsiklis; Rosenthal; Bayer; Dembo; Sokol and Rønn-Nielsen; Venkatesh; Durrett; Tao. I recently stopped by Sydney University’s Library to pick up a book on nuclear reactions, and took the time to open another 15 textbooks. I’ve even checked some of the philosophy of probability literature, such as Antony Eagle’s collection of readings (highly recommended), Arnborg and Sjödin, Caticha, Colyvan, Hájek (who has a number of great papers on probability), and Maudlin.

When presenting the foundations of probability theory, these textbooks and articles roughly divide along Bayesian vs frequentist lines. The purely mathematical approach, typical of frequentist textbooks, begins by thinking about relative frequencies before introducing measure theory, explaining Kolmogorov’s axioms, motivating the definition of conditional probability, and then – in one line of algebra – giving “The Proof” of Bayes’ theorem. As Mosteller, Rourke and Thomas put it: “At the mathematical level, there is hardly any disagreement about the foundations of probability … The foundation in set theory was laid in 1933 by the great Russian probabilist, A. Kolmogorov.” With this mathematical apparatus in hand, we use it to analyse relative frequencies of data.
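For reference, here is what that “one line of algebra” looks like, written in the h/e/b notation of the quotes above. This is just the standard textbook derivation, nothing specific to any one of the books listed:

```latex
% Definition of conditional probability (assuming P(e|b) > 0):
%   P(h|e.b) := P(h.e|b) / P(e|b),  and likewise  P(e|h.b) := P(h.e|b) / P(h|b).
\begin{align*}
  P(h \mid e.b)
    &= \frac{P(h.e \mid b)}{P(e \mid b)}
     = \frac{P(e \mid h.b)\, P(h \mid b)}{P(e \mid b)} \\
    &= \frac{P(e \mid h.b)\, P(h \mid b)}
            {P(e \mid h.b)\, P(h \mid b) + P(e \mid \lnot h.b)\, P(\lnot h \mid b)} ,
\end{align*}
% where the last line expands P(e|b) by the law of total probability. The four
% quantities P(h|b), P(~h|b), P(e|h.b) and P(e|~h.b) are exactly the "four
% premises" Carrier refers to above.
```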

Bayesians take a different approach (e.g. Probability Theory by Ed Jaynes). We start by thinking about modelling degrees of plausibility. The frequentist, quite rightly, asks what the foundations of this approach are. In particular, why think that degrees of plausibility should be modelled by probabilities? Why think that “plausibilities” can be mathematised at all, and why use Kolmogorov’s particular mathematical apparatus? Bayesians respond by motivating certain “desiderata of rationality”, and use these to prove via Cox’s theorem (or perhaps via de Finetti’s “Dutch Book” arguments) that degrees of plausibility obey the usual rules of probability. In particular, the product rule, p(A and B | C) = p(A|B and C) p(B|C), is proven, from which Bayes’ theorem follows via “The Proof”.

In precisely none of these textbooks and articles will you find anything like Carrier’s account. When presenting the foundations of probability theory in general and Bayes Theorem in particular, no one presents anything like Carrier’s version of probability theory. Do it yourself, if you have the time and resources. Get a textbook (some of the links above are to online PDFs), find the sections on the foundations of probability and Bayes Theorem, and compare to the quotes from Carrier above. In this company, Carrier’s version of probability theory is a total loner. We’ll see why. (more…)

Read Full Post »

Continuing my response to Carrier (here’s Part 1 and Part 2).

Part Four: The Real Heart of the Matter

Note that this is actually not “my” conclusion. It is the conclusion of three mathematicians (including one astrophysicist) in two different studies converging on the same result independently of each other.

Wow! Two “studies”! (In academia, we call them “papers”. Though neither was published in a peer-reviewed journal, so perhaps “articles”.) Three mathematicians! Except that Elliott Sober is a philosopher (and a fine one), not a mathematician – he has never published a paper in a mathematics journal. More grasping at straws.

 

Barnes wants to get a different result by insisting the prior probability of observers is low—which means, because prior probabilities are always relative probabilities, that that probability is low without God, i.e. that it is on prior considerations far more likely that observers would exist if God exists than if He doesn’t.


Those sentences fail Bayesian Probability 101. Prior probabilities are probabilities of hypotheses. Always. In every probability textbook there has ever been.[1] Probabilities of data given a hypothesis – such as the probability that this universe contains observers given naturalism – are called likelihoods. So, there is the prior probability of naturalism, and there is the likelihood of observers given naturalism, but there is no such thing as the “prior probability of observers”.

This is not a harmless slip in terminology. Carrier treats a likelihood as if it were a prior. He has confused the concepts, not just the names. Carrier states that “the only way the prior probability of observers can be low, is if the prior probability of observers is high on some alternative hypothesis.”[2] This is true of prior probabilities, but it is not true of likelihoods. In the vernacular, likelihoods are not normalised with respect to hypotheses. They are normalised with respect to evidence: p(e|h.b) + p(~e|h.b) = 1.
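To make the distinction concrete, here is a toy numerical example; the numbers are invented purely for illustration. The priors over hypotheses sum to one, the likelihoods of the same evidence under rival hypotheses need not, and Bayes’ theorem is what turns the one into a posterior using the other:

```python
# Toy numbers, invented purely to illustrate priors vs likelihoods.

prior_h = 0.3                       # P(h|b): prior probability of hypothesis h
prior_not_h = 1.0 - prior_h         # P(~h|b): priors over hypotheses sum to 1

like_e_given_h = 0.9                # P(e|h.b): likelihood of the evidence under h
like_e_given_not_h = 0.8            # P(e|~h.b): likelihood under ~h
# Note 0.9 + 0.8 = 1.7: likelihoods are NOT normalised across hypotheses.
# What is normalised is the evidence, for a fixed hypothesis:
like_not_e_given_h = 1.0 - like_e_given_h    # so P(e|h.b) + P(~e|h.b) = 1

# Bayes' theorem turns priors and likelihoods into a posterior over hypotheses.
posterior_h = (like_e_given_h * prior_h) / (
    like_e_given_h * prior_h + like_e_given_not_h * prior_not_h
)
print(f"P(h|e.b) = {posterior_h:.3f}")       # ~0.325; P(h|e.b) + P(~h|e.b) = 1
```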

It follows that this entire section on the “prior probability of observers” and the need to consider “some alternative hypothesis” is garbage. There is simply no argument to respond to, only a hopeless mess of Carrier’s confusions. It’s an extended discussion about prior probabilities from a guy who doesn’t know what a prior probability is. Given that he has previously confused priors and posteriors, he’s zero from three on the fundamentals of Bayes theorem. You cannot keep getting the basics of probability theory wrong and expect to be taken seriously. (more…)

Read Full Post »

Looking for a romantic evening on (the day after) Valentine’s day? Why not try the Macarthur Astronomy Forum!

Location: Western Sydney University, Lecture theatre, Building 30

Date: Monday 15th February, 7.30 pm

Title: There is more to the Universe than its good looks.

Abstract: The planets, stars and galaxies that fill the night sky obey elegant mathematical patterns: the laws of nature. Why does our Universe obey these particular laws? As a clue to answering this question, scientists have asked a similar question: what if the laws were slightly different? What if the universe had begun with more matter, had heavier particles, or had four dimensions of space?

In the last 30 years, scientists have discovered something astounding: the vast majority of these changes are disastrous. We end up with a universe containing no galaxies, no stars, no planets, no atoms, no molecules, and most importantly, no intelligent life-forms wondering what went wrong. This is called the fine-tuning of the universe for life. After explaining the science of what happens when you change the way our universe works, we will ask: what does all this mean?

Read Full Post »