Introduction

Beginner

Bayes' theorem

Intermediate

Advanced

Philosophical foundations

Miscellaneous

A law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate. Bayesian rationality takes its name from this theorem, as it is regarded as the foundation of consistent rational reasoning under uncertainty. A.k.a. “Bayes’s Theorem” or “Bayes’s Rule”.
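For reference, the theorem itself, for a hypothesis H and evidence E:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

The prior P(H) is combined with the likelihood P(E|H) to yield the updated (posterior) probability P(H|E).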

Eliezer Yudkowsky is on bloggingheads.tv with the statistician Andrew Gelman.

Several different points of fascination about Bayes…

Looking further, however, one finds a whole crowd of bloggers who seem to see more in Bayes’s theorem than a mere probability inversion…

Bayesian statistics is a system for describing epistemological uncertainty using the mathematical language of probability.
Bayesian probability is one of the most popular interpretations of the concept of probability.

Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false. This web site has been established to help promote this interpretation of probability theory by distributing articles, books and related material. As Ed Jaynes originated this interpretation of probability theory we have a large selection of his articles, as well as articles by a number of other people who use probability theory in this way…
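A minimal illustration of that reduction: if P(B \mid A) = 1 (the probabilistic rendering of “A implies B”) and ¬B is observed, Bayes’ theorem forces

\[
P(A \mid \lnot B) = \frac{P(\lnot B \mid A)\,P(A)}{P(\lnot B)} = \frac{0 \cdot P(A)}{P(\lnot B)} = 0,
\]

which is exactly modus tollens; probabilities strictly between 0 and 1 interpolate between these deductive extremes.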

Bayesian statistics is so closely linked with induction that one often hears it called “Bayesian induction.” What could be more inductive than taking a prior, gathering data, updating the prior with Bayes’ Law, and converging in the limit to the true distribution of some parameter?
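A minimal sketch of that loop, using a coin with unknown bias and a conjugate beta prior (the model and numbers are illustrative, not taken from the quoted post):

```python
import random

# Beta(a, b) prior over a coin's unknown bias p. The beta distribution is
# conjugate to the Bernoulli likelihood, so each Bayes update is just a
# counter increment.
a, b = 1.0, 1.0          # Beta(1, 1): a uniform prior
true_p = 0.7             # the "true distribution" the posterior should approach

for _ in range(10_000):
    heads = random.random() < true_p   # gather one datum
    if heads:
        a += 1.0                       # update the prior on heads
    else:
        b += 1.0                       # ... or on tails

print(a / (a + b))       # posterior mean; settles near true_p
```

After enough flips the posterior mean sits near 0.7, which is the convergence to the true distribution made concrete.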

Gelman (of the popular statistics blog) and Shalizi point out that, in practice, Bayesian statistics should actually be seen as Popper-style hypothetico-deductive reasoning. The problem is intimately linked to the “taking a prior” step above.

Or, how to recognize Bayes’ theorem when you meet it making small talk at a cocktail party.

The basic idea is that there is a distinction between true values x and measured values y. You start off with a prior probability distribution over the true values. You then have a likelihood function, which gives you the probability P(y|x) of measuring any value y given a hypothetical true value x.
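A small sketch of that setup over a discrete grid of true values (the grid, prior, and Gaussian noise model are assumptions chosen for illustration):

```python
import math

xs = [0.0, 1.0, 2.0, 3.0]                 # hypothetical true values x
prior = {x: 1.0 / len(xs) for x in xs}    # prior distribution over true values

def likelihood(y, x, sigma=0.5):
    """P(y|x): density of measuring y given true value x, under assumed
    Gaussian measurement noise with standard deviation sigma."""
    return math.exp(-((y - x) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

y = 1.3                                           # one measured value
unnorm = {x: likelihood(y, x) * prior[x] for x in xs}
evidence = sum(unnorm.values())                   # P(y), the normalizing constant
posterior = {x: p / evidence for x, p in unnorm.items()}

print(posterior)  # mass concentrates on x = 1.0, the value nearest y
```

The posterior is just prior times likelihood, renormalized so it sums to one.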

In other words, what is so special about starting with a human-generated hypothesis? Bayesian methods suggest what I think is the right answer: to get from probabilistic evidence to the probability of something, you must combine the evidence with a prior expectation, a “prior probability”; human hypothesis generation enables this requirement to be ignored with considerable practical success.

Andrew Gelman recently responded to a commenter on the Yudkowsky/Gelman diavlog; the commenter complained that Bayesian statistics was too subjective and lacked rigor. I shall explain why this is unbelievably ironic…

Maybe this kind of Bayesian method for “proving the null” could be used to achieve a better balance.
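One standard way to cash that out is a Bayes factor comparing a point null against a diffuse alternative. A sketch, with the coin-flip model chosen purely for illustration:

```python
from math import comb

def bayes_factor_null(k, n):
    """Bayes factor BF01 for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    after observing k heads in n flips."""
    m_h0 = 0.5 ** n                       # marginal likelihood of the data under H0
    m_h1 = 1.0 / ((n + 1) * comb(n, k))   # integral of p**k * (1-p)**(n-k) over [0, 1]
    return m_h0 / m_h1

print(bayes_factor_null(49, 100))  # comes out around 7.9: the data favor the null
```

Unlike a non-significant p-value, BF01 > 1 counts as positive evidence for the null rather than a mere failure to reject it.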

“Bayesian brain” is a term for the nervous system’s ability to operate under uncertainty in a fashion close to the optimum prescribed by Bayesian statistics.


—————————————

P.S.

Expect this link collection to be updated continually.

Please post a comment if you have something to add.
