Introduction


Recommended book: Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis

For more, see the links listed in this post by Richard Carrier, or see below:

Links

The term “Mind Projection Fallacy” was coined by the late great Bayesian Master, E. T. Jaynes, as part of his long and hard-fought battle against the accursed frequentists.  Jaynes was of the opinion that probabilities were in the mind, not in the environment – that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.

I remember (dimly, as human memories go) the first time I self-identified as a “Bayesian”. Someone had just asked a malformed version of an old probability puzzle…

You’ve probably seen the word ‘Bayesian’ used a lot on this site, but may be a bit uncertain of what exactly we mean by that.

Bayes’ theorem was the subject of a detailed article. The essay is good, but over 15,000 words long — here’s the condensed version for Bayesian newcomers like myself.

Bayes’ Theorem for the curious and bewildered; an excruciatingly gentle introduction.

This post is elementary: it introduces a simple method of visualizing Bayesian calculations. In my defense, we’ve had other elementary posts before, and they’ve been found useful; plus, I’d really like this to be online somewhere, and it might as well be here.

Everyday use of a mathematical concept.

I recently came up with what I think is an intuitive way to explain Bayes’ Theorem…

Bayes' theorem

A law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate. Bayesian rationality takes its name from this theorem, as it is regarded as the foundation of consistent rational reasoning under uncertainty. A.k.a. “Bayes’s Theorem” or “Bayes’s Rule”.
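
In symbols, for a hypothesis H and evidence E:

    P(H | E) = P(E | H) · P(H) / P(E)

And a minimal worked example in Python (the diagnostic-test numbers are invented for illustration, not drawn from any of the linked posts):

    prior = 0.01           # P(disease): assumed base rate
    sensitivity = 0.95     # P(positive | disease)
    false_positive = 0.05  # P(positive | no disease)

    # Total probability of a positive test, then Bayes' theorem.
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(f"P(disease | positive) = {posterior:.3f}")  # about 0.161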

Eliezer Yudkowsky is on bloggingheads.tv with the statistician Andrew Gelman.

Several different points of fascination about Bayes…

Looking further, however, there is a whole crowd on the blogs that seems to see more in Bayes’s theorem than a mere probability inversion…

Bayesian statistics is a system for describing epistemological uncertainty using the mathematical language of probability.

Bayesian probability is one of the most popular interpretations of the concept of probability.

Edwin T. Jaynes was one of the first people to realize that probability theory, as originated by Laplace, is a generalization of Aristotelian logic that reduces to deductive logic in the special case that our hypotheses are either true or false. This web site has been established to help promote this interpretation of probability theory by distributing articles, books and related material. As Ed Jaynes originated this interpretation of probability theory we have a large selection of his articles, as well as articles by a number of other people who use probability theory in this way…

Bayesian statistics is so closely linked with induction that one often hears it called “Bayesian induction.” What could be more inductive than taking a prior, gathering data, updating the prior with Bayes’ Law, and converging to the true distribution of some parameter?
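
As a sketch of that loop (a minimal illustration of my own; the coin-flip setting, the Beta prior, and all numbers are assumptions, not from the quoted post), here is the conjugate Beta-binomial version, where updating the prior with Bayes’ Law reduces to incrementing two counters:

    import random

    # Beta(a, b) prior on a coin's bias; Beta is conjugate to the
    # Bernoulli likelihood, so each flip updates the prior in closed form.
    a, b = 1.0, 1.0   # Beta(1, 1) is the uniform prior
    true_bias = 0.7   # unknown in practice; fixed here to drive the simulation

    for _ in range(1000):
        heads = random.random() < true_bias
        if heads:
            a += 1    # one more observed head
        else:
            b += 1    # one more observed tail

    print(f"posterior mean = {a / (a + b):.3f}")  # drifts toward 0.7

The posterior mean a / (a + b) moves toward the true bias as flips accumulate, which is exactly the convergence the quote describes.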

Gelman (of the popular statistics blog) and Shalizi point out that, in practice, Bayesian statistics should actually be seen as Popper-style hypothesis-based deduction. The problem is intimately linked to the “taking a prior” step above.

Or, how to recognize Bayes’ theorem when you meet it making small talk at a cocktail party.

The basic idea is that there’s a distinction between true values x and measured values y. You start off with a prior probability distribution over the true values. You then have a likelihood function, which gives you the probability P(y|x) of measuring any value y given a hypothetical true value x.
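
A minimal sketch of that setup on a discrete grid (the Gaussian noise model, the grid, and the numbers are my assumptions; the quoted post does not specify them):

    import math

    # Candidate true values x on a grid, with a uniform prior over them.
    xs = [i / 100 for i in range(101)]
    prior = [1 / len(xs)] * len(xs)

    def likelihood(y, x, sigma=0.1):
        """P(y | x): Gaussian measurement noise around the true value."""
        return math.exp(-((y - x) ** 2) / (2 * sigma ** 2))

    # One measurement: multiply prior by likelihood, then renormalize.
    y_observed = 0.42
    unnorm = [likelihood(y_observed, x) * p for x, p in zip(xs, prior)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]

    mode = xs[max(range(len(xs)), key=posterior.__getitem__)]
    print(f"posterior mode = {mode}")  # sits at the measurement under a flat prior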

In other words: what is so special about starting with a human-generated hypothesis? Bayesian methods suggest what I think is the right answer: to get from probabilistic evidence to the probability of something requires combining the evidence with a prior expectation, a “prior probability”, and human hypothesis generation enables this requirement to be ignored with considerable practical success.

Andrew Gelman recently responded to a commenter on the Yudkowsky/Gelman diavlog; the commenter complained that Bayesian statistics were too subjective and lacked rigor.  I shall explain why this is unbelievably ironic…

Maybe this kind of Bayesian method for “proving the null” could be used to achieve a better balance.
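
One concrete form such a method can take is a Bayes factor; the point-null-versus-uniform coin-flip setup below is my own illustration, not necessarily the linked proposal:

    from math import comb

    # Data: k heads in n coin flips.
    n, k = 100, 52

    # Marginal likelihood under the point null H0: p = 0.5.
    m0 = comb(n, k) * 0.5 ** n

    # Marginal likelihood under H1: p ~ Uniform(0, 1).
    # Integrating the binomial likelihood over the uniform prior gives 1 / (n + 1).
    m1 = 1 / (n + 1)

    print(f"Bayes factor (null over alternative) = {m0 / m1:.2f}")

For 52 heads in 100 flips the factor comes out around 7 in favor of the null, i.e. the data count as positive evidence for p = 0.5 rather than a mere failure to reject it.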

The Bayesian brain is a term used to refer to the nervous system’s ability to operate in situations of uncertainty in a fashion close to the optimum prescribed by Bayesian statistics.


—————————————

P.S.

Expect this link collection to be updated continually.

Please post a comment if you have something to add.
