Bayes' Theorem (on the anniversary of his death in 1761)
Bayes' solution to a problem of "inverse probability" was presented in the Essay Towards Solving a Problem in the Doctrine of Chances (1764), published posthumously by his friend Richard Price in the Philosophical Transactions of the Royal Society of London. This essay contains a statement of a special case of Bayes' theorem.
In the first decades of the eighteenth century, many problems concerning the probability of certain events, given specified conditions, were solved. For example: given a specified number of white and black balls in an urn, what is the probability of drawing a black ball? These are sometimes called "forward probability" problems. Attention soon turned to the converse of such a problem: given that one or more balls have been drawn, what can be said about the number of white and black balls in the urn? Bayes' Essay contains his solution to a similar problem, posed by Abraham de Moivre, author of The Doctrine of Chances (1718).
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic.
The probability of a hypothesis H conditional on a given body of data E is the ratio of the unconditional probability of the conjunction of the hypothesis with the data to the unconditional probability of the data alone.
(1.2) Definition.
The probability of H conditional on E is defined as PE(H) = P(H & E)/P(E), provided that both terms of this ratio exist and P(E) > 0.
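As a quick illustration of the definition (this worked example is added here for concreteness and is not part of Bayes' essay), take a fair six-sided die, let E be "the outcome is even" and H be "the outcome exceeds 3":

```latex
P_E(H) \;=\; \frac{P(H \,\&\, E)}{P(E)}
       \;=\; \frac{P(\{4,6\})}{P(\{2,4,6\})}
       \;=\; \frac{2/6}{3/6}
       \;=\; \frac{2}{3}.
```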
(1.2) Bayes' Theorem.
PE(H) = [P(H)/P(E)] PH(E)
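The theorem follows in one line from the definition above, applied with the roles of H and E exchanged so that P(H & E) = P(H)PH(E) (a standard derivation, spelled out here for completeness):

```latex
P_E(H) \;=\; \frac{P(H \,\&\, E)}{P(E)}
       \;=\; \frac{P(H)\,P_H(E)}{P(E)}
       \;=\; \left[\frac{P(H)}{P(E)}\right] P_H(E).
```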
(1.3) Bayes' Theorem (2nd form).
PE(H) = P(H)PH(E) / [P(H)PH(E) + P(~H)P~H(E)]
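The second form is obtained from (1.2) by expanding the denominator P(E) with the law of total probability (again a standard step, written out here):

```latex
P(E) \;=\; P(E \,\&\, H) + P(E \,\&\, \neg H)
     \;=\; P(H)\,P_H(E) + P(\neg H)\,P_{\neg H}(E),
```

and substituting this expansion into (1.2) yields (1.3).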
In this guise Bayes' theorem is particularly useful for inferring causes from their effects, since it is often fairly easy to discern the probability of an effect given the presence or absence of a putative cause. For instance, physicians often screen for diseases of known prevalence using diagnostic tests of recognized sensitivity and specificity. The sensitivity of a test, its "true positive" rate, is the fraction of times that patients with the disease test positive for it. The test's specificity, its "true negative" rate, is the proportion of healthy patients who test negative. If we let H be the event of a given patient having the disease, and E be the event of her testing positive for it, then the test's sensitivity and specificity are given by the likelihoods PH(E) and P~H(~E), respectively, and the "baseline" prevalence of the disease in the population is P(H). Given these inputs about the effects of the disease on the outcome of the test, one can use (1.3) to determine the probability of disease given a positive test.
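A minimal numerical sketch of this screening calculation using formula (1.3); the prevalence, sensitivity, and specificity below are illustrative values, not figures from any particular test:

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test, via Bayes' theorem (1.3).

    prevalence  = P(H),    the baseline rate of the disease
    sensitivity = PH(E),   the test's true-positive rate
    specificity = P~H(~E), the test's true-negative rate
    """
    false_positive_rate = 1.0 - specificity           # P~H(E)
    numerator = prevalence * sensitivity              # P(H) * PH(E)
    denominator = numerator + (1.0 - prevalence) * false_positive_rate
    return numerator / denominator                    # PE(H)

# Illustrative inputs: a disease affecting 1% of the population,
# screened with a 95%-sensitive, 90%-specific test.
print(posterior_probability(prevalence=0.01, sensitivity=0.95, specificity=0.90))
# ~0.088
```

With these assumed numbers, even a positive result leaves the probability of disease below 10%; the low base rate P(H) dominates, which is exactly what (1.3) makes explicit.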
(from here and Wikipedia)