
Inverse probability

In probability theory, inverse probability is an outdated term for the probability distribution of an unobservable variable.

Today, the problem of determining the distribution of an unobservable variable (by whatever means) is called statistical inference; the method of inverse probability (assigning a probability distribution to an unobservable variable) is called Bayesian probability; the "distribution" of the unobservable variable given the observed data is called the likelihood function (which is not a probability distribution); and the distribution of the unobservable variable given both the observed data and a prior distribution is, by definition, the posterior distribution. The development of the terminology from "inverse probability" to "Bayesian probability" is described by Fienberg (2006) [1]. The term "Bayesian", which displaced "inverse probability", was in fact introduced by R. A. Fisher, who used it in a derogatory sense.

The term "inverse probability" appeared in De Morgan's 1837 article in reference to the Laplace method of probability (developed in an article in 1774, which Laplace independently discovered and then popularized Bayesian methods in his 1812 book), although the term "inverse probability" and not found in these articles.

Inverse probability, interpreted in various ways, was the dominant approach to statistics until the development of the frequentist approach in the early 20th century by R. A. Fisher, Jerzy Neyman and Egon Pearson. After the frequentist approach emerged, the terms frequentist and Bayesian developed in contrast to each other and became widespread in the 1950s.

Details

In modern terms, given a probability distribution p(x | θ) of an observed quantity x conditional on an unobservable variable θ, the "inverse probability" is the posterior distribution p(θ | x), which depends on both the likelihood function (the inversion of the probability distribution) and a prior distribution. The distribution p(x | θ) is called the direct probability. The problem of inverse probability (in the 18th and 19th centuries) was the problem of estimating a parameter from data in the experimental sciences, especially in astronomy and biology. A simple example is estimating the position of a star in the sky (at a specific time on a certain date) for navigation purposes: given the observations, one must estimate the true position (perhaps by averaging). This problem would now be considered part of statistical inference. The terms "direct probability" and "inverse probability" were in use until the mid-20th century, when they were replaced by "likelihood function" and "posterior distribution".
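In this notation, the relationship between the direct and the inverse probability is simply Bayes' theorem; the following is a standard statement of it, with θ and x as defined above (a general formula, not tied to any particular historical formulation):

$$
p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, \mathrm{d}\theta'} \;\propto\; p(x \mid \theta)\, p(\theta).
$$

For the star-position example, if each observation is modeled as the true position θ plus independent Gaussian noise of known variance, and the prior on θ is taken to be flat (an illustrative assumption, not one prescribed by the historical sources), then the posterior for θ is again Gaussian and its mean is the sample average of the observations, which is why simple averaging gives the natural estimate.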

See also

  • Bayesian probability
  • Bayes' theorem

References

  1. ↑ Fienberg, Stephen E. When Did Bayesian Inference Become "Bayesian"? // Bayesian Analysis. - 2006. - Vol. 1, No. 1. - P. 1-40. - DOI: 10.1214/06-BA101.

