# Probability Density Function


In probability theory, a **probability density function** (**PDF**), or **density** of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a *relative likelihood* that the value of the random variable would equal that sample.^{[2]} In other words, while the *absolute likelihood* for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample.

In a more precise sense, the PDF is used to specify the probability of the random variable falling *within a particular range of values*, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range--that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to 1.
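
As a concrete numerical check, the sketch below (assuming NumPy is available; the interval endpoints are arbitrary illustrative choices) integrates the standard normal density over a range and compares the result with the closed form in terms of the error function:

```python
import math
import numpy as np

def normal_pdf(x):
    """Density of the standard normal distribution."""
    return np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

a, b = -1.0, 2.0
x = np.linspace(a, b, 10_001)
area = np.trapz(normal_pdf(x), x)        # area under the PDF over [a, b]

# Closed form: P(a <= X <= b) = (erf(b/sqrt(2)) - erf(a/sqrt(2))) / 2
exact = (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2))) / 2
print(f"numeric {area:.6f} vs exact {exact:.6f}")   # the two agree closely
```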

The terms "*probability distribution function*"^{[3]} and "*probability function*"^{[4]} have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion.^{[5]} In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.

## Example

Suppose bacteria of a certain species typically live 4 to 6 hours. The probability that a bacterium lives *exactly* 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.

In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour^{-1}). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour^{-1}. This quantity 2 hour^{-1} is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour^{-1}) *dt*. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where *dt* is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour^{-1}) × (1 nanosecond) ≈ 6×10^{-13} (using the unit conversion 3.6×10^{12} nanoseconds = 1 hour).

There is a probability density function *f* with *f*(5 hours) = 2 hour^{-1}. The integral of *f* over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
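
To make this concrete in code, here is a minimal sketch (assuming NumPy). The triangular density below is a hypothetical choice, not implied by the text, picked only because its value at 5 hours is 2 hour^{-1}, so the window probabilities match the figures above:

```python
import numpy as np

# Hypothetical lifetime density peaking at 5 hours with f(5) = 2 per hour:
# triangular on [4.5, 5.5] (an illustrative assumption, not from the text).
def f(t):
    return np.maximum(0.0, 2.0 - 4.0 * np.abs(t - 5.0))

t = np.linspace(4.0, 6.0, 200_001)
print(np.trapz(f(t), t))          # total area: 1.0, as a density requires

w = np.linspace(5.0, 5.01, 1_001)
print(np.trapz(f(w), w))          # ~0.02, the window probability from the text
```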

## Absolutely continuous univariate distributions

A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable $X$ has density $f_X$, where $f_X$ is a non-negative Lebesgue-integrable function, if:

$$\Pr[a \le X \le b] = \int_a^b f_X(x) \, dx.$$

Hence, if $F_X$ is the cumulative distribution function of $X$, then:

$$F_X(x) = \int_{-\infty}^{x} f_X(u) \, du,$$

and (if $f_X$ is continuous at $x$)

$$f_X(x) = \frac{d}{dx} F_X(x).$$

Intuitively, one can think of $f_X(x) \, dx$ as being the probability of $X$ falling within the infinitesimal interval $[x, x + dx]$.
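
As a quick numerical illustration of the relation $f_X = F_X'$ (a sketch; the exponential distribution and its rate are arbitrary illustrative choices), a centered finite difference of the CDF recovers the density:

```python
import math

# Exponential distribution with rate lam: F(x) = 1 - exp(-lam*x),
# f(x) = lam * exp(-lam*x). Differentiating F numerically recovers f.
lam = 1.5
F = lambda x: 1.0 - math.exp(-lam * x)
f = lambda x: lam * math.exp(-lam * x)

x, h = 0.7, 1e-6
print((F(x + h) - F(x - h)) / (2 * h))   # finite difference of the CDF
print(f(x))                              # density: the two values match
```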

(*This definition may be extended to any probability distribution using the measure-theoretic definition of probability.*)

## Formal definition

A random variable $X$ with values in a measurable space $(\mathcal{X}, \mathcal{A})$ (usually $\mathbb{R}^n$ with the Borel sets as measurable subsets) has as probability distribution the measure $X_*P$ on $(\mathcal{X}, \mathcal{A})$: the **density** of $X$ with respect to a reference measure $\mu$ on $(\mathcal{X}, \mathcal{A})$ is the Radon-Nikodym derivative:

$$f = \frac{dX_*P}{d\mu}.$$

That is, *f* is any measurable function with the property that:

$$\Pr[X \in A] = \int_{X^{-1}A} dP = \int_A f \, d\mu$$

for any measurable set $A \in \mathcal{A}$.

### Discussion

In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).

It is not possible to define a density with reference to an arbitrary measure (e.g. one can't choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost everywhere unique.

## Further details

Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval [0, ½] has probability density *f*(*x*) = 2 for 0 ≤ *x* ≤ ½ and *f*(*x*) = 0 elsewhere.

The standard normal distribution has probability density

$$f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2}.$$

If a random variable *X* is given and its distribution admits a probability density function *f*, then the expected value of *X* (if the expected value exists) can be calculated as

$$\operatorname{E}[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx.$$
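
For instance, the expectation of an exponential random variable can be checked numerically against its known mean $1/\lambda$ (a sketch assuming NumPy; the rate is an arbitrary choice):

```python
import numpy as np

# E[X] = integral of x * f(x) dx; for Exponential(rate=2) this is 1/2.
lam = 2.0
x = np.linspace(0, 40, 400_001)
fx = lam * np.exp(-lam * x)          # exponential density on a grid
print(np.trapz(x * fx, x))           # ~0.5 = 1/lam
```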

Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., it does not assign positive probability to any individual point.

A distribution has a density function if and only if its cumulative distribution function *F*(*x*) is absolutely continuous. In this case, *F* is almost everywhere differentiable, and its derivative can be used as probability density:

$$\frac{d}{dx} F(x) = f(x).$$

If a probability distribution admits a density, then the probability of every one-point set {*a*} is zero; the same holds for finite and countable sets.

Two probability densities *f* and *g* represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.

In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:

If *dt* is an infinitely small number, the probability that *X* is included within the interval (*t*, *t* + *dt*) is equal to *f*(*t*) *dt*, or:

$$\Pr(t < X < t + dt) = f(t) \, dt.$$

## Link between discrete and continuous distributions

It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution--that is, taking -1 or 1 for values, with probability ½ each. The density of probability associated with this variable is:

$$f(t) = \frac{1}{2} \bigl( \delta(t+1) + \delta(t-1) \bigr).$$

More generally, if a discrete variable can take *n* different values among real numbers, then the associated probability density function is:

$$f(t) = \sum_{i=1}^{n} p_i \, \delta(t - x_i),$$

where $x_1, \ldots, x_n$ are the discrete values accessible to the variable and $p_1, \ldots, p_n$ are the probabilities associated with these values.

This substantially unifies the treatment of discrete and continuous probability distributions. For instance, the above expression allows for determining statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous distribution of the probability.
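
For example, applying the continuous-distribution formulas to the Rademacher density above recovers the familiar mean and variance:

$$\operatorname{E}[X] = \int_{-\infty}^{\infty} t \, f(t) \, dt = \tfrac{1}{2}(-1) + \tfrac{1}{2}(+1) = 0, \qquad \operatorname{Var}[X] = \int_{-\infty}^{\infty} t^2 f(t) \, dt - \operatorname{E}[X]^2 = \tfrac{1}{2} + \tfrac{1}{2} - 0 = 1.$$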

## Families of densities

It is common for probability density functions (and probability mass functions) to be parametrized--that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by $\mu$ and $\sigma^2$ respectively, giving the family of densities

$$f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.$$
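
In code, a parametrized family is naturally a function of both the domain variable and the parameters; each choice of parameters picks one member of the family (a minimal sketch; the parameter values below are arbitrary):

```python
import math

# The two-parameter normal family: (mu, sigma2) selects one density.
def normal_pdf(x, mu, sigma2):
    return math.exp(-(x - mu)**2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Three members of the family evaluated at the same point x = 0:
for mu, sigma2 in [(0.0, 1.0), (0.0, 4.0), (3.0, 1.0)]:
    print(mu, sigma2, normal_pdf(0.0, mu, sigma2))
```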

It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density--the probability of *something* in the domain occurring--equals 1). This normalization factor is outside the kernel of the distribution.

Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density, however, is trickier and requires more work: see the section below on change of variables.

## Densities associated with multiple variables

For continuous random variables $X_1, \ldots, X_n$, it is also possible to define a probability density function associated to the set as a whole, often called the **joint probability density function**. This density function is defined as a function of the $n$ variables, such that, for any domain $D$ in the $n$-dimensional space of the values of the variables $X_1, \ldots, X_n$, the probability that a realisation of the set variables falls inside the domain $D$ is

$$\Pr\left(X_1, \ldots, X_n \in D\right) = \int_D f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) \, dx_1 \cdots dx_n.$$

If $F(x_1, \ldots, x_n) = \Pr(X_1 \le x_1, \ldots, X_n \le x_n)$ is the cumulative distribution function of the vector $(X_1, \ldots, X_n)$, then the joint probability density function can be computed as a partial derivative

$$f(x) = \left. \frac{\partial^n F}{\partial x_1 \cdots \partial x_n} \right|_x.$$

### Marginal densities

For $i = 1, 2, \ldots, n$, let $f_{X_i}(x_i)$ be the probability density function associated with variable $X_i$ alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables $X_1, \ldots, X_n$ by integrating over all values of the other $n - 1$ variables:

$$f_{X_i}(x_i) = \int f(x_1, \ldots, x_n) \, dx_1 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_n.$$
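
A numerical sketch of this (assuming NumPy, with a correlated standard bivariate normal as an illustrative joint density): integrating the joint density over a grid of $y$ values recovers the standard normal marginal of $x$:

```python
import numpy as np

# Standard bivariate normal joint density with correlation rho = 0.6.
rho = 0.6
x = np.linspace(-6, 6, 601)
y = np.linspace(-8, 8, 1601)
X, Y = np.meshgrid(x, y, indexing="ij")
joint = np.exp(-(X**2 - 2*rho*X*Y + Y**2) / (2*(1 - rho**2))) \
        / (2*np.pi*np.sqrt(1 - rho**2))

marginal = np.trapz(joint, y, axis=1)       # integrate out y for each x
exact = np.exp(-x**2 / 2) / np.sqrt(2*np.pi)  # standard normal marginal
print(np.max(np.abs(marginal - exact)))     # ~0 up to grid error
```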

### Independence

Continuous random variables $X_1, \ldots, X_n$ admitting a joint density are all independent from each other if and only if

$$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_{X_1}(x_1) \cdots f_{X_n}(x_n).$$

### Corollary

If the joint probability density function of a vector of $n$ random variables can be factored into a product of $n$ functions of one variable

$$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n)$$

(where each $f_i$ is not necessarily a density), then the $n$ variables in the set are all independent from each other, and the marginal probability density function of each of them is given by

$$f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x) \, dx}.$$

### Example

This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call $\vec{R}$ a 2-dimensional random vector of coordinates $(X, Y)$: the probability to obtain $\vec{R}$ in the quarter plane of positive $x$ and $y$ is

$$\Pr(X > 0, Y > 0) = \int_0^\infty \int_0^\infty f_{X,Y}(x, y) \, dx \, dy.$$

## Function of random variables and change of variables in the probability density function

If the probability density function of a random variable (or vector) $X$ is given as $f_X(x)$, it is possible (but often not necessary; see below) to calculate the probability density function of some variable $Y = g(X)$. This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape $f_{g(X)} = f_Y$ using a known (for instance, uniform) random number generator.

It is tempting to think that in order to find the expected value $\operatorname{E}(g(X))$, one must first find the probability density $f_{g(X)}$ of the new random variable $Y = g(X)$. However, rather than computing

$$\operatorname{E}\bigl(g(X)\bigr) = \int_{-\infty}^{\infty} y \, f_{g(X)}(y) \, dy,$$

one may find instead

$$\operatorname{E}\bigl(g(X)\bigr) = \int_{-\infty}^{\infty} g(x) \, f_X(x) \, dx.$$

The values of the two integrals are the same in all cases in which both *X* and *g*(*X*) actually have probability density functions. It is not necessary that *g* be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
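
A quick check of this equivalence (a Monte Carlo sketch assuming NumPy, with $X \sim N(0,1)$ and $g(x) = x^2$ as arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# E[g(X)] for X ~ N(0,1) and g(x) = x**2, two ways -- note that the
# density of g(X) itself is never needed.
samples = rng.standard_normal(1_000_000)
mc = np.mean(samples**2)                      # Monte Carlo over g(X)

x = np.linspace(-10, 10, 200_001)
quad = np.trapz(x**2 * np.exp(-x**2 / 2) / np.sqrt(2*np.pi), x)
print(mc, quad)                               # both ~1 = E[X**2]
```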

### Scalar to scalar

Let $g: \mathbb{R} \to \mathbb{R}$ be a monotonic function; then the resulting density function is

$$f_Y(y) = f_X\bigl(g^{-1}(y)\bigr) \left| \frac{d}{dy} g^{-1}(y) \right|.$$

Here $g^{-1}$ denotes the inverse function.

This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,

$$\left| f_Y(y) \, dy \right| = \left| f_X(x) \, dx \right|,$$

or

$$f_Y(y) = \left| \frac{dx}{dy} \right| f_X(x) = \left| \frac{d}{dy} g^{-1}(y) \right| f_X\bigl(g^{-1}(y)\bigr).$$

For functions that are not monotonic, the probability density function for $y$ is

$$f_Y(y) = \sum_{k=1}^{n(y)} \left| \frac{d}{dy} g_k^{-1}(y) \right| f_X\bigl(g_k^{-1}(y)\bigr),$$

where $n(y)$ is the number of solutions in $x$ for the equation $g(x) = y$, and $g_k^{-1}(y)$ are these solutions.
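
For example, $Y = X^2$ with $X$ standard normal has the two branches $x = \pm\sqrt{y}$, each contributing $\left|\tfrac{d}{dy}\sqrt{y}\right| = \tfrac{1}{2\sqrt{y}}$. The sketch below (assuming NumPy) compares the resulting formula with a histogram of simulated values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Density of X ~ N(0,1) and the non-monotonic formula for Y = X**2:
# two branches x = +-sqrt(y), each weighted by 1 / (2*sqrt(y)).
f_X = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f_Y = lambda y: (f_X(np.sqrt(y)) + f_X(-np.sqrt(y))) / (2 * np.sqrt(y))

samples = rng.standard_normal(1_000_000) ** 2
hist, edges = np.histogram(samples, bins=40, range=(0.1, 4.0))
width = edges[1] - edges[0]
mids = (edges[:-1] + edges[1:]) / 2
est = hist / (len(samples) * width)       # empirical density of Y
print(np.max(np.abs(est - f_Y(mids))))    # close to 0 (binning + sampling error)
```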

### Vector to vector

The above formulas can be generalized to variables (which we will again call $y$) depending on more than one other variable. $f(x_1, \ldots, x_n)$ shall denote the probability density function of the variables that $y$ depends on, and the dependence shall be $y = g(x_1, \ldots, x_n)$. Then, the resulting density function is

$$f_Y(y) = \int_{\{(x_1,\ldots,x_n) \,:\, g(x_1,\ldots,x_n) = y\}} \frac{f(x_1, \ldots, x_n)}{\left| \nabla g \right|} \, dV,$$

where the integral is over the entire $(n-1)$-dimensional solution of the subscripted equation and the symbolic $dV$ must be replaced by a parametrization of this solution for a particular calculation; the variables $x_1, \ldots, x_n$ are then of course functions of this parametrization.

This derives from the following, perhaps more intuitive representation: Suppose $\mathbf{x}$ is an $n$-dimensional random variable with joint density $f$. If $\mathbf{y} = H(\mathbf{x})$, where $H$ is a bijective, differentiable function, then $\mathbf{y}$ has density $g$:

$$g(\mathbf{y}) = f\bigl(H^{-1}(\mathbf{y})\bigr) \left| \det\left( \frac{dH^{-1}(\mathbf{y})}{d\mathbf{y}} \right) \right|,$$

with the differential regarded as the Jacobian of the inverse of $H(\cdot)$, evaluated at $\mathbf{y}$.

For example, in the 2-dimensional case $\mathbf{x} = (x_1, x_2)$, suppose the transform $H$ is given as $y_1 = H_1(x_1, x_2)$, $y_2 = H_2(x_1, x_2)$ with inverses $x_1 = H_1^{-1}(y_1, y_2)$, $x_2 = H_2^{-1}(y_1, y_2)$. The joint distribution for $\mathbf{y} = (y_1, y_2)$ has density^{[7]}

$$g(y_1, y_2) = f_{X_1, X_2}\bigl(H_1^{-1}(y_1, y_2), H_2^{-1}(y_1, y_2)\bigr) \left| \frac{\partial H_1^{-1}}{\partial y_1} \frac{\partial H_2^{-1}}{\partial y_2} - \frac{\partial H_1^{-1}}{\partial y_2} \frac{\partial H_2^{-1}}{\partial y_1} \right|.$$
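
A numerical sanity check of the two-dimensional formula (a sketch assuming NumPy; the linear transform $H(x_1, x_2) = (x_1 + x_2,\, x_1 - x_2)$ is an arbitrary illustrative choice, whose inverse has Jacobian determinant of absolute value 1/2):

```python
import numpy as np

rng = np.random.default_rng(2)

# x1, x2 iid standard normal; y = H(x) = (x1 + x2, x1 - x2).
n = 1_000_000
x = rng.standard_normal((n, 2))
y = np.column_stack([x[:, 0] + x[:, 1], x[:, 0] - x[:, 1]])

def g(y1, y2):
    """Density of y via the change-of-variables formula."""
    x1, x2 = (y1 + y2) / 2, (y1 - y2) / 2        # inverse transform
    f = np.exp(-(x1**2 + x2**2) / 2) / (2 * np.pi)  # joint density of x
    return f * 0.5                                # |det Jacobian of inverse|

# P(y in a small box) should be ~ density at the center times the area.
lo, hi = 0.4, 0.6
inside = np.mean((y[:, 0] > lo) & (y[:, 0] < hi) &
                 (y[:, 1] > lo) & (y[:, 1] < hi))
print(inside, g(0.5, 0.5) * (hi - lo)**2)         # the two agree closely
```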

### Vector to scalar

Let $V: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function and $X$ be a random vector taking values in $\mathbb{R}^n$, $f_X$ be the probability density function of $X$ and $\delta(\cdot)$ be the Dirac delta function. It is possible to use the formulas above to determine $f_Y$, the probability density function of $Y = V(X)$, which will be given by

$$f_Y(y) = \int_{\mathbb{R}^n} f_X(\mathbf{x}) \, \delta\bigl(y - V(\mathbf{x})\bigr) \, d\mathbf{x}.$$

This result leads to the Law of the unconscious statistician:

$$\operatorname{E}[Y] = \int_{\mathbb{R}} y \, f_Y(y) \, dy = \int_{\mathbb{R}} y \int_{\mathbb{R}^n} f_X(\mathbf{x}) \, \delta\bigl(y - V(\mathbf{x})\bigr) \, d\mathbf{x} \, dy = \int_{\mathbb{R}^n} V(\mathbf{x}) \, f_X(\mathbf{x}) \, d\mathbf{x} = \operatorname{E}[V(X)].$$

*Proof:*

Let $Z$ be a collapsed random variable with probability density function $p_Z(z) = \delta(z)$ (*i.e.* a constant equal to zero). Let the random vector $\tilde{X}$ and the transform $H$ be defined as

$$H(Z, X) = \begin{bmatrix} Z + V(X) \\ X \end{bmatrix} = \begin{bmatrix} Y \\ \tilde{X} \end{bmatrix}.$$

It is clear that $H$ is a bijective mapping, and the Jacobian of $H^{-1}$ is given by:

$$\frac{dH^{-1}(y, \tilde{\mathbf{x}})}{dy \, d\tilde{\mathbf{x}}} = \begin{bmatrix} 1 & -\dfrac{dV(\tilde{\mathbf{x}})}{d\tilde{\mathbf{x}}} \\ \mathbf{0}_{n \times 1} & \mathbf{I}_{n \times n} \end{bmatrix},$$

which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that

$$f_{Y, X}(y, x) = f_X(x) \, \delta\bigl(y - V(x)\bigr),$$

which if marginalized over $x$ leads to the desired probability density function.

## Sums of independent random variables

The probability density function of the sum of two independent random variables $U$ and $V$, each of which has a probability density function, is the convolution of their separate density functions:

$$f_{U+V}(x) = \int_{-\infty}^{\infty} f_U(y) \, f_V(x - y) \, dy = \left( f_U * f_V \right)(x).$$

It is possible to generalize the previous relation to a sum of $N$ independent random variables $U_1, \ldots, U_N$, each having a density:

$$f_{U_1 + \cdots + U_N}(x) = \left( f_{U_1} * \cdots * f_{U_N} \right)(x).$$

This can be derived from a two-way change of variables involving *Y=U+V* and *Z=V*, similarly to the example below for the quotient of independent random variables.
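
For example (a sketch assuming NumPy), convolving two Uniform(0, 1) densities on a grid reproduces the known triangular density of their sum:

```python
import numpy as np

# Sum of two independent Uniform(0,1) variables: the convolution of the
# densities is the triangular density on [0, 2].
dx = 0.001
x = np.arange(0, 1 + dx, dx)
f = np.ones_like(x)                     # Uniform(0,1) density on its support

conv = np.convolve(f, f) * dx           # discrete approximation of (f * f)
s = np.arange(len(conv)) * dx           # support grid of the sum: [0, 2]
exact = np.where(s < 1, s, 2 - s).clip(min=0)   # triangular density
print(np.max(np.abs(conv - exact)))     # ~0 up to discretization error
```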

## Products and quotients of independent random variables

Given two independent random variables $U$ and $V$, each of which has a probability density function, the density of the product $Y = UV$ and quotient $Y = U/V$ can be computed by a change of variables.

### Example: Quotient distribution

To compute the quotient $Y = U/V$ of two independent random variables $U$ and $V$, define the following transformation:

$$Y = U/V, \qquad Z = V.$$

Then, the joint density *p*(*y*,*z*) can be computed by a change of variables from *U,V* to *Y,Z*, and *Y* can be derived by marginalizing out *Z* from the joint density.

The inverse transformation is

$$U = YZ, \qquad V = Z.$$

The Jacobian matrix $J(u, v \mid y, z)$ of this transformation is

$$J(u, v \mid y, z) = \left| \det \begin{bmatrix} \dfrac{\partial u}{\partial y} & \dfrac{\partial u}{\partial z} \\[6pt] \dfrac{\partial v}{\partial y} & \dfrac{\partial v}{\partial z} \end{bmatrix} \right| = \left| \det \begin{bmatrix} z & y \\ 0 & 1 \end{bmatrix} \right| = |z|.$$

Thus:

$$p(y, z) = p(u, v) \, J(u, v \mid y, z) = p_U(u) \, p_V(v) \, J(u, v \mid y, z) = p_U(yz) \, p_V(z) \, |z|.$$

And the distribution of $Y$ can be computed by marginalizing out $Z$:

$$p(y) = \int_{-\infty}^{\infty} p_U(yz) \, p_V(z) \, |z| \, dz.$$

This method crucially requires that the transformation from *U*,*V* to *Y*,*Z* be bijective. The above transformation meets this because *Z* can be mapped directly back to *V*, and for a given *V* the quotient *U*/*V* is monotonic. This is similarly the case for the sum *U* + *V*, difference *U* - *V* and product *UV*.

Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.

### Example: Quotient of two standard normals

Given two standard normal variables $U$ and $V$, the quotient can be computed as follows. First, the variables have the following density functions:

$$p(u) = \frac{1}{\sqrt{2\pi}} e^{-u^2/2}, \qquad p(v) = \frac{1}{\sqrt{2\pi}} e^{-v^2/2}.$$

We transform as described above:

$$Y = U/V, \qquad Z = V.$$

This leads to:

$$p(y) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-(yz)^2/2} \, \frac{1}{\sqrt{2\pi}} e^{-z^2/2} \, |z| \, dz = \int_{-\infty}^{\infty} \frac{1}{2\pi} e^{-(y^2+1)z^2/2} \, |z| \, dz = 2 \int_{0}^{\infty} \frac{1}{2\pi} e^{-(y^2+1)z^2/2} \, z \, dz = \frac{1}{\pi (y^2 + 1)}.$$

This is the density of a standard Cauchy distribution.
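
This is easy to verify by simulation (a sketch assuming NumPy): a histogram of the ratio of two independent standard normal samples tracks $1/\bigl(\pi(1 + y^2)\bigr)$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ratio of two independent standard normal samples.
y = rng.standard_normal(2_000_000) / rng.standard_normal(2_000_000)

hist, edges = np.histogram(y, bins=80, range=(-8, 8))
width = edges[1] - edges[0]
mids = (edges[:-1] + edges[1:]) / 2
est = hist / (len(y) * width)            # empirical density on (-8, 8)
cauchy = 1 / (np.pi * (1 + mids**2))     # standard Cauchy density
print(np.max(np.abs(est - cauchy)))      # small: simulation matches the formula
```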

## See also

- Density estimation
- Kernel density estimation
- Likelihood function
- List of probability distributions
- Probability mass function
- Secondary measure
- Uses as *position probability density*: Atomic orbital, Home range

**^**"AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 2 April 2015. Retrieved 2015.**^**Grinstead, Charles M.; Snell, J. Laurie (2009). "Conditional Probability - Discrete Conditional" (PDF).*Grinstead & Snell's Introduction to Probability*. Orange Grove Texts. ISBN 161610046X. Retrieved .**^**Probability distribution function PlanetMath Archived 2011-08-07 at the Wayback Machine**^**Probability Function at MathWorld**^**Ord, J.K. (1972)*Families of Frequency Distributions*, Griffin. ISBN 0-85264-137-0 (for example, Table 5.1 and Example 5.4)**^**Devore, Jay L.; Berk, Kenneth N. (2007).*Modern Mathematical Statistics with Applications*. Cengage. p. 263. ISBN 0-534-40473-1.**^**David, Stirzaker (2007-01-01).*Elementary Probability*. Cambridge University Press. ISBN 0521534283. OCLC 851313783.

## Bibliography

- Pierre Simon de Laplace (1812). *Analytical Theory of Probability*. The first major treatise blending calculus with probability theory, originally in French: *Théorie Analytique des Probabilités*.
- Andrei Nikolajevich Kolmogorov (1950). *Foundations of the Theory of Probability*. The modern measure-theoretic foundation of probability theory; the original German version (*Grundbegriffe der Wahrscheinlichkeitsrechnung*) appeared in 1933.
- Patrick Billingsley (1979). *Probability and Measure*. New York, Toronto, London: John Wiley and Sons. ISBN 0-471-00710-2.
- David Stirzaker (2003). *Elementary Probability*. ISBN 0-521-42028-8. Chapters 7 to 9 are about continuous variables.

## External links

- Ushakov, N.G. (2001) [1994], "Density of a probability distribution", in Hazewinkel, Michiel (ed.), *Encyclopedia of Mathematics*, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Weisstein, Eric W. "Probability density function". *MathWorld*.

This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
