De Moivre–Laplace theorem

In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. In particular, the theorem shows that the probability mass function of the random number of "successes" observed in a series of $n$ independent Bernoulli trials, each having probability $p$ of success (a binomial distribution with $n$ trials), converges to the probability density function of the normal distribution with mean $np$ and standard deviation $\sqrt{np(1-p)}$ as $n$ grows large, assuming $p$ is not $0$ or $1$.

The theorem appeared in the second edition of The Doctrine of Chances by Abraham de Moivre, published in 1738. Although de Moivre did not use the term "Bernoulli trials", he wrote about the probability distribution of the number of times "heads" appears when a coin is tossed 3600 times. This is one derivation of the particular Gaussian function used in the normal distribution.

As $n$ grows large, for $k$ in the neighborhood of $np$ we can approximate

$$\binom{n}{k}\, p^k q^{n-k} \simeq \frac{1}{\sqrt{2\pi npq}}\, e^{-\frac{(k-np)^2}{2npq}}, \qquad p+q=1,\quad p,q>0,$$

in the sense that the ratio of the left-hand side to the right-hand side converges to 1 as $n \to \infty$.

The theorem can be more rigorously stated as follows: $(X - np)/\sqrt{npq}$, with $X$ a binomially distributed random variable, approaches the standard normal as $n \to \infty$, with the ratio of the probability mass of $X$ to the limiting normal density being 1. This can be shown for an arbitrary nonzero and finite point $c$. On the unscaled curve for $X$, this would be a point $k$ given by

$$k = np + c\sqrt{npq}.$$

For example, with $c$ at 3, $k$ stays 3 standard deviations from the mean in the unscaled curve.

The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined by the differential equation (DE)

$$f'(x) = -\frac{x-\mu}{\sigma^2}\, f(x),$$

with the initial condition set by the probability axiom $\int_{-\infty}^{\infty} f(x)\, dx = 1$. The binomial distribution limit approaches the normal if the binomial satisfies this DE. As the binomial is discrete, the equation starts as a difference equation whose limit morphs to a DE. Difference equations use the discrete derivative, $p(k+1) - p(k)$, the change for step size 1. As $n \to \infty$, the discrete derivative becomes the continuous derivative. Hence the proof need show only that, for the unscaled binomial distribution,

$$\frac{f'(x)}{f(x)} \cdot \left(-\frac{\sigma^2}{x-\mu}\right) \to 1 \quad \text{as } n \to \infty.$$
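As a numerical illustration of the pointwise approximation above, the following minimal Python sketch (the helper names are illustrative, not from the article) compares the exact binomial probability mass at the point $k = np + c\sqrt{npq}$ with the limiting normal density. It works in log space via `math.lgamma` to avoid overflow for large $n$; the printed ratio should approach 1, as the theorem asserts.

```python
import math

def log_binom_pmf(k, n, p):
    """log P(X = k) for X ~ Binomial(n, p), via lgamma to avoid overflow."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def log_normal_pdf(x, mu, sigma):
    """log of the normal density with mean mu and standard deviation sigma."""
    return -((x - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma * math.sqrt(2 * math.pi))

p, q, c = 0.5, 0.5, 3.0          # stay c standard deviations from the mean, as in the text
for n in (100, 10_000, 1_000_000):
    mu, sigma = n * p, math.sqrt(n * p * q)
    k = round(mu + c * sigma)    # the point k = np + c*sqrt(npq)
    ratio = math.exp(log_binom_pmf(k, n, p) - log_normal_pdf(k, mu, sigma))
    print(f"n = {n:>9}:  k = {k:>7},  pmf/pdf = {ratio:.6f}")
```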
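The difference-equation step of the proof sketch can be checked the same way. For the binomial pmf, the identity $p(k+1)/p(k) = \frac{(n-k)p}{(k+1)q}$ gives a closed form for the discrete derivative ratio $\bigl(p(k+1)-p(k)\bigr)/p(k)$, which should agree in the limit with the right-hand side $-(k-\mu)/\sigma^2$ of the normal DE. A minimal sketch under the same illustrative naming:

```python
import math

def discrete_log_derivative(k, n, p):
    """(p(k+1) - p(k)) / p(k) for the binomial pmf,
    via the ratio p(k+1)/p(k) = (n - k) p / ((k + 1) q)."""
    q = 1 - p
    return (n - k) * p / ((k + 1) * q) - 1

p, q, c = 0.5, 0.5, 3.0
for n in (100, 10_000, 1_000_000):
    mu, sigma = n * p, math.sqrt(n * p * q)
    k = round(mu + c * sigma)
    lhs = discrete_log_derivative(k, n, p)
    rhs = -(k - mu) / sigma ** 2    # the normal DE: f'(x)/f(x) = -(x - mu)/sigma^2
    print(f"n = {n:>9}:  discrete = {lhs:+.6e},  DE = {rhs:+.6e},  ratio = {lhs/rhs:.6f}")
```

The ratio of the two sides tends to 1 as $n$ grows, which is exactly the claim the proof reduces to.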

[ "Statistics", "Mathematical analysis", "Central limit theorem", "Normal distribution" ]
Parent Topic
Child Topic
    No Parent Topic