# The Theorem

We don't have the tools yet to prove the Central Limit Theorem, so we'll just go ahead and state it without proof.

Let \(X_1, X_2, \ldots, X_n\) be a random sample from a distribution with mean \(\mu\) and finite variance \(\sigma^2\). Then, if the sample size \(n\) is "sufficiently large": (1) the sample mean \(\bar{X}\) follows an approximate normal distribution, (2) with mean \(E(\bar{X})=\mu_{\bar{X}}=\mu\), (3) and variance \(Var(\bar{X})=\sigma^2_{\bar{X}}=\dfrac{\sigma^2}{n}\). We write: \(\bar{X} \stackrel{d}{\longrightarrow} N\left(\mu,\dfrac{\sigma^2}{n}\right)\) as \(n \to \infty\), or: \(Z=\dfrac{\bar{X}-\mu}{\sigma/\sqrt{n}}=\dfrac{\sum\limits_{i=1}^n X_i-n\mu}{\sqrt{n}\,\sigma} \stackrel{d}{\longrightarrow} N(0,1)\) as \(n \to \infty\).
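Before moving on, it's worth checking the two claims about the mean and variance of \(\bar{X}\) numerically. The following sketch (using NumPy, with an illustrative exponential population with mean \(\mu = 2\) and a sample size of \(n = 50\), both chosen arbitrarily here) simulates many random samples and confirms that the simulated means cluster around \(\mu\) with variance close to \(\sigma^2/n\):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Population: exponential with mean mu = 2, so variance sigma^2 = 4
mu, sigma2 = 2.0, 4.0
n = 50          # size of each random sample
reps = 100_000  # number of simulated samples

# Each row is one random sample of size n; average each row to get xbar
xbars = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)

print(xbars.mean())  # should be close to mu = 2
print(xbars.var())   # should be close to sigma^2 / n = 4/50 = 0.08
```

A histogram of `xbars` would also look bell-shaped, even though the exponential population itself is strongly right-skewed.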

So, in a nutshell, the Central Limit Theorem (CLT) tells us that the sampling distribution of the sample mean is, at least approximately, normally distributed, *regardless of the distribution of the underlying random sample*. In fact, the CLT applies regardless of whether the distribution of the \(X_i\) is discrete (for example, Poisson or binomial) or continuous (for example, exponential or chi-square). Our focus in this lesson will be on continuous random variables. In the next lesson, we'll apply the CLT to discrete random variables, such as the binomial and Poisson random variables.

You might be wondering why "sufficiently large" appears in quotes in the theorem. Well, that's because the necessary sample size \(n\) depends on the skewness of the distribution from which the random sample \(X_i\) comes:

- If the distribution of the \(X_i\) is symmetric, unimodal, or continuous, then a sample size \(n\) as small as 4 or 5 yields an adequate approximation.
- If the distribution of the \(X_i\) is skewed, then a sample size \(n\) of at least 25 or 30 yields an adequate approximation.
- If the distribution of the \(X_i\) is extremely skewed, then you may need an even larger \(n\).
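One way to see the effect of skewness is to standardize the sample mean and check how often it lands in a normal tail. If the approximation is good, \(P(Z > 1.645)\) should be close to 0.05. The sketch below (an illustration, with the uniform and exponential distributions standing in for "symmetric" and "skewed" populations) compares a symmetric population at \(n = 5\) with a skewed one at \(n = 5\) and \(n = 30\):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
reps = 200_000

def tail_prob(draw, mu, sigma, n):
    """Estimate P(Z > 1.645) for the standardized sample mean.

    Should be about 0.05 when the normal approximation is adequate.
    """
    xbars = draw(size=(reps, n)).mean(axis=1)
    z = (xbars - mu) / (sigma / np.sqrt(n))
    return (z > 1.645).mean()

# Symmetric, continuous: uniform(0, 1), mu = 1/2, sigma = 1/sqrt(12)
unif5 = tail_prob(rng.uniform, 0.5, 1 / np.sqrt(12), n=5)

# Right-skewed: exponential with mean 1, so mu = sigma = 1
skew5 = tail_prob(rng.exponential, 1.0, 1.0, n=5)
skew30 = tail_prob(rng.exponential, 1.0, 1.0, n=30)

print(unif5, skew5, skew30)
```

For the uniform population, the tail probability is already near 0.05 at \(n = 5\); for the exponential population it overshoots at \(n = 5\) and moves noticeably closer to 0.05 by \(n = 30\), matching the rules of thumb above.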

We'll spend the rest of the lesson trying to get an intuitive feel for the theorem, as well as applying the theorem so that we can calculate probabilities concerning the sample mean.