# 1.4 - Confidence Intervals and the Central Limit Theorem

The idea behind confidence intervals is that it is not enough to use the sample mean alone to estimate the population mean. The sample mean by itself is a single point, which gives no indication of how good the estimate of the population mean is.

To assess the accuracy of this estimate, we use confidence intervals, which tell us how precise our estimate is.

A confidence interval is an interval that, viewed before the sample is selected, has a pre-specified probability of containing the parameter. To obtain a confidence interval you need to know the sampling distribution of the estimate. Once we know the distribution, we can talk about confidence.

We want to be able to say something about \(\theta\) based on \(\hat{\theta}\), since \(\hat{\theta}\) should be close to \(\theta\).

So the type of statement that we want to make will look like this:

\(P(|\hat{\theta}-\theta|<d)=1-\alpha\)

Thus, we need to know the distribution of \(\hat{\theta}\). In certain cases, the distribution of \(\hat{\theta}\) can be stated easily. However, there are many different types of distributions.

The normal distribution is easy to use as an example because it does not bring with it too much complexity.

#### Central Limit Theorem

When we talk about the Central Limit Theorem for the sample mean, what are we talking about?

The finite population Central Limit Theorem for the sample mean asks: what happens when *n* gets large?

\(\bar{y}\) has population mean \(\mu\) and standard deviation \(\dfrac{\sigma}{\sqrt{n}}\). Since we do not know \(\sigma\), we use *s* to estimate it. We can thus estimate the standard deviation of \(\bar{y}\) by \(\dfrac{s}{\sqrt{n}}\).

The \(\sqrt{n}\) in the denominator helps us: as *n* gets larger, the standard deviation of \(\bar{y}\) gets smaller.
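A quick numerical illustration (the sample standard deviation here is hypothetical, not from the text) shows how \(s/\sqrt{n}\) shrinks as *n* grows:

```python
import math

# Hypothetical sample standard deviation (not from the text)
s = 10.0

# The estimated standard deviation of the sample mean, s / sqrt(n),
# shrinks in proportion to 1 / sqrt(n) as the sample size grows.
for n in [4, 16, 64, 256]:
    print(f"n = {n:3d}  ->  s/sqrt(n) = {s / math.sqrt(n):.3f}")
# n =   4  ->  s/sqrt(n) = 5.000
# n = 256  ->  s/sqrt(n) = 0.625
```

Note that quadrupling the sample size only halves the standard deviation of \(\bar{y}\).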

The distribution of \(\bar{y}\) can be very complicated when the sample size is small. When the sample size is larger, the distribution becomes more regular and is easier to describe.

We want to find a confidence interval for \(\mu\). From a sample we can compute \(\bar{y}\) and construct an interval about it. However, a slight complication arises from \(\dfrac{\sigma}{\sqrt{n}}\): we have two unknowns, \(\mu\) and \(\sigma\). What do we do now?

We will estimate \(\sigma\) by *s*; now \(\dfrac{\bar{y}-\mu}{s/\sqrt{n}}\) does not have a normal distribution but a *t* distribution with *n* - 1 degrees of freedom.

Thus, a \(100 (1-\alpha)\)% confidence interval for \(\mu\) can be derived as follows:

\(\dfrac{\bar{y}-\mu}{\sqrt{Var(\bar{y})}} \sim N(0,1)\) whereas, \(\dfrac{\bar{y}-\mu}{\sqrt{\hat{V}ar(\bar{y})}} \sim t_{n-1}\)

Now, we can compute the confidence interval as:

\(\bar{y} \pm t_{\alpha/2} \sqrt{\hat{V}ar(\bar{y})}\)

In addition, since we are sampling without replacement, we need to make a correction at this point, giving a more precise formula for this sampling scheme. A \(100 (1-\alpha)\)% confidence interval for \(\mu\) is then:

\(\bar{y} \pm t_{\alpha/2} \sqrt{(\dfrac{N-n}{N})(\dfrac{s^2}{n})}\)
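This interval can be sketched in Python as below. The function name and the example numbers are my own, and the critical value \(t_{\alpha/2}\) is read from a *t*-table rather than computed:

```python
import math

def ci_mean(ybar, s, n, N, t_crit):
    """100(1 - alpha)% CI for mu under simple random sampling without
    replacement; (N - n)/N is the finite population correction.
    t_crit is t_{alpha/2} with n - 1 df, read from a t-table."""
    se = math.sqrt(((N - n) / N) * (s ** 2 / n))
    return ybar - t_crit * se, ybar + t_crit * se

# Hypothetical data: ybar = 50, s = 8, n = 25, N = 500; t_{0.025, 24} = 2.064
lo, hi = ci_mean(50.0, 8.0, 25, 500, 2.064)
print(f"95% CI for mu: ({lo:.3f}, {hi:.3f})")
```

Note that with \(n = N\) the correction factor is 0 and the interval collapses to a point, reflecting that a census has no sampling error.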

Above is the confidence interval for \(\mu\). Similarly, a \(100 (1-\alpha)\)% confidence interval for the population total \(\tau\) is given by:

\(\hat{\tau} \pm t_{\alpha/2} \sqrt{N(N-n)\dfrac{s^2}{n}}\)
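A matching sketch for the total (again with a made-up function name and hypothetical numbers); note that \(\widehat{Var}(\hat{\tau}) = N(N-n)\dfrac{s^2}{n}\) already includes the finite population correction:

```python
import math

def ci_total(ybar, s, n, N, t_crit):
    """100(1 - alpha)% CI for the population total tau = N * mu.
    Var-hat(tau_hat) = N (N - n) s^2 / n (FPC built in)."""
    tau_hat = N * ybar
    margin = t_crit * math.sqrt(N * (N - n) * s ** 2 / n)
    return tau_hat - margin, tau_hat + margin

# Hypothetical data: ybar = 50, s = 8, n = 25, N = 500; t_{0.025, 24} = 2.064
lo, hi = ci_total(50.0, 8.0, 25, 500, 2.064)
print(f"95% CI for tau: ({lo:.1f}, {hi:.1f})")
```

Since \(\hat{\tau} = N\bar{y}\), the margin here is exactly *N* times the margin of the interval for \(\mu\).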

Be careful now: in what situations are these confidence intervals applicable?

These approximate intervals above are good when *n* is large (because of the Central Limit Theorem), or when the observations \(y_1, y_2, \ldots, y_n\) are normal.

##### Sample size 30 or greater

When the sample size is 30 or more, we consider it large, and by the Central Limit Theorem \(\bar{y}\) will be approximately normal even if the sample does not come from a normal distribution. Thus, when the sample size is 30 or more, there is no need to check whether the sample comes from a normal distribution; we can use the *t*-interval.
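A small simulation (my own sketch; the exponential population is an arbitrary skewed choice, not from the text) illustrates why *n* = 30 is enough even for a non-normal population:

```python
import random
import statistics

random.seed(1)
n = 30        # a "large" sample size
reps = 5000   # number of simulated samples

# Skewed, clearly non-normal population: exponential with mu = 1, sigma = 1
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

# By the CLT, the sample means cluster around mu = 1 with spread
# close to sigma / sqrt(n) = 1 / sqrt(30), about 0.183
print(round(statistics.fmean(means), 3))
print(round(statistics.stdev(means), 3))
```

A histogram of `means` would look roughly bell-shaped despite the strong skew of the individual observations.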

##### Sample size 8 to 29

When the sample size is 8 to 29, we would usually use a normal probability plot to check whether the data come from a normal distribution. If the plot does not show a violation of the normality assumption, we can go ahead and use the *t*-interval.

##### Sample size 7 or less

However, when the sample size is 7 or less, a normal probability plot may fail to reject normality simply because there is not enough data. The examples in these lessons and in the textbook typically use small sample sizes, which might give the wrong impression: these small samples have been set up for illustration purposes only. With a sample size of 5 you really do not have enough power to say the distribution is normal, and we would use nonparametric methods instead of the *t*-interval.

#### Example 1-3 Revisited...

For the beetle example, an approximate 95% CI for \(\mu\) is:

\(\bar{y} \pm t_{\alpha/2} \sqrt{(\dfrac{N-n}{N})(\dfrac{s^2}{n})}\)

Note that the *t*-value for \(\alpha/2\) = 0.025 and *n* - 1 = 8 - 1 = 7 *df* can be found from the *t*-table to be 2.365.

\(\bar{y} \pm t_{\alpha/2} \sqrt{(\dfrac{N-n}{N})(\dfrac{s^2}{n})}\)

\(=222.875 \pm 2.365\sqrt{222.256}\)

\(=222.875 \pm 2.365 \times 14.908 \)

\(=222.875 \pm 35.258\)

And, an approximate 95% CI for \(\tau\) is then:

\(=22287.5 \pm 2.365 \sqrt{2222560}\)

\(=22287.5 \pm 3525.802\)
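These figures can be checked directly in Python. Here *N* = 100 is implied by the example, since \(\hat{\tau} = 22287.5 = 100 \times 222.875\):

```python
import math

ybar, n, t_crit = 222.875, 8, 2.365   # values from the example
var_ybar = 222.256                    # estimated variance of ybar (FPC included)
N = 100                               # implied by tau_hat = N * ybar = 22287.5

# Margin for mu: t_{alpha/2} * sqrt(Var-hat(ybar))
margin_mu = t_crit * math.sqrt(var_ybar)
print(round(margin_mu, 3))            # 35.258, matching the text

# Margin for tau: Var-hat(tau_hat) = N^2 * Var-hat(ybar) = 2222560
margin_tau = t_crit * math.sqrt(N ** 2 * var_ybar)
print(round(margin_tau, 3))           # 3525.802, matching the text
```

So the 95% CI for \(\mu\) is roughly (187.6, 258.1) beetles per unit, and the CI for \(\tau\) is exactly 100 times wider.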