The topic of interpreting confidence intervals is one that can get frequentist statisticians all hot under the collar. Let's try to understand why!

Although the derivation of the \(Z\)-interval for a mean technically ends with the following probability statement:

\(P\left[ \bar{X}-z_{\alpha/2}\left(\dfrac{\sigma}{\sqrt{n}}\right) \leq \mu \leq \bar{X}+z_{\alpha/2}\left(\dfrac{\sigma}{\sqrt{n}}\right) \right]=1-\alpha\)

it is *incorrect* to say:

The probability that the population mean \(\mu\) falls between the lower value \(L\) and the upper value \(U\) is \(1-\alpha\).

In the example on the last page, for instance, it is incorrect to say that "the probability that the population mean is between 27.9 and 30.5 is 0.95."
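Whatever the interpretation, the endpoints \(L\) and \(U\) are straightforward to compute from the probability statement above. Here is a minimal sketch using Python's standard library; the sample values (`xbar`, `sigma`, `n`) are hypothetical stand-ins, not the data behind the interval quoted above.

```python
import math
from statistics import NormalDist

# Hypothetical sample values (assumed for illustration only)
xbar = 29.2    # observed sample mean
sigma = 3.0    # known population standard deviation
n = 20         # sample size
alpha = 0.05   # 1 - alpha = 0.95 confidence level

# z_{alpha/2}: the upper alpha/2 critical value of the standard normal
z = NormalDist().inv_cdf(1 - alpha / 2)

# Margin of error: z_{alpha/2} * sigma / sqrt(n)
margin = z * sigma / math.sqrt(n)
L, U = xbar - margin, xbar + margin
print(f"95% Z-interval: ({L:.2f}, {U:.2f})")
```

Once the data are observed, `L` and `U` are fixed numbers, which is exactly why the probability statement no longer applies to them.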

In short, frequentist statisticians don't like to hear people making probability statements about constants; probability statements should be reserved for random variables. So, if it's incorrect to make the statement that seems so natural to make, what is the correct understanding of confidence intervals? Here's how frequentist statisticians would like the world to think about them:

- Suppose we take a large number of samples, say 1000.
- Then, we calculate a 95% confidence interval for each sample.
- Then, "95% confident" means that we'd expect 95%, or 950, of the 1000 intervals to be correct, that is, to contain the actual unknown value \(\mu\).
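The repeated-sampling interpretation above is easy to check by simulation. The sketch below draws 1000 samples from a normal population with an assumed (and, in practice, unknown) true mean, builds a 95% Z-interval from each, and counts how many intervals contain that mean; the population parameters are chosen arbitrarily for illustration.

```python
import random
from statistics import NormalDist

random.seed(1)

mu, sigma, n = 50.0, 10.0, 25    # assumed true mean, known sd, sample size
z = NormalDist().inv_cdf(0.975)  # z_{alpha/2} for a 95% interval
margin = z * sigma / n ** 0.5

# Draw 1000 samples; count how many intervals contain the true mean mu
hits = 0
for _ in range(1000):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    if xbar - margin <= mu <= xbar + margin:
        hits += 1

print(f"{hits} of 1000 intervals contain mu")  # typically close to 950
```

The count will fluctuate from run to run, but over many repetitions it hovers near 950 of 1000, which is precisely what "95% confident" promises.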

So, what does this all mean in practice?

In reality, we take just one random sample. The interval we obtain is either **correct** or **incorrect**. That is, the interval we obtain based on the sample we've taken either contains the true population mean or it does not. Since we don't know the value of the true population mean, we'll never know for sure whether our interval is correct or not. We can just be very confident that we obtained a correct interval (because 95% of the intervals we could have obtained are correct).