As we saw at the beginning of this lesson, many of the sampling distributions that you have constructed and worked with this semester are approximately normally distributed. The reason behind this is one of the most important theorems in statistics.
Central Limit Theorem
The Central Limit Theorem states that if the sample size is sufficiently large, then the sampling distribution will be approximately normally distributed. This holds for many frequently tested statistics, including those we have been working with in this course: one sample mean, one sample proportion, the difference in two means, the difference in two proportions, the slope of a simple linear regression model, and Pearson's r correlation.
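To see the theorem in action, here is a minimal simulation sketch in Python. The right-skewed exponential population, the sample size of n = 50, and the number of simulated samples are all illustrative assumptions, not values from this lesson; the point is only that repeated sample means from a non-normal population pile up in an approximately normal shape.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative assumptions: a right-skewed exponential population
# (population mean = 2.0), samples of size n = 50, and 10,000 samples.
n_samples = 10_000
n = 50

# Draw 10,000 samples of size n and compute each sample's mean.
sample_means = rng.exponential(scale=2.0, size=(n_samples, n)).mean(axis=1)

# Even though the population is skewed, the distribution of these sample
# means is approximately normal, centered near the population mean, with
# standard deviation near the theoretical value scale / sqrt(n).
print("mean of sample means:", sample_means.mean())      # close to 2.0
print("sd of sample means:  ", sample_means.std(ddof=1)) # close to 2.0 / 50**0.5
```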
Over the next few lessons we will examine what constitutes a "sufficiently large" sample size. Essentially, it is determined by the point at which the sampling distribution becomes approximately normal.
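One rough way to watch this happen is to vary the sample size in the simulation above and track how skewed the sampling distribution remains. The sketch below does this for the same assumed exponential population (it also assumes scipy is available; using skewness as the informal check is my choice, not a rule from this lesson):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(seed=1)

# Skewness near 0 is one informal sign of approximate normality.
# As n grows, the sampling distribution of the mean loses its skew.
for n in (2, 5, 30, 100):
    means = rng.exponential(scale=2.0, size=(10_000, n)).mean(axis=1)
    print(f"n = {n:>3}: skewness of sample means = {skew(means):.3f}")
```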
In practice, when we construct confidence intervals and conduct hypothesis tests, we often use the normal distribution (or t distributions, which you'll see next week) rather than bootstrapping or randomization procedures when the sampling distribution is approximately normal. Many prefer this method because z scores are on a standard scale (i.e., a mean of 0 and a standard deviation of 1), which makes interpreting results more straightforward.
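As a sketch of that workflow for one sample mean, the code below builds a 95% confidence interval and a z score from a normal approximation. The simulated data, the hypothesized mean of 10, and the use of scipy's norm are assumptions for illustration; with small samples the t distribution covered next week would use a slightly larger multiplier.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=2)
data = rng.normal(loc=10, scale=3, size=100)  # hypothetical sample

# Point estimate and standard error for one sample mean.
xbar = data.mean()
se = data.std(ddof=1) / np.sqrt(len(data))

# 95% confidence interval using the standard normal multiplier,
# z* = norm.ppf(0.975), which is about 1.96.
z_star = norm.ppf(0.975)
print(f"95% CI: ({xbar - z_star * se:.3f}, {xbar + z_star * se:.3f})")

# The z score for a test of H0: mu = 10 is on the standard scale
# (mean 0, standard deviation 1), which makes it easy to interpret.
z = (xbar - 10) / se
print(f"z = {z:.3f}, two-sided p-value = {2 * norm.sf(abs(z)):.3f}")
```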