4.4.1 - Properties of 'Good' Estimators

In determining what makes a good estimator, there are two key features:

  1. The center of the sampling distribution of the estimate is the same as the parameter being estimated. When this property holds, the estimate is said to be unbiased. The most often-used measure of the center is the mean.
  2. The estimate has the smallest standard error among competing estimators. For example, in a normal distribution the population mean and median are the same, so the sample mean and sample median both estimate the same quantity. However, the standard error of the median is about 1.25 times the standard error of the mean. We know the standard error of the mean is \(\frac{\sigma}{\sqrt{n}}\). Therefore, in a normal distribution, SE(median) is about 1.25 times \(\frac{\sigma}{\sqrt{n}}\). This is why the mean is a better estimator than the median when the data are normal (or approximately normal).
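The 1.25 factor above can be checked by simulation. The sketch below (with an assumed sample size of 100 and 5,000 repetitions, chosen only for illustration) draws repeated samples from a standard normal distribution and compares the empirical spread of the sample mean and sample median:

```python
import random
import statistics

# Simulation sketch with assumed parameters: repeatedly sample from a
# standard normal and record the sample mean and sample median.
random.seed(42)
n, reps = 100, 5000
means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

se_mean = statistics.stdev(means)      # empirical SE of the sample mean
se_median = statistics.stdev(medians)  # empirical SE of the sample median
ratio = se_median / se_mean            # theory predicts roughly 1.25
print(round(se_mean, 3), round(se_median, 3), round(ratio, 2))
```

With these settings, `se_mean` should be close to the theoretical value \(\sigma/\sqrt{n} = 1/\sqrt{100} = 0.1\), and the ratio should come out near 1.25.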
Note!

We should stop here and explain why we use the estimated standard error, and not the standard error itself, when constructing a confidence interval. Remember, we are using the known values from our sample to estimate the unknown population values, so we cannot use the actual population values. This is easier to see by presenting the formulas. If we used the following as the standard error, we would not have a value for \(p\), because it is a population parameter:

\(\sqrt{\dfrac{p(1-p)}{n}}\)

Instead, we estimate the standard error by substituting the sample proportion \(\hat{p}\) for \(p\). The estimated standard error is then:

\(\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}\)
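As a quick numerical illustration of this formula (the counts here are made up for the example), suppose 52 successes are observed in a sample of \(n = 80\):

```python
import math

# Assumed data for illustration: 52 successes out of n = 80 trials.
n = 80
p_hat = 52 / n                                # sample proportion, estimates p
se_hat = math.sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error
print(round(p_hat, 3), round(se_hat, 4))
```

Here \(\hat{p} = 0.65\), so the estimated standard error is \(\sqrt{0.65(0.35)/80} \approx 0.0533\).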

When estimating the population mean, the population standard deviation, \(\sigma\), may also be unknown. When it is unknown, we can estimate it with the sample standard deviation, \(s\). Then the estimated standard error of the sample mean is...

\(\dfrac{s}{\sqrt{n}}\)
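This estimate is computed the same way in practice. The sketch below uses a small made-up sample to show the calculation of \(s/\sqrt{n}\):

```python
import math
import statistics

# Assumed sample data, for illustration only.
data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]
n = len(data)
s = statistics.stdev(data)     # sample standard deviation estimates sigma
se_hat = s / math.sqrt(n)      # estimated standard error of the sample mean
print(round(s, 4), round(se_hat, 4))
```

For this sample, \(s \approx 0.722\) and the estimated standard error is \(0.722/\sqrt{8} \approx 0.255\).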