# 5.6 - Comparing Two Population Means

Working with two group means requires a few extra steps. First, we need to know the relationship between the two groups: are the samples independent or dependent? We also need to understand the variability of the two groups (our old friend variability comes into play again!). If the groups have unequal variances, we may have to account for that in our hypothesis test.

Let's take a look at independence and dependence first.

## Independent and Dependent Samples

- Independent Sample
- The samples from two populations are independent if the samples selected from one of the populations have no relationship with the samples selected from the other population.

- Dependent Sample
- The samples are dependent (also called paired data) if each measurement in one sample is matched or paired with a particular measurement in the other sample. Another way to consider this is how many measurements are taken on each subject. If only one measurement is taken, the samples are independent; if two measurements are taken, the data are paired. Exceptions are familial situations, such as studies of spouses or twins; in such cases, the data are almost always treated as paired.

These notes will first work through independent groups and then proceed to dependent groups.

# 5.6.1 - Inference for Independent Means

As with comparing two population proportions, when we compare two population means from independent populations, the interest is in the difference between the two means. In other words, if \(\mu_1\) is the population mean from population 1 and \(\mu_2\) is the population mean from population 2, then the difference is \(\mu_1-\mu_2\). If \(\mu_1-\mu_2=0\) then there is no difference between the two population parameters.

If each population is normal, then the sampling distribution of \(\bar{x}_i\) is normal with mean \(\mu_i\), standard error \(\dfrac{\sigma_i}{\sqrt{n_i}}\), and the estimated standard error \(\dfrac{s_i}{\sqrt{n_i}}\), for \(i=1, 2\).

Using the Central Limit Theorem, if the population is not normal, then with a large sample, the sampling distribution is approximately normal.

The theorem presented in this Lesson says that if either of the above are true, then \(\bar{x}_1-\bar{x}_2\) is approximately normal with mean \(\mu_1-\mu_2\), and standard error \(\sqrt{\dfrac{\sigma^2_1}{n_1}+\dfrac{\sigma^2_2}{n_2}}\).
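As a quick numeric sketch, the estimated standard error of \(\bar{x}_1-\bar{x}_2\) can be computed directly from the formula above. The summary statistics below are made up for illustration; they do not come from the text.

```python
import math

# Hypothetical summary statistics for two independent samples (illustrative only)
s1, n1 = 4.0, 30   # sample standard deviation and size, group 1
s2, n2 = 5.0, 40   # sample standard deviation and size, group 2

# Estimated standard error of (x̄1 − x̄2): sqrt(s1²/n1 + s2²/n2)
se_diff = math.sqrt(s1**2 / n1 + s2**2 / n2)
print(round(se_diff, 4))
```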

That all sounds great; however, in most cases \(\sigma_1\) and \(\sigma_2\) are unknown and have to be estimated. It seems natural to estimate \(\sigma_1\) by \(s_1\) and \(\sigma_2\) by \(s_2\). When the sample sizes are small, these estimates may not be very accurate. If the standard deviations for the two populations are not that different, one may get a better estimate of the common standard deviation by pooling the data from both populations. If the standard deviations are different, however, then we want to include that difference in our test.

Given this, there are two options for estimating the variances for the independent samples:

- Using pooled variances
- Using unpooled (or unequal) variances

When to use which? Well, first, the nice thing is that many software packages calculate the variances "behind the curtain" and will show you the most appropriate output. However, if you are NOT sure, you can always use the unpooled method. The consequence of using the unpooled test when the variances are in fact equal is a more conservative test, making it marginally more difficult to reject the null hypothesis. The consequence of using pooled variances when the population variances are not equal, however, is an incorrect model.

# 5.6.1.1 - Pooled Variances

## Hypothesis Tests for \(\mu_1-\mu_2\): The Pooled t-test

Now let's consider the hypothesis test for the mean differences with pooled variances.

**Null:**

\(H_0\colon\mu_1-\mu_2=0\)

**Conditions:**

The assumptions/conditions are:

- The populations are independent
- The population variances are equal
- Each population is either normal or the sample size is large

**Test Statistic:**

The test statistic is...

\(t^*=\dfrac{\bar{x}_1-\bar{x}_2-0}{s_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}\)

where the pooled standard deviation is

\(s_p=\sqrt{\dfrac{(n_1-1)s^2_1+(n_2-1)s^2_2}{n_1+n_2-2}}\)

And \(t^*\) follows a t-distribution with degrees of freedom equal to \(df=n_1+n_2-2\).

The p-value, critical value, and conclusion are found in the same way as before.
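The pooled calculation can be sketched in a few lines. The summary statistics here are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical summary statistics (illustrative, not from the text)
x1, s1, n1 = 52.1, 4.5, 15   # mean, sample SD, size for group 1
x2, s2, n2 = 48.4, 4.9, 12   # mean, sample SD, size for group 2

# Pooled standard deviation: weighted combination of the two sample variances
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Pooled two-sample t statistic and its degrees of freedom
t_star = (x1 - x2 - 0) / (sp * math.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(round(sp, 4), round(t_star, 4), df)
```

The resulting \(t^*\) would then be compared to a t-distribution with \(n_1+n_2-2\) degrees of freedom to obtain the p-value or critical value.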

# 5.6.1.2 - Unpooled Variances

When the assumption of equal variances is not valid, we need to use separate, or unpooled, variances. The mathematics and theory are complicated for this case, so we intentionally leave out the details.

## Hypothesis Tests for \(\mu_1-\mu_2\): The Unpooled t-test

**Null:**

\(H_0\colon\mu_1-\mu_2=0\)

**Conditions:**

We still have the following assumptions:

- The populations are independent
- Each population is either normal or the sample size is large

**Test Statistic:**

If the assumptions are satisfied, then

\(t^*=\dfrac{\bar{x}_1-\bar{x}_2-0}{\sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}}\)

will have a t-distribution with degrees of freedom

\(df=\dfrac{(n_1-1)(n_2-1)}{(n_2-1)C^2+(1-C)^2(n_1-1)}\)

where \(C=\dfrac{\frac{s^2_1}{n_1}}{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}\).

**Note!** This calculation for the exact degrees of freedom is cumbersome and is typically done by software. An alternate, conservative option to the exact degrees of freedom calculation is to use the smaller of \(n_1-1\) and \(n_2-1\).

- \((1-\alpha)100\%\) Confidence Interval for \(\mu_1-\mu_2\) for Unpooled Variances
- \(\bar{x}_1-\bar{x}_2\pm t_{\alpha/2} \sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}\)

Where \(t_{\alpha/2}\) comes from the t-distribution using the degrees of freedom above.
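The unpooled statistic and the degrees of freedom formula above can be sketched the same way. The summary statistics are again hypothetical, this time with noticeably different spreads:

```python
import math

# Hypothetical summary statistics with unequal spreads (illustrative only)
x1, s1, n1 = 52.1, 4.5, 15
x2, s2, n2 = 48.4, 9.8, 12

v1, v2 = s1**2 / n1, s2**2 / n2          # estimated variance of each sample mean
t_star = (x1 - x2 - 0) / math.sqrt(v1 + v2)

# Exact degrees of freedom via C, matching the formula in the text
C = v1 / (v1 + v2)
df = (n1 - 1) * (n2 - 1) / ((n2 - 1) * C**2 + (1 - C)**2 * (n1 - 1))

# Conservative alternative: the smaller of n1-1 and n2-1
df_conservative = min(n1 - 1, n2 - 1)
print(round(t_star, 4), round(df, 2), df_conservative)
```

Note that the exact degrees of freedom need not be a whole number; software uses the fractional value directly.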

## Minitab®

## Unpooled t-test

To perform a separate variance 2-sample t-procedure, use the same commands as for the pooled procedure EXCEPT we do NOT check the box for 'Use Equal Variances.'

- Choose `Stat` > `Basic Statistics` > `2-sample t`.
- Select the Options box and enter the desired 'Confidence level,' 'Null hypothesis value' (again, for our class this will be 0), and select the correct 'Alternative hypothesis' from the drop-down menu.
- Choose `OK`.

For some examples, one can use both the pooled t-procedure and the separate variances (non-pooled) t-procedure and obtain results that are close to each other. However, when the sample standard deviations are very different from each other, and the sample sizes are different, the separate variances 2-sample t-procedure is more reliable.

# 5.6.2 - Inference for Paired Means

When we developed the inference for the independent samples, we depended on the statistical theory to help us. The theory, however, required the samples to be independent. What can we do when the two samples are not independent, i.e., the data are paired?

Consider an example where we are interested in a person's weight before implementing a diet plan and after. Since the interest focuses on the difference, it makes sense to "condense" the two measurements into one by taking the before-diet weight and subtracting the after-diet weight. The difference makes sense, too: it is the weight lost on the diet.

When we take the two measurements to make one measurement (i.e., the difference), we are back to the one-sample case! Now we can apply all we learned for the one-sample mean to the difference (Cool!)

## Hypothesis Test for the Difference of Paired Means, \(μ_d\)

In this section, we will develop the hypothesis test for the mean difference for paired samples. As we learned in the previous section, if we consider the difference rather than the two samples, then we are back in the one-sample mean scenario.

The possible null and alternative hypotheses are:

- Null:
- \(H_0\colon \mu_d=0\)

**Conditions:**

We still need to check the conditions, and at least one of the following needs to be satisfied:

- The differences of the paired measurements follow a normal distribution
- The sample size is large, \(n>30\)

**Test Statistic:**

If at least one condition is satisfied, then

\(t^*=\dfrac{\bar{d}-0}{\frac{s_d}{\sqrt{n}}}\)

will follow a t-distribution with \(n-1\) degrees of freedom.

The same process for the hypothesis test for one mean can be applied. The test for the mean difference may be referred to as the paired t-test or the test for paired means.
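A minimal sketch of the paired calculation, using hypothetical before/after diet weights (the numbers are illustrative, not the data behind the Minitab output below):

```python
import math

# Hypothetical before/after weights for 8 subjects (illustrative only)
before = [185, 192, 206, 177, 225, 168, 256, 239]
after  = [169, 187, 193, 176, 194, 171, 228, 217]

d = [b - a for b, a in zip(before, after)]   # differences = weight lost
n = len(d)
d_bar = sum(d) / n                            # mean difference
s_d = math.sqrt(sum((x - d_bar)**2 for x in d) / (n - 1))  # SD of differences

# Paired t statistic, compared to a t-distribution with n-1 df
t_star = (d_bar - 0) / (s_d / math.sqrt(n))
print(round(d_bar, 3), round(s_d, 3), round(t_star, 3))
```

Once the differences are formed, everything is exactly the one-sample t-procedure applied to \(d\).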

## Minitab®

## Paired t-Test

You can use a paired t-test in Minitab to perform the test. Alternatively, you can perform a 1-sample t-test on the differences (before minus after diet plan).

- Choose `Stat` > `Basic Statistics` > `Paired t`.
- Click Options to specify the confidence level for the interval and the alternative hypothesis you want to test. The default null hypothesis value is 0.

## Diet Plan

The Minitab output for paired T for before-after diet plan is as follows:


95% lower bound for mean difference: 0.0505

T-Test of mean difference = 0 (vs > 0): T-Value = 4.86 P-Value = 0.000

Using the p-value to draw a conclusion about our example:

p-value = \(0.000 < 0.05\)

Reject \(H_0\) and conclude that the mean before-diet weight is greater than the mean after-diet weight.