1.2 - The 7 Step Process of Statistical Hypothesis Testing

We will cover the seven steps one by one.

  1. Step 1: State the Null Hypothesis

    The null hypothesis can be thought of as the opposite of the "guess" the researchers made. In the example presented in the previous section, the biologist "guesses" that plant height will differ among the various fertilizers. So the null hypothesis would be that there is no difference among the groups of plants. In more statistical language, the null hypothesis for an ANOVA is that the treatment-level means are all equal. We state the null hypothesis as:

    \(H_0 \colon \mu_1 = \mu_2 = \cdots = \mu_T\)

    for T levels of an experimental treatment.

    Note: Why do we do this? Why not simply test the working hypothesis directly? The answer lies in the Popperian Principle of Falsification. The philosopher Karl Popper argued that we cannot conclusively confirm a hypothesis, but we can conclusively negate one. So we set up a null hypothesis that is effectively the opposite of the working hypothesis. The hope is that, based on the strength of the data, we will be able to negate or reject the null hypothesis and accept an alternative hypothesis. In other words, the working hypothesis usually appears as \(H_A\).
  2. Step 2: State the Alternative Hypothesis

    \(H_A \colon \text{ treatment level means not all equal}\)

    The alternative hypothesis is stated in this way so that if the null is rejected, there are many alternative possibilities.

    For example, \(\mu_1\ne \mu_2 = \cdots = \mu_T\) is one possibility, as is \(\mu_1=\mu_2\ne\mu_3= \cdots =\mu_T\). Many people make the mistake of stating the alternative hypothesis as \(\mu_1\ne\mu_2\ne\cdots\ne\mu_T\), which says that every mean differs from every other mean. This is a possibility, but only one of many. A simple way of thinking about the alternative is that at least one mean differs from at least one other mean. To cover all possible outcomes, we resort to the verbal statement "not all equal" and then follow up with mean comparisons to find out where the differences among means exist. In our example, a possible outcome would be that fertilizer 1 results in plants that are exceptionally tall, while fertilizers 2, 3, and the control group do not differ from one another.

  3. Step 3: Set \(\alpha\)

    If we look at what can happen in a hypothesis test, we can construct the following contingency table:

    Decision         | In Reality: \(H_0\) is TRUE                                | In Reality: \(H_0\) is FALSE
    Accept \(H_0\)   | Correct decision                                           | Type II error (\(\beta\) = probability of a Type II error)
    Reject \(H_0\)   | Type I error (\(\alpha\) = probability of a Type I error)  | Correct decision

    You should be familiar with Type I and Type II errors from your introductory courses. It is important to set \(\alpha\) before the experiment (a priori) because the Type I error is considered the more grievous error to make. The typical value of \(\alpha\) is 0.05, corresponding to a 95% level of confidence. For this course we will assume \(\alpha = 0.05\) unless stated otherwise.
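
    To see what \(\alpha = 0.05\) means in the long run, it can help to simulate many experiments in which the null hypothesis is actually true and count how often it gets rejected. The sketch below is one way to do this in Python with SciPy; the group sizes, means, and number of simulated experiments are arbitrary choices for illustration, not values from this lesson.

```python
# Sketch: alpha as the long-run Type I error rate.
# All values (group sizes, means, number of simulations) are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_sims, n_per_group, n_groups = 5000, 10, 4

rejections = 0
for _ in range(n_sims):
    # Every group is drawn from the SAME normal distribution, so H0 is true.
    groups = [rng.normal(loc=20, scale=3, size=n_per_group) for _ in range(n_groups)]
    _, p_value = stats.f_oneway(*groups)
    rejections += (p_value < alpha)

# Rejecting H0 here is always a Type I error; the rate should be close to alpha.
print(f"Empirical Type I error rate: {rejections / n_sims:.3f}")
```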

  4. Step 4: Collect Data

    Remember the importance of recognizing whether the data are collected through an experimental design or an observational study.

  5. Step 5: Calculate a test statistic

    For categorical treatment level means, we use an F-statistic, named after R.A. Fisher. We will explore the mechanics of computing the F-statistic beginning in Lesson 2. The F-value we get from the data is labeled \(F_{\text{calculated}}\).
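
    As a preview of those mechanics, the sketch below computes \(F_{\text{calculated}}\) for some made-up plant heights (one control group and three fertilizers); the numbers are invented for illustration, and SciPy's one-way ANOVA routine is assumed to be available.

```python
# Sketch: computing F_calculated for the fertilizer example.
# The plant heights are invented for illustration only.
from scipy import stats

control = [21.0, 19.5, 20.2, 18.9, 20.8]
fert_1  = [32.1, 30.4, 31.8, 29.9, 31.2]
fert_2  = [22.0, 21.3, 20.9, 22.5, 21.7]
fert_3  = [21.4, 20.6, 22.1, 21.0, 20.3]

f_calculated, p_value = stats.f_oneway(control, fert_1, fert_2, fert_3)
print(f"F_calculated = {f_calculated:.2f}, p-value = {p_value:.4f}")
```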

  6. Step 6: Construct Acceptance / Rejection regions

    As with all other test statistics, a threshold (critical) value of F is established. This F-value can be obtained from statistical tables or software and is referred to as \(F_{\text{critical}}\) or \(F_\alpha\). As a reminder, this critical value is the smallest value of the test statistic (here, \(F_{\text{calculated}}\)) for which we reject the null hypothesis.

    The F-distribution, \(F_\alpha\), and the location of acceptance/rejection regions are shown in the graph below:

    [Figure: The F distribution, with \(F_\alpha\) marking the boundary between the acceptance region (Accept \(H_0\), to the left) and the rejection region (Reject \(H_0\), the upper tail of area \(\alpha\) to the right).]
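
    In practice, \(F_{\text{critical}}\) usually comes from software rather than a printed table. A minimal sketch, assuming \(\alpha = 0.05\) and the degrees of freedom that would come from four groups of five plants each (these group sizes are an assumption for illustration):

```python
# Sketch: looking up F_critical, the alpha-level cutoff of the F distribution.
# Degrees of freedom assume 4 groups of 5 plants: dfn = 4 - 1, dfd = 20 - 4.
from scipy import stats

alpha = 0.05
dfn, dfd = 3, 16
f_critical = stats.f.ppf(1 - alpha, dfn, dfd)  # upper-tail cutoff
print(f"F_critical = {f_critical:.2f}")        # reject H0 if F_calculated > F_critical
```
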
  7. Step 7: Based on Steps 5 and 6, draw a conclusion about \(H_0\)

    If \(F_{\text{calculated}}\) is larger than \(F_\alpha\), then you are in the rejection region and you can reject the null hypothesis with \(\left(1-\alpha \right)\) level of confidence.

    Note that modern statistical software condenses Steps 6 and 7 by providing a p-value. The p-value here is the probability of getting an \(F_{\text{calculated}}\) at least as large as the one observed, assuming the null hypothesis is true. If, by chance, \(F_{\text{calculated}} = F_\alpha\), then the p-value would be exactly equal to \(\alpha\). With larger \(F_{\text{calculated}}\) values, we move further into the rejection region and the p-value becomes smaller than \(\alpha\). So, the decision rule is as follows:

    If the p-value obtained from the ANOVA is less than \(\alpha\), then reject \(H_0\) in favor of \(H_A\).
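
    As a sketch of how software carries out this rule, the snippet below converts an \(F_{\text{calculated}}\) value into a p-value using the upper tail (survival function) of the F distribution and then applies the decision rule. The value of \(F_{\text{calculated}}\) and the degrees of freedom are hypothetical placeholders, again assuming four groups of five plants.

```python
# Sketch: Step 7 decision rule via the p-value.
# f_calculated is a hypothetical value; dfn/dfd assume 4 groups of 5 plants.
from scipy import stats

alpha = 0.05
dfn, dfd = 3, 16
f_calculated = 6.42  # hypothetical value carried over from Step 5

p_value = stats.f.sf(f_calculated, dfn, dfd)  # P(F >= f_calculated) under H0
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0 in favor of HA")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```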

    Note: If you are not familiar with this material, we suggest you return to course materials from your basic statistics course.