5.4 - Further Considerations for Hypothesis Testing
In this section, we discuss some of the issues with hypothesis tests and items to keep in mind when interpreting their results.
5.4.1 - Errors
Committing an Error
Every time we make a decision and come to a conclusion, we must keep in mind that our decision is based on probability. Therefore, it is possible that we made a mistake.
Consider the example of the previous Lesson on whether the majority of Penn State students like the cold. In that example, we took a random sample of 500 Penn State students and found that 278 like the cold. We rejected the null hypothesis, at a significance level of 5% with a p-value of 0.006.
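The test from the Penn State example can be reproduced with a few lines of code. The following sketch (using only the Python standard library; the normal tail probability is computed via `erfc` rather than a statistics package) carries out the one-sample z-test for a proportion with the one-sided alternative that a majority like the cold:

```python
from math import sqrt, erfc

# One-sample z-test for a proportion (large-sample approximation).
# H0: p = 0.5  versus  Ha: p > 0.5 (a majority like the cold).
n, x, p0 = 500, 278, 0.5
p_hat = x / n                                   # sample proportion = 0.556

# Test statistic: the standard error uses the null value p0.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided (upper-tail) p-value from the standard normal distribution.
p_value = 0.5 * erfc(z / sqrt(2))

print(f"z = {z:.3f}, p-value = {p_value:.4f}")  # z ≈ 2.504, p-value ≈ 0.0061
```

The p-value of roughly 0.006 matches the value quoted above, and since 0.006 < 0.05 we reject the null hypothesis.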
- Type I Error
Rejecting \(H_0\) when \(H_0\) is really true, denoted by \(\alpha\) ("alpha") and commonly set at .05
The significance level of 5% means that we have a 5% chance of committing a Type I error. That is, there is a 5% chance of rejecting a true null hypothesis.
- Type II Error
Failing to reject \(H_0\) when \(H_0\) is really false, denoted by \(\beta\) ("beta")
If we failed to reject a null hypothesis, then we could have committed a Type II error. This means that we could have failed to reject a false null hypothesis.
| Decision | \(H_0\) is true | \(H_0\) is false | Probability Level |
| --- | --- | --- | --- |
| Reject \(H_0\) (conclude \(H_a\)) | Type I error | Correct decision | p-value is less than .05; small p-values lead to rejecting the null |
| Fail to reject \(H_0\) | Correct decision | Type II error | p-value is greater than .05; large p-values lead to not rejecting the null |
How Important are the Conditions of a Test?
In our six steps of hypothesis testing, one step is to verify the conditions. If the conditions are not satisfied, we cannot make a decision or state a conclusion, because the conclusion rests on probability theory.

If the conditions are not satisfied, there are other methods, called nonparametric tests, that can help us reach a conclusion. These tests, however, may be based on other parameters, such as the median.
5.4.2 - Statistical and Practical Significance
Our decision in the Penn State example was to reject the null hypothesis and conclude that the proportion of Penn State students who like the cold was not 0.5. However, our sample proportion of 0.556 wasn't too far off from 0.5. What do you think of our conclusion? Yes, statistically there was a difference at the 5% level of significance, but are we "impressed" with the results? That is, do you think 0.556 is really that much different from 0.5?
Here we distinguish between statistical significance and practical significance. Statistical significance addresses whether an observed effect is larger than would be expected by chance alone; practical significance asks whether the observed effect is large enough to be useful in the real world.
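Sample size drives this distinction: with a very large sample, even a trivially small effect becomes statistically significant, while the same (or a larger) effect can fail to reach significance in a small sample. As a rough illustration (the sample sizes and proportions here are hypothetical, not from the course), reusing the one-sided z-test for a proportion:

```python
from math import sqrt, erfc

def one_sided_p(p_hat, p0, n):
    """Upper-tail p-value for a one-sample z-test of a proportion."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return 0.5 * erfc(z / sqrt(2))

# A tiny effect (50.5% vs 50%) is highly significant with a huge sample...
print(one_sided_p(0.505, 0.5, 100_000))  # ≈ 0.0008: statistically significant

# ...while the much larger 55.6% effect is not significant with only n = 50.
print(one_sided_p(0.556, 0.5, 50))       # ≈ 0.21: not significant
```

A p-value alone, therefore, does not tell us whether the effect matters in practice; we should also look at the size of the effect itself.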
5.4.3 - The Relationship Between Power, \(\beta\), and \(\alpha\)
Recall that \(\alpha \) is the probability of committing a Type I error. It is the value that is preset by the researcher. Therefore, the researcher has control over the probability of this type of error. But what about \(\beta \), the probability of a Type II error? How much control do we have over the probability of committing this error? Similarly, we want power, the probability we correctly reject a false null hypothesis, to be high (close to 1). Is there anything we can do to have a high power?
The relationship between power and \(\beta \) is an inverse relationship, namely...
- \(Power = 1-\beta\)
- \(\beta\) = probability of committing a Type II Error.
If we increase power, then we decrease \(\beta \). But how do we increase power? One way to increase power is to increase the sample size. Sample size calculations are included in your textbook but not covered in the course. Remember, it is possible to answer the question of “how many ___ do I have to study” by learning about sample size estimates.
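While the formal sample size calculations are left to the textbook, the effect of sample size on power can be sketched numerically. Assuming a one-sided test of \(H_0: p = 0.5\) at \(\alpha = .05\) and a hypothetical true proportion of 0.55 (both values chosen here for illustration), the normal approximation gives:

```python
from math import sqrt, erfc

def power(n, p0=0.5, p_true=0.55, alpha_z=1.645):
    """Approximate power of the one-sided z-test H0: p = p0 vs Ha: p > p0,
    when the true proportion is p_true.  alpha_z = 1.645 is the upper 5%
    point of the standard normal (alpha = .05)."""
    # Rejection cutoff on the p-hat scale, using H0's standard error.
    cutoff = p0 + alpha_z * sqrt(p0 * (1 - p0) / n)
    # Probability that p-hat exceeds the cutoff under the true proportion.
    z = (cutoff - p_true) / sqrt(p_true * (1 - p_true) / n)
    return 0.5 * erfc(z / sqrt(2))

for n in (100, 500, 1000):
    print(n, round(power(n), 3))   # power grows from ≈ 0.26 to ≈ 0.94
```

Larger samples shrink the standard error, so the sampling distribution under the true proportion sits more clearly beyond the rejection cutoff, raising power (and lowering \(\beta\)).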
The concepts, logic, and terminology of hypothesis testing can take some time to master. It is worth it! Hypothesis testing is a very powerful statistical tool.
Next, we will move on to situations where we compare more than one population parameter.