3.1 - The Model

In Lesson 2 you learned that ANOVA tests the effect of the treatment relative to the amount of random error. In statistics, we call this the partitioning of variability: the part due to our treatment and the part left over due to random variability in the measurements. Now we will take a look at the (simple) math behind this idea.

The core of ANOVA is the decomposition (partitioning) of the deviations, shown in the following excerpt from Section 16.5 of the text:

"Note from (16.24) that we can decompose the total deviation \(Y_{ij}-\bar{Y}_{..}\) into two components:

\(\underbrace{Y_{ij}-\bar{Y}_{..}}_{\text{Total deviation}} = \underbrace{\bar{Y}_{i.}-\bar{Y}_{..}}_{\substack{\text{Deviation of estimated factor} \\ \text{level mean around overall mean}}} + \underbrace{Y_{ij}-\bar{Y}_{i.}}_{\substack{\text{Deviation around} \\ \text{estimated factor level mean}}}\)

Thus, the total deviation \(Y_{ij}-\bar{Y}_{..}\) can be viewed as the sum of two components:

  1. The deviation of the estimated factor level mean around the overall mean.
  2. The deviation of \(Y_{ij}\) around its estimated factor level mean, which is simply the error \(\epsilon_{ij}\) according to (16.20)"

(From Kutner et al. (2004), Applied Linear Statistical Models)

Again, we think of this as the difference of each group mean from the grand mean plus the difference of each observation from its own group mean. Another, very simple way to express this concept is to think of the variability between groups and the variability within groups. We aren't really that interested in the variability within groups (although we hope this random variation is small), but we are very interested in the variability between groups, because large between-group variability indicates that our treatment makes a difference!
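To make the partitioning concrete, here is a minimal Python sketch (the three small groups of values are made up for illustration and are not taken from the text or the greenhouse example) that computes the between-group and within-group pieces and verifies that they add up to the total sum of squares:

```python
import numpy as np

# Hypothetical data: 3 treatment levels with 4 observations each
groups = {
    "A": np.array([10.0, 11.0, 9.0, 10.5]),
    "B": np.array([13.0, 12.5, 14.0, 13.5]),
    "C": np.array([16.0, 15.0, 17.0, 15.5]),
}

y = np.concatenate(list(groups.values()))
grand_mean = y.mean()                      # Y-bar..

ss_treatment = 0.0   # between-group piece: n_i * (Y-bar_i. - Y-bar..)^2
ss_error = 0.0       # within-group piece: sum of (Y_ij - Y-bar_i.)^2
for obs in groups.values():
    group_mean = obs.mean()                # Y-bar_i.
    ss_treatment += obs.size * (group_mean - grand_mean) ** 2
    ss_error += ((obs - group_mean) ** 2).sum()

ss_total = ((y - grand_mean) ** 2).sum()   # sum of (Y_ij - Y-bar..)^2

print(ss_total, ss_treatment + ss_error)   # the two totals agree
```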

Now, with that idea firmly planted and referencing the equation above, we can develop a linear additive statistical model for ANOVA, called the effects model:

\(Y_{ij}=\mu+\tau_i+\epsilon_{ij}\)

where \(\mu\) is the grand mean, \(\tau_i\) are the deviations from the grand mean due to the treatment levels, and \(\epsilon_{ij}\) are the error terms. The errors represent the amount ‘left over’ after accounting for the grand mean and the effect of being in a particular treatment level.

Think of someone coming into the greenhouse where the example experiment is ready to harvest. They ask, “How tall are the plants?” For the impatient or disinterested, a quick, natural response would be to report the mean of all 24 plants (the grand mean), disregarding any treatment effects. This overall mean is, as in our example, what most people will state when describing a simple phenomenon. You might then point out that there is variability among the treatment levels, and the estimate can be refined to include the effect of the treatment, \(\tau_i\). We can never answer the question exactly, however, because \(\epsilon_{ij}\) is a random component (noise) reflecting unexplained variability among plants within treatment levels.
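As a minimal sketch of the effects model (the values of \(\mu\), the \(\tau_i\), and the error standard deviation are invented, loosely mirroring a 3-level, 24-plant layout rather than reproducing the course data), the following Python snippet simulates \(Y_{ij}=\mu+\tau_i+\epsilon_{ij}\) and contrasts the grand mean with the treatment-level means:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 20.0                          # grand mean (hypothetical)
tau = np.array([-3.0, 0.0, 3.0])   # treatment effects, summing to zero
n_per_level = 8                    # 3 levels x 8 plants = 24 plants

# Y_ij = mu + tau_i + eps_ij, with eps_ij ~ N(0, sigma^2)
heights = np.concatenate(
    [mu + t + rng.normal(0.0, 1.5, n_per_level) for t in tau]
)

# The "quick answer" to "How tall are the plants?" is the grand mean;
# the treatment-level means recover mu + tau_i up to the noise eps_ij.
print(heights.mean())
print(heights.reshape(3, n_per_level).mean(axis=1))
```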

Note! This formulation of the ANOVA model is referred to as the ‘Effects Model’; the text considers it in Section 16.7. Section 16.3 of the text presents a different formulation of the statistical model for ANOVA, the “Cell Means” model: \(Y_{ij}=\mu_i+\epsilon_{ij}\).

So what is the essential difference between these two models? In the cell means model, the \(\mu_i\) are the group (factor level) means, whereas in the effects model, \(\tau_i\) represents the difference between the ith group mean and the grand mean, and hence the effect of the ith treatment! The two parameterizations are linked by \(\mu_i=\mu+\tau_i\).
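A small numeric sketch (with hypothetical level means and equal group sizes, so the grand mean is simply the average of the level means) shows how the two parameterizations line up through \(\mu_i=\mu+\tau_i\):

```python
import numpy as np

# Hypothetical factor level means (the cell means model parameters mu_i)
mu_i = np.array([17.0, 20.0, 23.0])

# With equal group sizes, the grand mean is the average of the mu_i,
# and the effects model parameters are tau_i = mu_i - mu (summing to zero)
mu = mu_i.mean()
tau = mu_i - mu

print(mu, tau, tau.sum())           # 20.0 [-3. 0. 3.] 0.0
print(np.allclose(mu + tau, mu_i))  # both models describe the same means
```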

Under the null hypothesis, where the treatment effects are all zero, the reduced model can be written as \(Y_{ij}=\mu+\epsilon_{ij}\), and its error sum of squares can be denoted \(SS_{\text{Error}_{(Reduced)}}\).

Under the alternative hypothesis, where the treatment effects are not all zero, the full model can be written as \(Y_{ij}=\mu_i+\epsilon_{ij}\), and its error sum of squares can be denoted \(SS_{\text{Error}_{(Full)}}\).

The Treatment SS, SST, is then obtained by

\(SST = SS_{\text{Error}_{(Reduced)}}-SS_{\text{Error}_{(Full)}}\)

and the General Linear Test can be employed to test the null hypothesis. If the null is true, the group effect is equal to zero; if we reject the null, we conclude that the group effect is significant! This approach is detailed in Section 16.6 of the textbook.
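As a rough sketch of this test on made-up data (the group values and layout are invented for illustration), the following Python code computes \(SS_{\text{Error}_{(Reduced)}}\), \(SS_{\text{Error}_{(Full)}}\), their difference SST, and the resulting F statistic, then checks the result against SciPy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Hypothetical one-factor data: 3 treatment levels, 4 observations each
groups = [
    np.array([10.0, 11.0, 9.0, 10.5]),
    np.array([13.0, 12.5, 14.0, 13.5]),
    np.array([16.0, 15.0, 17.0, 15.5]),
]
y = np.concatenate(groups)
n_total, r = y.size, len(groups)

# Reduced model Y_ij = mu + eps_ij: errors are deviations from the grand mean
sse_reduced = ((y - y.mean()) ** 2).sum()

# Full model Y_ij = mu_i + eps_ij: errors are deviations from the group means
sse_full = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Treatment sum of squares and the general linear test F statistic
sst = sse_reduced - sse_full
f_stat = (sst / (r - 1)) / (sse_full / (n_total - r))
p_value = stats.f.sf(f_stat, r - 1, n_total - r)

print(f_stat, p_value)
print(stats.f_oneway(*groups))   # same F and p from SciPy's one-way ANOVA
```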