3.5 - One-way Random Effects Models

With quantitative factors, we may want to make inferences about levels not measured in the experiment by interpolating or extrapolating on the measurement scale. With categorical factors, we may only be able to use a subset of all possible levels - e.g., brands of popcorn - but we would still like to be able to make inferences about the levels that were not included. Imagine that we randomly select \(a\) of the possible levels of the factor of interest. In this case, we say that the factor is random. As before, the usual single-factor ANOVA model applies:

\(y_{ij}=\mu +\tau_i+\varepsilon_{ij}
\left\{\begin{array}{c}
i=1,2,\ldots,a \\
j=1,2,\ldots,n
\end{array}\right. \)

However, here both the error term and treatment effects are random variables, that is

\(\varepsilon_{ij}\ \mbox{is }NID(0,\sigma^2)\ \mbox{ and }\ \tau_i\ \mbox{is }NID(0,\sigma^2_{\tau})\)

Also, \(\tau_i\) and \(\varepsilon_{ij}\) are independent. The variances \(\sigma^2_{\tau}\) and \(\sigma^2\) are called variance components.
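To make the model concrete, here is a minimal simulation sketch in Python; the numerical values of \(\mu\), \(\sigma^2_{\tau}\), and \(\sigma^2\) are arbitrary choices for illustration only, not taken from the example below.

import numpy as np

rng = np.random.default_rng(1)

a, n = 4, 4                        # a randomly selected levels, n observations per level
mu = 95.0                          # hypothetical overall mean
sigma2_tau, sigma2 = 7.0, 2.0      # hypothetical variance components

tau = rng.normal(0.0, np.sqrt(sigma2_tau), size=a)      # tau_i ~ NID(0, sigma^2_tau)
eps = rng.normal(0.0, np.sqrt(sigma2), size=(a, n))     # eps_ij ~ NID(0, sigma^2)
y = mu + tau[:, None] + eps                             # y_ij = mu + tau_i + eps_ij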

In the fixed effects model we tested the equality of the treatment means. That is no longer appropriate here because the treatments are randomly selected and we are interested in the population of treatments rather than in any individual one. The appropriate hypotheses for a random effect are:

\(H_0:\sigma^2_{\tau}=0\)
\(H_1:\sigma^2_{\tau}>0\)

The standard ANOVA partition of the total sum of squares still works and leads to the usual ANOVA display. However, as before, the form of the appropriate test statistic depends on the Expected Mean Squares. In this case, the appropriate test statistic would be

\(F_0=MS_{Treatments}/MS_E\)

which, under \(H_0\), follows an F distribution with \(a-1\) and \(N-a\) degrees of freedom. Furthermore, we are also interested in estimating the variance components \(\sigma_{\tau}^{2}\) and \(\sigma^2\). To do so, we use the analysis of variance method, which consists of equating the expected mean squares to their observed values:

\({\hat{\sigma}}^2=MS_E\ \mbox{ and }\ {\hat{\sigma}}^2+n{\hat{\sigma}}^2_{\tau}=MS_{Treatments}\)

Solving these two equations gives

\({\hat{\sigma}}^2_{\tau}=\dfrac{MS_{Treatments}-MS_E}{n}\)

\({\hat{\sigma}}^2=MS_E\)

A potential problem here is that the estimated treatment variance component may turn out to be negative. In such a case, it is common either to set the estimate to zero or to use another estimation method that always yields a nonnegative estimate. A negative estimate for the treatment variance component can also be viewed as evidence that the model is not appropriate, which suggests looking for a better one.
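The following is a minimal Python sketch of the F test and the analysis of variance estimates for a balanced layout, assuming the data are held in an a x n numpy array whose rows are the randomly selected levels (the function name one_way_random_anova is just for illustration); a negative estimate of \(\sigma^2_{\tau}\) is simply truncated at zero here.

import numpy as np
from scipy import stats

def one_way_random_anova(y):
    # y: balanced a x n array, rows = randomly selected levels
    a, n = y.shape
    grand_mean = y.mean()
    ss_treat = n * ((y.mean(axis=1) - grand_mean) ** 2).sum()
    ss_error = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()
    ms_treat = ss_treat / (a - 1)
    ms_error = ss_error / (a * n - a)                     # N - a degrees of freedom
    f0 = ms_treat / ms_error
    p_value = stats.f.sf(f0, a - 1, a * n - a)
    sigma2_hat = ms_error                                 # sigma^2-hat = MS_E
    sigma2_tau_hat = max((ms_treat - ms_error) / n, 0.0)  # truncate a negative estimate at zero
    return f0, p_value, sigma2_hat, sigma2_tau_hat

It could be applied, for example, to the simulated array y from the sketch above.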

Example 3.11 (13.1 in the 7th edition) discusses a single random factor case concerning differences among looms in a textile weaving company. Four looms were chosen randomly from the population of looms within a weaving shed and four observations were made on each loom. Table 13.1 shows the data obtained from the experiment. Here is the Minitab output for this example, obtained with the Stat > ANOVA > Balanced ANOVA command.

Factor   Type    Levels  Values
Loom     random       4  1 2 3 4

Analysis of Variance for y

Source    DF        SS        MS       F      P
Loom       3    89.188    29.729   15.68  0.000
Error     12    22.750     1.896
Total     15   111.938

           Variance  Error  Expected Mean Square for Each Term
Source    component   term  (using unrestricted model)
1 Loom        6.958      2  (2) + 4(1)
2 Error       1.896         (2)
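As a quick arithmetic check, the F statistic and the variance component estimates reported above can be recovered directly from the two mean squares; a minimal sketch using the values in the output, with n = 4 observations per loom:

ms_treat, ms_error, n = 29.729, 1.896, 4       # mean squares and replicates from the output above

f0 = ms_treat / ms_error                       # 29.729 / 1.896 = 15.68
sigma2_hat = ms_error                          # 1.896
sigma2_tau_hat = (ms_treat - ms_error) / n     # (29.729 - 1.896) / 4 = 6.958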

The interpretation of the ANOVA table is as before. With the p-value reported as 0.000, the looms in the plant are clearly significantly different or, more accurately stated, the variance component among the looms is significantly larger than zero. Confidence intervals can also be found for the variance components. The \(100(1-\alpha)\%\) confidence interval for \(\sigma^2\) is

\(\dfrac{(N-a)MS_E}{\chi^2_{\alpha/2,N-a}} \leq \sigma^2 \leq \dfrac{(N-a)MS_E}{\chi^2_{1-\alpha/2,N-a}}\)
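For the loom example, this interval can be evaluated with scipy's chi-square quantiles; note that \(\chi^2_{\alpha/2,N-a}\) in the formula denotes the upper \(\alpha/2\) critical value. A minimal sketch, assuming the mean square and degrees of freedom from the output above:

from scipy import stats

ms_error, N, a, alpha = 1.896, 16, 4, 0.05
ss_error = (N - a) * ms_error                            # (N - a) * MS_E, about 22.75

# chi2.ppf(1 - alpha/2, N - a) is the upper alpha/2 critical value chi^2_{alpha/2, N-a}
lower = ss_error / stats.chi2.ppf(1 - alpha / 2, N - a)
upper = ss_error / stats.chi2.ppf(alpha / 2, N - a)
# roughly 0.97 <= sigma^2 <= 5.17 for the loom data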

Confidence intervals for the other variance components are provided in the textbook. It should be noted that a closed-form expression for the confidence interval may not exist for some parameters.