Differences among treatments can be explored through pre-planned orthogonal contrasts. Contrasts involve linear combinations of group mean vectors instead of linear combinations of the variables.
Contrasts
The linear combination of group mean vectors
\(\mathbf{\Psi} = \sum\limits_{i=1}^{g}c_i\mathbf{\mu}_i\)
is a (treatment) contrast if
\(\sum\limits_{i=1}^{g}c_i = 0\)
Contrasts are defined with respect to specific questions we might wish to ask of the data. Here, we shall consider testing hypotheses of the form
\(H_0\colon \mathbf{\Psi = 0}\)
Example 8-5: Drug Trial
Suppose that we have a drug trial with the following 3 treatments:
- Placebo
- Brand Name
- Generic
Consider the following questions:
Question 1: Is there a difference between the Brand Name drug and the Generic drug?
\begin{align} \text{That is, consider testing:}&& &H_0\colon \mathbf{\mu}_2 = \mathbf{\mu}_3\\ \text{This is equivalent to testing}&& &H_0\colon \mathbf{\Psi} = \mathbf{0}\\ \text{where}&& &\mathbf{\Psi} = \mathbf{\mu}_2 - \mathbf{\mu}_3 \\ \text{with}&& &c_1 = 0, c_2 = 1, c_3 = -1 \end{align}
Question 2: Are the drug treatments effective?
\begin{align} \text{That is, consider testing:}&& &H_0\colon \mathbf{\mu}_1 = \frac{\mathbf{\mu}_2+\mathbf{\mu}_3}{2}\\ \text{This is equivalent to testing}&& &H_0\colon \mathbf{\Psi} = \mathbf{0}\\ \text{where}&& &\mathbf{\Psi} = \mathbf{\mu}_1 - \frac{1}{2}\mathbf{\mu}_2 - \frac{1}{2}\mathbf{\mu}_3 \\ \text{with}&& &c_1 = 1, c_2 = c_3 = -\frac{1}{2}\end{align}
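These coefficient vectors can be written down directly in code. The following minimal Python sketch (the group ordering placebo, brand name, generic is an assumption for illustration) encodes both contrasts and confirms that each satisfies the constraint \(\sum_{i=1}^{g}c_i = 0\):

```python
import numpy as np

# Contrast coefficients in group order (placebo, brand name, generic).
c_q1 = np.array([0.0, 1.0, -1.0])    # Question 1: brand name vs. generic
c_q2 = np.array([1.0, -0.5, -0.5])   # Question 2: placebo vs. average of the two drugs

# Both are valid contrasts: the coefficients sum to zero.
assert np.isclose(c_q1.sum(), 0.0) and np.isclose(c_q2.sum(), 0.0)
```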
Estimation
The contrast
\(\mathbf{\Psi} = \sum_{i=1}^{g}c_i \mathbf{\mu}_i\)
is estimated by replacing the population mean vectors with the corresponding sample mean vectors:
\(\mathbf{\hat{\Psi}} = \sum_{i=1}^{g}c_i\mathbf{\bar{Y}}_i.\)
Because the estimated contrast is a function of random data, it is itself a random vector, with a population mean vector and a population variance-covariance matrix. Its population mean is \(\mathbf{\Psi}\), so \(\hat{\mathbf{\Psi}}\) is unbiased. The variance-covariance matrix of \(\hat{\mathbf{\Psi}}\) is:
\(\left(\sum\limits_{i=1}^{g}\frac{c^2_i}{n_i}\right)\Sigma\)
which is estimated by substituting the pooled variance-covariance matrix for the population variance-covariance matrix:
\(\left(\sum\limits_{i=1}^{g}\frac{c^2_i}{n_i}\right)\mathbf{S}_p = \left(\sum\limits_{i=1}^{g}\frac{c^2_i}{n_i}\right) \dfrac{\mathbf{E}}{N-g}\)
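As a sketch of this estimation step in Python (NumPy), with all inputs hypothetical: `ybars` stacks the \(g\) sample mean vectors row by row, `S_p` is the pooled variance-covariance matrix \(\mathbf{E}/(N-g)\), and `ns` holds the group sample sizes.

```python
import numpy as np

def estimate_contrast(c, ybars, S_p, ns):
    """Estimate Psi = sum_i c_i mu_i and the covariance of the estimate.

    c     : (g,)  contrast coefficients (must sum to zero)
    ybars : (g,p) sample mean vectors, one row per group
    S_p   : (p,p) pooled variance-covariance matrix, E / (N - g)
    ns    : (g,)  group sample sizes
    """
    c = np.asarray(c, dtype=float)
    ns = np.asarray(ns, dtype=float)
    psi_hat = c @ np.asarray(ybars)        # sum_i c_i * ybar_i
    scale = np.sum(c**2 / ns)              # sum_i c_i^2 / n_i
    return psi_hat, scale * S_p            # estimate and its estimated covariance

# Hypothetical example: three groups, two response variables.
ybars = np.array([[2.0, 5.0], [3.1, 4.2], [2.9, 4.4]])
S_p = np.array([[1.0, 0.3], [0.3, 2.0]])
psi_hat, cov_psi = estimate_contrast([0, 1, -1], ybars, S_p, ns=[10, 10, 10])
```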
Orthogonal Contrasts
Two contrasts
\(\mathbf{\Psi}_1 = \sum_{i=1}^{g}c_i\mathbf{\mu}_i\) and \(\mathbf{\Psi}_2 = \sum_{i=1}^{g}d_i\mathbf{\mu}_i\)
are orthogonal if
\(\sum\limits_{i=1}^{g}\frac{c_id_i}{n_i}=0\)
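This condition is straightforward to check numerically. A small sketch, reusing the drug-trial coefficients with hypothetical group sizes:

```python
import numpy as np

def are_orthogonal(c, d, ns, tol=1e-12):
    """Check the orthogonality condition: sum_i c_i * d_i / n_i == 0."""
    c, d, ns = (np.asarray(x, dtype=float) for x in (c, d, ns))
    return abs(np.sum(c * d / ns)) < tol

print(are_orthogonal([0, 1, -1], [1, -0.5, -0.5], ns=[10, 10, 10]))  # True: balanced design
print(are_orthogonal([0, 1, -1], [1, -0.5, -0.5], ns=[8, 12, 10]))   # False: unbalanced design
```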
The importance of orthogonal contrasts can be illustrated by considering the following pairwise comparisons:
\(H^{(1)}_0\colon \mathbf{\mu}_1 = \mathbf{\mu}_2\)
\(H^{(2)}_0\colon \mathbf{\mu}_1 = \mathbf{\mu}_3\)
\(H^{(3)}_0\colon \mathbf{\mu}_2 = \mathbf{\mu}_3\)
We might reject \(H^{(3)}_0\) but fail to reject \(H^{(1)}_0\) and \(H^{(2)}_0\). However, if \(H^{(3)}_0\) is false, then \(H^{(1)}_0\) and \(H^{(2)}_0\) cannot both be true: if \(\mathbf{\mu}_2 \ne \mathbf{\mu}_3\), then \(\mathbf{\mu}_1\) cannot equal both of them. This logical entanglement arises because the three pairwise comparisons are not mutually orthogonal; with \(g = 3\) groups, only \(g - 1 = 2\) mutually orthogonal contrasts exist.
Notes
- For balanced data (i.e., \(n_1 = n_2 = \cdots = n_g\)), \(\mathbf{\Psi}_1\) and \(\mathbf{\Psi}_2\) are orthogonal contrasts if \(\sum_{i=1}^{g}c_id_i = 0\)
- If \(\mathbf{\Psi}_1\) and \(\mathbf{\Psi}_2\) are orthogonal contrasts, then the elements of \(\hat{\mathbf{\Psi}}_1\) and \(\hat{\mathbf{\Psi}}_2\) are uncorrelated
- If \(\mathbf{\Psi}_1\) and \(\mathbf{\Psi}_2\) are orthogonal contrasts, then the tests for \(H_{0} \colon \mathbf{\Psi}_1 = 0\) and \(H_{0} \colon \mathbf{\Psi}_2 = 0\) are independent of one another. That is, the result of one test has no impact on the result of the other.
- For \(g\) groups, it is always possible to construct \(g - 1\) mutually orthogonal contrasts.
- If \(\mathbf{\Psi}_1, \mathbf{\Psi}_2, \dots, \mathbf{\Psi}_{g-1}\) are orthogonal contrasts, then, in the ANOVA table for each response variable, the treatment sum of squares can be partitioned as \(SS_{treat} = SS_{\Psi_1}+SS_{\Psi_2}+\dots + SS_{\Psi_{g-1}} \)
- Similarly, the hypothesis sums of squares and cross-products matrix may be partitioned: \(\mathbf{H} = \mathbf{H}_{\Psi_1}+\mathbf{H}_{\Psi_2}+\dots + \mathbf{H}_{\Psi_{g-1}}\). A numerical check of this partition appears in the sketch below.
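The following simulation sketch verifies the partition of \(\mathbf{H}\) under assumed conditions: a balanced three-group design with two response variables, simulated normal data, and the drug-trial contrasts. It uses the standard identity (not derived above) that the SSCP matrix attributable to a single contrast is \(\mathbf{H}_{\Psi} = \hat{\mathbf{\Psi}}\hat{\mathbf{\Psi}}' / \sum_{i=1}^{g}c_i^2/n_i\):

```python
import numpy as np

rng = np.random.default_rng(0)
g, n, p = 3, 10, 2                               # 3 groups, 10 obs each, 2 variables
groups = [rng.normal(size=(n, p)) for _ in range(g)]

ybars = np.vstack([grp.mean(axis=0) for grp in groups])  # (g, p) sample mean vectors
grand = ybars.mean(axis=0)                               # grand mean (balanced design)

# Hypothesis (between-group) sums of squares and cross-products matrix.
H = sum(n * np.outer(ybars[i] - grand, ybars[i] - grand) for i in range(g))

def H_contrast(c, ybars, n):
    """SSCP matrix attributable to one contrast: psi_hat psi_hat' / sum(c_i^2 / n_i)."""
    c = np.asarray(c, dtype=float)
    psi_hat = c @ ybars
    return np.outer(psi_hat, psi_hat) / np.sum(c**2 / n)

H1 = H_contrast([0, 1, -1], ybars, n)            # brand name vs. generic
H2 = H_contrast([1, -0.5, -0.5], ybars, n)       # placebo vs. average of the two drugs
print(np.allclose(H, H1 + H2))                   # True: H partitions exactly
```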