8.2 - The Multivariate Approach: One-way Multivariate Analysis of Variance (One-way MANOVA)

Now we will consider the multivariate analog, the Multivariate Analysis of Variance, often abbreviated as MANOVA.

Suppose that we have data on p variables which we can arrange in a table such as the one below:

Table of one-way MANOVA data

                Treatment 1    Treatment 2    \(\cdots\)    Treatment g
Subject 1    \(\mathbf{Y}_{11} = \begin{pmatrix} Y_{111} \\ Y_{112} \\ \vdots \\ Y_{11p} \end{pmatrix}\)    \(\mathbf{Y}_{21} = \begin{pmatrix} Y_{211} \\ Y_{212} \\ \vdots \\ Y_{21p} \end{pmatrix}\)    \(\cdots\)    \(\mathbf{Y}_{g1} = \begin{pmatrix} Y_{g11} \\ Y_{g12} \\ \vdots \\ Y_{g1p} \end{pmatrix}\)
Subject 2    \(\mathbf{Y}_{12} = \begin{pmatrix} Y_{121} \\ Y_{122} \\ \vdots \\ Y_{12p} \end{pmatrix}\)    \(\mathbf{Y}_{22} = \begin{pmatrix} Y_{221} \\ Y_{222} \\ \vdots \\ Y_{22p} \end{pmatrix}\)    \(\cdots\)    \(\mathbf{Y}_{g2} = \begin{pmatrix} Y_{g21} \\ Y_{g22} \\ \vdots \\ Y_{g2p} \end{pmatrix}\)
\(\vdots\)    \(\vdots\)    \(\vdots\)        \(\vdots\)
Subject \(n_i\)    \(\mathbf{Y}_{1n_1} = \begin{pmatrix} Y_{1n_{1}1} \\ Y_{1n_{1}2} \\ \vdots \\ Y_{1n_{1}p} \end{pmatrix}\)    \(\mathbf{Y}_{2n_2} = \begin{pmatrix} Y_{2n_{2}1} \\ Y_{2n_{2}2} \\ \vdots \\ Y_{2n_{2}p} \end{pmatrix}\)    \(\cdots\)    \(\mathbf{Y}_{gn_g} = \begin{pmatrix} Y_{gn_{g}1} \\ Y_{gn_{g}2} \\ \vdots \\ Y_{gn_{g}p} \end{pmatrix}\)

In this multivariate case, the scalar quantities \(Y_{ij}\) of the corresponding ANOVA table are replaced by vectors containing the p observations measured on each subject.

Notation

\(Y_{ijk}\) = Observation for variable k from subject j in group i. These are collected into vectors:
\(\mathbf{Y_{ij}}\) = \(\left(\begin{array}{c}Y_{ij1}\\Y_{ij2}\\\vdots\\Y_{ijp}\end{array}\right)\) = Vector of variables for subject j in group i

\(n_{i}\) = the number of subjects in group i

\(N = n_1 + n_2 + \ldots + n_g\) = Total sample size.
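
To make this notation concrete, here is a minimal sketch (in Python with NumPy, using made-up numbers) of one way such data might be stored: one \(n_i \times p\) array per treatment group, with rows corresponding to subjects. The variable names and values are purely hypothetical.

```python
import numpy as np

# Hypothetical data: g = 3 treatment groups measured on p = 2 variables.
# Each group is an (n_i x p) array; rows are subjects, columns are variables.
groups = [
    np.array([[4.1, 2.0], [3.8, 2.2], [4.5, 1.9]]),              # group 1, n_1 = 3
    np.array([[5.0, 2.5], [5.2, 2.7], [4.8, 2.4], [5.1, 2.6]]),  # group 2, n_2 = 4
    np.array([[3.2, 3.0], [3.5, 3.1], [3.0, 2.9]]),              # group 3, n_3 = 3
]

n = [Y.shape[0] for Y in groups]   # group sizes n_1, ..., n_g
g = len(groups)                    # number of groups
p = groups[0].shape[1]             # number of variables
N = sum(n)                         # total sample size N = n_1 + ... + n_g
print(g, p, N)                     # 3 2 10
```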

Assumptions

The assumptions here are essentially the same as the assumptions in Hotelling's \(T^{2}\) test, except that here they apply to all g groups:

  1. The data from group i has common mean vector \(\boldsymbol{\mu}_i = \left(\begin{array}{c}\mu_{i1}\\\mu_{i2}\\\vdots\\\mu_{ip}\end{array}\right)\)
  2. The data from all groups have a common variance-covariance matrix \(\Sigma\).
  3. Independence: The subjects are independently sampled.
  4. Normality: The data are multivariate normally distributed.

Here we are interested in testing the null hypothesis that the group mean vectors are all equal to one another. Mathematically this is expressed as:

\(H_0\colon \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2 = \dots = \boldsymbol{\mu}_g\)

The alternative hypothesis is:

\(H_a \colon \mu_{ik} \ne \mu_{jk}\) for at least one \(i \ne j\) and at least one variable \(k\)

This says that the null hypothesis is false if at least one pair of treatments is different on at least one variable.

Notation

The scalar quantities used in the univariate setting are replaced by vectors in the multivariate setting:

Sample Mean Vector

\(\bar{\mathbf{y}}_{i.} = \frac{1}{n_i}\sum_{j=1}^{n_i}\mathbf{Y}_{ij} = \left(\begin{array}{c}\bar{y}_{i.1}\\ \bar{y}_{i.2} \\ \vdots \\ \bar{y}_{i.p}\end{array}\right)\) = sample mean vector for group i. This sample mean vector consists of the group means for each of the p variables. Thus, \(\bar{y}_{i.k} = \frac{1}{n_i}\sum_{j=1}^{n_i}Y_{ijk}\) = sample mean for variable k in group i.
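
Continuing the hypothetical NumPy sketch above, each group's sample mean vector is simply the column means of that group's data block:

```python
# Sample mean vector for each group: the p column means of that group's block.
ybar_i = [Y.mean(axis=0) for Y in groups]   # list of g vectors, each of length p
```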

Grand Mean Vector

\(\bar{\mathbf{y}}_{..} = \frac{1}{N}\sum_{i=1}^{g}\sum_{j=1}^{n_i}\mathbf{Y}_{ij} = \left(\begin{array}{c}\bar{y}_{..1}\\ \bar{y}_{..2} \\ \vdots \\ \bar{y}_{..p}\end{array}\right)\) = grand mean vector. This grand mean vector consists of the grand means for each of the p variables. Thus, \(\bar{y}_{..k} = \frac{1}{N}\sum_{i=1}^{g}\sum_{j=1}^{n_i}Y_{ijk}\) = grand mean for variable k.
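
In the same running sketch, the grand mean vector is the average of all N observation vectors, pooled over the groups:

```python
# Grand mean vector: average of all N observation vectors, pooled across groups.
ybar_all = np.vstack(groups).mean(axis=0)   # vector of length p
```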

Total Sum of Squares and Cross Products

In the univariate Analysis of Variance, we defined the Total Sum of Squares, a scalar quantity. The multivariate analog is the Total Sum of Squares and Cross Products matrix, a \(p \times p\) matrix defined by the expression below:

\(\mathbf{T} = \sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{..})(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{..})'\)

Here we are looking at the differences between the vectors of observations \(\mathbf{Y}_{ij}\) and the grand mean vector. Each of these differences is a \(p \times 1\) vector, which is multiplied by its transpose and then summed over all subjects and groups.

Here, the \(\left (k, l \right )^{th}\) element of T is

\(\sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i} (Y_{ijk}-\bar{y}_{..k})(Y_{ijl}-\bar{y}_{..l})\)

For \(k = l\), this is the total sum of squares for variable k and measures the total variation in the \(k^{th}\) variable. For \(k \ne l\), this measures the dependence between variables k and l across all of the observations.
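
Continuing the running sketch, T can be accumulated as a sum of outer products of the deviations from the grand mean vector:

```python
# Total sum of squares and cross products matrix T (p x p).
T = np.zeros((p, p))
for Y in groups:
    D = Y - ybar_all        # deviations from the grand mean, shape (n_i, p)
    T += D.T @ D            # adds sum over j of (Y_ij - ybar..)(Y_ij - ybar..)'
```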

We may partition the total sum of squares and cross products as follows:

\(\begin{array}{lll}\mathbf{T} & = & \sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{..})(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{..})' \\ & = & \sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}\{(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{i.})+(\bar{\mathbf{y}}_{i.}-\bar{\mathbf{y}}_{..})\}\{(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{i.})+(\bar{\mathbf{y}}_{i.}-\bar{\mathbf{y}}_{..})\}' \\ & = & \underset{\mathbf{E}}{\underbrace{\sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{i.})(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{i.})'}}+\underset{\mathbf{H}}{\underbrace{\sum\limits_{i=1}^{g}n_i(\bar{\mathbf{y}}_{i.}-\bar{\mathbf{y}}_{..})(\bar{\mathbf{y}}_{i.}-\bar{\mathbf{y}}_{..})'}}\end{array}\)

where E is the Error Sum of Squares and Cross Products matrix, and H is the Hypothesis Sum of Squares and Cross Products matrix. (The cross-product terms vanish in the last step because the within-group deviations \(\mathbf{Y}_{ij}-\bar{\mathbf{y}}_{i.}\) sum to zero over j within each group.)

The \(\left (k, l \right )^{th}\) element of the error sum of squares and cross products matrix E is:

\(\sum\limits_{i=1}^{g}\sum\limits_{j=1}^{n_i}(Y_{ijk}-\bar{y}_{i.k})(Y_{ijl}-\bar{y}_{i.l})\)

For \(k = l\), this is the error sum of squares for variable k, and measures the within-treatment variation for the \(k^{th}\) variable. For \(k \ne l\), this measures the dependence between variables k and l after taking into account the treatment.
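
In the same sketch, E is accumulated exactly like T, except that each observation is centered at its own group's mean vector rather than the grand mean:

```python
# Error (within-group) sum of squares and cross products matrix E (p x p).
E = np.zeros((p, p))
for Y, m in zip(groups, ybar_i):
    D = Y - m               # deviations from the group's own mean, shape (n_i, p)
    E += D.T @ D
```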

The \(\left (k, l \right )^{th}\) element of the hypothesis sum of squares and cross products matrix H is

\(\sum\limits_{i=1}^{g}n_i(\bar{y}_{i.k}-\bar{y}_{..k})(\bar{y}_{i.l}-\bar{y}_{..l})\)

For \(k = l\), this is the treatment sum of squares for variable k and measures the between-treatment variation for the \(k^{th}\) variable. For \(k \ne l\), this measures the dependence of variables k and l across treatments.
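
Finally, in the running sketch, H uses the deviations of the group mean vectors from the grand mean vector, weighted by the group sizes. As a check, the partition T = E + H should hold numerically:

```python
# Hypothesis (between-group) sum of squares and cross products matrix H (p x p).
H = np.zeros((p, p))
for n_i, m in zip(n, ybar_i):
    d = (m - ybar_all).reshape(-1, 1)   # (p x 1) column of group-mean deviations
    H += n_i * (d @ d.T)

print(np.allclose(T, E + H))            # True: the partition T = E + H holds
```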

The partitioning of the total sum of squares and cross products matrix may be summarized in the multivariate analysis of variance (MANOVA) table below:

MANOVA

Source        d.f.         SSP
Treatments    \(g - 1\)    \(\mathbf{H}\)
Error         \(N - g\)    \(\mathbf{E}\)
Total         \(N - 1\)    \(\mathbf{T}\)
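
As in the univariate ANOVA table, the degrees of freedom add up: \((g - 1) + (N - g) = N - 1\).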

We wish to reject

\(H_0\colon \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2 = \dots = \boldsymbol{\mu}_g\)

if the hypothesis sum of squares and cross products matrix H is large relative to the error sum of squares and cross products matrix E.