7.1.2 - A Naive Approach

Following the univariate method, a naive approach for testing a multivariate hypothesis is to compute the t-test statistics for each individual variable; i.e.,

\(t_j = \dfrac{\bar{x}_j-\mu^0_j}{\sqrt{s^2_j/n}}\)

Here \(t_{j}\) is the t-statistic for the \(j^{th}\) variable, computed as shown above. We then reject \(H_0\colon \boldsymbol{\mu} = \boldsymbol{\mu_0}\) if \(|t_j| > t_{n-1, \alpha/2}\) for at least one variable \(j \in \{1,2,\dots, p\}\).
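As a minimal sketch of this naive procedure (in Python, with a hypothetical data matrix X and hypothesized mean vector mu0 invented purely for illustration), the per-variable statistics and the rejection rule might be computed as follows:

```python
# Naive per-variable t-tests for H0: mu = mu0 (illustrative sketch only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 4))      # hypothetical data: n = 25 observations on p = 4 variables
mu0 = np.zeros(4)                 # hypothesized mean vector

n, p = X.shape
xbar = X.mean(axis=0)             # sample mean of each variable
s2 = X.var(axis=0, ddof=1)        # sample variance of each variable

t_j = (xbar - mu0) / np.sqrt(s2 / n)          # t-statistic for each variable j
crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1)    # t_{n-1, alpha/2}

reject_H0 = np.any(np.abs(t_j) > crit)        # naive rule: reject if any |t_j| exceeds the critical value
print(t_j.round(3), crit.round(3), reject_H0)
```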

Problem with Naive Approach

The basic problem with this naive approach is that it does not control the family-wise error rate. By definition, the family-wise error rate is the probability of rejecting at least one of the null hypotheses \(H^{(j)}_0\colon \mu_j = \mu^0_j\) when all of the \(H_{0}\)’s are true.

To understand the family-wise error rate, suppose that the variance-covariance matrix is diagonal; that is, all covariances between the variables are zero. If the data are also multivariate normally distributed, then all of the variables are independently distributed. In this case, the family-wise error rate of the naive procedure is

\(1-(1-\alpha)^p > \alpha\)

where p is the dimension of the multivariate data and \(\alpha\) is the level of significance. By definition, the family-wise error rate is equal to the probability that we reject \(H_0^{(j)}\) for at least one j, given that \(H_0^{(j)}\) is true for all j. Unless p is equal to one, this error rate is strictly greater than \(\alpha\).
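For example, with \(p = 5\) independent variables tested at \(\alpha = 0.05\), the family-wise error rate is \(1-(1-0.05)^5 \approx 0.226\); with \(p = 10\) it is roughly 0.40, far above the nominal 5% level.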

Consequence

The naive approach yields a liberal test. That is, we will tend to reject the null hypothesis more often than we should.

Bonferroni Correction

Under the Bonferroni correction, we reject the null hypothesis that the mean vector \(\boldsymbol{\mu}\) is equal to the hypothesized mean vector \(\boldsymbol{\mu_{0}}\) at level \(\alpha\) if \(|t_j| > t_{n-1, \alpha/(2p)}\), the critical value from the t-table with n - 1 degrees of freedom evaluated at \(\alpha/(2p)\), for at least one variable \(j \in \{1,2,\dots, p\}\).
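A corresponding sketch of the Bonferroni-corrected rule (same hypothetical X and mu0 as in the earlier sketch; only the level used for the critical value changes):

```python
# Bonferroni-corrected per-variable t-tests for H0: mu = mu0 (illustrative sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 4))      # hypothetical data: n = 25, p = 4
mu0 = np.zeros(4)                 # hypothesized mean vector

n, p = X.shape
t_j = (X.mean(axis=0) - mu0) / np.sqrt(X.var(axis=0, ddof=1) / n)

alpha = 0.05
crit_bonf = stats.t.ppf(1 - alpha / (2 * p), df=n - 1)   # t_{n-1, alpha/(2p)}
reject_H0 = np.any(np.abs(t_j) > crit_bonf)              # reject if any |t_j| exceeds the adjusted critical value
print(crit_bonf.round(3), reject_H0)
```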

Note! For independent data, this yields a family-wise error rate of approximately \(\alpha\). For example, the family-wise error rate is shown in the table below for different values of p. You can see that these are all close to the desired level of \(\alpha = 0.05\).

p     Family-Wise Error Rate
2     0.049375
3     0.049171
4     0.049070
5     0.049010
10    0.048890
20    0.048830
50    0.048794
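These values follow from the fact that, under independence, each of the p individual tests has size \(\alpha/p\), so the family-wise error rate of the Bonferroni procedure is \(1-(1-\alpha/p)^p\). A quick numerical check that reproduces the table:

```python
# Family-wise error rate of the Bonferroni procedure under independence.
alpha = 0.05
for p in (2, 3, 4, 5, 10, 20, 50):
    print(p, round(1 - (1 - alpha / p) ** p, 6))
```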

The problem with the Bonferroni correction, however, is that it can be quite conservative when the variables are correlated, particularly when a large number of tests is carried out. In a multivariate setting, the assumption of independence between variables is likely to be violated.

Consequence

When the Bonferroni adjustment is used for many tests with correlated variables, the true family-wise error rate can be much less than \(\alpha\). The resulting tests are conservative, with low power to detect the effects of interest.
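A rough way to see this conservativeness is by simulation. The sketch below uses a hypothetical setup (n = 25 observations on p = 10 equicorrelated normal variables with correlation 0.9, all null hypotheses true) to estimate the family-wise error rate of the Bonferroni test; the estimate typically lands well below \(\alpha = 0.05\).

```python
# Monte Carlo estimate of the Bonferroni family-wise error rate when the
# variables are strongly correlated and H0 is true (illustrative sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, alpha, rho = 25, 10, 0.05, 0.9
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # equicorrelation covariance matrix
crit = stats.t.ppf(1 - alpha / (2 * p), df=n - 1)       # Bonferroni critical value

n_sim = 10_000
false_rejections = 0
for _ in range(n_sim):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # data generated under H0
    t_j = X.mean(axis=0) / np.sqrt(X.var(axis=0, ddof=1) / n)
    false_rejections += np.any(np.abs(t_j) > crit)

print("estimated family-wise error rate:", false_rejections / n_sim)
```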

