12.3 - Principal Component Method

We consider two different methods to estimate the parameters of a factor model:

  • Principal Component Method
  • Maximum Likelihood Estimation

A third method, the principal factor method, is also available but not considered in this class.

Principal Component Method

Let \(X_i\) be a vector of observations for the \(i^{th}\) subject:

\(\mathbf{X_i} = \left(\begin{array}{c}X_{i1}\\ X_{i2}\\ \vdots \\ X_{ip}\end{array}\right)\)

\(\mathbf{S}\) denotes our sample variance-covariance matrix and is expressed as:

\(\textbf{S} = \dfrac{1}{n-1}\sum\limits_{i=1}^{n}\mathbf{(X_i - \bar{x})(X_i - \bar{x})'}\)
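To make the computation concrete, here is a minimal sketch in Python/NumPy, assuming a hypothetical \(n \times p\) data matrix X (rows are subjects, columns are variables); it computes \(\mathbf{S}\) directly from the definition above.

```python
import numpy as np

# Hypothetical data: n subjects (rows) measured on p variables (columns).
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))

# S = (1/(n-1)) * sum_i (X_i - xbar)(X_i - xbar)'
xbar = X.mean(axis=0)
centered = X - xbar
S = centered.T @ centered / (n - 1)

# np.cov with rowvar=False gives the same matrix.
assert np.allclose(S, np.cov(X, rowvar=False))
```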

This variance-covariance matrix has p eigenvalues, along with corresponding eigenvectors.

 Eigenvalues of \(\mathbf{S}\):

\(\hat{\lambda}_1, \hat{\lambda}_2, \dots, \hat{\lambda}_p\)

Eigenvectors of \(\mathbf{S}\):

\(\hat{\mathbf{e}}_1, \hat{\mathbf{e}}_2, \dots, \hat{\mathbf{e}}_p\)
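Continuing the sketch above, the eigenvalues and eigenvectors of \(\mathbf{S}\) can be obtained with NumPy's symmetric eigensolver and reordered so that \(\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \dots \ge \hat{\lambda}_p\).

```python
# np.linalg.eigh returns eigenvalues in ascending order for a symmetric
# matrix, so reverse the order to get lambda_1 >= ... >= lambda_p.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
lambdas = eigvals[order]    # lambda_hat_1, ..., lambda_hat_p
E = eigvecs[:, order]       # column j holds the eigenvector e_hat_(j+1)
```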

Recall that the variance-covariance matrix can be re-expressed in the following form as a function of the eigenvalues and the eigenvectors:

Spectral Decomposition of \(\Sigma\)

\(\Sigma = \sum_{i=1}^{p}\lambda_i \mathbf{e}_i\mathbf{e}'_i \cong \sum_{i=1}^{m}\lambda_i \mathbf{e}_i\mathbf{e}'_i = \left(\begin{array}{cccc}\sqrt{\lambda_1}\mathbf{e}_1 & \sqrt{\lambda_2}\mathbf{e}_2 &  \dots &  \sqrt{\lambda_m}\mathbf{e}_m\end{array}\right)  \left(\begin{array}{c}\sqrt{\lambda_1}\mathbf{e}'_1\\ \sqrt{\lambda_2}\mathbf{e}'_2\\ \vdots\\ \sqrt{\lambda_m}\mathbf{e}'_m\end{array}\right) = \mathbf{LL'}\)

The idea behind the principal component method is to approximate this expression. Instead of summing from 1 to p, we sum only from 1 to m, ignoring the last p - m terms, which gives the truncated approximation. Factoring that truncated sum as shown defines the matrix of factor loadings \(\mathbf{L}\) and yields the final expression \(\mathbf{LL'}\) in matrix notation.
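As a quick check in the same sketch, summing \(\lambda_i \mathbf{e}_i\mathbf{e}'_i\) over all p eigenpairs reproduces \(\mathbf{S}\) exactly, while keeping only the first m terms gives the rank-m approximation used by the principal component method (m = 2 below is an arbitrary choice for illustration).

```python
# Full spectral decomposition reproduces S exactly.
S_full = sum(lambdas[i] * np.outer(E[:, i], E[:, i]) for i in range(p))
assert np.allclose(S_full, S)

# Keeping only the first m terms gives the rank-m approximation.
m = 2
S_approx = sum(lambdas[i] * np.outer(E[:, i], E[:, i]) for i in range(m))
```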

Note! If standardized measurements are used, we replace \(\mathbf{S}\) with the sample correlation matrix \(\mathbf{R}\).

This yields the following estimator for the factor loadings:

\(\hat{l}_{ij} = \hat{e}_{ji}\sqrt{\hat{\lambda}_j}\)
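In the running sketch, the \(p \times m\) loading matrix is formed by scaling each of the first m eigenvectors by the square root of its eigenvalue; the product \(\mathbf{LL'}\) then matches the rank-m approximation formed above.

```python
# Column j of L is sqrt(lambda_j) * e_j, so element (i, j) is
# l_hat_ij = e_hat_ji * sqrt(lambda_hat_j).
L = E[:, :m] * np.sqrt(lambdas[:m])

# L L' reproduces the rank-m approximation of S.
assert np.allclose(L @ L.T, S_approx)
```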

These estimates form the matrix \(\mathbf{L}\) of factor loadings in the factor analysis, which is multiplied by its transpose in \(\mathbf{LL'}\). To estimate the specific variances, recall that our factor model for the variance-covariance matrix is

\(\boldsymbol{\Sigma} = \mathbf{LL'} + \boldsymbol{\Psi}\)

in matrix notation. Solving for \(\boldsymbol{\Psi}\), the specific variance matrix is the variance-covariance matrix minus \(\mathbf{LL'}\):

\( \boldsymbol{\Psi} = \boldsymbol{\Sigma} - \mathbf{LL'}\)

This in turn suggests that the specific variances, the diagonal elements of \(\Psi\), are estimated with this expression:

\(\hat{\Psi}_i = s^2_i - \sum\limits_{j=1}^{m}\hat{\lambda}_j \hat{e}^2_{ji}\)

We take the sample variance of the \(i^{th}\) variable and subtract the sum of its squared factor loadings (i.e., the communality).
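Completing the sketch, the estimated specific variances are the diagonal of \(\mathbf{S}\) minus the communalities, or equivalently the diagonal of \(\mathbf{S} - \mathbf{LL'}\).

```python
# Communality of variable i: sum of its squared loadings over the m factors.
communalities = np.sum(L**2, axis=1)

# Specific variance: sample variance minus communality.
psi_hat = np.diag(S) - communalities

# Equivalently, the diagonal of S - L L'.
assert np.allclose(psi_hat, np.diag(S - L @ L.T))
```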

