# 14.5.4 - Generalized Least Squares


Weighted least squares can also be used to correct for autocorrelation by choosing an appropriate weighting matrix, although the method required is more general than weighted least squares. Weighted least squares uses a diagonal weighting matrix to correct for non-constant variance. In a model with correlated errors, however, the errors have a more complicated variance-covariance structure (such as $$\Sigma(1)$$ given earlier for the AR(1) model). The weighting matrix for such a structure is non-diagonal, which leads to the method of generalized least squares, of which weighted least squares is a special case. In particular, when $$\mbox{Var}(\textbf{Y})=\mbox{Var}(\pmb{\epsilon})=\Omega$$, the objective is to find a matrix $$\Lambda$$ such that:

$$\begin{equation*} \mbox{Var}(\Lambda\textbf{Y})=\Lambda\Omega\Lambda^{\textrm{T}}=\sigma^{2}\textbf{I}_{n\times n}. \end{equation*}$$

The generalized least squares estimator (sometimes called the Aitken estimator) takes $$\Lambda=\sigma\Omega^{-1/2}$$ and is given by

\begin{align*} \hat{\beta}_{\textrm{GLS}}&=\arg\min_{\beta}\|\Lambda(\textbf{Y}-\textbf{X}\beta)\|^{2} \\ &=(\textbf{X}^{\textrm{T}}\Omega^{-1}\textbf{X})^{-1}\textbf{X}^{\textrm{T}}\Omega^{-1}\textbf{Y}. \end{align*}
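As a minimal numerical sketch, the GLS estimator can be computed directly from the closed form $(\textbf{X}^{\textrm{T}}\Omega^{-1}\textbf{X})^{-1}\textbf{X}^{\textrm{T}}\Omega^{-1}\textbf{Y}$. The AR(1) covariance structure, the true coefficients, and the sample size below are illustrative assumptions, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
beta_true = np.array([1.0, 2.0])

# Assumed AR(1) error covariance: Omega[i, j] = rho**|i - j| (sigma^2 factored out)
rho = 0.7
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])

# Generate correlated errors via the Cholesky factor of Omega
L = np.linalg.cholesky(Omega)
y = X @ beta_true + L @ rng.normal(size=n)

# GLS closed form: (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
Omega_inv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# Ordinary least squares, for comparison
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Equivalently, multiplying both $\textbf{Y}$ and $\textbf{X}$ by $\Lambda$ "whitens" the errors, so ordinary least squares on the transformed data reproduces the GLS estimate.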

There is, however, no general way to choose the weighting matrix $$\Omega$$ optimally. Unrestricted, $$\Omega$$ contains $$n(n+1)/2$$ parameters, which is too many to estimate from $$n$$ observations, so restrictions must be imposed (for example, the AR(1) structure reduces $$\Omega$$ to a single correlation parameter). The estimate depends heavily on the assumed restrictions, which makes generalized least squares difficult to use unless you are savvy (and lucky) enough to choose helpful restrictions.
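One common restriction is the AR(1) structure mentioned above, under which $$\Omega$$ depends on a single parameter estimated from the data (a "feasible" GLS procedure). The sketch below is an illustrative two-step version, not a prescribed method: the helper name, simulation settings, and the lag-1 autocorrelation estimator of $$\rho$$ are all assumptions:

```python
import numpy as np

def fgls_ar1(X, y):
    """Feasible GLS assuming AR(1) errors, so Omega is parameterized by one rho."""
    n = len(y)
    # Step 1: ordinary least squares to obtain residuals
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta_ols
    # Step 2: estimate rho by the lag-1 autocorrelation of the residuals
    rho = (r[:-1] @ r[1:]) / (r @ r)
    # Step 3: build Omega from the single parameter and apply the GLS formula
    idx = np.arange(n)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :])
    Omega_inv = np.linalg.inv(Omega)
    beta = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
    return beta, rho

# Illustrative usage on simulated data with AR(1) errors (rho = 0.6)
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + e
beta_hat, rho_hat = fgls_ar1(X, y)
```

Restricting $$\Omega$$ to one parameter makes estimation tractable, but the resulting estimate is only as good as the assumed AR(1) structure.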
