An ARCH (autoregressive conditionally heteroscedastic) model is a model for the variance of a time series. ARCH models are used to describe a changing, possibly volatile variance. Although an ARCH model could possibly be used to describe a gradually increasing variance over time, most often it is used in situations in which there may be short periods of increased variation. (Gradually increasing variance connected to a gradually increasing mean level might be better handled by transforming the variable.)
ARCH models were created in the context of econometric and finance problems having to do with the amount that investments or stocks increase (or decrease) per time period, so there’s a tendency to describe them as models for that type of variable. For that reason, the authors of our text suggest that the variable of interest in these problems might either be \(y_t=(x_{t}-x_{t-1})/x_{t-1}\), the proportion gained or lost since the last time, or \(log (x_t / x_{t-1})=log(x_t)-log(x_{t-1})\), the logarithm of the ratio of this time’s value to last time’s value. It’s not necessary that one of these be the primary variable of interest. An ARCH model could be used for any series that has periods of increased or decreased variance. This might, for example, be a property of residuals after an ARIMA model has been fit to the data.
The ARCH(1) Variance Model Section
Suppose that we are modeling the variance of a series \(y_t\). The ARCH(1) model for the variance of \(y_t\) is that, conditional on \(y_{t-1}\), the variance at time \(t\) is
(1) \(\text{Var}(y_t | y_{t-1}) = \sigma^2_t = \alpha_0 + \alpha_1 y^2_{t-1}\)
We impose the constraints \(\alpha_0 \ge 0\) and \(\alpha_1 \ge 0\) to avoid negative variance.
The variance at time \(t\) is connected to the value of the series at time \(t-1\). A relatively large value of \(y^2_{t-1}\) gives a relatively large value of the variance at time \(t\). This means that \(y_t\) is less predictable following a relatively large value of \(y^2_{t-1}\) than following a relatively small one.
If we assume that the series has mean = 0 (this can always be done by centering), the ARCH model could be written as
(2) \( y_t = \sigma_t \epsilon_t, \)
with \( \sigma_t = \sqrt{\alpha_0 + \alpha_1y^2_{t-1}}, \)
and \( \epsilon_t \overset{iid}{\sim} (\mu=0,\sigma^2=1)\)
For inference (and maximum likelihood estimation) we would also assume that the \(\epsilon_t\) are normally distributed.
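As a sketch, the model in equation (2) can be simulated directly in base R with normal errors. The coefficient values and seed below are illustrative assumptions, not values taken from the text:

```r
# Minimal simulation of the ARCH(1) model in equation (2), assuming
# normally distributed epsilon_t; alpha0 and alpha1 are illustrative.
set.seed(1)
n <- 300
alpha0 <- 5
alpha1 <- 0.5
y <- numeric(n)
y[1] <- rnorm(1, sd = sqrt(alpha0 / (1 - alpha1)))  # start at the unconditional sd
for (t in 2:n) {
  sigma_t <- sqrt(alpha0 + alpha1 * y[t - 1]^2)  # conditional sd at time t
  y[t] <- sigma_t * rnorm(1)                     # y_t = sigma_t * epsilon_t
}
plot.ts(y)
```

A time series plot of a series generated this way typically shows short bursts of increased variation following large values of \(y^2_{t-1}\).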
Possibly Useful Results
Two potentially useful theoretical properties of the ARCH(1) model, as written in equation (2) above, are the following:
- \(y^2_t\) has the AR(1) model \( y^2_t = \alpha_0 +\alpha_1y^2_{t-1}+\) error.
This model is causal, meaning it can be converted to a legitimate infinite-order MA, only when \(\alpha^2_1 < \frac{1}{3}\).
- \(y_t\) is white noise when \(0 \le \alpha_1 < 1\).
Example 11-1 Section
The following plot is a time series plot of a simulated series (n = 300) for the ARCH model
\(\text{Var}(y_t | y_{t-1}) = \sigma^2_t = 5 + 0.5y^2_{t-1}. \)
There are a few periods of increased variation, most notably around \(t\) = 100.
The ACF of the series just plotted follows:
No correlations are significant, so the series looks to be white noise.
The PACF (following) of the squared values has a single spike at lag 1 suggesting an AR(1) model for the squared series.
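A sketch of one way to generate and diagnose a series like the one in this example, using the simulator in the fGarch package (`garchSpec`/`garchSim`; setting `beta = 0` gives a pure ARCH(1) process). The seed is arbitrary, so the plots will resemble but not reproduce the ones above:

```r
# Sketch: simulate an ARCH(1) series with the coefficients of Example 11-1
# (omega = 5, alpha = 0.5) and examine the diagnostic plots.
library(fGarch)
spec <- garchSpec(model = list(omega = 5, alpha = 0.5, beta = 0))
y <- as.numeric(garchSim(spec, n = 300))
acf(y)     # expect no significant spikes: the series looks like white noise
acf(y^2)   # tapering ACF for the squared series
pacf(y^2)  # a single spike at lag 1 suggests AR(1) for y^2, i.e., ARCH(1)
```

The same pattern of a white-noise-like series with an AR(1)-like squared series is the signature to look for in real data.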
Generalizations Section
An ARCH(m) process is one for which the variance at time \(t\) is conditional on observations at the previous m times, and the relationship is
\(\text{Var}(y_t | y_{t-1}, \dots, y_{t-m}) = \sigma^2_t = \alpha_0 + \alpha_1y^2_{t-1} + \dots + \alpha_my^2_{t-m}.\)
With certain constraints imposed on the coefficients, the squared series \(y^2_t\) will theoretically be AR(m).
A GARCH (generalized autoregressive conditionally heteroscedastic) model uses values of the past squared observations and past variances to model the variance at time \(t\). As an example, a GARCH(1,1) is
\(\sigma^2_t = \alpha_0 + \alpha_1 y^2_{t-1} + \beta_1\sigma^2_{t-1}\)
In the GARCH notation, the first subscript refers to the order of the \(y^2\) terms on the right side, and the second subscript refers to the order of the \(\sigma^2\) terms.
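The GARCH(1,1) variance recursion above can be sketched directly in base R. The coefficients here are illustrative assumptions (note that \(\alpha_1 + \beta_1 < 1\) is needed for a finite unconditional variance, which is used as the starting value):

```r
# Sketch: simulate a GARCH(1,1) series by iterating the variance recursion
# sigma^2_t = alpha0 + alpha1 * y^2_{t-1} + beta1 * sigma^2_{t-1}.
# Coefficient values are illustrative, not taken from the text.
set.seed(2)
n <- 300
alpha0 <- 5; alpha1 <- 0.5; beta1 <- 0.3
y <- numeric(n)
sig2 <- numeric(n)
sig2[1] <- alpha0 / (1 - alpha1 - beta1)  # unconditional variance as a start
y[1] <- sqrt(sig2[1]) * rnorm(1)
for (t in 2:n) {
  sig2[t] <- alpha0 + alpha1 * y[t - 1]^2 + beta1 * sig2[t - 1]
  y[t] <- sqrt(sig2[t]) * rnorm(1)
}
plot.ts(y)
```

Because \(\sigma^2_t\) depends on \(\sigma^2_{t-1}\) as well as \(y^2_{t-1}\), high-variance periods decay more gradually than in a pure ARCH(1) model.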
Identifying an ARCH/GARCH Model in Practice Section
The best identification tool may be a time series plot of the series. It’s usually easy to spot periods of increased variation sprinkled through the series. It can be fruitful to look at the ACF and PACF of both \(y_t\) and \(y^2_t\). For instance, if \(y_t\) appears to be white noise and \(y^2_t\) appears to be AR(1), then an ARCH(1) model for the variance is suggested. If the PACF of the \(y^2_t\) suggests AR(m), then ARCH(m) may work. GARCH models may be suggested by an ARMA-type look to the ACF and PACF of \(y^2_t\). In practice, things won’t always fall into place as nicely as they did for the simulated example in this lesson. You might have to experiment with various ARCH and GARCH structures after spotting the need in the time series plot of the series.
In the book, read Example 5.4 (an AR(1)-ARCH(1) on p. 257-middle of p. 259) and Example 5.5 (GARCH(1,1) on p. 259-p. 260). R code will also be given in the homework for this week.
Example 11-2 Section
The following plot is a time series plot of a simulated series, \(x\), (n = 300) for the GARCH(1,1) model
\(\text{Var}(x_t | x_{t-1}) = \sigma^2_t = 5 + 0.5x^2_{t-1} + 0.5 \sigma^2_{t-1}\)
The ACF of the series below shows that the series looks to be white noise.
Both the ACF and the PACF of the squared series taper, an ARMA-type pattern, which suggests a GARCH(1,1) model.
Let's use the fGarch package to fit a GARCH(1,1) model to x where we center the series to work with a mean of 0 as discussed above.
install.packages("fGarch") # if not already installed
library(fGarch)
y <- x - mean(x) # center x; mean(x) = 0.5423
x.g <- garchFit(~ garch(1, 1), data = y, include.mean = FALSE)
summary(x.g)
Here is part of the output:
| Coefficient | Estimate | Std. Error | t value | Pr(>\|t\|) | |
|---|---|---|---|---|---|
| omega | 10.1994 | 3.2572 | 3.131 | 0.00174 | ** |
| alpha1 | 0.5093 | 0.1115 | 4.569 | 4.89e-06 | *** |
| beta1 | 0.3527 | 0.1040 | 3.389 | 0.00070 | *** |
This suggests the following model for \(y_t = x_t - 0.5423\):
(3) \( y_t = \sigma_t \epsilon_t, \text{ with } \sigma_t = \sqrt{10.1994 + 0.5093 y^2_{t-1} + 0.3527 \sigma^2_{t-1}}, \text{ and } \epsilon_t \sim iid N(0,1) \)
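As a sketch, the fitted conditional standard deviations implied by equation (3) can be rebuilt by iterating the variance recursion with the estimated coefficients (fGarch also returns them directly via `volatility(x.g)`, so this is mainly a check on our understanding of the model):

```r
# Sketch: reconstruct the fitted conditional sd's from the estimates in
# equation (3), starting from the implied unconditional variance.
omega <- 10.1994; a1 <- 0.5093; b1 <- 0.3527
sig2 <- numeric(length(y))
sig2[1] <- omega / (1 - a1 - b1)  # unconditional variance as a start
for (t in 2:length(y)) {
  sig2[t] <- omega + a1 * y[t - 1]^2 + b1 * sig2[t - 1]
}
sigma_hat <- sqrt(sig2)           # fitted conditional sd at each time t
plot.ts(sigma_hat)
```

A plot of `sigma_hat` should rise during the high-volatility stretches visible in the time series plot of the data.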
The fGarch summary provides the Jarque Bera Test for the null hypothesis that the residuals are normally distributed and the familiar Ljung-Box Tests. Ideally all p-values are above 0.05.
Standardized Residuals Tests
| Test | Series | Statistic | Value | p-Value |
|---|---|---|---|---|
| Jarque-Bera Test | R | Chi^2 | 3.485699 | 0.175021 |
| Shapiro-Wilk Test | R | W | 0.9908482 | 0.05851528 |
| Ljung-Box Test | R | Q(10) | 8.119547 | 0.6171609 |
| Ljung-Box Test | R | Q(15) | 10.69485 | 0.7739105 |
| Ljung-Box Test | R | Q(20) | 11.64886 | 0.9276321 |
| Ljung-Box Test | R^2 | Q(10) | 7.463461 | 0.6810856 |
| Ljung-Box Test | R^2 | Q(15) | 8.89293 | 0.8830489 |
| Ljung-Box Test | R^2 | Q(20) | 14.90667 | 0.7817228 |
Diagnostics all look okay.
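As a sketch, checks along the lines of the table above can also be run by hand in base R on the standardized residuals of the fit (`residuals(x.g, standardize = TRUE)` in fGarch):

```r
# Sketch: hand-run versions of the residual diagnostics reported by fGarch.
r <- residuals(x.g, standardize = TRUE)      # standardized residuals
Box.test(r, lag = 10, type = "Ljung-Box")    # remaining serial correlation?
Box.test(r^2, lag = 10, type = "Ljung-Box")  # remaining ARCH effects?
shapiro.test(r)                              # normality of the residuals
```

Large p-values in all three tests, as in the summary output above, indicate no evidence against the fitted GARCH(1,1) model.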