GARCH 101: The Use of ARCH/GARCH Models in Applied Econometrics

Robert Engle is the Michael Armellino Professor of Finance, Stern School of Business, New York University, New York, New York, and Chancellor’s Associates Professor of Economics, University of California at San Diego, La Jolla, California.

Journal of Economic Perspectives—Volume 15, Number 4—Fall 2001—Pages 157–168
The great workhorse of applied econometrics is the least squares model. This is a natural choice, because applied econometricians are typically called upon to determine how much one variable will change in response to a change in some other variable. Increasingly, however, econometricians are
being asked to forecast and analyze the size of the errors of the model. In this case,
the questions are about volatility, and the standard tools have become the ARCH/
GARCH models.
The basic version of the least squares model assumes that the expected value
of all error terms, when squared, is the same at any given point. This assumption is
called homoskedasticity, and it is this assumption that is the focus of ARCH/
GARCH models. Data in which the variances of the error terms are not equal, in
which the error terms may reasonably be expected to be larger for some points or
ranges of the data than for others, are said to suffer from heteroskedasticity. The
standard warning is that in the presence of heteroskedasticity, the regression
coefficients for an ordinary least squares regression are still unbiased, but the
standard errors and confidence intervals estimated by conventional procedures will
be too narrow, giving a false sense of precision. Instead of considering this as a
problem to be corrected, ARCH and GARCH models treat heteroskedasticity as a
variance to be modeled. As a result, not only are the deficiencies of least squares
corrected, but a prediction is computed for the variance of each error term. This
prediction turns out often to be of interest, particularly in applications in finance.
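A small simulation makes the point concrete. The sketch below is a minimal illustration in Python with numpy; the data-generating process, in which the error standard deviation grows with the regressor, is hypothetical. It draws repeated samples and confirms that the ordinary least squares slope stays centered on the true value while the conventional standard error formula understates the actual sampling variability:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 200, 2000
true_slope = 1.0
slopes, naive_ses = [], []

for _ in range(n_sims):
    x = rng.uniform(1.0, 10.0, n)
    # Heteroskedastic by construction: error sd is proportional to x.
    e = rng.normal(0.0, x)
    y = true_slope * x + e
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)
    # Conventional covariance formula, valid only under homoskedasticity.
    cov = s2 * np.linalg.inv(X.T @ X)
    slopes.append(coef[1])
    naive_ses.append(np.sqrt(cov[1, 1]))

print("mean slope estimate:  ", np.mean(slopes))     # centered on 1.0: unbiased
print("sd of slope estimates:", np.std(slopes))      # the true sampling variability
print("mean conventional SE: ", np.mean(naive_ses))  # typically smaller: too narrow
```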
The warnings about heteroskedasticity have usually been applied only to
cross-section models, not to time series models. For example, if one looked at the
cross-section relationship between income and consumption in household data,
one might expect to find that the consumption of low-income households is more
closely tied to income than that of high-income households, because the dollars of
savings or deficit by poor households are likely to be much smaller in absolute value
than those of high-income households. In a cross-section regression of household consump-
tion on income, the error terms seem likely to be systematically larger in absolute
value for high-income than for low-income households, and the assumption of
homoskedasticity seems implausible. In contrast, if one looked at an aggregate time
series consumption function, comparing national income to consumption, it seems
more plausible to assume that the variance of the error terms doesn’t change much
over time.
A recent development in estimation of standard errors, known as “robust
standard errors,” has also reduced the concern over heteroskedasticity. If the
sample size is large, then robust standard errors give quite a good estimate of
standard errors even with heteroskedasticity. If the sample is small, the need for a
heteroskedasticity correction that does not affect the coefficients, and only asymp-
totically corrects the standard errors, can be debated.
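To illustrate, the following sketch (assuming the statsmodels package is available; the heteroskedastic data are simulated, not drawn from any real application) fits the same regression with conventional and with Eicker-Huber-White robust standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 1.0 * x + rng.normal(0.0, x)   # error sd proportional to x: heteroskedastic
X = sm.add_constant(x)

conventional = sm.OLS(y, X).fit()            # homoskedasticity-based standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")    # Eicker-Huber-White robust standard errors

print("conventional SE of slope:", conventional.bse[1])
print("robust SE of slope:      ", robust.bse[1])
```

Both fits return identical coefficient estimates; only the standard errors differ, which is precisely the sense in which the robust correction does not affect the coefficients.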
However, sometimes the natural question facing the applied econometrician is
the accuracy of the predictions of the model. In this case, the key issue is the
variance of the error terms and what makes them large. This question often arises
in financial applications where the dependent variable is the return on an asset or
portfolio and the variance of the return represents the risk level of those returns.
These are time series applications, but it is nonetheless likely that heteroskedasticity
is an issue. Even a cursory look at financial data suggests that some time periods are
riskier than others; that is, the expected value of the magnitude of error terms at
some times is greater than at others. Moreover, these risky times are not scattered
randomly across quarterly or annual data. Instead, there is a degree of autocorre-
lation in the riskiness of financial returns. Financial analysts, looking at plots of
daily returns such as in Figure 1, notice that the amplitude of the returns varies over
time and describe this as “volatility clustering.” The ARCH and GARCH models,
which stand for autoregressive conditional heteroskedasticity and generalized autore-
gressive conditional heteroskedasticity, are designed to deal with just this set of
issues. They have become widespread tools for dealing with time series heteroske-
dastic models. The goal of such models is to provide a volatility measure—like a
standard deviation—that can be used in financial decisions concerning risk analy-
sis, portfolio selection and derivative pricing.
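The volatility clustering just described is easy to reproduce in a simulated series. The sketch below (parameter values are purely illustrative) generates returns from a GARCH(1,1) process, in which the conditional variance depends on the previous squared return and the previous variance, and verifies that squared returns are autocorrelated, the statistical footprint of clustering:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
omega, alpha, beta = 0.05, 0.10, 0.85  # illustrative GARCH(1,1) parameters
h = np.empty(T)
r = np.empty(T)
h[0] = omega / (1 - alpha - beta)      # start at the unconditional variance

for t in range(T):
    if t > 0:
        # Conditional variance: a constant plus weights on yesterday's
        # squared return and yesterday's variance.
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# Volatility clustering shows up as positive autocorrelation in squared returns.
sq = r ** 2
autocorr = np.corrcoef(sq[:-1], sq[1:])[0, 1]
print("first-order autocorrelation of squared returns:", round(autocorr, 3))
```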
ARCH/GARCH Models
Because this paper will focus on financial applications, we will use financial
notation. Let the dependent variable be labeled r_t, which could be the return on an
asset or portfolio. The mean value m_t and the variance h_t will be defined relative to
a past information set. Then, the return r_t in the present will be equal to the mean
value of r_t (that is, the expected value of r_t based on past information) plus the
standard deviation of r_t (that is, the square root of the variance) times the error
term for the present period.
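In symbols, with \varepsilon_t denoting a standardized error term with mean zero and unit variance and \mathcal{F}_{t-1} the past information set (notation introduced here for concreteness), the decomposition just described is

```latex
r_t = m_t + \sqrt{h_t}\,\varepsilon_t ,
\qquad m_t = \mathrm{E}\!\left[ r_t \mid \mathcal{F}_{t-1} \right] ,
\qquad h_t = \mathrm{Var}\!\left( r_t \mid \mathcal{F}_{t-1} \right) .
```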