Tilburg University
Master Program
Summary: Time Series Analysis (TSA)
Author: Rick Smeets
Supervisor: Kojevnikov, D.
May 30, 2024
Table of Contents

1 Stochastic Processes
  1.1 Time Series Data
  1.2 Second-order Properties: Weak Stationarity
  1.3 Construction of Stochastic Processes
2 Asymptotic Results
  2.1 Linear Processes
  2.2 Estimation of the Mean and the ACVF
    2.2.1 Estimation of µX
    2.2.2 Estimation of γX
    2.2.3 Estimation of v
3 ARMA Models
  3.1 The Lag Operator, Lag Polynomials
  3.2 AR and MA Polynomials
  3.3 Causality and Invertibility of ARMA Processes
4 Forecasts, the Wold Decomposition
  4.1 Forecasting Stationary Time Series
  4.2 Forecasting From the Infinite Past
  4.3 The Wold Decomposition
5 Estimation of ARMA Models Part I
  5.1 The ACF and PACF of an ARMA Process
  5.2 The Autocorrelation Function
  5.3 The Partial Autocorrelation Function
  5.4 Interpretation of ACF and PACF
6 Estimation of ARMA Models Part II
  6.1 The Yule-Walker Estimator (Method of Moments)
  6.2 Maximum Likelihood Estimation
  6.3 Diagnostic Checking and Order Selection
7 Models of Volatility
  7.1 GARCH Models
  7.2 Estimation of GARCH Models
  7.3 Forecasting Volatility
8 Generalizations of the ARMA model
  8.1 Integrated Processes
    8.1.1 Forecasting Integrated Processes
  8.2 Unit Root Tests
    8.2.1 Unit Root in Autoregressions
    8.2.2 Unit Root in Moving-Averages
  8.3 Seasonal ARIMA Models
    8.3.1 Forecasting Seasonal ARIMA Processes
1 Stochastic Processes
1.1 Time Series Data
In this course we are concerned with data whose observations are recorded
at discrete time intervals. Such a dataset is referred to as a time series.
Formally, we regard a regular time series as a particular realization of a
(discrete-time) stochastic process, i.e., the observation xt at time t is a
realization of a random variable Xt. Modelling the data this way accounts for
the unpredictable nature of future observations. Let T denote a set of time
points, e.g., T = Z = {…, −2, −1, 0, 1, 2, …} or T = N = {1, 2, 3, …}.
Definition 1.1. A stochastic process is a family of random variables {Xt :
t ∈ T} defined on a common probability space (Ω, F, P). Recall that Ω is
the sample space (possible outcomes), F is a collection of events (F ⊂ 2^Ω),
and P is a probability measure. A random variable X defined on this space
is a function X : Ω → R.
When the index set T is obvious, we write {Xt} for short. If {Xt} is a
stochastic process, Xt is a random variable for each t ∈ T. For example,
{Xt : t ∈ N} is an infinite sequence of random variables, and {Xt : t ∈ Z}
is a doubly infinite sequence of random variables. For a particular outcome
ω ∈ Ω, we obtain a realization of the whole process by varying the time
index t.
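To illustrate, the following sketch (not part of the original notes; it uses NumPy and treats a random seed as a stand-in for the outcome ω) shows how fixing ω and varying t yields one realization of the process, while a different ω yields a different one:

    import numpy as np

    # A stand-in for Xt(omega): here Xt is taken to be i.i.d. N(0, 1) noise.
    # Fixing the seed plays the role of fixing the outcome omega; varying t
    # then traces out one realization of the whole process.
    def realization(omega_seed, n=10):
        rng = np.random.default_rng(omega_seed)
        return rng.normal(loc=0.0, scale=1.0, size=n)  # (X1(omega), ..., Xn(omega))

    path_a = realization(omega_seed=1)  # realization for one outcome omega
    path_b = realization(omega_seed=2)  # a different outcome, a different path
    print(path_a)
    print(path_b)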
Definition 1.2. For a given outcome ω ∈ Ω, the function t ↦ Xt(ω) is called
a sample path of the stochastic process {Xt}. In other words, for a specific
outcome ω, the sample path records the realized values Xt(ω) as time t
evolves.
Thus, {Xt} can be described as the collection of all possible sample paths or
trajectories, realized according to the underlying probability space. Hence-
forth, for a process {Xt} consisting of i.i.d. random variables with mean µ
and finite variance σ², we write Xt ∼ IID(µ, σ²).
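The IID(µ, σ²) notation can be illustrated by simulation. The sketch below (Gaussian draws are an arbitrary choice here; the notation itself does not restrict the distribution) checks that the sample mean and sample variance of a long i.i.d. path are close to µ and σ²:

    import numpy as np

    mu, sigma2, n = 2.0, 4.0, 100_000                      # Xt ~ IID(mu, sigma^2)
    rng = np.random.default_rng(0)
    x = rng.normal(loc=mu, scale=np.sqrt(sigma2), size=n)  # one long sample path

    print(x.mean())  # close to mu = 2.0
    print(x.var())   # close to sigma^2 = 4.0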