Dynamic book notes
Carine Wildeboer
June 2022
Chapter 2, Large-Sample Theory
2.1 Review of Limit Theorems
$\{z_n\}$ converges in probability to a constant $\alpha$ ($z_n \xrightarrow{p} \alpha$) if, for any $\epsilon > 0$: $\lim_{n \to \infty} P(|z_n - \alpha| > \epsilon) = 0$.
$\{z_n\}$ converges almost surely to a constant $\alpha$ ($z_n \xrightarrow{a.s.} \alpha$) if: $P(\lim_{n \to \infty} z_n = \alpha) = 1$.
$\{z_n\}$ converges in mean square to a constant $\alpha$ ($z_n \xrightarrow{m.s.} \alpha$) if: $\lim_{n \to \infty} E[(z_n - \alpha)^2] = 0$.
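As a numerical illustration (mine, not the book's): take $z_n$ to be the sample mean of $n$ i.i.d. Uniform(0, 1) draws, so $\alpha = 0.5$, and estimate $P(|z_n - \alpha| > \epsilon)$ by Monte Carlo. The sample sizes, tolerance, and seed below are arbitrary choices.

```python
import numpy as np

# Sketch: z_n = sample mean of n i.i.d. Uniform(0, 1) draws, so z_n ->p 0.5.
# For each n, estimate P(|z_n - alpha| > eps) across many replications.
rng = np.random.default_rng(0)
alpha, eps, reps = 0.5, 0.05, 2_000

for n in [10, 100, 1_000, 10_000]:
    z_n = rng.uniform(0, 1, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(z_n - alpha) > eps))  # shrinks toward 0 as n grows
```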
Lemma 2.1 Convergence in distribution and in moments:
Let $\alpha_{sn}$ be the $s$-th moment of $z_n$ and $\lim_{n \to \infty} \alpha_{sn} = \alpha_s$, where $\alpha_s$ is finite. Then:
"$z_n \xrightarrow{d} z$" $\Rightarrow$ "$\alpha_s$ is the $s$-th moment of $z$."
Lemma 2.2 Relationship among the four modes of convergence:
(a) $z_n \xrightarrow{m.s.} \alpha \Rightarrow z_n \xrightarrow{p} \alpha$. So: $z_n \xrightarrow{m.s.} z \Rightarrow z_n \xrightarrow{p} z$.
(b) $z_n \xrightarrow{a.s.} \alpha \Rightarrow z_n \xrightarrow{p} \alpha$. So: $z_n \xrightarrow{a.s.} z \Rightarrow z_n \xrightarrow{p} z$.
(c) $z_n \xrightarrow{p} \alpha \Rightarrow z_n \xrightarrow{d} \alpha$.
Lemma 2.3 Preservation of convergence for continuous transformation:
Suppose $a(\cdot)$ is a vector-valued continuous function that does not depend on $n$.
(a) $z_n \xrightarrow{p} \alpha \Rightarrow a(z_n) \xrightarrow{p} a(\alpha)$. Or: $\operatorname{plim}_{n \to \infty} a(z_n) = a(\operatorname{plim}_{n \to \infty} z_n)$.
(b) $z_n \xrightarrow{d} z \Rightarrow a(z_n) \xrightarrow{d} a(z)$.
Lemma 2.4:
(a) $x_n \xrightarrow{d} x$, $y_n \xrightarrow{p} \alpha \Rightarrow x_n + y_n \xrightarrow{d} x + \alpha$. (Slutzky's Theorem)
(b) $x_n \xrightarrow{d} x$, $y_n \xrightarrow{p} 0 \Rightarrow y_n' x_n \xrightarrow{p} 0$.
(c) $x_n \xrightarrow{d} x$, $A_n \xrightarrow{p} A \Rightarrow A_n x_n \xrightarrow{d} A x$. (Slutzky's Theorem)
(d) $x_n \xrightarrow{d} x$, $A_n \xrightarrow{p} A \Rightarrow x_n' A_n^{-1} x_n \xrightarrow{d} x' A^{-1} x$, where $A_n$ and $x_n$ are conformable and $A$ is nonsingular.
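A standard use of Lemma 2.4(c) is the t-ratio: if $\sqrt{n}(\bar{z}_n - \mu) \xrightarrow{d} N(0, \sigma^2)$ and $s_n \xrightarrow{p} \sigma$, then $\sqrt{n}(\bar{z}_n - \mu)/s_n \xrightarrow{d} N(0, 1)$. The sketch below checks this numerically; the exponential distribution, sample size, and seed are illustrative assumptions of mine.

```python
import numpy as np

# Lemma 2.4(c) sketch: x_n = sqrt(n)(zbar - mu) ->d N(0, sigma^2) and
# A_n = 1/s_n ->p 1/sigma, so the t-ratio A_n x_n ->d N(0, 1),
# even though sigma is replaced by the estimate s_n.
rng = np.random.default_rng(1)
mu, n, reps = 1.0, 2_000, 5_000          # Exponential(mu) has sigma = mu

z = rng.exponential(mu, size=(reps, n))
t = np.sqrt(n) * (z.mean(axis=1) - mu) / z.std(axis=1, ddof=1)
print(t.mean(), t.std())                 # approx 0 and 1
print(np.quantile(t, [0.025, 0.975]))    # approx -1.96 and 1.96
```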
Lemma 2.5 The Delta Method:
$\{x_n\}$ is a sequence of $K$-dimensional random vectors such that $x_n \xrightarrow{p} \beta$ and $\sqrt{n}(x_n - \beta) \xrightarrow{d} z$, and suppose $a(\cdot): \mathbb{R}^K \to \mathbb{R}^r$ has continuous first derivatives, with $A(\beta)$ denoting the $r \times K$ matrix of first derivatives evaluated at $\beta$: $A(\beta)_{(r \times K)} \equiv \frac{\partial a(\beta)}{\partial \beta'}$. Then:
$$\sqrt{n}[a(x_n) - a(\beta)] \xrightarrow{d} A(\beta) z$$
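A minimal scalar sketch of the lemma ($K = r = 1$), with parameter values chosen only for illustration: take $x_n$ to be the sample mean of i.i.d. $N(\beta, \sigma^2)$ draws and $a(x) = x^2$, so $A(\beta) = 2\beta$ and the predicted limit is $N(0, (2\beta\sigma)^2)$.

```python
import numpy as np

# Delta-method sketch with K = r = 1 and a(x) = x^2, so A(beta) = 2*beta.
# Since sqrt(n)(x_n - beta) ->d N(0, sigma^2), Lemma 2.5 predicts
# sqrt(n)(x_n**2 - beta**2) ->d N(0, (2 * beta * sigma)**2).
rng = np.random.default_rng(2)
beta, sigma, n, reps = 3.0, 2.0, 2_000, 10_000

x_n = rng.normal(beta, sigma, size=(reps, n)).mean(axis=1)
lhs = np.sqrt(n) * (x_n**2 - beta**2)
print(lhs.std())           # Monte Carlo standard deviation
print(2 * beta * sigma)    # delta-method prediction: 12.0
```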
An estimator $\hat{\theta}_n$ is consistent for $\theta$ if: $\hat{\theta}_n \xrightarrow{p} \theta$.
Asymptotic bias: $\operatorname{plim}_{n \to \infty} \hat{\theta}_n - \theta$.
A consistent estimator is asymptotically normal if: $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{d} N(0, \Sigma)$.
Chebyshev's weak LLN: $\lim_{n \to \infty} E[\bar{z}_n] = \mu$ and $\lim_{n \to \infty} \operatorname{Var}(\bar{z}_n) = 0 \Rightarrow \bar{z}_n \xrightarrow{p} \mu$.
Strong Law of Large Numbers: Let $\{z_i\}$ be i.i.d. with $E[z_i] = \mu$. Then $\bar{z}_n \xrightarrow{a.s.} \mu$.
Lindeberg-Levy CLT: Let $\{z_i\}$ be i.i.d. with $E[z_i] = \mu$ and $\operatorname{Var}(z_i) = \Sigma$. Then:
$$\sqrt{n}(\bar{z}_n - \mu) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (z_i - \mu) \xrightarrow{d} N(0, \Sigma)$$
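A scalar sketch of the CLT (my example, with an intentionally non-normal source): i.i.d. Bernoulli(0.3) data give $\mu = 0.3$ and $\Sigma = 0.3 \cdot 0.7 = 0.21$, and the standardized scaled sample mean should have roughly $N(0, 1)$ quantiles.

```python
import numpy as np

# Lindeberg-Levy CLT sketch, scalar case: z_i i.i.d. Bernoulli(p) with
# p = 0.3, so mu = p and Sigma = p * (1 - p) = 0.21.
rng = np.random.default_rng(3)
p, n, reps = 0.3, 1_000, 10_000

s = np.sqrt(n) * (rng.binomial(1, p, size=(reps, n)).mean(axis=1) - p)
print(s.std())                                  # approx sqrt(0.21) ~ 0.458
print(np.quantile(s / np.sqrt(p * (1 - p)),
                  [0.025, 0.5, 0.975]))         # approx -1.96, 0, 1.96
```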
2.2 Fundamental Concepts in Time-Series Analysis
Stochastic process: sequence of random variables.
Trend stationary: the process is stationary after subtracting from it a (deterministic) function of time.
Difference stationary: the process is not stationary, but its first difference, $z_i - z_{i-1}$, is stationary.
Covariance Stationary Processes
A stochastic process is weakly (or covariance) stationary if:
(i) $E[z_i]$ does not depend on $i$
(ii) $\operatorname{Cov}(z_i, z_{i-j})$ exists, is finite, and depends only on $j$ but not on $i$.
$j$-th order autocovariance: $\gamma_j \equiv E[(z_i - E[z_i])(z_{i-j} - E[z_{i-j}])]$
$j$-th order autocorrelation: $\rho_j \equiv \dfrac{\gamma_j}{\gamma_0} = \dfrac{\operatorname{Cov}(z_i, z_{i-j})}{\operatorname{Var}(z_i)}$.
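As a sketch (the helper names are mine, not the book's), the sample analogues replace the expectations above with time averages; applied to a simulated AR(1) with coefficient 0.8, the estimated autocorrelations come out near $0.8^j$.

```python
import numpy as np

def sample_autocov(z, j):
    """Sample analogue of gamma_j = Cov(z_i, z_{i-j}) (1/n convention)."""
    z = np.asarray(z, dtype=float)
    zbar = z.mean()
    return np.mean((z[j:] - zbar) * (z[: len(z) - j] - zbar))

def sample_autocorr(z, j):
    """Sample analogue of rho_j = gamma_j / gamma_0."""
    return sample_autocov(z, j) / sample_autocov(z, 0)

# AR(1) check: z_i = 0.8 z_{i-1} + e_i has rho_j = 0.8**j.
rng = np.random.default_rng(4)
n = 10_000
e, z = rng.normal(size=n), np.zeros(n)
for i in range(1, n):
    z[i] = 0.8 * z[i - 1] + e[i]
print([round(sample_autocorr(z, j), 3) for j in (1, 2, 3)])  # near 0.8, 0.64, 0.512
```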
White Noise Process
Process with zero mean and no serial correlation:
(i) $E[z_i] = 0$ and (ii) $\operatorname{Cov}(z_i, z_{i-j}) = 0$ for $j \neq 0$.
Ergodic Theorem: If the process $\{z_i\}$ is stationary and ergodic with $E[z_i] = \mu$, then:
$$\bar{z}_n \equiv \frac{1}{n} \sum_{i=1}^{n} z_i \xrightarrow{a.s.} \mu$$
A vector process $\{z_i\}$ is a martingale if: $E[z_i \mid z_{i-1}, \ldots, z_1] = z_{i-1}$ for $i \geq 2$.
Random walk: the cumulative sum of a white noise process $\{g_i\}$: $z_1 = g_1$, $z_2 = g_1 + g_2$, ..., $z_i = g_1 + g_2 + \cdots + g_i$.
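A short sketch (my example, with Gaussian increments) of why a random walk is difference stationary but not covariance stationary: $\operatorname{Var}(z_i)$ grows linearly with $i$, while the first difference is just the white noise $g_i$.

```python
import numpy as np

# Random walk sketch: z_i = g_1 + ... + g_i with N(0, 1) white-noise steps.
rng = np.random.default_rng(5)
g = rng.normal(size=(10_000, 500))   # 10,000 paths, 500 white-noise steps
z = g.cumsum(axis=1)                 # each row is one random-walk path

print(z[:, 99].var(), z[:, 499].var())  # approx 100 and 500: Var(z_i) = i
print(np.diff(z, axis=1).var())         # approx 1: first difference is stationary
```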
Martingale difference sequence: a process $\{g_i\}$ with $E[g_i] = 0$ whose conditional expectation on its past values equals zero: $E[g_i \mid g_{i-1}, g_{i-2}, \ldots, g_1] = 0$ for $i \geq 2$.
ARCH processes: an example of martingale differences; ARCH stands for autoregressive conditional heteroskedastic. A process is said to be ARCH(1) if:
$$g_i = \sqrt{\zeta + \alpha g_{i-1}^2} \, \epsilon_i$$
where $\{\epsilon_i\}$ is i.i.d. with zero mean and unit variance (e.g., standard normal).
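A hedged simulation of the recursion (the parameter values $\zeta = 0.5$, $\alpha = 0.4$ and standard-normal $\epsilon_i$ are my choices): the level $g_i$ shows no first-order autocorrelation, consistent with the martingale-difference property, while $g_i^2$ is autocorrelated, which is the conditional heteroskedasticity.

```python
import numpy as np

# ARCH(1) sketch: g_i = sqrt(zeta + alpha * g_{i-1}**2) * eps_i, with
# illustrative zeta = 0.5, alpha = 0.4 and i.i.d. N(0, 1) eps_i.
rng = np.random.default_rng(6)
zeta, alpha, n = 0.5, 0.4, 100_000
eps = rng.normal(size=n)
g = np.zeros(n)
for i in range(1, n):
    g[i] = np.sqrt(zeta + alpha * g[i - 1] ** 2) * eps[i]

def acorr1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return np.mean(x[1:] * x[:-1]) / np.mean(x * x)

print(acorr1(g))       # near 0: g_i is a martingale difference
print(acorr1(g**2))    # near alpha: squared values cluster over time
```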
Various classes of stochastic processes:
1. Stationary
2. Covariance Stationary
3. White Noise
4. Ergodic