Summary Advanced Econometrics (FEB23016)

Comprehensive summary of Advanced Econometrics (Econometrics, EUR)
Topic 1: System of Equations
Seemingly unrelated regressions (SUR)
$$y_{i1} = x_{i1}\beta_1 + u_{i1},$$
$$\vdots$$
$$y_{iG} = x_{iG}\beta_G + u_{iG},$$
for $i = 1, \dots, N$, where, for $g = 1, \dots, G$, $x_{ig} = (x_{g1,i}, x_{g2,i}, \dots, x_{gK_g,i})$ is $1 \times K_g$ and $\beta_g$ is $K_g \times 1$. $K_g$ is the number of regressors in the $g$-th equation, and $x_{ig}$ can be the same or different across equations. If we define $Y_i = (y_{i1}, \dots, y_{iG})'$, $u_i = (u_{i1}, \dots, u_{iG})'$, $\beta = (\beta_1', \dots, \beta_G')'$ and
$$X_i = \begin{pmatrix} x_{i1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_{iG} \end{pmatrix},$$
then the system of equations can be written as $Y_i = X_i\beta + u_i$.
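A minimal sketch of this stacking, using simulated data with hypothetical names (two equations, correlated errors); the lists `X` and `Y` of per-observation matrices $X_i$ and vectors $Y_i$ are reused in the sketches further below.

```python
# Stack a SUR system into Y_i = X_i beta + u_i (illustrative simulation, not from the course).
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
N, K1, K2 = 500, 2, 3

x1 = rng.normal(size=(N, K1))                      # equation 1 regressors, x_{i1} is 1 x K1
x2 = rng.normal(size=(N, K2))                      # equation 2 regressors, x_{i2} is 1 x K2
beta1 = np.array([1.0, -0.5])
beta2 = np.array([0.3, 0.7, 2.0])
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 2.0]], size=N)  # cross-equation correlation

y1 = x1 @ beta1 + u[:, 0]
y2 = x2 @ beta2 + u[:, 1]

# X_i is the G x (K1 + K2) block-diagonal matrix, Y_i the G x 1 vector of dependent variables.
X = [block_diag(x1[i][None, :], x2[i][None, :]) for i in range(N)]
Y = [np.array([y1[i], y2[i]]) for i in range(N)]
```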
System OLS estimator
The system OLS estimator is
$$\hat{\beta}_{SOLS} = \Big(N^{-1}\sum_{i=1}^{N} X_i'X_i\Big)^{-1}\Big(N^{-1}\sum_{i=1}^{N} X_i'Y_i\Big).$$
The system OLS estimator is consistent for large $N$ and fixed $G$ if two conditions hold:
- SOLS.A1: $E(X_i'u_i) = 0$
- SOLS.A2: $E(X_i'X_i)$ is nonsingular.
$E(X_i'u_i) = 0 \iff E(x_{ig}'u_{ig}) = 0$ for $g = 1, \dots, G$. In each equation, errors and regressors are uncorrelated, but the errors in one equation, $u_{ig}$, can be correlated with the regressors in another equation, $x_{ih}$, for $h \neq g$.
$E(X_i'X_i)$ is nonsingular $\iff E(x_{ig}'x_{ig})$ is nonsingular for all $g$, so there is no multicollinearity in any equation.
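A minimal sketch of this estimator, assuming the lists `X` and `Y` built in the stacking sketch above (hypothetical names, not from the course); the $N^{-1}$ factors cancel, so plain sums are used.

```python
import numpy as np

def sols(X, Y):
    """System OLS: beta_hat_SOLS = (sum_i X_i'X_i)^(-1) (sum_i X_i'Y_i)."""
    Sxx = sum(Xi.T @ Xi for Xi in X)
    Sxy = sum(Xi.T @ Yi for Xi, Yi in zip(X, Y))
    return np.linalg.solve(Sxx, Sxy)
```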
Normality and SOLS
Under assumptions SOLS.A1 and SOLS.A2, we have $\sqrt{N}\big(\hat{\beta}_{SOLS} - \beta\big) \sim N(0, \Sigma)$, where
$$\Sigma = [E(X_i'X_i)]^{-1} E(X_i'u_iu_i'X_i) [E(X_i'X_i)]^{-1}.$$
The asymptotic variance is $var\big(\hat{\beta}_{SOLS}\big) = \Sigma/N$. A consistent estimator of this variance is
$$\hat{V}_{SOLS} = \Big(\sum_{i=1}^{N} X_i'X_i\Big)^{-1}\Big(\sum_{i=1}^{N} X_i'\hat{u}_i\hat{u}_i'X_i\Big)\Big(\sum_{i=1}^{N} X_i'X_i\Big)^{-1}$$
(see the sketch after the list below). It is a robust variance estimator in the sense that:
- The unconditional variance of the errors, $E(u_iu_i')$, is entirely unrestricted, so it allows cross-equation correlation as well as different error variances in each equation.
- The conditional variance of the errors, $E(u_iu_i'|X_i)$, can depend on $X_i$ in an arbitrary, unknown fashion.
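A minimal sketch of $\hat{V}_{SOLS}$, under the same assumed data layout (lists `X`, `Y`) and with `beta_hat` the SOLS estimate from the sketch above.

```python
import numpy as np

def sols_robust_variance(X, Y, beta_hat):
    """Sandwich estimator V_hat_SOLS = (sum X'X)^(-1) (sum X'uu'X) (sum X'X)^(-1)."""
    Sxx_inv = np.linalg.inv(sum(Xi.T @ Xi for Xi in X))
    meat = sum(Xi.T @ np.outer(Yi - Xi @ beta_hat, Yi - Xi @ beta_hat) @ Xi
               for Xi, Yi in zip(X, Y))
    return Sxx_inv @ meat @ Sxx_inv
```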
Hypothesis testing
- Single coefficient, $H_0: \beta_{gk} = 0$: $t = \hat{\beta}_{gk,SOLS}/se\big(\hat{\beta}_{gk,SOLS}\big) \sim N(0,1)$, where $se\big(\hat{\beta}_{gk,SOLS}\big)$ is the square root of the $gk$-th diagonal element of $\hat{V}_{SOLS}$.
- Multiple coefficients, $H_0: R\beta = r$: Wald $= \big(R\hat{\beta}_{SOLS} - r\big)'\big(R\hat{V}_{SOLS}R'\big)^{-1}\big(R\hat{\beta}_{SOLS} - r\big) \sim \chi^2(Q)$, where $Q$ is the number of restrictions. Both statistics are sketched after this list.
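A minimal sketch of both test statistics, assuming `beta_hat` and `V_hat` come from the hypothetical `sols` and `sols_robust_variance` helpers above.

```python
import numpy as np
from scipy import stats

def t_stat(beta_hat, V_hat, j):
    """t statistic for H0: beta_j = 0; compare with N(0,1) critical values."""
    return beta_hat[j] / np.sqrt(V_hat[j, j])

def wald_stat(beta_hat, V_hat, R, r):
    """Wald statistic for H0: R beta = r; chi-squared with Q = number of rows of R."""
    d = R @ beta_hat - r
    W = d @ np.linalg.solve(R @ V_hat @ R.T, d)
    return W, stats.chi2.sf(W, df=R.shape[0])   # statistic and p-value
```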
System OLS vs. equation-by-equation OLS
SOLS estimation of a SUR model without restrictions on the parameters $\beta_g$ is equivalent to OLS equation by equation. The equation-by-equation OLS estimator cannot incorporate cross-equation restrictions, while this is possible with SOLS.
Systems with cross-equation restrictions
If a regressor appearing in equations for different dependent variables should have the same coefficient, this can be imposed in two ways:
- Write the regressors and coefficient vector in the usual way and construct a restriction matrix $R$ such that $R\beta = 0$. Then compute the restricted estimator of $\beta$ as $\tilde{\beta}_{SOLS} = \hat{\beta}_{SOLS} - (X'X)^{-1}R'\big(R(X'X)^{-1}R'\big)^{-1}\big(R\hat{\beta}_{SOLS}\big)$, where $\hat{\beta}_{SOLS}$ is the unrestricted estimator (see the sketch after this list).
- Write the regressor for the different dependent variables in the same column but in a different row. The coefficient vector then needs one coefficient less, and $\hat{\beta}_{SOLS}$ can be computed using the standard formula.
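A minimal sketch of the restricted estimator, with `Sxx` standing in for $X'X$ (i.e. $\sum_i X_i'X_i$) and `beta_hat` the unrestricted SOLS estimate; the names are hypothetical.

```python
import numpy as np

def restricted_sols(beta_hat, Sxx, R):
    """Restricted SOLS under R beta = 0: beta_hat - (X'X)^(-1) R' (R (X'X)^(-1) R')^(-1) R beta_hat."""
    Sxx_inv_Rt = np.linalg.solve(Sxx, R.T)                 # (X'X)^(-1) R'
    correction = Sxx_inv_Rt @ np.linalg.solve(R @ Sxx_inv_Rt, R @ beta_hat)
    return beta_hat - correction
```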

Kronecker product ⊗
The Kronecker product is an operation on two matrices of arbitrary size resulting in a block matrix. If
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \quad \text{then} \quad A \otimes B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix}.$$
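A small illustrative example (not from the text) using numpy's built-in Kronecker product:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(np.kron(A, B))   # each entry a_jk of A is replaced by the block a_jk * B, giving a 4 x 4 matrix
```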
System GLS estimator
For SGLS we need other assumptions:
- SGLS.A1: $E(X_i \otimes u_i) = 0$. This means that each element of $u_i$ is uncorrelated with each element of $X_i$, i.e. errors and regressors are uncorrelated within and across equations.
- SGLS.A2: $\Omega \equiv E(u_iu_i')$ is positive definite (positive error variances) and $E(X_i'\Omega^{-1}X_i)$ is nonsingular (not satisfied if $\sum_{g=1}^{G} y_{ig} =$ constant).
If SGLS.A1 and SGLS.A2 hold, we can use SGLS. It is obtained similarly to GLS in the single-equation model:
$$\hat{\beta}_{SGLS} = \Big(N^{-1}\sum_{i=1}^{N} X_i'\Omega^{-1}X_i\Big)^{-1}\Big(N^{-1}\sum_{i=1}^{N} X_i'\Omega^{-1}Y_i\Big),$$
which is consistent.
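A minimal sketch of the SGLS estimator for a known $G \times G$ error covariance matrix $\Omega$, again assuming the lists `X` and `Y` from the stacking sketch above.

```python
import numpy as np

def sgls(X, Y, Omega):
    """System GLS: beta_hat_SGLS = (sum X_i' Omega^(-1) X_i)^(-1) (sum X_i' Omega^(-1) Y_i)."""
    Oinv = np.linalg.inv(Omega)
    Sxox = sum(Xi.T @ Oinv @ Xi for Xi in X)
    Sxoy = sum(Xi.T @ Oinv @ Yi for Xi, Yi in zip(X, Y))
    return np.linalg.solve(Sxox, Sxoy)
```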
Normality and SGLS
Under assumptions SGLS.A1 and SGLS.A2, we have $\sqrt{N}\big(\hat{\beta}_{SGLS} - \beta\big) \sim N(0, \Sigma_{SGLS})$, where
$$\Sigma_{SGLS} = [E(X_i'\Omega^{-1}X_i)]^{-1} E(X_i'\Omega^{-1}u_iu_i'\Omega^{-1}X_i)[E(X_i'\Omega^{-1}X_i)]^{-1}.$$
The asymptotic variance of $\hat{\beta}_{SGLS}$ is $var\big(\hat{\beta}_{SGLS}\big) = \Sigma_{SGLS}/N$.
Feasible GLS
SGLS requires $\Omega$, which in most applications is not known. Hence we use FGLS estimation, in which the unknown matrix $\Omega$ is replaced with a consistent estimator (a sketch follows the steps below):
1. Apply SOLS estimation and obtain the SOLS residuals $\hat{u}_i$.
2. Consistently estimate $\Omega$ by $\hat{\Omega} = N^{-1}\sum_{i=1}^{N} \hat{u}_i\hat{u}_i'$.
3. Given $\hat{\Omega}$, the FGLS estimator of $\beta$ is $\hat{\beta}_{FGLS} = \Big(N^{-1}\sum_{i=1}^{N} X_i'\hat{\Omega}^{-1}X_i\Big)^{-1}\Big(N^{-1}\sum_{i=1}^{N} X_i'\hat{\Omega}^{-1}Y_i\Big)$.
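A minimal sketch of the three FGLS steps in one function, under the same assumed data layout (lists `X`, `Y` of per-observation $X_i$ and $Y_i$):

```python
import numpy as np

def fgls(X, Y):
    # 1. System OLS and its residuals u_hat_i = Y_i - X_i * beta_hat_SOLS
    beta_sols = np.linalg.solve(sum(Xi.T @ Xi for Xi in X),
                                sum(Xi.T @ Yi for Xi, Yi in zip(X, Y)))
    resid = [Yi - Xi @ beta_sols for Xi, Yi in zip(X, Y)]
    # 2. Omega_hat = N^(-1) * sum_i u_hat_i u_hat_i'
    Omega_hat = sum(np.outer(ui, ui) for ui in resid) / len(Y)
    # 3. FGLS: the SGLS formula with Omega replaced by Omega_hat
    Oinv = np.linalg.inv(Omega_hat)
    beta_fgls = np.linalg.solve(sum(Xi.T @ Oinv @ Xi for Xi in X),
                                sum(Xi.T @ Oinv @ Yi for Xi, Yi in zip(X, Y)))
    return beta_fgls, Omega_hat
```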
FGLS vs. SGLS
FGLS and SGLS are asymptotically equivalent. This implies that:
- The FGLS estimator is consistent, like SGLS.
- The FGLS estimator follows a normal distribution asymptotically.
- In finite samples, especially with a small sample size $N$, the actual distribution of FGLS is likely to be non-normal.
$$\hat{V}_{FGLS} = \Big(\sum_{i=1}^{N} X_i'\hat{\Omega}^{-1}X_i\Big)^{-1}\Big(\sum_{i=1}^{N} X_i'\hat{\Omega}^{-1}\hat{u}_i^{*}\hat{u}_i^{*\prime}\hat{\Omega}^{-1}X_i\Big)\Big(\sum_{i=1}^{N} X_i'\hat{\Omega}^{-1}X_i\Big)^{-1}$$
is a consistent (and robust) estimator of $var\big(\hat{\beta}_{FGLS}\big)$, where $\hat{u}_i^{*} = Y_i - X_i\hat{\beta}_{FGLS}$ is the FGLS residual.
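A minimal sketch of this robust FGLS variance estimator, assuming `beta_fgls` and `Omega_hat` come from the hypothetical `fgls` helper above.

```python
import numpy as np

def fgls_robust_variance(X, Y, beta_fgls, Omega_hat):
    """Sandwich estimator using the FGLS residuals u_star_i = Y_i - X_i beta_fgls."""
    Oinv = np.linalg.inv(Omega_hat)
    bread = np.linalg.inv(sum(Xi.T @ Oinv @ Xi for Xi in X))
    meat = sum(Xi.T @ Oinv @ np.outer(Yi - Xi @ beta_fgls, Yi - Xi @ beta_fgls) @ Oinv @ Xi
               for Xi, Yi in zip(X, Y))
    return bread @ meat @ bread
```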
FGLS vs. SOLS
FGLS is asymptotically more efficient than SOLS if one additional assumption holds:
- SGLS.A3: $E(X_i'\Omega^{-1}u_iu_i'\Omega^{-1}X_i) = E(X_i'\Omega^{-1}X_i)$ (errors homoskedastic in each equation).
If the errors are heteroskedastic within an equation, e.g. $var(u_{i1}) = 0.3x_{1,i}$, then SGLS.A3 fails. Under SGLS.A1 – SGLS.A3,
$$var\big(\hat{\beta}_{FGLS}\big) = \Big(\sum_{i=1}^{N} X_i'\Omega^{-1}X_i\Big)^{-1}$$
is the asymptotic variance of $\hat{\beta}_{FGLS}$. Since $var\big(\hat{\beta}_{SOLS}\big) - var\big(\hat{\beta}_{FGLS}\big)$ is always positive semi-definite, FGLS is more efficient than SOLS. This variance estimator is not robust, since it relies on SGLS.A3 (the robust variance allows the variance of $u_i$ to take any form). Hypothesis testing with FGLS is analogous to SOLS: choose the appropriate variance estimator depending on the validity of SGLS.A3 (a sketch of the nonrobust variance follows below).
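A minimal sketch of the nonrobust variance estimator that is valid under SGLS.A3, using $\hat{\Omega}$ in place of $\Omega$ (hypothetical names, same assumed data layout as above).

```python
import numpy as np

def fgls_nonrobust_variance(X, Omega_hat):
    """Under SGLS.A1-A3, var(beta_hat_FGLS) is estimated by (sum X_i' Omega_hat^(-1) X_i)^(-1)."""
    Oinv = np.linalg.inv(Omega_hat)
    return np.linalg.inv(sum(Xi.T @ Oinv @ Xi for Xi in X))
```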
FGLS vs. SOLS: summarized comparison
There are two cases in which SOLS and FGLS are equivalent for SUR:
- $\hat{\Omega}$ is a diagonal matrix (zero correlation between the errors in different equations, so the equations are actually unrelated). In applications $\hat{\Omega}$ will not be diagonal unless we impose a diagonal structure.
- $x_{i1} = x_{i2} = \dots = x_{iG}$ for all $i$, that is, the same regressors show up in each equation (for all observations).
