Econometrics Summary
Rules for log and ln
log_b(c) = k → b^k = c
ln(c) = log_e(c) = k → e^k = c
ln(xy) = ln(x) + ln(y)
ln(x/y) = ln(x) − ln(y)
ln(1/x) = −ln(x)
ln(x^y) = y ∙ ln(x)
ln(e) = 1
ln(1) = 0
1/x = x^(−1)
√x = x^(1/2)
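These identities are easy to sanity-check numerically. A minimal sketch using NumPy (the values of x, y, b, c are arbitrary; np.log is the natural log ln):

```python
import numpy as np

x, y, b, c = 2.5, 4.0, 3.0, 81.0

# log_b(c) = k  <->  b**k = c   (via change of base: log_b(c) = ln(c)/ln(b))
k = np.log(c) / np.log(b)
assert np.isclose(b**k, c)

# product, quotient, reciprocal and power rules for ln
assert np.isclose(np.log(x * y), np.log(x) + np.log(y))
assert np.isclose(np.log(x / y), np.log(x) - np.log(y))
assert np.isclose(np.log(1 / x), -np.log(x))
assert np.isclose(np.log(x ** y), y * np.log(x))

# special values and roots/reciprocals as powers
assert np.isclose(np.log(np.e), 1.0)
assert np.isclose(np.log(1.0), 0.0)
assert np.isclose(np.sqrt(x), x ** 0.5)
```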
Expected value
𝐸 (𝑋 + 𝑌 ) = 𝐸 (𝑋 ) + 𝐸 (𝑌 )
𝐸 (𝑋𝑌) = 𝐸 (𝑋) ∙ 𝐸(𝑌) → need independence!
E(∑_{i=1}^n x_i) = ∑_{i=1}^n E(x_i)
𝐸 (𝑐 ∙ 𝑋) = 𝑐 ∙ 𝐸(𝑋)
Law of iterated expectations: 𝐸(𝛽̂1 ) = 𝐸 (𝐸(𝛽̂1 |𝑥)) = 𝐸 (𝛽1 ) = 𝛽1
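The independence requirement for E(XY) = E(X) ∙ E(Y) shows up clearly in a quick simulation. A minimal sketch, assuming NumPy and arbitrarily chosen normal distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# independent X and Y: E(XY) is approximately E(X) * E(Y)
X = rng.normal(loc=1.0, size=n)
Y = rng.normal(loc=2.0, size=n)
print(np.mean(X * Y), np.mean(X) * np.mean(Y))   # both roughly 2

# dependent case: Z is built from X, so E(XZ) != E(X) * E(Z)
Z = X + rng.normal(size=n)
print(np.mean(X * Z), np.mean(X) * np.mean(Z))   # roughly 2 vs roughly 1
```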
Variance and covariance
σ_Y² = var(Y) = E((Y − μ_Y)²) = E(Y²) − μ_Y²
S_Y² = (1/(n−1)) ∑_{i=1}^n (Y_i − Ȳ)²
σ_XY = cov(X, Y) = E((X − μ_X)(Y − μ_Y)) = E(XY) − μ_X μ_Y
S_XY = (1/(n−1)) ∑_{i=1}^n (X_i − X̄)(Y_i − Ȳ)
→ variance and covariance are dependent on the scale (units) of X and Y
𝑣𝑎𝑟(𝑋 + 𝑌) = 𝑣𝑎𝑟(𝑋) + 𝑣𝑎𝑟(𝑌) + 2 ∙ 𝑐𝑜𝑣(𝑋, 𝑌)
𝑣𝑎𝑟(𝑐 ∙ 𝑋) = 𝑐 2 ∙ 𝑣𝑎𝑟(𝑋)
𝑣𝑎𝑟(𝑎 ∙ 𝑋 + 𝑏 ∙ 𝑌) = 𝑎2 ∙ 𝑣𝑎𝑟(𝑋) + 𝑏2 ∙ 𝑣𝑎𝑟(𝑌) + 2𝑎𝑏 ∙ 𝑐𝑜𝑣(𝑋, 𝑌)
𝑐𝑜𝑣(𝑋, 𝑋) = 𝑣𝑎𝑟(𝑋)
𝑐𝑜𝑣(𝑎 ∙ 𝑋, 𝑏 ∙ 𝑌) = 𝑎𝑏 ∙ 𝑐𝑜𝑣(𝑋, 𝑌)
𝑐𝑜𝑣(𝑎𝑋 + 𝑏𝑌 + 𝑐, 𝑤) = 𝑎 ∙ 𝑐𝑜𝑣(𝑋, 𝑤) + 𝑏 ∙ 𝑐𝑜𝑣(𝑌, 𝑤)
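A minimal sketch of these rules with NumPy; ddof=1 gives the 1/(n−1) sample versions S² and S_XY defined above, and the var(aX + bY) identity holds exactly for the sample moments as well. The data and the constants a, b are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
X = rng.normal(size=n)
Y = 0.5 * X + rng.normal(size=n)       # Y is correlated with X
a, b = 2.0, -3.0

S2_X = np.var(X, ddof=1)               # sample variance, 1/(n-1) version
S2_Y = np.var(Y, ddof=1)
S_XY = np.cov(X, Y, ddof=1)[0, 1]      # sample covariance

# var(aX + bY) = a^2 var(X) + b^2 var(Y) + 2ab cov(X, Y)
lhs = np.var(a * X + b * Y, ddof=1)
rhs = a**2 * S2_X + b**2 * S2_Y + 2 * a * b * S_XY
print(lhs, rhs)                        # equal up to floating-point error
```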
Estimators → are random!
We try to estimate a population parameter, which is usually unknown (except in a Monte Carlo
analysis, where we choose it ourselves; see the sketch below).
• Unbiasedness: 𝐸 (𝑋̅) = 𝜇
• Consistency: 𝑣𝑎𝑟(𝑋̅) → 0 as 𝑛 → ∞
AND the estimator is asymptotically (“as 𝑛 → ∞”) unbiased!
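A minimal Monte Carlo sketch of both properties for the sample mean, assuming NumPy; μ and the scale are arbitrary values we set ourselves, which is exactly what makes the check possible:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 5.0                 # the population parameter (known here because we set it)
reps = 2_000             # number of simulated samples per sample size

for n in (10, 100, 1_000):
    xbar = rng.normal(loc=mu, scale=2.0, size=(reps, n)).mean(axis=1)
    # unbiasedness: the average of the estimates is ~mu for every n
    # consistency: the variance of the estimates shrinks as n grows
    print(n, round(xbar.mean(), 3), round(xbar.var(ddof=1), 4))
```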
Simple regression
𝑌𝑖 = 𝛽0 + 𝛽1 𝑋𝑖 + 𝑢𝑖 (population)
→ 𝛽1 measures the change in 𝑌 per one-unit change in 𝑋
We estimate 𝛽0 and 𝛽1 by minimizing ∑e_i² (the sum of squared residuals)
𝑌̂𝑖 = 𝛽̂0 + 𝛽̂1 𝑋𝑖 (fitted value)
𝑒𝑖 = 𝑢̂𝑖 = 𝑌𝑖 − 𝑌̂𝑖 = 𝑌𝑖 − 𝛽̂0 − 𝛽̂1 𝑋𝑖
min ∑e_i² = min ∑(Y_i − β̂0 − β̂1 X_i)²
1. Take the first derivative with respect to β̂0 and with respect to β̂1
2. Set each derivative equal to 0 and solve for β̂0 and β̂1 (the first-order conditions)
𝛽̂0 = 𝑌̅ − 𝛽̂1 𝑋̅
β̂1 = s_YX / s_X² = sample cov(Y, X) / sample var(X) = ∑(X_i − X̄)(Y_i − Ȳ) / ∑(X_i − X̄)²
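The closed-form estimators above translate directly into code. A minimal sketch on simulated data, assuming NumPy; the chosen β0, β1 and error distribution are arbitrary, and np.polyfit is used only as a cross-check:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
beta0, beta1 = 2.0, 0.7                       # population coefficients (our choice)
X = rng.uniform(0, 10, size=n)
Y = beta0 + beta1 * X + rng.normal(size=n)    # u_i ~ N(0, 1)

# OLS estimates from the formulas above
b1_hat = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0_hat = Y.mean() - b1_hat * X.mean()

# cross-check with a degree-1 least-squares polynomial fit
check_b1, check_b0 = np.polyfit(X, Y, deg=1)
print(b0_hat, b1_hat)        # close to 2.0 and 0.7
print(check_b0, check_b1)    # same values
```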
Least Squares Assumptions
1) 𝑢𝑖 is a random variable with 𝐸(𝑢𝑖 |𝑋𝑖 ) = 0 (zero conditional mean; see the sketch below)
2) (𝑌𝑖 , 𝑋𝑖 ) are i.i.d.
3) Large outliers are unlikely → finite nonzero 4th moments → kurtosis is finite
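Assumption 1 is the one that drives unbiasedness (via the law of iterated expectations above). A minimal Monte Carlo sketch of what happens when it fails; the coefficients and the particular form of the violation are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 500, 2_000
slopes_ok, slopes_bad = [], []

for _ in range(reps):
    X = rng.normal(size=n)
    u_ok = rng.normal(size=n)              # E(u|X) = 0 holds
    u_bad = 0.5 * X + rng.normal(size=n)   # E(u|X) != 0: assumption 1 violated
    for u, store in ((u_ok, slopes_ok), (u_bad, slopes_bad)):
        Y = 1.0 + 2.0 * X + u
        b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
        store.append(b1)

print(np.mean(slopes_ok))    # ~2.0: estimator is unbiased when E(u|X) = 0
print(np.mean(slopes_bad))   # ~2.5: biased when the assumption fails
```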