Summary Econometrics Class 1
Linear Algebra
Regressand and Regressors
● In econometrics one has a dependent variable (the regressand) y and explanatory variables
x_1, ..., x_k (the regressors)
o y = [y_1 ⋮ y_n] and X = [x_11 … x_1k; ⋮ ⋱ ⋮; x_n1 … x_nk]
o In the following, the vector y contains the n observations of the dependent variable
o And the data matrix X contains the n observations of the k explanatory variables
Introduction to a system of linear equations
● In econometrics one typically has a system of linear equations
o Underneath you can find a system of 3 linear equations with 3 explanatory variables
▪ y_1 = x_11 β_1 + x_12 β_2 + x_13 β_3 + ε_1
▪ y_2 = x_21 β_1 + x_22 β_2 + x_23 β_3 + ε_2
▪ y_3 = x_31 β_1 + x_32 β_2 + x_33 β_3 + ε_3
● You can write this system in a more ordered way, by using matrix notation
o Matrix notation:
▪ y = Xβ + ε
o Where:
▪ “n” observations: rows of the matrix X and elements of the vector y
▪ “k” variables: columns of the matrix X and elements of the parameter vector β
▪ β: k unknown parameters
▪ ε: n error terms
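A minimal NumPy sketch of this setup (the numbers are made up purely for illustration; here n = 3 observations and k = 3 regressors):

```python
import numpy as np

# Hypothetical data: n = 3 observations of k = 3 explanatory variables
X = np.array([[1.0, 2.0, 0.5],
              [1.0, 1.5, 2.0],
              [1.0, 3.0, 1.0]])      # n x k data matrix
beta = np.array([0.5, 2.0, -1.0])    # k unknown parameters (chosen here for the example)
eps = np.array([0.1, -0.2, 0.05])    # n error terms

# The whole system written compactly as y = X beta + eps
y = X @ beta + eps
print(y)
```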
● Definition matrix:
o It is a table of real numbers consisting of m rows and n columns, and is denoted as an m × n
matrix
▪ A row vector is a matrix with only one row
▪ A column vector is a matrix with only one column
Basic matrix operations
● Matrix addition
o (A + B)_ij = A_ij + B_ij
o Where, commutativity and associativity apply:
▪ A+B=B+A
▪ (A + B) + C = A + (B + C)
● Scalar product of a matrix
o (kA)_ij = k · A_ij
o The following properties hold:
▪ (k + l)A = kA + lA
▪ k(A + B) = kA + kB
▪ k(lA) = (kl)A
● Matrix multiplication
o c_ij = (AB)_ij = Σ_{s=1}^{n} A_is B_sj (row i of A times column j of B, for A of order m × n and B of order n × p)
o Where i = row index and j = column index
o The matrix product is associative and distributive with respect to addition:
▪ (AB)C = A(BC)
▪ A(B + C) = AB + AC
▪ (A + B)C = AC + BC
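The addition, scalar-product, and multiplication rules above can be checked with a small NumPy sketch (the matrices are chosen arbitrarily for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])
k = 3

# Commutativity and associativity of matrix addition
assert np.array_equal(A + B, B + A)
assert np.array_equal((A + B) + C, A + (B + C))

# Scalar product distributes over addition
assert np.array_equal(k * (A + B), k * A + k * B)

# Matrix product: associative and distributive (but not commutative in general)
assert np.array_equal((A @ B) @ C, A @ (B @ C))
assert np.array_equal(A @ (B + C), A @ B + A @ C)
assert np.array_equal((A + B) @ C, A @ C + B @ C)
print("all properties verified")
```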
● Transpose
o Transpose swaps rows and columns
o The transpose of (𝑘 × 𝑙)- matrix A is denotes as 𝐴+ or A’ and is the (𝑙 × 𝑘)- matrix
where: (𝐴+ ))* = (𝐴)*)
o The following properties hold:
▪ (A^T)^T = A
▪ (A + B)^T = A^T + B^T
▪ (AB)^T = B^T A^T (keep in mind the order!)
● Vector
o If x is an n × 1 column vector, then x^T is a 1 × n row vector (the transpose is applied here)
o x^T x = Σ_{i=1}^{n} x_i²
● Norm of a vector
o The (Euclidean) norm of a vector x is defined as
▪ ‖x‖ = (x^T x)^{1/2} = (Σ_{i=1}^{n} x_i²)^{1/2}
o This is often used to minimize a sum of squares
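A short NumPy illustration of the transpose rule and the Euclidean norm (the vectors and matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])     # 2 x 3 matrix
B = np.array([[1, 0], [0, 1], [2, 2]])   # 3 x 2 matrix

# (AB)^T = B^T A^T -- note the reversed order
assert np.array_equal((A @ B).T, B.T @ A.T)

# Norm of a vector: ||x|| = (x^T x)^(1/2)
x = np.array([3.0, 4.0])
norm_by_hand = np.sqrt(x @ x)            # sum of squares, then square root
assert np.isclose(norm_by_hand, np.linalg.norm(x))
print(norm_by_hand)                      # 5.0
```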
Special matrices
● Null matrix
o Is a matrix completely filled with zeros
● Square matrix
o Is a matrix with an equal number of rows and columns
o And a square matrix of order 1 is simply a number
● Symmetric matrix
o Is a square matrix that coincides with its transpose
o A^T = A
● Diagonal matrix
o Is a square matrix with scalars on the main diagonal and zeros elsewhere
● Unit matrix
o Is a square (and diagonal) matrix with ones on the main diagonal and zeros
elsewhere
o For dimension n it is written as I_n
▪ I_3 = [1 0 0; 0 1 0; 0 0 1]
● Upper triangular matrix
o Is a square matrix with zeros everywhere below the main diagonal
● Lower triangular matrix
o Is a square matrix with zeros everywhere above the main diagonal
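NumPy has helpers for most of these special matrices; a quick sketch:

```python
import numpy as np

null_mat = np.zeros((3, 3))       # null matrix: completely filled with zeros
identity = np.eye(3)              # unit (identity) matrix I_3
diagonal = np.diag([1, 5, 9])     # diagonal matrix with the given main diagonal

A = np.arange(1, 10).reshape(3, 3)
upper = np.triu(A)                # upper triangular: zeros below the main diagonal
lower = np.tril(A)                # lower triangular: zeros above the main diagonal

symmetric = A + A.T               # A + A^T is always symmetric
assert np.array_equal(symmetric, symmetric.T)
```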
Linear independence and the rank of a matrix
● Definition linear independence
o The (row or column) vectors a_1, ..., a_l are linearly independent if any linear
combination of a_1, ..., a_l (except the zero combination) is non-zero, i.e. if:
▪ Σ_{j=1}^{l} λ_j a_j = λ_1 a_1 + λ_2 a_2 + ⋯ + λ_l a_l ≠ O_l
o For any combination of scalars λ_1, ..., λ_l of which at least one is different from 0
▪ O_l is the l × 1 null vector
● If not, a_1, ..., a_l are linearly dependent
o In other words: if no vector in the matrix can be constructed as a linear combination of the others,
▪ Then there is linear independence
2
,Summary Econometrics Class 1
● We will see that it is important not to have linearly dependent columns in X for
regression models
● Example:
o [6 1 2]^T, [0 1 0]^T, [3 0 1]^T
▪ These vectors are linearly dependent
● Since the first column equals the 2nd column + 2 times the 3rd column
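You can check this dependence relation numerically (a minimal sketch):

```python
import numpy as np

v1 = np.array([6, 1, 2])
v2 = np.array([0, 1, 0])
v3 = np.array([3, 0, 1])

# First vector = 2nd vector + 2 * 3rd vector, so the three are linearly dependent
assert np.array_equal(v1, v2 + 2 * v3)

# Stacking them as columns gives a matrix whose rank is only 2
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2
```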
● Definition rank
o It is the maximum number of linearly independent rows or columns
● Important properties
o For a matrix A of order 𝑘 × 𝑙: rank(A) ≤ min (k,l)
o Rank(A) = rank(AA^T) = rank(A^T A)
o Rank(AB) ≤ min(rank(A), rank(B))
● Example:
o What is the rank of: [2 −1 0; −1 1 1]?
▪ Neither row can be written as a linear combination of the other, so the rank is 2
o What is the rank of: [2 −1 1; −4 2 −2]?
▪ Row 2 is a linear combination of row 1, namely −2 times row 1
▪ So, the rank is equal to 1
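Both rank examples, and the rank(A) = rank(AA^T) property, can be verified with np.linalg.matrix_rank (a quick sketch):

```python
import numpy as np

A = np.array([[ 2, -1,  0],
              [-1,  1,  1]])
B = np.array([[ 2, -1,  1],
              [-4,  2, -2]])          # row 2 = -2 * row 1

print(np.linalg.matrix_rank(A))       # 2: the rows are linearly independent
print(np.linalg.matrix_rank(B))       # 1: the rows are linearly dependent
print(np.linalg.matrix_rank(A @ A.T)) # 2: rank(A) = rank(AA^T)
```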
Inverse of a matrix
● Remember!
o That the inverse of a matrix only exists when the matrix has linearly independent rows and
columns, in other words when it has full rank
o Hence, no row or column is a linear combination of another row or column
● See the Math 2 course for how to calculate the inverse of a matrix
● Inverse of a non-singular (𝑛 × 𝑛)- matrix
o In other words, a matrix with full rank
o Is denoted by X^{-1}
o Important property
▪ X^{-1} X = X X^{-1} = I_n
● The inverse matrix times the original matrix equals the unit matrix
● Determinant of a square matrix
o In general, if a square matrix has a determinant different from 0, then the matrix is
invertible
o Example of how to calculate the determinant of a 2 × 2 matrix
▪ For A = [a b; c d]: det(A) = ad − bc
o If you have 2 matrices that are both invertible, the following properties hold:
▪ (A^{-1})^T = (A^T)^{-1}
▪ (AB)^{-1} = B^{-1} A^{-1} (keep in mind the order!)
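A small sketch of the inverse, the determinant, and the (AB)^{-1} = B^{-1}A^{-1} rule (the matrices are arbitrary non-singular examples):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])

# det != 0, so the matrix is invertible
print(np.linalg.det(A))               # 2*3 - 1*1 = 5
A_inv = np.linalg.inv(A)

# X^{-1} X = X X^{-1} = I_n
assert np.allclose(A_inv @ A, np.eye(2))
assert np.allclose(A @ A_inv, np.eye(2))

# (AB)^{-1} = B^{-1} A^{-1} -- note the reversed order
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```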
Pseudo-inverse of a matrix
● Explanation on why this matrix exists
o Sometimes no inverse exists, for example in an overdetermined system
▪ Picture of an overdetermined system:
● This is when you have data points scattered all over the place and you try to fit a linear
model through this data
● But you realize that there is no linear model that passes through all
the points
o So the new approach is to use the pseudo-inverse
● Definition pseudo-inverse of a matrix X
o X^+ = (X^T X)^{-1} X^T
o The pseudo-inverse of an m × n matrix X is defined as the unique n × m matrix satisfying the
following 4 criteria
▪ X X^+ X = X
▪ X^+ X X^+ = X^+
▪ (X X^+)^T = X X^+
▪ (X^+ X)^T = X^+ X
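A sketch of the pseudo-inverse for an overdetermined system (more equations than unknowns; the data are made up for illustration). The formula X^+ = (X^T X)^{-1} X^T applies when X has full column rank; np.linalg.pinv handles the general case:

```python
import numpy as np

# Overdetermined system: 5 equations (rows), 2 unknowns (columns)
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([0.1, 1.2, 1.9, 3.2, 3.9])

# Pseudo-inverse via the formula (valid here because X has full column rank)
X_plus = np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(X_plus, np.linalg.pinv(X))

# The four defining criteria
assert np.allclose(X @ X_plus @ X, X)
assert np.allclose(X_plus @ X @ X_plus, X_plus)
assert np.allclose((X @ X_plus).T, X @ X_plus)
assert np.allclose((X_plus @ X).T, X_plus @ X)

# "Best possible" solution of the overdetermined system (least squares)
beta_hat = X_plus @ y
print(beta_hat)
```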
Positive (semi-) definite
● Definition:
o A symmetric n × n matrix B is positive semi-definite
▪ If, for each n × 1 vector a (≠ 0_n), the quadratic form a^T B a ≥ 0
o It is positive definite if a^T B a > 0 for each vector a (≠ 0_n)
▪ Positive semi-definite matrices are important for questions such as which estimator has the
smallest covariance matrix among different estimators
● They are used to see which of (say) two estimators has the lowest covariance
o Example when it is used:
▪ Suppose you have a model that is estimated both by
▪ β_ordinary least squares (OLS) and β_least absolute deviations (LAD)
▪ You can show that Cov(β_LAD) − Cov(β_OLS) is positive semi-definite
● This means that the covariance matrix of β_LAD is larger than that of
β_OLS
● Hence, β_OLS is also more efficient!
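A sketch of how positive semi-definiteness can be checked numerically: a symmetric matrix is positive semi-definite exactly when all its eigenvalues are non-negative. The two covariance matrices below are made-up numbers, purely to illustrate the estimator comparison:

```python
import numpy as np

def is_positive_semidefinite(B, tol=1e-10):
    """Check a^T B a >= 0 for all a via the eigenvalues of the symmetric matrix B."""
    return bool(np.all(np.linalg.eigvalsh(B) >= -tol))

# Hypothetical covariance matrices of two estimators (illustrative numbers only)
cov_ols = np.array([[1.0, 0.2],
                    [0.2, 0.5]])
cov_lad = np.array([[1.5, 0.3],
                    [0.3, 0.9]])

# If Cov(beta_LAD) - Cov(beta_OLS) is positive semi-definite,
# the OLS estimator is (weakly) more efficient
print(is_positive_semidefinite(cov_lad - cov_ols))  # True for these numbers
```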