Summary for Introduction to Econometrics, Global Edition, 4th Edition (James H. Stock and Mark W. Watson)
Econometrics Summary - ENDTERM UvA EBE
Summary Econometrics
Chapter 2 Review of probability
Random variable
A random variable is a numerical summary of a random outcome. The number of times your
computer crashes while you are writing a term paper is random and takes on a numerical
value, so it is a random variable.
Some random variables are discrete and some are continuous. As their names suggest, a
discrete random variable takes on only a discrete set of values, like 0, 1, 2, …, whereas a
continuous random variable takes on a continuum of possible values.
Probability distribution
The probability distribution of a discrete random variable is the list of all possible values of
the variable and the probability that each value will occur. These probabilities sum to 1.
For example, let M be the number of times your computer crashes while you are writing a
term paper. The probability distribution of the random variable M is the list of probabilities of
each possible outcome: The probability that M = 0, denoted Pr(M = 0), is the probability of no
computer crashes; Pr(M = 1) is the probability of a single computer crash; and so forth.
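A discrete distribution like this can be written down directly as a mapping from outcomes to probabilities. The sketch below uses illustrative probabilities for M (they are assumptions for the example, not results derived here); the one property any probability distribution must satisfy is that the probabilities sum to 1.

```python
# Illustrative probability distribution of M, the number of computer
# crashes while writing a term paper (probabilities assumed for the demo).
pr = {0: 0.80, 1: 0.10, 2: 0.06, 3: 0.03, 4: 0.01}

# The probabilities of all possible outcomes must sum to 1.
assert abs(sum(pr.values()) - 1.0) < 1e-12

print(pr[0])  # Pr(M = 0), the probability of no crashes
```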
Population/sample characteristics
In simple random sampling, n objects are selected at random from a population (for
example, a population of commuting days), and each member of the population (each day) is
equally likely to be included in the sample.
Expectation/average
The expected value of a random variable Y, denoted E(Y), is the long-run average value of
the random variable over many repeated trials or occurrences. The expected value of a
discrete random variable is computed as a weighted average of the possible outcomes of that
random variable, where the weights are the probabilities of that outcome. The expected value
of Y is also called the expectation of Y or the mean of Y and is denoted µY.
Key Concept 2.1 Expected Value and the Mean
Suppose the random variable Y takes on k possible values, y1,…,yk, where y1 denotes the
first value, y2 denotes the second value, and so forth, and that the probability that Y takes on
y1 is p1, the probability that Y takes on y2 is p2, and so forth. The expected value of Y,
denoted E(Y), is
E(Y) = y1p1 + y2p2 + … + ykpk = Σ(i=1 to k) yi pi,

where the notation Σ(i=1 to k) yi pi means 'the sum of yi pi for i running from 1 to k.' The expected
value of Y is also called the mean of Y or the expectation of Y and is denoted µY.
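The weighted-average formula in Key Concept 2.1 translates directly into code. A minimal sketch, reusing the illustrative crash distribution from the earlier example:

```python
# Expected value of a discrete random variable as a probability-weighted
# average: E(Y) = sum over i of y_i * p_i  (Key Concept 2.1).
def expected_value(values, probs):
    return sum(y * p for y, p in zip(values, probs))

# Illustrative crash distribution (assumed probabilities, as above):
values = [0, 1, 2, 3, 4]
probs = [0.80, 0.10, 0.06, 0.03, 0.01]
print(expected_value(values, probs))  # approx. 0.35 crashes on average
```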
Variance
The variance and standard deviation measure the dispersion or the ‘spread’ of a probability
distribution. The variance of a random variable Y, denoted var(Y), is the expected value of the
square of the deviation of Y from its mean:
Var(Y) = E[(Y - µY)²].
Because the variance involves the square of Y, the units of the variance are the units of the
square of Y, which makes the variance awkward to interpret. It is therefore common to
measure the spread by the standard deviation, which is the square root of the variance and is
denoted σY. The standard deviation has the same units as Y. These definitions are summarized
in Key Concept 2.2.
Key Concept 2.2 Variance and Standard Deviation
The variance of the discrete random variable Y, denoted σY², is

σY² = var(Y) = E[(Y − µY)²] = Σ(i=1 to k) (yi − µY)² pi

The standard deviation of Y is σY, the square root of the variance. The units of the standard
deviation are the same as the units of Y.
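The variance formula in Key Concept 2.2 is again a probability-weighted sum, this time of squared deviations from the mean. A sketch using the same illustrative crash distribution:

```python
import math

# var(Y) = E[(Y - mu_Y)^2] = sum over i of (y_i - mu_Y)^2 * p_i
def variance(values, probs):
    mu = sum(y * p for y, p in zip(values, probs))
    return sum((y - mu) ** 2 * p for y, p in zip(values, probs))

# Illustrative crash distribution (assumed probabilities, as above):
values = [0, 1, 2, 3, 4]
probs = [0.80, 0.10, 0.06, 0.03, 0.01]
var_y = variance(values, probs)
sd_y = math.sqrt(var_y)  # standard deviation: same units as Y
print(var_y, sd_y)
```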
Covariance
One measure of the extent to which two random variables move together is their covariance.
The covariance between X and Y is the expected value E[(X − µX)(Y − µY)], where µX is the
mean of X and µY is the mean of Y. The covariance is denoted cov(X, Y) or σXY. If X can take
on l values and Y can take on k values, then the covariance is given by the formula

cov(X, Y) = σXY = E[(X − µX)(Y − µY)] = Σ(i=1 to k) Σ(j=1 to l) (xj − µX)(yi − µY) Pr(X = xj, Y = yi)
To interpret this formula, suppose that when X is greater than its mean (so that X − µX is
positive), Y tends to be greater than its mean (so that Y − µY is positive), and that when X is
less than its mean (so that X − µX < 0), Y tends to be less than its mean (so that Y − µY < 0).
In both cases, the product (X − µX)(Y − µY) tends to be positive, so the covariance is positive.
In contrast, if X and Y move in opposite directions (so that X is large when Y is small, and
vice versa), then the covariance is negative. Finally, if X and Y are independent, then the
covariance is zero.
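The double sum in the covariance formula runs over all (x, y) pairs of the joint distribution. A minimal sketch, using an assumed joint distribution of two binary variables that tend to move together (the probabilities are made up for the demo):

```python
# Covariance from a joint distribution Pr(X = x_j, Y = y_i), stored as a
# dict mapping (x, y) pairs to probabilities (illustrative assumptions).
joint = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.50}

# Marginal means, then the probability-weighted sum of deviation products.
mu_x = sum(x * p for (x, y), p in joint.items())
mu_y = sum(y * p for (x, y), p in joint.items())
cov_xy = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in joint.items())
print(cov_xy)  # positive: X and Y tend to move together here
```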
Correlation
Because the covariance is the expected product of the deviations of X and Y from their
means, its units are, awkwardly, the units of X multiplied by the units of Y. This 'units' problem can make
numerical values of the covariance difficult to interpret. The correlation is an alternative
measure of dependence between X and Y that solves the ‘units’ problem of the covariance.
Specifically, the correlation between X and Y is the covariance between X and Y divided by
their standard deviations:
corr(X, Y) = cov(X, Y) / √(var(X) var(Y)) = σXY / (σX σY)
Because the units of the numerator in this equation are the same as those of the denominator,
the units cancel and the correlation is unitless. The random variables X and Y are said to be
uncorrelated if corr(X, Y) = 0.
The correlation always is between -1 and 1.
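Dividing the covariance by the two standard deviations cancels the units and bounds the result between −1 and 1. A sketch that extends the assumed joint distribution from the covariance example:

```python
import math

# corr(X, Y) = cov(X, Y) / (sd(X) * sd(Y)); unitless, always in [-1, 1].
# Same illustrative joint distribution as in the covariance example.
joint = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.50}

mu_x = sum(x * p for (x, y), p in joint.items())
mu_y = sum(y * p for (x, y), p in joint.items())
cov_xy = sum((x - mu_x) * (y - mu_y) * p for (x, y), p in joint.items())
var_x = sum((x - mu_x) ** 2 * p for (x, y), p in joint.items())
var_y = sum((y - mu_y) ** 2 * p for (x, y), p in joint.items())

corr_xy = cov_xy / math.sqrt(var_x * var_y)
print(corr_xy)
```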
Chapter 3 Review of statistics
Estimator/Estimates
The sample average Ȳ is a natural way to estimate µY, but it is not the only way. For
example, another way to estimate µY is simply to use the first observation, Y1. Both Ȳ and
Y1 are functions of the data that are designed to estimate µY; using the terminology in Key
Concept 3.1, both are estimators of µY. When evaluated in repeated samples, Ȳ and Y1 take
on different values (they produce different estimates) from one sample to the next. Thus the
estimators Ȳ and Y1 both have sampling distributions. There are, in fact, many estimators of
µY, of which Ȳ and Y1 are two examples.
There are many possible estimators, so what makes one estimator ‘better’ than another?
Because estimators are random variables, this question can be phrased more precisely: What
are desirable characteristics of the sampling distribution of an estimator? In general, we would
like an estimator that gets as close as possible to the unknown true value, at least in some
average sense; in other words, we would like the sampling distribution of an estimator to be as
tightly centered on the unknown value as possible. This observation leads to three specific
desirable characteristics of an estimator: unbiasedness (a lack of bias), consistency, and
efficiency.
Key Concept 3.1 Estimators and Estimates
An estimator is a function of a sample of data to be drawn randomly from a population. An
estimate is the numerical value of the estimator when it is actually computed using data from
a specific sample. An estimator is a random variable because of randomness in selecting the
sample, while an estimate is a nonrandom number.
Unbiasedness
Suppose you evaluate an estimator many times over repeated randomly drawn samples. It is
reasonable to hope that, on average, you would get the right answer. Thus a desirable property
of an estimator is that the mean of its sampling distribution equals µY; if so, the estimator is
said to be unbiased.
To state this concept mathematically, let µ̂Y denote some estimator of µY, such as Ȳ or Y1.
The estimator µ̂Y is unbiased if E(µ̂Y) = µY, where E(µ̂Y) is the mean of the sampling
distribution of µ̂Y; otherwise, µ̂Y is biased.
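Unbiasedness is a statement about repeated samples, so it is easy to check by simulation. A sketch under assumed parameters (µY = 5, σ = 2, chosen only for the demo): both the sample mean and the first observation center on µY when averaged over many samples.

```python
import random

# Simulation sketch: both Y-bar and Y_1 are unbiased estimators of the
# population mean (here mu_Y = 5 and sigma = 2 are assumptions).
random.seed(0)
mu_y, n, reps = 5.0, 10, 20000

means, firsts = [], []
for _ in range(reps):
    sample = [random.gauss(mu_y, 2.0) for _ in range(n)]
    means.append(sum(sample) / n)   # Y-bar for this sample
    firsts.append(sample[0])        # Y_1 for this sample

# Averaged over repeated samples, both estimators are close to mu_Y.
print(sum(means) / reps, sum(firsts) / reps)
```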
Consistency
Another desirable property of an estimator µ̂Y of µY is that, when the sample size is large, the
uncertainty about the value of µY arising from random variations in the sample is very small.
Stated more precisely, a desirable property of µ̂Y is that the probability that µ̂Y is within a
small interval of the true value µY approaches 1 as the sample size increases; that is, µ̂Y is
consistent for µY.
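Consistency can also be illustrated by simulation: the probability that Ȳ lands within a fixed interval around µY rises toward 1 as n grows. A sketch under the same assumed parameters as above:

```python
import random

# Consistency sketch: as n grows, Y-bar concentrates near mu_Y
# (mu_Y = 5, sigma = 2, eps = 0.5 are assumptions for the demo).
random.seed(1)
mu_y, reps = 5.0, 5000

def share_close(n, eps=0.5):
    """Fraction of repeated samples with |Y-bar - mu_Y| < eps."""
    hits = 0
    for _ in range(reps):
        ybar = sum(random.gauss(mu_y, 2.0) for _ in range(n)) / n
        hits += abs(ybar - mu_y) < eps
    return hits / reps

print(share_close(10), share_close(100))  # the share rises with n
```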
Efficiency
Suppose you have two candidate estimators, µ̂Y and µ̃Y, both of which are unbiased. How
might you choose between them? One way to do so is to choose the estimator with the tightest
sampling distribution. This suggests choosing between µ̂Y and µ̃Y by picking the estimator
with the smallest variance. If µ̂Y has a smaller variance than µ̃Y, then µ̂Y is said to be more
efficient than µ̃Y. The terminology 'efficiency' stems from the notion that if µ̂Y has a smaller
variance than µ̃Y, then it uses the information in the data more efficiently than does µ̃Y.
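The efficiency comparison between Ȳ and Y1 can be made concrete: var(Ȳ) = var(Y)/n, while var(Y1) = var(Y), so Ȳ has the tighter sampling distribution. A simulation sketch under assumed parameters:

```python
import random
import statistics

# Efficiency sketch: Y-bar and Y_1 are both unbiased for mu_Y, but
# var(Y-bar) = var(Y)/n, so Y-bar is the more efficient estimator.
# Parameters (mu_Y = 5, sigma = 2, n = 10) are assumptions for the demo.
random.seed(2)
mu_y, sigma, n, reps = 5.0, 2.0, 10, 5000

means, firsts = [], []
for _ in range(reps):
    sample = [random.gauss(mu_y, sigma) for _ in range(n)]
    means.append(sum(sample) / n)
    firsts.append(sample[0])

# The sampling variance of Y-bar is roughly 1/n that of Y_1.
print(statistics.variance(means) < statistics.variance(firsts))  # True
```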