Self-Study Questions Chapter 12 with Solutions Chris Brooks - 3rd Edition

Here are the exercises from Chapter 12 together with the solutions.

1. Explain why the linear probability model is inadequate as a specification for limited dependent
variable estimation.

1. While the linear probability model (LPM) is simple to estimate and intuitive to
interpret, it is fatally flawed as a method to deal with binary dependent variables. There
are several problems that we may encounter (the first two are illustrated in the short
simulation after this list):


• There is nothing in the model to ensure that the fitted probabilities will lie between zero
and one.


• Even if we truncate the probabilities so that they take plausible values, this will still
result in too many observations for which the estimated probabilities are exactly zero
or one.




• It is simply not plausible to say that the probability of the event occurring is exactly zero
or exactly one.


• Since the dependent variable only takes one of two values, for given (fixed in repeated
samples) values of the explanatory variables, the disturbance term will also only take
on one of two values. Hence the error term cannot plausibly be assumed to be normally
distributed.




• Since the disturbances change systematically with the explanatory variables, they will
also be heteroscedastic.
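
A minimal simulation illustrating the first two problems above. The data-generating
process, sample size and use of statsmodels' OLS routine are assumptions made purely
for illustration, not part of the original exercise:

import numpy as np
import statsmodels.api as sm

# Hypothetical DGP: one regressor driving a binary outcome through a logistic link
rng = np.random.default_rng(0)
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))   # true probabilities
y = rng.binomial(1, p_true)                    # observed 0/1 outcomes

# Linear probability model: regress the 0/1 outcome on x by OLS
X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()
fitted = lpm.fittedvalues

# Problem 1: some fitted "probabilities" fall outside the [0, 1] interval
print("share of fitted values below 0 or above 1:",
      np.mean((fitted < 0) | (fitted > 1)))

# Problem 2: truncating them piles observations up at exactly 0 or 1
truncated = np.clip(fitted, 0, 1)
print("share of truncated fitted values equal to exactly 0 or 1:",
      np.mean((truncated == 0) | (truncated == 1)))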


2. Compare and contrast the probit and logit specifications for binary choice variables.

2. Both the logit and probit model approaches are able to overcome the limitation of the LPM
that it can produce estimated probabilities that are negative or greater than one. They do this
by using a function that effectively transforms the regression model so that the fitted values
are bounded within the (0,1) interval. Visually, the fitted regression model will appear as an S-
shape rather than a straight line, as was the case for the LPM. The only difference between the
two approaches is that under the logit approach, the cumulative logistic function is used to
transform the model, so that the probabilities are bounded between zero and one. But with the
probit model, the cumulative normal distribution is used instead. For the majority of
applications, the logit and probit models will give very similar characterisations of the data
because the densities are very similar.
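
A short sketch of this comparison using SciPy (the grid of index values is an arbitrary
choice for illustration): both cumulative functions are bounded in (0, 1) and trace out
very similar S-shapes, unlike the straight line produced by the LPM.

import numpy as np
from scipy.stats import norm, logistic

# Evaluate both cumulative distribution functions over a grid of index values
z = np.linspace(-4, 4, 9)
probit_probs = norm.cdf(z)       # probit: cumulative standard normal
logit_probs = logistic.cdf(z)    # logit: cumulative logistic, 1 / (1 + exp(-z))

for zi, pp, lp in zip(z, probit_probs, logit_probs):
    print(f"z = {zi:+.1f}   probit = {pp:.3f}   logit = {lp:.3f}")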




3. (a) Describe the intuition behind the maximum likelihood estimation technique used for limited
dependent variable models.

3.(a) When maximum likelihood is used as a technique to estimate limited dependent variable
models, the general intuition is the same as for any other model: a log-likelihood function is
formed and then the parameter values are chosen so as to maximise it. The form of this LLF will
depend upon whether the logit or probit model is used; further technical details on the
estimation are given in the appendix to Chapter 11.
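
A minimal sketch of this intuition for a logit model: the log-likelihood is written down and
maximised numerically. The simulated data, starting values and use of scipy.optimize are
assumptions for illustration; in practice a packaged routine such as statsmodels' Logit would
normally be used.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(size=400)
X = np.column_stack([np.ones_like(x), x])            # intercept and one regressor
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 1.5 * x))))

def neg_log_likelihood(beta):
    # Logit: P(y_i = 1) = F(x_i'beta), with F the cumulative logistic function
    p = 1 / (1 + np.exp(-X @ beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)                 # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Choose the parameter values that maximise the LLF (minimise its negative)
result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print("ML estimates of (beta1, beta2):", result.x)
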
(b) Why do we need to exercise caution when interpreting the coefficients of a probit or logit model?

(b) It is tempting, but incorrect, to state that a 1-unit increase in x2i, for example, causes a β2
increase in the probability that the outcome corresponding to yi = 1 will be realised. This would
have been the correct interpretation for the linear probability model. But for logit and probit
models, this interpretation would be incorrect because the model is not of the form Pi = β1 +
β2x2i + ui, for example, but rather Pi = F(β1 + β2x2i), where F represents the (non-linear) logistic
or cumulative normal function. To obtain the required relationship between changes in x2i and
Pi, we would need to differentiate F with respect to x2i, and it turns out that this derivative is
β2f(β1 + β2x2i), where f is the corresponding density (the derivative of F). So in fact, a 1-unit
increase in x2i will cause a β2f(β1 + β2x2i) increase in the probability. Usually, these impacts of
incremental changes in an explanatory variable are evaluated by setting each explanatory
variable to its mean value.
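
A short sketch of this "marginal effect at the mean" calculation for a logit model, i.e. β2
multiplied by the logistic density evaluated at the index computed at the sample mean of the
regressor. The simulated data and coefficient values are hypothetical, chosen only to make the
example runnable.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=0.5, size=400)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * x))))

X = sm.add_constant(x)
logit_res = sm.Logit(y, X).fit(disp=0)
b1, b2 = logit_res.params

# Marginal effect at the mean: beta2 * f(beta1 + beta2 * x_bar),
# where f is the logistic density, the derivative of the logistic CDF F
z_bar = b1 + b2 * x.mean()
F = 1 / (1 + np.exp(-z_bar))
marginal_effect = b2 * F * (1 - F)    # logistic density: f(z) = F(z) * (1 - F(z))
print("marginal effect of a 1-unit change in x at the mean:", marginal_effect)

# Cross-check against statsmodels' built-in computation at the means
print(logit_res.get_margeff(at="mean").summary())
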
(c) How can we measure whether a logit model that we have estimated fits the data well or not?

(c) While it would be possible to calculate the values of the standard goodness of fit measures
such as RSS, R2 or adjusted R2 for limited dependent variable models, these cease to have any
real meaning. If calculated in the usual fashion, they will be misleading because the fitted
values from the model can take on any value but the actual values will only be either 0 or 1.
The model has effectively made the correct prediction if the predicted probability for a
particular entity i is greater than the unconditional probability that y = 1, whereas R2 or
adjusted R2 will not give the model full credit for this. Two goodness of fit measures that are
commonly reported for limited dependent variable models, both computed in the sketch after
this list, are:


• The percentage of yi values correctly predicted, defined as 100 times the number of
observations predicted correctly divided by the total number of observations.
Obviously, the higher this number, the better the fit of the model. Although this
measure is intuitive and easy to calculate, Kennedy (2003) suggests that it is not ideal,
since it is possible that a naïve predictor could do better than any model if the sample
is unbalanced between 0 and 1. For example, suppose that yi =1 for 80% of the
observations. A simple rule that the prediction is always 1 is likely to outperform any
more complex model on this measure but is unlikely to be very useful.


• A measure known as ‘pseudo-R2’, defined as 1 − LLF/LLF0, where LLF is the
maximised value of the log-likelihood function for the logit or probit model and LLF0
is the value of the log-likelihood function for a restricted model where all of the slope
parameters are set to zero (i.e. the model contains only an intercept). Since the
likelihood is essentially a joint probability, its value must be between zero and one, and
therefore taking its logarithm to form the LLF must result in a negative number. Thus,
as the model fit improves, LLF will become less negative and therefore pseudo-R2 will
rise. This definition of pseudo-R2 is also known as McFadden’s R2.
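
A small sketch computing both measures for an estimated logit model. The simulated data and
the 0.5 classification cut-off used for "correctly predicted" are assumptions made for
illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 1.0 * x))))
X = sm.add_constant(x)

res = sm.Logit(y, X).fit(disp=0)

# Percentage of y values correctly predicted (classifying p_hat > 0.5 as a 1)
y_hat = (res.predict(X) > 0.5).astype(int)
print("percentage correctly predicted:", 100 * np.mean(y_hat == y))

# McFadden's pseudo-R2: 1 - LLF/LLF0, where LLF0 comes from an intercept-only
# model (all slope parameters set to zero)
llf = res.llf
llf0 = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0).llf
print("pseudo-R2:", 1 - llf / llf0)

# statsmodels reports the same quantity as res.prsquared
print("statsmodels prsquared:", res.prsquared)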


(d) What is the difference, in terms of the model setup, in binary choice versus multiple choice
problems?
