1. Explain why the linear probability model is inadequate as a specification for limited dependent
variable estimation.
1. While the linear probability model (LPM) is simple to estimate and intuitive to
interpret, it is fatally flawed as a method to deal with binary dependent variables. There
are several problems that we may encounter:
• There is nothing in the model to ensure that the fitted probabilities will lie between zero
and one.
• Even if we truncated the probabilities so that they took plausible values, this would still
result in too many observations for which the estimated probabilities are exactly zero
or one.
• It is simply not plausible to say that the probability of the event occurring is exactly zero
or exactly one.
• Since the dependent variable only takes one of two values, for given (fixed in repeated
samples) values of the explanatory variables, the disturbance term will also only take
on one of two values. Hence the error term cannot plausibly be assumed to be normally
distributed.
• Since the disturbances change systematically with the explanatory variables, they will
also be heteroscedastic.
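The first of these problems is easy to demonstrate numerically. The sketch below fits an LPM by OLS to a small made-up binary sample (all of the data are hypothetical, chosen only for illustration) and shows that the fitted "probabilities" fall below zero and above one:

```python
# Hypothetical binary data: y = 0/1, single regressor x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 0, 1, 1, 1]

# OLS slope and intercept for the LPM y = a + b*x + u, computed by hand.
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Fitted values: nothing constrains these to lie in [0, 1].
fitted = [a + b * x for x in xs]
print(fitted[0], fitted[-1])  # first fitted value is negative, last exceeds 1
```

On this sample the fitted value at the smallest x is about −0.14 and at the largest x about 1.14, neither of which is a valid probability.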
2. Compare and contrast the probit and logit specifications for binary choice variables.
2. Both the logit and probit model approaches are able to overcome the limitation of the LPM
that it can produce estimated probabilities that are negative or greater than one. They do this
by using a function that effectively transforms the regression model so that the fitted values
are bounded within the (0,1) interval. Visually, the fitted regression model will appear as an S-
shape rather than a straight line, as was the case for the LPM. The only difference between the
two approaches is that under the logit approach, the cumulative logistic function is used to
transform the model, so that the probabilities are bounded between zero and one. But with the
probit model, the cumulative normal distribution is used instead. For the majority of
applications, the logit and probit models will give very similar characterisations of the data
because the densities are very similar.
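The closeness of the two curves can be sketched numerically. The comparison below uses only the standard library; the rescaling of the logistic index by 1.6 (to put the two distributions on a comparable spread, since the logistic has the larger variance) is a conventional approximation and not something stated in the text:

```python
import math

def logistic_cdf(z):
    # Cumulative logistic function used by the logit model
    return 1.0 / (1.0 + math.exp(-z))

def normal_cdf(z):
    # Cumulative standard normal, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

zs = (-2.0, -1.0, 0.0, 1.0, 2.0)
pairs = [(normal_cdf(z), logistic_cdf(1.6 * z)) for z in zs]
for z, (pn, pl) in zip(zs, pairs):
    print(z, round(pn, 3), round(pl, 3))
```

Both functions are S-shaped and bounded within (0, 1), and after the rescaling they agree to within about 0.02 everywhere, which is why the two models usually give such similar fitted probabilities.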
3. (a) Describe the intuition behind the maximum likelihood estimation technique used for limited
dependent variable models.
3. (a) When maximum likelihood is used as a technique to estimate limited dependent variable
models, the general intuition is the same as for any other model: a log-likelihood function is
formed and then the parameter values are chosen so as to maximise it. The form of this LLF will
depend upon whether the logit or probit model is used; further technical details on the
estimation are given in the appendix to Chapter 11.
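As a minimal sketch of this intuition for the logit case, the log-likelihood can be written down and maximised directly. The data below are made up, and a crude grid search stands in for the iterative Newton-type optimisers that econometrics packages actually use:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical binary data
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 1, 0, 1, 1]

def llf(b1, b2):
    # Log-likelihood of a logit model: sum of y*ln(F) + (1-y)*ln(1-F)
    total = 0.0
    for x, y in zip(xs, ys):
        p = logistic(b1 + b2 * x)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

# Crude grid search over (b1, b2) standing in for a proper optimiser
grid = [i / 10 for i in range(-50, 51)]
best = max(((llf(b1, b2), b1, b2) for b1 in grid for b2 in grid),
           key=lambda t: t[0])
print(best)
```

Because the likelihood is a joint probability (so its log is negative), the maximised LLF is a negative number that is closer to zero the better the model fits, which is the fact the pseudo-R² measure in part (c) exploits.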
(b) Why do we need to exercise caution when interpreting the coefficients of a probit or logit model?
(b) It is tempting, but incorrect, to state that a 1-unit increase in x2i, for example, causes a β2
increase in the probability that the outcome corresponding to yi = 1 will be realised. This would
have been the correct interpretation for the linear probability model. But for logit and probit
models, this interpretation would be incorrect because the form of the function is not Pi = β1 +
β2x2i + ui, for example, but rather Pi = F(β1 + β2x2i), where F represents the (non-linear)
cumulative logistic or cumulative normal function. To obtain the required relationship between
changes in x2i and Pi, we would need to differentiate F with respect to x2i, and it turns out that
this derivative is β2f(β1 + β2x2i), where f is the density function corresponding to F. So in fact, a
1-unit increase in x2i will cause a β2f(β1 + β2x2i) increase in probability. Usually, these impacts
of incremental changes in an explanatory variable are evaluated with the explanatory variables
set to their mean values.
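As a sketch of this calculation for the logit case, the marginal effect is the slope coefficient multiplied by the logistic density evaluated at the fitted index; the coefficient values and the mean of the regressor below are purely hypothetical:

```python
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_pdf(z):
    # Derivative of the logistic CDF: F(z) * (1 - F(z))
    p = logistic_cdf(z)
    return p * (1.0 - p)

# Hypothetical estimated coefficients and sample mean of the regressor
b1, b2 = -2.0, 0.8
x_mean = 3.0

z = b1 + b2 * x_mean
marginal = b2 * logistic_pdf(z)  # dP/dx evaluated at the mean of x
print(round(marginal, 4))
```

Note that the marginal effect (about 0.19 here) is smaller than the raw coefficient (0.8), and unlike in the LPM it varies with the point at which it is evaluated.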
(c) How can we measure whether a logit model that we have estimated fits the data well or not?
(c) While it would be possible to calculate the values of the standard goodness of fit measures
such as RSS, R2 or adjusted R2 for limited dependent variable models, these cease to have any
real meaning. If calculated in the usual fashion, these will be misleading because the fitted
values from the model can take on any value between zero and one, but the actual values will
only be either 0 or 1.
The model has effectively made the correct prediction if the predicted probability for a
particular entity i is greater than the unconditional probability that y = 1, whereas R2 or
adjusted R2 will not give the model full credit for this. Two goodness of fit measures that are
commonly reported for limited dependent variable models are:
• The percentage of yi values correctly predicted, defined as 100 times the number of
observations predicted correctly divided by the total number of observations.
Obviously, the higher this number, the better the fit of the model. Although this
measure is intuitive and easy to calculate, Kennedy (2003) suggests that it is not ideal,
since it is possible that a naïve predictor could do better than any model if the sample
is unbalanced between 0 and 1. For example, suppose that yi =1 for 80% of the
observations. A simple rule that the prediction is always 1 is likely to outperform any
more complex model on this measure but is unlikely to be very useful.
• A measure known as ‘pseudo-R2’, defined as 1 − LLF/LLF0, where LLF is the
maximised value of the log-likelihood function for the logit or probit model and LLF0
is the value of the log-likelihood function for a restricted model where all of the slope
parameters are set to zero (i.e. the model contains only an intercept). Since the
likelihood is essentially a joint probability, its value must be between zero and one, and
therefore taking its logarithm to form the LLF must result in a negative number. Thus,
as the model fit improves, LLF will become less negative and therefore pseudo-R2 will
rise. This definition of pseudo-R2 is also known as McFadden’s R2.
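Both measures can be sketched in a few lines. The data and fitted coefficients below are hypothetical, and the classification rule uses the conventional 0.5 cut-off:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data and hypothetical fitted logit coefficients
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 1, 0, 1, 1]
b1, b2 = -2.0, 0.7

phat = [logistic(b1 + b2 * x) for x in xs]

# Percentage of observations correctly predicted (0.5 threshold)
n_correct = sum(1 for p, y in zip(phat, ys) if (p > 0.5) == (y == 1))
pct_correct = 100.0 * n_correct / len(ys)

# McFadden's pseudo-R^2: the restricted model contains only an
# intercept, so its fitted probability is the sample mean of y for
# every observation
ybar = sum(ys) / len(ys)
llf = sum(y * math.log(p) + (1 - y) * math.log(1 - p)
          for p, y in zip(phat, ys))
llf0 = sum(y * math.log(ybar) + (1 - y) * math.log(1 - ybar) for y in ys)
pseudo_r2 = 1.0 - llf / llf0

print(round(pct_correct, 1), round(pseudo_r2, 3))
```

Since both LLF and LLF0 are negative and the unrestricted fit cannot be worse than the restricted one, the pseudo-R² lies between zero and one, rising as the fit improves.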
(d) What is the difference, in terms of the model setup, in binary choice versus multiple choice
problems?