Summary Endterm Business Analytics week 5 - week 7

WEEK 5 CHAPTERS
In this chapter, we study approaches for predicting qualitative responses, a process that is known as classification.

4.1 An Overview of Classification

Classification problem example:
A person arrives at the emergency room with a set of symptoms that could possibly be
attributed to one of three medical conditions. Which of the three conditions does the individual have?

Just as in the regression setting, in the classification setting we have a set of training observations (x1, y1), ..., (xn, yn) that we can use to build a classifier. We want our classifier to perform well not only on the training data, but also on test observations that were not used to train the classifier.
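
A quick sketch of this idea in Python (synthetic data, not from the course materials): fit a classifier on a training set, then judge it on held-out test observations.

```python
# A minimal sketch (synthetic data) of the training/test idea: fit a classifier on
# training observations and judge it on observations not used to train it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
# Hypothetical binary labels that depend (noisily) on both predictors
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

print("training accuracy:", clf.score(X_train, y_train))
print("test accuracy:    ", clf.score(X_test, y_test))
```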

4.2 Why Not Linear Regression?

Suppose we have three possible diagnoses: stroke, drug overdose, and epileptic seizure. We could consider encoding them as a quantitative response variable, for example

Y = 1 if stroke, Y = 2 if drug overdose, Y = 3 if epileptic seizure,

but we could just as well choose another ordering, such as epileptic seizure = 1, stroke = 2, drug overdose = 3.

Linear regression would not be appropriate in this case, because the response is qualitative. Any such coding implies an ordering on the outcomes and insists that the gaps between them are equal: the difference between, say, stroke and epileptic seizure is treated as the same as the difference between drug overdose and epileptic seizure, which has no medical justification. Each of these codings would produce fundamentally different linear models that would ultimately lead to different sets of predictions on test observations.

Only if the response variable's values took on a natural ordering, such as mild, moderate, and severe, and we felt the gap between mild and moderate was similar to the gap between moderate and severe, would a 1, 2, 3 coding be reasonable.

For a binary (two-level) qualitative response, the situation is better. For instance, perhaps there are only two possibilities for the patient's medical condition: stroke and drug overdose. We could then use the dummy variable approach where stroke = 0 and drug overdose = 1.
For a binary response with a 0/1 coding as above, regression by least squares does make sense; it can be shown that the Xβ̂ obtained using linear regression is in fact an estimate of Pr(drug overdose | X) in this special case. However, if we use linear regression, some of our estimates might be outside the [0, 1] interval (e.g. below 0), making them hard to interpret as probabilities! Nevertheless, the predictions provide an ordering and can be interpreted as crude probability estimates. Curiously, it turns out that the classifications we get if we use linear regression to predict a binary response are the same as those from linear discriminant analysis (LDA).
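
The following minimal sketch (Python, synthetic data) illustrates the problem mentioned above: least squares on a 0/1 response can produce fitted values outside [0, 1].

```python
# A minimal sketch (synthetic data) showing that least squares on a 0/1 response can
# produce fitted values outside [0, 1], which are hard to read as probabilities.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
# Hypothetical coding: 1 = drug overdose, 0 = stroke, more likely for larger x
y = (x + rng.normal(scale=1.0, size=x.size) > 0).astype(float)

X = np.column_stack([np.ones_like(x), x])         # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares estimates
fitted = X @ beta_hat

print("smallest fitted value:", fitted.min())     # typically below 0
print("largest fitted value: ", fitted.max())     # can exceed 1
```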




4.3 Logistic Regression
4.3.1 The Logistic Regression Model

How should we model the relationship between p(X) = Pr(Y = 1 | X) and X? (For convenience we are using the generic 0/1 coding for the response.)

To avoid the problem that p(X) falls below 0 or rises above 1, we must model p(X) using a function that gives outputs between 0 and 1 for all values of X. Many functions meet this description. In logistic regression, we use the logistic function

p(X) = e^(β0 + β1X) / (1 + e^(β0 + β1X))

To fit the model, we use a method called maximum likelihood, which we discuss in the next section. The logistic function will always produce an S-shaped curve of this form, and so regardless of the value of X, we will obtain a sensible prediction.
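
A minimal sketch of the logistic function with purely illustrative coefficient values (b0 = −1, b1 = 2 are arbitrary choices, not estimates from any data set), showing that its output always stays strictly between 0 and 1:

```python
# A minimal sketch of the logistic function with illustrative coefficients.
import numpy as np

def logistic(x, b0=-1.0, b1=2.0):
    """p(X) = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)); output is strictly between 0 and 1."""
    return np.exp(b0 + b1 * x) / (1.0 + np.exp(b0 + b1 * x))

x = np.linspace(-5.0, 5.0, 11)
p = logistic(x)
print(np.round(p, 4))                    # values trace out an S-shaped curve
assert np.all((p > 0.0) & (p < 1.0))     # never leaves the (0, 1) interval
```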

After a bit of manipulation of the above formula, we find that

p(X) / [1 − p(X)] = e^(β0 + β1X)

The quantity p(X) / [1 − p(X)] is called the odds, and can take on any value between 0 and ∞. Values of the odds close to 0 and ∞ indicate very low and very high probabilities of default, respectively. By taking the logarithm of both sides of the above formula, we arrive at

log( p(X) / [1 − p(X)] ) = β0 + β1X

The left-hand side is called the log-odds or logit. In a logistic regression model, increasing X by one unit changes the log-odds by β1, or equivalently it multiplies the odds by e^β1. The amount that p(X) changes due to a one-unit change in X depends on the current value of X. But regardless of the value of X, if β1 is positive then increasing X is associated with increasing p(X), and if β1 is negative then increasing X is associated with decreasing p(X).
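
The sketch below (illustrative coefficients, not fitted values) checks numerically that a one-unit increase in X multiplies the odds by e^β1 and adds β1 to the log-odds:

```python
# A minimal sketch, with illustrative (not estimated) coefficients, verifying that a
# one-unit increase in X multiplies the odds by e^beta1 and adds beta1 to the log-odds.
import numpy as np

beta0, beta1 = -1.0, 0.5   # hypothetical coefficients

def p(x):
    return np.exp(beta0 + beta1 * x) / (1.0 + np.exp(beta0 + beta1 * x))

def odds(x):
    return p(x) / (1.0 - p(x))

x0 = 2.0
print(odds(x0 + 1) / odds(x0), np.exp(beta1))    # both equal e^beta1
print(np.log(odds(x0)), beta0 + beta1 * x0)      # log-odds equals beta0 + beta1*x0
```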

4.3.2 Estimating the Regression Coefficients

We could use (non-linear) least squares to fit the model, but the more general method of maximum likelihood is preferred, since it has better statistical properties.
The basic intuition behind using maximum likelihood to fit a logistic regression model is as follows: we seek estimates for β0 and β1 such that the predicted probability p̂(xi) of default for each individual corresponds as closely as possible to the individual's observed default status. In other words, we try to find β̂0 and β̂1 such that plugging these estimates into the model for p(X) yields a number close to one for all individuals who defaulted, and a number close to zero for all individuals who did not. This intuition can be formalized using a mathematical equation called a likelihood function:

ℓ(β0, β1) = Π_{i: yi = 1} p(xi) × Π_{i′: yi′ = 0} (1 − p(xi′))

The estimates β̂0 and β̂1 are chosen to maximize this likelihood function.
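
A minimal sketch of the maximum likelihood idea, assuming simulated data and a generic numerical optimizer (scipy) rather than any particular statistics package: the negative log-likelihood is minimized over (β0, β1).

```python
# A minimal sketch of maximum likelihood estimation for logistic regression on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(size=200)
true_b0, true_b1 = -0.5, 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x))))  # simulated 0/1 response

def neg_log_likelihood(beta):
    b0, b1 = beta
    eta = b0 + b1 * x
    # log-likelihood: sum_i [ y_i * eta_i - log(1 + e^(eta_i)) ]
    return -np.sum(y * eta - np.log1p(np.exp(eta)))

fit = minimize(neg_log_likelihood, x0=np.zeros(2))
print("maximum likelihood estimates (b0, b1):", fit.x)   # should be close to (-0.5, 1.5)
```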

4.3.3 Making Predictions

Once the coefficients have been estimated, it is a simple matter to compute the probability of Y = 1 for any given X by plugging the estimates into

p̂(X) = e^(β̂0 + β̂1X) / (1 + e^(β̂0 + β̂1X))

In the textbook's example, the probability computed this way is less than 1%.

One can use qualitative predictors with the logistic regression model using the dummy variable
approach explained in 4.2.
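
A small sketch of making such a prediction; the coefficient values below are hypothetical stand-ins (chosen only so that the predicted probability comes out below 1%), not the textbook's actual estimates.

```python
# A minimal sketch of a prediction once coefficient estimates are in hand.
import numpy as np

b0_hat, b1_hat = -10.65, 0.0055   # hypothetical intercept and slope for a 'balance' predictor

def predict_prob(x):
    """p-hat(X) = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))"""
    return np.exp(b0_hat + b1_hat * x) / (1.0 + np.exp(b0_hat + b1_hat * x))

print(predict_prob(1000))         # about 0.006, i.e. less than 1%

# A qualitative predictor is handled exactly the same way, with x replaced by a 0/1
# dummy variable as described in section 4.2.
```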

4.3.4 Multiple Logistic Regression

We now consider the problem of predicting a binary response using multiple predictors. By analogy with the extension from simple to multiple linear regression in Chapter 3, we can generalize the simple logistic regression formula as follows:

log( p(X) / [1 − p(X)] ) = β0 + β1X1 + ··· + βpXp

where X = (X1, ..., Xp) are p predictors. This equation can be rewritten as

p(X) = e^(β0 + β1X1 + ··· + βpXp) / (1 + e^(β0 + β1X1 + ··· + βpXp))

Again, we use the maximum likelihood method to estimate the coefficients.

Confounding
A phenomenon where the results obtained using one predictor may be quite different from those
obtained using multiple predictors, especially when there is correlation among the predictors.

Table 4.2 uses only student status as a predictor; Table 4.3 uses two predictors, credit card balance and student status. In the textbook's example the coefficient for student status is positive on its own but negative once balance is included, because students tend to carry higher balances, and at any given balance they are less likely to default. The sketch below illustrates the same pattern with synthetic data.
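
A minimal sketch of confounding, assuming synthetic data and hypothetical 'balance' and 'student' variables: the student coefficient flips sign depending on whether balance is in the model.

```python
# A minimal sketch of confounding with synthetic data and hypothetical variable names:
# the 'student' coefficient is positive on its own but negative once 'balance' is included,
# because students in this simulation carry higher balances.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
student = rng.integers(0, 2, size=n)                       # 0/1 dummy variable
balance = rng.normal(800 + 400 * student, 300, size=n)     # students carry higher balances
eta = -10.0 + 0.01 * balance - 0.8 * student               # at fixed balance, students are safer
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# Large C approximates plain (unregularized) maximum likelihood fits
single = LogisticRegression(C=1e6, max_iter=10000).fit(student.reshape(-1, 1), y)
multi = LogisticRegression(C=1e6, max_iter=10000).fit(np.column_stack([balance, student]), y)

print("student coefficient alone:       ", single.coef_[0][0])   # positive
print("student coefficient with balance:", multi.coef_[0][1])    # negative
```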



