Summary midterm Business Analytics week 1 - week 3

WEEK 1 CHAPTERS

2.1 What Is Statistical Learning?

Input variables: e.g. advertising budgets. Typically denoted using the symbol X, with a subscript to
distinguish them: X1 the TV budget, X2 the radio budget, etc.
→ also called predictors, independent variables, features, or sometimes just variables

Output variable: e.g. sales. Typically denoted using the symbol Y
→ also called the response variable or dependent variable

We assume that there is some relationship between Y and X = (X1, X2, ..., Xp), which can be written
in the very general form:

Y = f(X) + ϵ

Here f is some fixed but unknown function of X1, ..., Xp, and ϵ is a random error term, which is
independent of X and has mean zero. f represents the systematic information that X provides about Y.

In essence, statistical learning refers to a set of approaches for estimating f.
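A minimal Python sketch (my own illustration, not from the book; the function f, the budget range, and the noise level are assumptions) of what the model Y = f(X) + ϵ looks like as data:

import numpy as np

rng = np.random.default_rng(0)

def f(x):                                # the fixed but unknown "true" relationship (assumed here)
    return 3.0 + 2.0 * x

x = rng.uniform(0, 10, size=200)         # predictor X, e.g. a TV advertising budget
eps = rng.normal(0.0, 1.5, size=200)     # random error term: mean zero, independent of X
y = f(x) + eps                           # response Y, e.g. sales

# Statistical learning: estimate f using only the observed (x, y) pairs.
print(x[:3], y[:3])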

2.1.1 Why Estimate f?

Prediction
To predict Y when the inputs X are readily available and the error term averages to zero:

Ŷ = f̂(X)

Here f̂ represents our estimate for f, and Ŷ represents the resulting prediction for Y. f̂ is often treated
as a black box: we are typically not concerned with its exact form, provided that it yields accurate
predictions for Y.

The accuracy of Ŷ depends on two quantities:
● Reducible error: f̂ will not be a perfect estimate for f, and this inaccuracy will introduce
some error. It is reducible because we can potentially improve the accuracy of f̂ by using the
most appropriate statistical learning technique.
● Irreducible error: Y is also a function of ϵ, which cannot be predicted using X. Therefore
this error also affects the accuracy of our predictions. No matter how well we estimate f, we
cannot reduce the error introduced by ϵ.
○ Why is the irreducible error larger than zero?
■ Unmeasured variables
■ Unmeasurable variation





E(Y - Ŷ)² represents the average, or expected value, of the squared difference between the predicted
and actual value of Y, and Var(ϵ) represents the variance associated with the error term. Together:

E(Y - Ŷ)² = [f(X) - f̂(X)]² + Var(ϵ)

where the first term is the reducible error and Var(ϵ) is the irreducible error.
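A minimal simulation sketch (my own, with an assumed true f and an assumed imperfect estimate f̂) of this decomposition: even with a huge sample, the mean squared prediction error cannot drop below Var(ϵ).

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.uniform(0, 10, size=n)
f_x = 3.0 + 2.0 * x                          # true f (assumed for the example)
y = f_x + rng.normal(0.0, 1.5, size=n)       # Var(eps) = 1.5**2 = 2.25

fhat_x = 3.5 + 1.9 * x                       # an imperfect estimate f-hat (assumed)
mse = np.mean((y - fhat_x) ** 2)             # estimates E(Y - Yhat)^2
reducible = np.mean((f_x - fhat_x) ** 2)     # the part a better f-hat could remove
print(mse, reducible + 1.5 ** 2)             # the two values should nearly coincide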

Inference
To understand the relationship between X and Y, i.e. to understand how Y changes as a function of
X1, ..., Xp. Now f cannot be treated as a black box, because we need to know its exact form.
● Which predictors are associated with the response?
● What is the relationship between the response and each predictor?
● Can the relationship between Y and each predictor be adequately summarized using a linear
equation, or is the relationship more complicated?

Linear models​ allow for relatively simple and interpretable inferences, but may not yield as accurate
predictions as some other approaches.

2.1.2 How do we estimate f?

Training data: observations that will be used to train our method how to estimate f.
Let xij represent the value of the jth predictor, or input, for observation i, where i = 1, 2, ..., n and
j = 1, 2, ..., p. Correspondingly, let yi represent the response variable for the ith observation. Then our
training data consist of {(x1, y1), (x2, y2), ..., (xn, yn)} where xi = (xi1, xi2, ..., xip)ᵀ.
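A small sketch (toy numbers, my own addition) of this notation in code: X holds the n × p predictor values xij and y holds the responses yi.

import numpy as np

n, p = 5, 3
X = np.arange(n * p, dtype=float).reshape(n, p)   # X[i, j] corresponds to x_ij
y = np.arange(n, dtype=float)                     # y[i] corresponds to y_i
training_data = list(zip(X, y))                   # {(x1, y1), ..., (xn, yn)}
print(training_data[0])                           # (x1, y1), with x1 = (x11, ..., x1p)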

Parametric methods
Involve a two-step model-based approach.
1. Make an assumption about the functional form/shape of f. An example assumption is the linear model:

   f(X) = β0 + β1X1 + β2X2 + ... + βpXp

   Only the p + 1 coefficients β0, β1, ..., βp have to be estimated.
2. Now we need a procedure that uses the training data to fit/train the model. We want to find values
   for the coefficients such that Y ≈ β0 + β1X1 + ... + βpXp.
   The most common approach is (ordinary) least squares (see the sketch below).
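A minimal sketch (synthetic data and coefficient values assumed by me) of both steps: assume the linear form, then estimate the p + 1 coefficients with ordinary least squares.

import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
X = rng.normal(size=(n, p))                       # training predictors
beta_true = np.array([1.0, 2.0, -1.0, 0.5])       # [beta0, beta1, beta2, beta3] (assumed)
y = beta_true[0] + X @ beta_true[1:] + rng.normal(0.0, 0.3, size=n)

X_design = np.column_stack([np.ones(n), X])       # prepend a column of ones for the intercept
beta_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)   # ordinary least squares fit
print(beta_hat)                                   # only the p + 1 coefficients are estimated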

The potential disadvantage of a parametric approach is that the model we choose will usually not
match the true unknown form of f. We can try to address this problem by choosing flexible models
that can fit many different possible functional forms for f.
→ Flexible models can lead to overfitting the data. This means they follow the errors or noise too
closely.





Non-parametric methods
Do not make explicit assumptions about the functional form of f. They seek an estimate of f that gets
as close to the data points as possible without being too rough or wiggly.

Advantage: they have the potential to accurately fit a wider range of possible shapes for f, avoiding the
danger of the estimate of f being very different from the true f.
Disadvantage: because they do not reduce the problem of estimating f to a small number of parameters,
a very large number of observations is required in order to obtain an accurate estimate for f.

A thin-plate spline can be used to estimate f. It does not impose any pre-specified model on f; instead it
attempts to produce an estimate for f that is as close as possible to the observed data, subject to the fit
being smooth.
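A minimal sketch (my own toy data; the SciPy routine is one possible implementation) of a thin-plate spline fit, where the smoothing parameter keeps the estimate from becoming too rough or wiggly:

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(200, 2))             # two predictors (assumed toy data)
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0.0, 0.2, size=200)

# No pre-specified functional form for f; smoothing > 0 asks for a smooth fit.
tps = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1.0)
print(tps(np.array([[5.0, 2.0]])))                # estimate of f at a new point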

2.1.3 The trade-off between prediction accuracy and model interpretability
Why would we ever choose a more restrictive method (e.g. linear regression) instead of a very flexible
approach (e.g. a thin-plate spline)? → When we are mainly interested in inference, because restrictive
models are much more interpretable.
A restrictive approach is more interpretable because it is less complex; a flexible approach, by contrast,
can be harder to interpret.




Trade-off between flexibility and interpretability of different statistical learning methods:
Lasso: ​relies upon the linear model but uses an alternative fitting procedure for estimating the
coefficients. More restrictive and therefore less flexible and more interpretable.
Least squares linear regression: ​relatively inflexible but quite interpretable.
Generalized additive models (GAMs): extend the linear model to allow for certain non-linear
relationships. More flexible than linear regression and less interpretable.
Bagging, boosting, and support vector machines:​ highly flexible approaches that are harder to
interpret.
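A minimal sketch (using scikit-learn and toy data of my own; the alpha value is assumed) contrasting least squares with the lasso's more restrictive fitting procedure:

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.5, size=100)   # only 2 predictors matter

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)        # alpha controls how restrictive the fit is
print(ols.coef_)                          # all five coefficients are non-zero
print(lasso.coef_)                        # irrelevant coefficients are shrunk toward zero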

2.1.4 Supervised versus Unsupervised Learning

Supervised learning: for each observation of the predictor measurements there is an associated
response measurement.
Unsupervised learning: when we lack a response variable (Y) that can supervise our analysis.
E.g. cluster analysis: the goal is to ascertain whether the observations fall into relatively
distinct groups.
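A minimal sketch (scikit-learn, my own toy data) of cluster analysis as unsupervised learning: only predictor measurements, no response Y.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Two fairly distinct groups of observations, predictors only (no Y).
X = np.vstack([rng.normal(0, 1, size=(50, 2)),
               rng.normal(5, 1, size=(50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels[:5], labels[-5:])            # cluster assignments for the first and last observations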

