Complete WEEK2 note: Machine Learning & Learning Algorithms (BM05BAM)
Lecture notes, 2023/2024
ISLR Chapter 5: Resampling Methods
Resampling methods are an indispensable tool in modern statistics.
They involve repeatedly drawing samples from a training set and refitting a model of
interest on each sample in order to obtain additional information about the fitted model.

Two of the most commonly used resampling methods are cross-validation and the bootstrap.

Cross-validation is used both for model assessment (evaluating a model's performance) and
for model selection (choosing the proper level of flexibility for a model).

The bootstrap is most commonly used to provide a measure of accuracy of a parameter
estimate or of a given statistical learning method.

5.1 Cross-Validation
The test error is the average error that results from using a statistical learning method to
predict the response on a new observation. In practice, however, a designated test set is
rarely available.

Cross-validation addresses this by estimating the test error: a subset of the training
observations is held out from the fitting process, and the method is then applied to those
held-out observations.

5.1.1 The validation set approach
Validation set approach is a simple strategy that involves randomly dividing the data set
into training and validation/hold-out set.




Process
1. A set of n observations is randomly split into a training set and a validation set.
2. The model is fit on the training set and then used to predict the responses for the
observations in the validation set.
3. The resulting validation set error rate provides an estimate of the test error rate (see
the sketch below).
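A minimal sketch of these three steps using scikit-learn; the synthetic data set and the
linear model are illustrative assumptions, not part of the notes:

```python
# Validation set approach: one random split, one fit, one error estimate.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical data: 500 observations, 5 features.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# 1. Randomly split the n observations into training and validation halves.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# 2. Fit on the training set, predict the validation set.
model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_val)

# 3. The validation MSE is the estimate of the test error.
print("validation MSE:", mean_squared_error(y_val, y_pred))
```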

Drawbacks
1. High variance in the validation estimate of the test error rate: the estimate depends
strongly on precisely which observations end up in the validation set, which limits the
conclusions that can be drawn for model selection.
a. The split is random, so repeating it yields inconsistent estimates (demonstrated
in the sketch below).
2. Only a subset of the observations is used to fit the model. Since statistical methods
tend to perform worse when trained on fewer observations, the validation set error
tends to overestimate the test error rate of a model fit on the entire data set.
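A short sketch of drawback 1, repeating the random split with different seeds to show how
much the estimate moves (again on hypothetical synthetic data):

```python
# The validation estimate of the test error varies with the random split.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

for seed in range(5):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=seed)
    mse = mean_squared_error(y_val, LinearRegression().fit(X_tr, y_tr).predict(X_val))
    print(f"split {seed}: validation MSE = {mse:.1f}")   # differs from split to split
```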

Cross-validation, in the two forms below, addresses both of these issues: the high variance
and the overestimation of the test error rate.

5.1.2 Leave-One-Out Cross Validation (LOOCV)
LOOCV also splits the set of observations into two parts. However, instead of creating two
subsets of comparable size, a single observation is used for validation and the remaining
n − 1 observations form the training set.




Process
1. A set of n data points is repeatedly split into a training set of n − 1 observations and
a validation set containing the single remaining observation.
2. Fit on the training data and test on the one held-out observation.
3. The test error is estimated by averaging the n resulting validation errors (i.e.
MSE_1, MSE_2, …, MSE_n); see the sketch below.
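A minimal LOOCV sketch in scikit-learn, under the same illustrative assumptions as before:

```python
# LOOCV: n fits, each tested on the single held-out observation.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = make_regression(n_samples=100, n_features=3, noise=5.0, random_state=0)

# Each fold's score is the squared error on one observation (MSE_i);
# the LOOCV estimate is their average over all n folds.
scores = cross_val_score(LinearRegression(), X, y,
                         cv=LeaveOneOut(), scoring="neg_mean_squared_error")
print("LOOCV MSE estimate:", -scores.mean())
```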



Advantages
1. Less bias than the validation set approach, because the statistical learning method is
repeatedly fit on n − 1 observations, almost as many as the entire data set.
a. It therefore does not overestimate the test error as much as the validation set
approach does.
2. Performing LOOCV multiple times always yields the same result: there is no
randomness in the training/validation splits.
Disadvantage
1. Potentially expensive to implement: it requires fitting the model n times, which can be
very time consuming.
a. With least squares linear or polynomial regression, a shortcut makes the cost of
LOOCV the same as that of a single model fit:

$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{1 - h_i}\right)^2$$

where $\hat{y}_i$ is the $i$-th fitted value from the full least squares fit and $h_i$ is the
leverage of observation $i$. No comparable shortcut exists for other methods, for
which LOOCV remains expensive.

The leverage $h_i$ lies between 1/n and 1 and reflects the amount that an observation
influences its own fit. The sketch below checks the shortcut against the explicit n-fit loop.
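A numerical check of the identity above, on hypothetical simulated data; the explicit
leave-one-out loop and the leverage shortcut agree up to floating-point error:

```python
# Verify the least-squares LOOCV shortcut against the explicit loop.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # single least-squares fit
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)                                # leverages, each in [1/n, 1]

cv_shortcut = np.mean((resid / (1.0 - h)) ** 2)

# Explicit LOOCV: n separate fits, each leaving one observation out.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    errs.append((y[i] - X[i] @ b) ** 2)
cv_explicit = np.mean(errs)

print(cv_shortcut, cv_explicit)               # identical up to rounding
```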



5.1.3 K-Fold Cross-Validation
K-fold cross-validation involves randomly dividing the set of observations into k folds of
approximately equal size.

Process
1. Randomly divide the set of observations into k non-overlapping groups (folds) of
approximately equal size.
a. Typically, one performs k-fold CV using k = 5 or k = 10.
2. The first fold is treated as a validation set, and the method is fit on the remaining
k − 1 folds.
3. The validation error (e.g. the MSE) is computed on the observations in the held-out fold.
4. This is repeated k times, each time treating a different fold as the validation set.
5. The k-fold CV estimate is computed by averaging these k values (see the sketch below).
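A minimal 10-fold CV sketch, same illustrative setup as the earlier examples:

```python
# k-fold CV with k = 10: ten fits, each validated on one held-out fold.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)

kf = KFold(n_splits=10, shuffle=True, random_state=0)   # 10 non-overlapping folds
scores = cross_val_score(LinearRegression(), X, y,
                         cv=kf, scoring="neg_mean_squared_error")
print("10-fold CV MSE estimate:", -scores.mean())       # average over the 10 folds
```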




Example: K-fold cross-validation for k-nearest neighbors
1. Choose a grid for k: these are the candidate values of k (the number of neighbors) for
the k-NN algorithm that you will evaluate.
2. Create K CV folds: divide the training set into K distinct subsets.
3. Iterate over each candidate value of k, repeating the following:
4. Iterate over each fold j = 1, …, K:
A. Train the model: use all folds except the j-th to train the k-NN model.
B. Predict the j-th fold: predict the outcomes for the held-out fold using the
trained model.
C. Calculate test metrics: measure the model's performance (e.g., accuracy) on
the held-out fold.
5. Combine performance metrics: average over all K folds to obtain the CV error for that
value of k.
6. Finalize the model: select the value of k with the best performance in terms of CV
error (see the sketch below).
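A sketch of this grid search using scikit-learn's GridSearchCV, which runs the two nested
loops (over candidate k and over the K folds) internally; the data set and the grid are
illustrative assumptions:

```python
# Choose the number of neighbors k for k-NN by K-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

param_grid = {"n_neighbors": [1, 3, 5, 7, 9, 15, 25]}    # step 1: grid of candidate k
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),  # step 2: K = 5 folds
    scoring="accuracy",                                  # steps 3-5 run internally
)
search.fit(X, y)
print("best k:", search.best_params_, "CV accuracy:", search.best_score_)  # step 6
```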

Advantages
- Reduced computational cost (k fits instead of n), which allows it to be applied to
almost any statistical learning method, including computationally intensive fitting
procedures.
- Much lower variability in the CV estimate than the validation set approach; some
variability remains from how the observations are divided into the k folds, but far
less than that of a single random split.
- Each training set still contains many more observations than the validation set, so
the bias stays low.

Purpose
When we employ CV, we are interested in either the actual estimate of the test error itself,
or only the location of the minimum point in the estimated test error curve; the latter is
enough for model selection, since CV can identify the best level of flexibility even when its
estimate of the true test error is off.
