Complete WEEK2 note: Machine Learning & Learning Algorithms (BM05BAM)
Class notes, 2023/2024
ISLR : Chapter 5 : Resampling Methods
Resampling methods are an indispensable tool in modern statistics.
They involve repeatedly drawing samples from a training set and refitting a model of
interest on each sample in order to obtain additional information about the fitted model.

Two of the most commonly used resampling methods are cross-validation and the bootstrap.

Cross-validation can be used for model assessment (the process of evaluating a model's
performance) and for model selection (the process of selecting the proper level of
flexibility for a model).

The bootstrap is most commonly used to provide a measure of accuracy of a parameter
estimate or of a given statistical learning method.

5.1 Cross-Validation
The test error is the average error that results from using a statistical learning method to
predict the response on a new observation. However, a designated test set is usually not
available.

Cross-validation addresses this issue by estimating the test error: a subset of the
training observations is held out of the fitting process, and the method is then applied to
the held-out observations.

5.1.1 The validation set approach
The validation set approach is a simple strategy that involves randomly dividing the data
set into a training set and a validation (hold-out) set.




Process
1. A set of n observations is randomly split into a training set and a validation set.
2. The model is fit on the training set, and the fitted model is used to predict the
responses for the observations in the validation set.
3. The resulting validation set error rate provides an estimate of the test error rate
(see the sketch below).
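A minimal sketch of this process in Python, assuming a synthetic regression data set and scikit-learn (the data, the 50/50 split ratio, and the linear model are illustrative assumptions, not part of the note):

```python
# Validation set approach: one random split, one fit, one error estimate.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))               # n = 100 synthetic observations
y = 3 * X[:, 0] + rng.normal(size=100)

# 1. Randomly split the n observations into training and validation sets
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# 2. Fit on the training set; predict the responses in the validation set
model = LinearRegression().fit(X_tr, y_tr)
y_pred = model.predict(X_val)

# 3. The validation set MSE serves as the estimate of the test error
print(mean_squared_error(y_val, y_pred))
```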

Drawbacks
1. The validation estimate of the test error rate can be highly variable, depending on
precisely which observations end up in the validation set. This limits the
conclusions that can be drawn for model selection.
a. The split is random, not sequential, so different splits produce
inconsistent estimates.
2. Only a subset of the observations is used to fit the model. Since statistical
methods tend to perform worse when trained on fewer observations, the validation
set error rate may tend to overestimate the test error rate for the model fit on the
entire data set.

Cross-validation (the two approaches below) addresses these two issues of high variance
and overestimation of the test error rate.

5.1.2 Leave-One-Out Cross Validation (LOOCV)
This method also splits the set of observations into two parts. However, instead of
creating two subsets of comparable size, a single observation is used for validation
and the remaining n-1 observations are used for training.




Process
1. A set of n data points is repeatedly split into a training set of n-1 observations
and a validation set consisting of a single observation.
2. Fit the model on the n-1 training observations and test it on the single held-out
observation.
3. The test error is estimated by averaging the n resulting validation errors (i.e.
MSE1, MSE2, ..., MSEn).
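In symbols, step 3 is the standard ISLR formula

$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n} \mathrm{MSE}_i, \qquad \mathrm{MSE}_i = (y_i - \hat{y}_i)^2,$$

where $\hat{y}_i$ is the prediction for observation $i$ from the model fit on the other n-1 observations.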



Advantages
1. Less bias than the validation set approach, as the statistical learning method is
repeatedly fit on n-1 training observations, almost as many as in the entire
data set.
a. It therefore does not overestimate the test error as much as the validation set approach does.
2. Performing LOOCV multiple times always yields the same results: there is no randomness
in the training/validation set splits.
Disadvantage
1. Potentially expensive to implement, since the model must be refit n times: time consuming.
a. With least squares linear or polynomial regression, a shortcut reduces the
cost of LOOCV to that of a single model fit, using the following formula. The
shortcut does not hold in general, so LOOCV can be too expensive for other methods.
$$\mathrm{CV}_{(n)} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{1 - h_i}\right)^2$$
$h_i$ is the leverage; its value lies between 1/n and 1, and it reflects the amount
that an observation influences its own fit.
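A small numerical check of this identity, assuming simple least squares on synthetic data (the data and variable names are illustrative, not from the note):

```python
# LOOCV for least squares: leverage shortcut vs. explicit n refits.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# One least-squares fit: fitted values y_hat and leverages h_i
H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
y_hat = H @ y
h = np.diag(H)                          # leverages, each between 1/n and 1

# Shortcut: a single fit gives the exact LOOCV estimate
cv_shortcut = np.mean(((y - y_hat) / (1 - h)) ** 2)

# Brute force: refit n times, leaving out one observation each time
errors = []
for i in range(n):
    keep = np.arange(n) != i
    beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    errors.append((y[i] - X[i] @ beta) ** 2)
cv_brute = np.mean(errors)

print(cv_shortcut, cv_brute)  # identical up to floating-point error
```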



5.1.3 K-Fold Cross-Validation
K-fold cross-validation involves randomly dividing the set of observations into K folds of
approximately equal size.

Process
1. Divide the entire set of observations into K non-overlapping groups/folds.
a. Typically, one performs k-fold CV using k = 5 or k = 10.
2. The first fold is treated as a validation set, and the method is fit on the remaining
K-1 folds.
3. The validation error (e.g. the MSE) is computed on the observations in the held-out fold.
4. This is repeated K times, each time treating a different fold as the
validation set.
5. The k-fold CV estimate is computed by averaging these K values.
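In symbols, with $\mathrm{MSE}_i$ now computed on the observations in the i-th held-out fold:

$$\mathrm{CV}_{(k)} = \frac{1}{k}\sum_{i=1}^{k} \mathrm{MSE}_i$$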




Example: K-fold cross-validation for k-nearest neighbors (see the sketch after this list)
1. Choose a grid for k: these are the candidate values of k (number of neighbors) for
the k-NN algorithm that you will evaluate.
2. Create K CV folds: divide the training set into K distinct subsets.
3. Iterate over each value of k: repeat the following for each candidate value of k.
4. Iterate over each fold j = 1, ..., K:
A. Train the model: use all folds except the j-th one to train the k-NN
model.
B. Predict the j-th fold: predict the outcomes for the j-th fold using the
trained model.
C. Calculate test metrics: measure the model's performance (e.g., accuracy) in
predicting the j-th fold.
5. Combine performance metrics: calculate the CV error for this value of k over all K
folds.
6. Finalize model: select the value of k with the best performance in terms of CV
error.
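A minimal sketch of this procedure, assuming a synthetic classification data set and scikit-learn (the candidate grid, K = 5, and the accuracy metric are illustrative choices, not part of the note):

```python
# Choosing the number of neighbors k for k-NN via K-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

candidate_ks = [1, 3, 5, 7, 9]                        # step 1: grid for k
kf = KFold(n_splits=5, shuffle=True, random_state=0)  # step 2: K = 5 CV folds

cv_errors = {}
for k in candidate_ks:                                # step 3: loop over the grid
    fold_errors = []
    for train_idx, val_idx in kf.split(X):            # step 4: loop over folds
        model = KNeighborsClassifier(n_neighbors=k)
        model.fit(X[train_idx], y[train_idx])         # 4A: train on K-1 folds
        accuracy = model.score(X[val_idx], y[val_idx])  # 4B/4C: assess held-out fold
        fold_errors.append(1 - accuracy)              # misclassification error
    cv_errors[k] = np.mean(fold_errors)               # step 5: average over the K folds

best_k = min(cv_errors, key=cv_errors.get)            # step 6: pick the best k
print(cv_errors, "best k:", best_k)
```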

Advantages
- Reduced computational cost, which enables k-fold CV to be applied to almost any
statistical learning method, including computationally intensive fitting procedures.
- Much lower variability in the CV estimates arising from how the observations are
divided into the k folds (compared with the validation set approach).
- Each fit still uses more observations for training than the validation set approach
does, and thus gives a less biased estimate.

Purpose
When we employ CV, we are interested in either the actual estimate of the test error
(model assessment) or only the location of the minimum point in the estimated test error
curve, which identifies the best level of flexibility (model selection).
