Complete WEEK2 note: Machine Learning & Learning Algorithms (BM05BAM)
Lecture notes, 2023/2024
ISLR: Chapter 5: Resampling Methods
Resampling methods are an indispensable tool in modern statistics.
They involve repeatedly drawing samples from a training set and refitting a model of
interest on each sample in order to obtain additional information about the fitted model.

Two of the most commonly used resampling methods are cross-validation and the bootstrap.

Cross-validation is used both to evaluate a model's performance (model assessment) and to select the proper level of flexibility for a model (model selection).

The bootstrap is most commonly used to provide a measure of accuracy of a parameter
estimate or of a statistical learning method.
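As a minimal sketch of the bootstrap idea (the NumPy code and variable names are mine, not from the notes), the standard error of a sample mean can be estimated by repeatedly resampling the data with replacement:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10, scale=2, size=100)  # hypothetical sample

B = 1000  # number of bootstrap replicates
boot_means = np.empty(B)
for b in range(B):
    # draw n observations with replacement from the original sample
    resample = rng.choice(data, size=data.size, replace=True)
    boot_means[b] = resample.mean()

# the spread of the bootstrap estimates approximates the
# standard error of the sample mean
print(boot_means.std(ddof=1))
```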

5.1: Cross-validation
The test error is the average error that results from using a statistical learning method to
predict the response on a new observation. However, a designated test set is often
unavailable, so the test error must be estimated.

Cross-validation addresses this issue by estimating the test error: a subset of the
training observations is held out from the fitting process, and the method is then applied
to those held-out observations.

5.1.1 The validation set approach
The validation set approach is a simple strategy that involves randomly dividing the data
set into a training set and a validation (hold-out) set.




Process
1. A set of n observations is randomly split into a training set and a validation set.
2. The model is fit on the training set, and the fitted model is used to predict the
responses for the observations in the validation set.
3. The resulting validation set error rate provides an estimate of the test error rate
(see the sketch below).
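A minimal sketch of this process (scikit-learn and the simulated data are my own choices, not part of the notes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))       # hypothetical predictor
y = 2.0 * X.ravel() + rng.normal(size=200)  # hypothetical response

# 1. randomly split the n observations into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# 2. fit on the training set, predict the validation set
model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_val)

# 3. the validation MSE serves as the estimate of the test error
print(mean_squared_error(y_val, y_pred))
```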

Drawbacks
1. The validation estimate of the test error rate can be highly variable, depending on
which observations happen to fall in the validation set. This limits the conclusions
that can be drawn for model selection.
a. The split is random, so repeating it produces inconsistent estimates.
2. Only a subset of the observations is used to fit the model. Because statistical
methods tend to perform worse when trained on fewer observations, the validation
set error rate may overestimate the test error rate for a model fit on the entire
data set.

Two cross-validation approaches address these issues of high variance and
overestimation of the test error rate.

5.1.2 Leave-One-Out Cross Validation (LOOCV)
LOOCV also splits the set of observations into two parts. However, instead of
creating two subsets of comparable size, a single observation is used for validation
and the remaining n-1 observations are used for training.




Process
1. A set of n data points is repeatedly split into a training set of n-1 observations
and a validation set containing the single remaining observation.
2. The model is fit on the training data and tested on the single held-out observation.
3. The test error is estimated by averaging the n resulting validation errors (i.e.
MSE_1, MSE_2, …, MSE_n); a sketch follows below.
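A minimal LOOCV sketch (scikit-learn's LeaveOneOut is my choice of tooling; the notes do not name a library):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(50, 1))       # hypothetical predictor
y = 2.0 * X.ravel() + rng.normal(size=50)  # hypothetical response

mse = []
for train_idx, val_idx in LeaveOneOut().split(X):
    # fit on n-1 observations, predict the single held-out one
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    mse.append((y[val_idx][0] - pred[0]) ** 2)

# CV(n): the average of the n squared validation errors
print(np.mean(mse))
```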



Advantages
1. Less bias than the validation set approach, because the statistical learning
method is repeatedly fit on n-1 training observations, almost as many as the
entire data set.
a. It therefore does not overestimate the test error as much as the validation
set approach does.
2. Performing LOOCV multiple times always yields the same results: there is no
randomness in the training/validation splits.
Disadvantage
1. LOOCV has the potential to be expensive to implement: fitting the model n times
is time consuming.
a. With least squares linear or polynomial regression, a shortcut makes the
cost of LOOCV the same as that of a single model fit, using the formula
below. This shortcut does not hold in general, so LOOCV remains
expensive for other methods (a numerical check follows below).

$$CV_{(n)} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{1 - h_i}\right)^2$$

where $h_i$ is the leverage, which lies between 1/n and 1 and reflects the amount
that an observation influences its own fit.
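As a quick numerical check of the shortcut (a sketch of mine, not from the notes; the data and variable names are hypothetical), the leverage-based formula should match brute-force LOOCV for least squares regression:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = rng.uniform(0, 10, size=n)
y = 1.5 * x + rng.normal(size=n)

# design matrix with intercept; hat matrix H = X (X'X)^{-1} X'
X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)  # leverages, each between 1/n and 1

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# shortcut: CV(n) = mean of (residual / (1 - leverage))^2, one model fit
cv_shortcut = np.mean((resid / (1 - h)) ** 2)

# brute force: refit n times, leaving out one observation each time
errs = []
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    errs.append((y[i] - X[i] @ b) ** 2)
cv_brute = np.mean(errs)

print(cv_shortcut, cv_brute)  # the two values agree
```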



5.1.3 K-Fold Cross-Validation
K-fold cross-validation involves randomly dividing the set of observations into k folds of
approximately equal size.

Process
1. Divide the entire data set into k non-overlapping groups (folds).
a. Typically, one performs k-fold CV using k = 5 or k = 10.
2. The first fold is treated as a validation set, and the method is fit on the remaining
k-1 folds.
3. The MSE is computed on the observations in the held-out fold.
4. This is repeated k times; each time, a different fold is treated as the
validation set.
5. The k-fold CV estimate is computed by averaging these values (see the formula below).
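Written out, the estimate in step 5 averages the k held-out MSEs (this is the standard k-fold CV formula from ISLR; LOOCV is the special case k = n):

$$CV_{(k)} = \frac{1}{k}\sum_{i=1}^{k}\text{MSE}_i$$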




Example: k-fold cross-validation for K-nearest neighbors (below, k is the number of
neighbors and K is the number of folds)
1. Choose a grid for k: these are the candidate values of k (number of neighbors) for
the K-NN algorithm that you will evaluate.
2. Create K CV folds: divide the training set into K distinct subsets.
3. Iterate over each value of k: repeat the following for each candidate value of k.
4. Iterate over each fold:
A. Train the model: use all subsets except the held-out one to train the k-NN
model.
B. Predict the held-out subset: predict the outcomes for the held-out subset
using the trained model.
C. Calculate test metrics: measure the model's performance (e.g., accuracy) in
predicting the held-out subset.
5. Combine performance metrics: calculate the CV error for this value of k over all K
folds.
6. Finalize the model: select the value of k with the best performance in terms of CV
error (a sketch of this procedure follows).
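A sketch of this procedure (scikit-learn's KFold and KNeighborsClassifier are my choices; the grid and data are hypothetical):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))  # hypothetical features
y = (X[:, 0] + X[:, 1] + rng.normal(size=300) > 0).astype(int)

k_grid = [1, 3, 5, 7, 9, 15]  # 1. candidate numbers of neighbors
kfold = KFold(n_splits=5, shuffle=True, random_state=0)  # 2. K = 5 folds

cv_error = {}
for k in k_grid:  # 3. iterate over the grid of k values
    fold_errors = []
    for train_idx, val_idx in kfold.split(X):  # 4. iterate over folds
        model = KNeighborsClassifier(n_neighbors=k)
        model.fit(X[train_idx], y[train_idx])      # 4A. train on K-1 folds
        acc = model.score(X[val_idx], y[val_idx])  # 4B/4C. evaluate held-out fold
        fold_errors.append(1 - acc)                # misclassification error
    cv_error[k] = np.mean(fold_errors)  # 5. CV error for this k

best_k = min(cv_error, key=cv_error.get)  # 6. k with the lowest CV error
print(best_k, cv_error[best_k])
```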

Advantages
- Reduced computational cost relative to LOOCV (k model fits instead of n), which
enables k-fold CV to be applied to almost any statistical learning method, including
computationally intensive fitting procedures.
- Much lower variability than the validation set approach: the CV estimate still varies
somewhat with how the observations are divided into folds, but far less.
- Each training set still contains far more observations than under the validation set
approach, so the bias is lower.

Purpose
When we employ CV, we are interested in either the actual estimate of the test error
(model assessment) or only the location of the minimum point in the estimated test
error curve, which suffices for model selection.
