Machine learning = field of study that gives computers the ability to learn without being
explicitly programmed.
Types of machine learning systems: see the notes added to ISLR Chapter 2.
Main Challenges of Machine Learning:
Generalization
Generalization problems can be caused by sampling noise, sampling bias, or overfitting.
It is crucial to use a training set that is representative of the cases you want to generalize
to. This is often harder than it sounds: if the sample is too small, you will have sampling
noise (i.e., nonrepresentative data as a result of chance, outliers, or data errors).
However, even very large samples can be nonrepresentative if the sampling method is
flawed. This is called sampling bias.
Regularization: Constraining a model to make it simpler and reduce the risk of overfitting.
The amount of regularization to apply during learning can be controlled by a
hyperparameter. A hyperparameter is a parameter of the learning algorithm itself (not of
the model).
- It must be set prior to training and remains constant during training.
- If you set the regularization hyperparameter to a very large value, you will get an
almost flat model (a slope close to zero), which underfits.
Hyperparameter vs Parameter
- A model parameter is
o estimated during model training.
o internally optimized
- A hyperparameter must be
o specified before model training.
o optimized externally.
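A minimal sketch of this distinction, using closed-form ridge regression with numpy (the helper name fit_ridge and the toy data are made up for illustration, not from the course materials): alpha is the regularization hyperparameter, fixed before training; the weight vector w is the model parameter, estimated during training. Note how a very large alpha shrinks the slope toward zero, i.e., an almost flat model.

```python
import numpy as np

def fit_ridge(X, y, alpha):
    # Closed-form ridge solution: w = (X^T X + alpha * I)^-1 X^T y
    # alpha is a hyperparameter (chosen before training); w is a model parameter.
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=100)  # true slope is 3

# The larger the regularization hyperparameter, the flatter the fitted model.
for alpha in (0.0, 10.0, 1000.0):
    slope = fit_ridge(X, y, alpha)[0]
    print(f"alpha={alpha:7.1f}  slope={slope:.3f}")
```

With alpha = 0 the slope is close to the true value; with alpha = 1000 it is pushed most of the way to zero.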
Concept drift: occurs when the relationship the model has learned changes after the
model is trained, because the external circumstances that generated the data change.
Testing and Validating:
A better option than evaluating your model directly on new live data is to split your data
into two sets: the training set and the test set, which lets you estimate performance
before moving on to actual practice. As these names imply, you train your model using
the training set, and you test it using the test set.
- It is common to use 80% of the data for training and hold out 20% for testing.
However, the right ratio depends on the size of the dataset: with very large
datasets, a much smaller test fraction can still contain plenty of samples.
The error rate on new cases is called the generalization error (or out-of-sample error), and
by evaluating your model on the test set, you get an estimate of this error.
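The 80/20 split can be sketched in a few lines with numpy (the helper name train_test_split and the seed are illustrative assumptions): shuffle the indices once, then hold out the last fraction for testing so no sample appears in both sets.

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.2, seed=42):
    # Shuffle indices, then hold out test_ratio of the samples for testing.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    test_size = int(len(X) * test_ratio)
    test_idx, train_idx = indices[:test_size], indices[test_size:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(len(X_train), len(X_test))  # -> 8 2
```

Fixing the seed matters: otherwise every run holds out a different 20%, and your model eventually "sees" the whole dataset across runs.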
Hyperparameter Tuning and model selection
Suppose you are hesitating between two types of models (say, a linear model and a
polynomial model): how can you decide between them?
When you want to compare just two different models: train both on the same training
data and compare their generalization performance on the test data.
When you want to find the best-performing hyperparameter among, say, 100 options,
you cannot do the same.
- If you measure the generalization error many times on the test set, you end up
adapting the model and hyperparameters to produce the best model for that
particular set, so it won't perform as well on new data.
A common solution to this problem is called holdout validation: you simply hold out part
of the training set to evaluate several candidate models and select the best one. The new
held-out set is called the validation set (or sometimes the development set, or dev set).
Process
1. You train multiple models with various hyperparameters on the reduced training
set.
2. You select the model that performs best on the validation set (holdout validation
process)
a. If the model performs poorly on a held-out part of the training set (the
train-dev set), it has likely overfit the training set: try simplifying or
regularizing the model, getting more training data, or cleaning up the
training data.
3. You train the best model on the full training set, including the validation set
4. Test the generalization error on the test set.
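The four steps above can be sketched end-to-end with numpy, using polynomial degree as the hyperparameter to tune (the data, the candidate degrees, and the helper mse are made-up assumptions for illustration): train each candidate on the reduced training set, select on the validation set, retrain the winner on training + validation, and touch the test set only once.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = 0.5 * x**2 + rng.normal(scale=0.3, size=200)  # true relationship is quadratic

# Carve a validation set out of the training data; the test set stays untouched.
x_train, y_train = x[:120], y[:120]    # reduced training set
x_val, y_val = x[120:160], y[120:160]  # validation (dev) set
x_test, y_test = x[160:], y[160:]      # used only once, at the very end

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Steps 1-2: train one model per hyperparameter value, select on validation.
val_scores = {}
for degree in (1, 2, 5, 10):  # hyperparameter: polynomial degree
    coeffs = np.polyfit(x_train, y_train, degree)
    val_scores[degree] = mse(coeffs, x_val, y_val)
best_degree = min(val_scores, key=val_scores.get)

# Step 3: retrain the best model on the full training set (train + validation).
final = np.polyfit(np.r_[x_train, x_val], np.r_[y_train, y_val], best_degree)

# Step 4: estimate the generalization error once, on the untouched test set.
print("best degree:", best_degree, "test MSE:", round(mse(final, x_test, y_test), 3))
```

The key discipline is that the test set is read exactly once, after all model and hyperparameter choices are final.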
The validation set should not be too small: model evaluations would be imprecise.
It should not be too large either: the remaining training set would be much smaller, so
the model selected on the reduced set may no longer be the best one after retraining on
the full training set.
One way to solve this problem is cross-validation, which uses many small validation sets.
Each model is evaluated once per validation set, after being trained on the rest of the
data. By averaging these evaluations of the model, you get a much more accurate
measure of performance.
- The drawback: training time is multiplied by the number of validation sets.
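A minimal k-fold cross-validation sketch with numpy (the function name k_fold_cv_score and the toy linear model are illustrative assumptions): each fold serves once as the validation set while the model is trained on the other k-1 folds, and the k scores are averaged.

```python
import numpy as np

def k_fold_cv_score(X, y, fit, score, k=5):
    # Split the data into k folds; each fold is the validation set exactly once,
    # with the model trained on the remaining k-1 folds. Average the k scores.
    folds = np.array_split(np.arange(len(X)), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        scores.append(score(model, X[val_idx], y[val_idx]))
    return float(np.mean(scores))

# Toy usage: least-squares linear fit, scored with mean squared error.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def score(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

print("5-fold CV MSE:", round(k_fold_cv_score(X, y, fit, score), 3))
```

Note the trade-off from the bullet above: this fits the model k times, so training cost scales with the number of folds.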
No Free Lunch Theorem
David Wolpert demonstrated that if you make absolutely no assumptions about the data,
then there is no reason to prefer one model over any other. This is called the No Free
Lunch (NFL) theorem.