MACHINE LEARNING SV 2019
HC1 - INTRODUCTION
WHAT IS MACHINE LEARNING?
The problem of induction (= how we acquire knowledge):
• Deductive reasoning: “all men are mortal, Socrates is a man, therefore Socrates is mortal” →
discrete, unambiguous, provable, known rules
• Inductive reasoning: “the sun has risen in the east every day of my life, so it’ll do so again
tomorrow” → fuzzy, ambiguous, experimental, unknown rules
Machine learning = providing systems the ability to automatically learn and improve from
experience without being explicitly programmed. Most of this course uses offline learning (train
your model once on a fixed dataset).
1. Where do we use ML
o inside other software (unlock phone with face/using voice commands)
o in analytics/data mining (find typical clusters of users/predict spikes in web traffic)
o in science/statistics (if any model can predict A from B, there must be some relation)
2. What makes a good ML problem?
o We can’t solve it explicitly, but approximate solutions are fine (recommending movies)
o Limited reliability, predictability or interpretability is fine
3. What problems does ML solve?
o Explicit solutions are expensive. Best solution may depend on the circumstances or
change over time.
o We don’t know exactly how our actions influence the
world and can’t fully observe it → we need to guess
Intelligent agent
• Online learning: acting and learning at the same time
• Reinforcement learning: online learning in a world based on
delayed feedback
Offline learning = separate LEARNING and ACTING (most of the course)
1. Take a fixed dataset of examples (aka instances) → train a model to learn from these
examples → test the model to see if it works
SUPERVISED MACHINE LEARNING TASKS
Supervised:
• Give the model examples of input and output (what you want it to do)
• Learn to predict the output for an unseen input
• Model types: linear models, tree models and kNN models
Unsupervised:
• Only inputs provided
• Find any pattern that explains something about the data; more difficult, but can be useful
because labeling data is very expensive
• Tasks: clustering, density estimation and generative modeling
Supervised learning = give the model examples of what you want it to do. Types of models: linear
models, tree models and kNN models.
The two spaces of machine learning:
1. Model space: you search this space for a good model
to fit your data (every point is a model)
a. Discrete: tree models
b. Continuous: between every two models,
there is always another model, no matter
how close they are together
2. Feature space: plot/look at your data
CLASSIFICATION
Classification = assign a class to each example.
1. Create a dataset. An example problem: ham/spam emails.
o Feature = things you measure about your instances (e.g. how many
times ‘viagra’ occurs)
o Label = what we try to predict (spam/ham)
o Instance = the examples you give to the model (one email)
2. Pass that dataset to a learning algorithm → the algorithm comes up with a model (aka
classifier), which makes a guess whether an email is spam or not.
3. Examples of classification:
o Optical Character Recognition (OCR): reading handwritten text/digits
o Playing chess
o Self-driving car, such as ALVINN (1995)
4. Examples of classifiers:
o Linear classifier
o Decision tree classifier: watch out for overfitting
o K-Nearest Neighbors (lazy classifier): doesn’t learn/build a model, just memorizes the
data → for a new example, it looks at the k nearest instances in your dataset →
takes a majority vote
5. Variations:
o Features: usually numerical/categorical
o Binary classification: two classes (male/female)
o Multiclass classification: more than two classes
o Multilabel classification: two or more classes, and none, some or all of them may be true
o Class probabilities/scores: the classifier reports a probability for each class
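The lazy kNN classifier above can be sketched in a few lines of Python; the tiny email dataset, its two feature counts and the query point are made up purely for illustration:

```python
from collections import Counter
import math

def knn_predict(train, new_point, k=3):
    """Classify new_point by majority vote among the k nearest training instances."""
    # Sort the memorized training data by Euclidean distance to the new example.
    by_distance = sorted(train, key=lambda ex: math.dist(ex[0], new_point))
    # Vote over the labels of the k closest instances.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical features: (count of 'viagra', count of 'meeting') per email.
emails = [((3, 0), "spam"), ((2, 1), "spam"), ((0, 4), "ham"), ((1, 5), "ham")]
print(knn_predict(emails, (2, 0), k=3))  # majority of the 3 nearest is "spam"
```

Note that there is no training step at all: all the work happens at prediction time, which is why kNN is called lazy.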
Loss function = measures the performance of the model on the data; the lower, the better (the
bigger the loss, the worse the model). E.g. for classification: the number of misclassified examples.
Overfitting = your training loss keeps going down while your validation loss stays the same (or goes
up): the model is memorizing the training data instead of learning the underlying pattern. Split your
data into training and test sets. The aim is to minimize the loss on your test data (NOT on the
training data).
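A minimal sketch of the misclassification loss and a train/test split; the one-dimensional dataset and the deliberately imperfect threshold model are made up for illustration:

```python
import random

def misclassification_loss(model, data):
    """Number of examples the model labels incorrectly; lower is better."""
    return sum(1 for x, label in data if model(x) != label)

# Hypothetical dataset: the true label is whether the feature exceeds 5.
data = [(x, x > 5) for x in range(10)]
random.seed(0)
random.shuffle(data)
train, test = data[:7], data[7:]  # held-out test data is never trained on

model = lambda x: x > 4  # slightly wrong decision boundary
print(misclassification_loss(model, train), misclassification_loss(model, test))
```

Only the loss on `test` tells you how the model generalizes; driving the loss on `train` to zero is exactly what overfitting looks like.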
REGRESSION
Regression = assign a number to each example (instance).
• Loss function for regression, a.k.a. the mean-squared-error (MSE) loss:
loss(p) = (1/n) Σᵢ (f_p(xᵢ) − yᵢ)²
a. p: stands for the parameters that define the line.
b. Residuals: we take the difference between the model prediction f_p(xᵢ) and the
target value yᵢ from the data. We square each residual and sum them all; the closer
to zero, the better. (The residuals are squared because otherwise positive and
negative ones might cancel out.)
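The MSE loss above, written out in plain Python for a hypothetical linear model f_p(x) = a·x + b with parameters p = (a, b); the (x, y) pairs are made up for illustration:

```python
def mse_loss(predict, data):
    """Mean squared error: average of the squared residuals over the dataset."""
    residuals = [predict(x) - y for x, y in data]
    return sum(r * r for r in residuals) / len(data)

def linear(a, b):
    # A line with parameters p = (a, b): prediction a*x + b.
    return lambda x: a * x + b

# Made-up (x, y) pairs that lie roughly on the line y = 2x.
data = [(1, 2.1), (2, 3.9), (3, 6.0)]
print(mse_loss(linear(2, 0), data))  # small residuals -> loss close to zero
```

Because the residuals are squared, a prediction that is 0.1 too high and one that is 0.1 too low both add 0.01 to the sum instead of cancelling out.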
UNSUPERVISED MACHINE LEARNING TASKS
Unsupervised learning: all you have is the data as-is, without example outputs. Much more difficult,
but can be useful because labeling data is very expensive.
• Clustering: group your data into subsets, e.g. with k-means clustering
• Density estimation: Given your data, which values/examples are more likely than other
examples? (→ modelling the probability distribution behind your data)
o Discrete feature space: The model produces a probability (sum of all answers over
the whole feature space should be 1)
o Continuous feature space: the model’s answer should be a probability density (and
all answers should integrate to 1 over the feature space).
• Generative modeling: Building a model from which you can sample new examples
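For a discrete feature space, the simplest density estimate is the relative frequency of each observed value; a sketch with a made-up sample of die rolls:

```python
from collections import Counter

def estimate_distribution(samples):
    """Discrete density estimation: relative frequency of each value."""
    counts = Counter(samples)
    total = len(samples)
    return {value: count / total for value, count in counts.items()}

rolls = [1, 2, 2, 3, 3, 3, 6, 6]  # hypothetical observed data
p = estimate_distribution(rolls)
print(p[3])             # the most frequent value gets the highest probability
print(sum(p.values()))  # the probabilities sum to 1 over the feature space
```

This distribution is also a trivial generative model: sampling values in proportion to `p` produces new examples that look like the data.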
WHAT ISN’T MACHINE LEARNING?
OTHER FIELDS THAT ARE RELATED TO ML / NOT ML
• AI: automated reasoning, planning
• Data science: gathering data, harmonizing data,
interpreting data
• Data mining: finding common clickstreams in web
logs/fraud in transactions
o More ML than DM: spam classification, predicting
stock prices, learning to control a robot
• Information retrieval: building search engines
• Statistics: analyzing research results, experiment design,
courtroom evidence
o More ML than Stats: spam classification, movie
recommendation
o Difference between ML and statistics: statistics aims to get at the truth, whereas
machine learning tries to come up with something that works, regardless of whether
it’s true!
§ “the machine learning approach, measuring models purely for predictive
accuracy on a large test set, has a lot of benefits and makes the business of
statistics a lot simpler and more effective.”
o Deep learning is a subfield of ML
OFFLINE MACHINE LEARNING BASIC RECIPE:
1. Abstract (part of) your problem to a standard task.
o Classification, Regression, Clustering, Density estimation, Generative Modeling
2. Choose your instances and their features. For supervised learning, choose a target.
3. Choose your model class. Linear models, Decision Trees, kNN (choose how many surrounding points)
4. Search for a good model. Choose a loss function, choose a search method to minimize the loss.
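The four steps of the recipe can be sketched end-to-end on a toy regression problem; the data, the model class (lines through the origin) and the grid of candidate slopes are all made up for illustration:

```python
# 1. Task: regression.  2. Instances: (x, y) pairs, with y as the target.
data = [(1, 2.0), (2, 4.1), (3, 5.9)]

# 3. Model class: lines through the origin, y = a*x, with one parameter a.
def mse(a):
    # Loss function: mean squared error of the line with slope a.
    return sum((a * x - y) ** 2 for x, y in data) / len(data)

# 4. Search method: grid search over candidate slopes, minimizing the loss.
candidates = [a / 10 for a in range(0, 51)]  # slopes 0.0, 0.1, ..., 5.0
best = min(candidates, key=mse)
print(best)  # the slope with the lowest loss on this data
```

Grid search is the crudest possible search method; the same recipe holds when it is replaced by something smarter, such as gradient descent over a continuous model space.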