This document presents an in-depth summary of the Introduction to Data Science course within the Cognitive Science and Artificial Intelligence bachelor program at Tilburg University.
This summary covers the basic topics of Machine Learning and Data Science, such as Supervised and Unsupervised Learning.
● Supervised Learning: you have input variables (x) and an output variable (y), and you
teach an algorithm to learn a function that maps inputs to outputs. The aim is for the
algorithm to predict the output variable (y) from new input data (x). It is called supervised
because the process of an algorithm learning from the training dataset can be thought of
as a teacher supervising the learning process: we know the correct answers (classes,
labels), the algorithm iteratively makes predictions on the training data and is corrected
by the teacher. Learning stops when the algorithm achieves an acceptable level of
performance. A minimal code sketch follows this list.
○ Classification: a task where the output variable is a category, such as color
(red, blue, green) or diagnosis (ill, not ill). The model trained from the data defines
a decision boundary that separates the classes
■ Logistic Regression, Neural Networks (Multi-Layer Perceptron), Naive
Bayes, KNN, Decision Trees, Linear SVMs, Kernel SVMs, Ensemble
Learning (e.g. Random Forests, Gradient Boosting)
■ Types of classifiers:
● Instance-based classifiers: use the observations directly, without
building a model, e.g. k-Nearest Neighbors
● Generative: model p(x|y); build a generative statistical model and rely
on all points to learn it, e.g. Bayes classifiers
● Discriminative: model p(y|x); directly estimate a decision rule/boundary
and mainly care about the boundary, e.g. decision trees
○ Regression: a task where the output variable is a real value, such as “dollars” or
“weight”. The model fits the data to describe the relation between two features, or
between a feature (e.g., height) and a continuous target (e.g., weight)
■ Linear, Polynomial Regression, NN (MLP) Regression, Bayesian Ridge
Regression, KNN Regression, Decision Trees Regression, Linear SVM
Regression, Kernel SVM Regression, Ensemble Learning (e.g. Random
Forests Regression, Gradient Boosting Regression)
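A minimal sketch of the supervised workflow described above, using scikit-learn (the library choice is an assumption; the course lists these algorithms but not a specific implementation). A KNN classifier, one of the instance-based classifiers mentioned above, is trained on labeled data and scored on held-out examples:

    # Minimal supervised-learning sketch (assumes scikit-learn is installed).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)          # inputs (x) and known labels (y)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=3)  # an instance-based classifier
    clf.fit(X_train, y_train)                  # learning "supervised" by the labels
    print(clf.score(X_test, y_test))           # accuracy on unseen data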
● Unsupervised Learning: you only have input data (x) and no corresponding output
variables. The aim here is to model the underlying structure or distribution of the data in
order to learn more about it. It is called unsupervised because there are no correct
answers and there is no teacher; algorithms are left on their own to discover and present
the interesting structure in the data. A clustering sketch follows this list.
○ Clustering: you want to discover the inherent groupings in the data, such as
grouping listeners by music genre preferences.
○ Association: you want to discover rules that describe large portions of the data,
such as “people who listen to (x) also tend to listen to (y)”.
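A minimal clustering sketch (assuming scikit-learn; K-Means is one common choice, not necessarily the one used in the course). The algorithm receives only inputs, no labels, and discovers two groupings on its own:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[1, 2], [1, 4], [1, 0],      # unlabeled input data only
                  [10, 2], [10, 4], [10, 0]])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)                          # the discovered groupings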
● Supervised: all data is labeled and the algorithms learn to predict the output from the
input data.
● Unsupervised: all data is unlabeled and the algorithms learn the inherent structure from
the input data.
● Semi-supervised: some data is labeled but most of it is unlabeled and a mixture of
supervised and unsupervised techniques can be used.
Data preparation
● Scaling: a method used to normalize the range of independent variables or features of
the data. Methods (a code sketch follows this list):
○ Min-max → the simplest method; consists of rescaling the range of features
to [0, 1] or [−1, 1]
○ Mean normalization → subtract the mean from each value, then divide by the
range (max − min)
○ Standardization → makes the values of each feature in the data have zero-mean
and unit-variance; determine the distribution mean and standard deviation for
each feature, then subtract the mean from each feature, then divide the values
(mean is already subtracted) of each feature by its standard deviation
○ Scale to unit length → scale the components of a feature vector such that the
complete vector has length one; means dividing each component by the
Euclidean length of the vector
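A sketch of the four scaling methods above on a toy feature, using NumPy (the library choice is an assumption):

    import numpy as np

    x = np.array([2.0, 4.0, 6.0, 8.0])

    min_max = (x - x.min()) / (x.max() - x.min())     # rescaled into [0, 1]
    mean_norm = (x - x.mean()) / (x.max() - x.min())  # mean normalization
    standardized = (x - x.mean()) / x.std()           # zero mean, unit variance
    unit_length = x / np.linalg.norm(x)               # Euclidean length of one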
● Missing values: remove them, because missing data can (1) introduce a substantial
amount of bias, (2) make the handling and analysis of the data more arduous, and (3)
reduce efficiency (a sketch follows)
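A sketch of handling missing values with pandas (an assumption; the notes only mention removing them, so imputation is shown as an alternative):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"height": [1.7, np.nan, 1.8],
                       "weight": [60.0, 70.0, np.nan]})
    dropped = df.dropna()           # remove rows that contain missing values
    imputed = df.fillna(df.mean())  # alternative: fill with the column mean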
● Data balance: class imbalance arises when your classes have different numbers of
examples; only rebalance if you really care about the minority class; imbalance leads to
hard-to-interpret accuracy
○ Undersampling and oversampling (sketched below)
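A sketch of random undersampling and oversampling with sklearn.utils.resample (an assumption; dedicated libraries such as imbalanced-learn exist as well):

    import pandas as pd
    from sklearn.utils import resample

    df = pd.DataFrame({"x": range(10), "y": [0] * 8 + [1] * 2})  # 8 vs. 2 imbalance
    majority, minority = df[df.y == 0], df[df.y == 1]

    under = resample(majority, replace=False, n_samples=len(minority),
                     random_state=0)  # undersampling: shrink the majority class
    over = resample(minority, replace=True, n_samples=len(majority),
                    random_state=0)   # oversampling: duplicate minority examples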
● Binning: makes the model more robust and prevents overfitting; however, it has a cost in
performance. Every time you bin something, you sacrifice information and make your
data more regularized; the trade-off between performance and overfitting is the key point
of the binning process (a sketch follows)
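A sketch of binning a numeric feature with pandas (an assumption); each exact age is sacrificed for a coarser, more regular category:

    import pandas as pd

    ages = pd.Series([3, 17, 25, 40, 67])
    groups = pd.cut(ages, bins=[0, 18, 35, 65, 100],
                    labels=["child", "young", "adult", "senior"])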
● Log Transform: helps to handle skewed data; after the transformation, the distribution
becomes closer to normal. In most cases the order of magnitude of the data changes
within the range of the data; the transform also decreases the effect of outliers, due to
the normalization of magnitude differences, and the model becomes more robust. The
data must contain only positive values (a sketch follows)
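A sketch of the log transform with NumPy (an assumption); log1p(x) = log(1 + x) also tolerates zeros, but the values must still be non-negative:

    import numpy as np

    x = np.array([1, 10, 100, 1000, 100000])  # right-skewed, outlier at the end
    x_log = np.log1p(x)                       # magnitudes compressed, closer to normal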
● Unsupervised Feature Reduction (sketched below):
○ Variance-based → remove features with low variance or few unique values
○ Covariance-based → remove correlated features
○ PCA → remove linear subspaces
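A sketch of the three reduction ideas with NumPy and scikit-learn (both assumptions):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import VarianceThreshold

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    X[:, 3] = 1.0 + 0.01 * rng.normal(size=100)  # near-constant: low variance
    X[:, 4] = 2 * X[:, 0]                        # fully correlated with column 0

    X_var = VarianceThreshold(threshold=0.1).fit_transform(X)  # drops column 3
    corr = np.corrcoef(X, rowvar=False)  # covariance-based: drop where |corr| is high
    X_pca = PCA(n_components=2).fit_transform(X)  # removes linear subspaces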
● Model: an equation that links the values of some features to the predicted value of the
target variable; finding the equation (and coefficients in it) is called ‘building a model’
● Feature selection vs. extraction: feature selection reduces the number of features by
selecting the important ones; feature extraction reduces the number of features by
means of a mathematical operation (both sketched below)
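A sketch of the contrast with scikit-learn (an assumption): SelectKBest keeps k of the original features, while PCA builds k new features by a mathematical operation on all of them:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)
    X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)  # selection
    X_ext = PCA(n_components=2).fit_transform(X)             # extraction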
● Evaluation metrics:
○ Accuracy → the ratio of the number of correct predictions to the total number of
input samples
○ Logarithmic Loss → works by penalising false classifications; works well for
multi-class classification; the classifier must assign a probability to each class
for all samples; values closer to 0 indicate higher accuracy, values farther from 0
indicate lower accuracy
○ Confusion matrix → a table showing correct predictions (the diagonal) and the
types of incorrect predictions made (what classes incorrect predictions were
assigned)
○ Precision → a measure of a classifier’s exactness; the number of true positives
divided by the total number of positive predictions, TP / (TP + FP); low
precision = many false positives (a metrics sketch follows)
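A sketch of the four metrics with scikit-learn (an assumption), on made-up predictions:

    from sklearn.metrics import (accuracy_score, confusion_matrix, log_loss,
                                 precision_score)

    y_true = [1, 0, 1, 1, 0]
    y_pred = [1, 0, 0, 1, 1]
    y_prob = [[0.2, 0.8], [0.7, 0.3], [0.6, 0.4], [0.1, 0.9], [0.4, 0.6]]

    print(accuracy_score(y_true, y_pred))    # correct predictions / total samples
    print(log_loss(y_true, y_prob))          # penalises confident wrong answers
    print(confusion_matrix(y_true, y_pred))  # diagonal = correct predictions
    print(precision_score(y_true, y_pred))   # TP / (TP + FP); low => many FP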