Summary Deep Learning 2020

Summary for Tilburg University's Deep Learning course (academic year 2019/2020). All six lectures are discussed: introduction, Multi-Layer Perceptron, Convolutional Neural Networks, Optimizers and Regularization, Sequence Modeling, and Generative Models.


Deep Learning
Lecture 1: Introduction and the perceptron
Deep Learning: Deep learning is part of a larger family of machine learning methods based
on artificial neural networks.
- Machine learning relies heavily on statistical science
- Deep Learning is based on multi-layer neural networks

Derivative: The rate at which a function is changing at a given point. A function's
derivative can be used to search for the maxima and minima of the function by searching for
places where its slope is zero.
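For example, f(x) = x^2 has derivative f'(x) = 2x; the slope is zero at x = 0, which is where the function reaches its minimum.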

Neural Network: Neural networks are a set of algorithms that are designed to recognize
patterns. The patterns they recognize are numerical, contained in vectors, into which all
real-world data (e.g. images, sound, text) must be translated.
- The goal is to arrive at the point of least error as fast as possible (best parameters for
making correct predictions)
- Error is the difference between the output you get and the expected real value
- Each step for a neural network involves a guess, an error measurement and a slight
update in its weights, as it slowly learns about the most important features.
- Neural networks have a kind of universality (they can compute any function)

What is the perceptron? It is a simple algorithm used to perform binary classification. It is a
linear classifier; an algorithm that classifies input by separating two categories with a
straight line. You give it some inputs, and it outputs one of two possible classes (1 or 0).
X1, X2, X3 = input features
W1, W2, W3 = weights
Σ = sum of the inputs multiplied by their weights
θ = threshold
Y = output value

W is what we multiply X with. Every input gets its own weight. We multiply each input value with its corresponding weight and sum the results over all the X-values.
If the sum of the inputs is greater than or equal to 0 → output 1
Else → output 0
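A minimal sketch of this decision rule in Python (the inputs, weights, and bias below are made-up example values, not from the lectures):

    import numpy as np

    def perceptron(x, w, bias):
        # Output 1 if the weighted sum of the inputs (plus bias) reaches 0, else 0.
        weighted_sum = np.dot(w, x) + bias
        return 1 if weighted_sum >= 0 else 0

    x = np.array([1.0, 0.0, 1.0])    # input features X1, X2, X3
    w = np.array([0.5, -0.6, 0.4])   # one weight per input
    print(perceptron(x, w, bias=-0.5))  # 0.5 + 0.4 - 0.5 = 0.4 >= 0, so output 1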





The threshold can be changed by adjusting the bias. The bias acts as an offset to the activation function, which is centered around zero: it determines the location of the threshold.
- W0 = bias; the default value is 1



Rosenblatt (1958): Rosenblatt intended the perceptron to be a machine rather than a program. While the perceptron initially seemed promising, it was quickly proved that
perceptrons could not be trained to recognize many patterns. A feedforward network with
more than one layer (multilayer perceptron) has greater processing power.
- Single layer perceptrons are only capable of learning linearly separable patterns!

Stacking Perceptrons: The same input is fed to several perceptrons, each used for discovering a different pattern. You need more than one neuron. With more than one perceptron, the weights can be collected in a weight matrix.
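A minimal sketch of stacked perceptrons sharing one input, with the weights collected in a matrix (sizes and values are hypothetical):

    import numpy as np

    W = np.array([[0.5, -0.6, 0.4],   # weight matrix: one row per perceptron
                  [0.2,  0.9, -0.3]])
    b = np.array([-0.5, 0.1])         # one bias per perceptron
    x = np.array([1.0, 0.0, 1.0])     # the same input goes to both perceptrons

    outputs = (W @ x + b >= 0).astype(int)   # threshold each weighted sum at 0
    print(outputs)  # [1 1]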

Backpropagation: Central method by which neural networks learn. It is the messenger
telling the network whether the network made a mistake when it made a prediction. A
neural network propagates the signal of the input data forward through its parameters
towards the moment of decision, and then backpropagates information about the error, in
reverse through the network, so that it can modify the parameters.
- The network makes a guess about data, using its parameters
- The network’s error is measured with a loss function
- The error is backpropagated to adjust the wrong-headed parameters (weights)
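As a toy illustration of this guess, error-measurement, and weight-update loop, here is gradient descent on a single linear neuron with squared loss (all numbers are made up; this is a sketch, not the exact procedure from the lecture):

    import numpy as np

    x = np.array([1.0, 2.0])   # one training example
    y_true = 1.0               # expected real value
    w = np.array([0.1, -0.2])  # initial weights (the parameters)
    lr = 0.1                   # learning rate: size of the slight update

    for step in range(3):
        y_pred = np.dot(w, x)        # forward pass: the network's guess
        error = y_pred - y_true      # error measurement (loss = 0.5 * error**2)
        gradient = error * x         # backpropagated gradient of the loss w.r.t. w
        w = w - lr * gradient        # adjust the weights against the gradient
        print(step, round(0.5 * error**2, 4))  # the loss shrinks each step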

ImageNet: a dataset made of more than 15 million high-resolution images labeled with 22
thousand classes.

CPU versus GPU: A CPU is sometimes called the brains of a computer while a GPU acts as a
specialized microprocessor. A CPU is good at handling multiple tasks, but a GPU can handle a
few specific tasks very fast. Deep nets can be trained quickly by using a GPU (e.g. on ImageNet).





Linear Algebra:

Transpose: The transpose of a matrix is an operator which flips a matrix over its diagonal, that is, it switches the row and column indices of the matrix.

Matrix multiplication:
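A small worked example of both operations (the matrices here are made-up values):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])   # a 2x3 matrix

    print(A.T)    # transpose: 3x2, row and column indices switched
    # [[1 4]
    #  [2 5]
    #  [3 6]]

    B = np.array([[1, 0],
                  [0, 1],
                  [1, 1]])      # a 3x2 matrix

    print(A @ B)  # matrix product: (2x3)(3x2) gives a 2x2 matrix
    # [[ 4  5]
    #  [10 11]]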




Lecture 2: Multi-Layer Perceptron (MLP) and Activation Functions
The single layer perceptron: The main limitation of the perceptron is that it can only
separate the space linearly (using one single line). The XOR-problem is an example of a
problem that can’t be solved by using a single layer perceptron (as you need two lines).
Thus, you need to use two separate perceptrons / two lines. Any input that lies between the two lines belongs to one class, and the rest to the second class.

The strategy discussed above will solve the problem, but the update rule of the perceptron doesn’t apply to this configuration.
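One concrete way to draw the two lines (these particular weights and thresholds are illustrative choices, not taken from the lecture):

    import numpy as np

    def step(z):
        return (z >= 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # the four XOR inputs

    h1 = step(X @ np.array([1, 1]) - 0.5)    # line 1: at least one input is on
    h2 = step(X @ np.array([-1, -1]) + 1.5)  # line 2: not both inputs are on

    y = h1 & h2   # inputs between the two lines form class 1
    print(y)      # [0 1 1 0], the XOR labels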

A multilayer perceptron (MLP) has more than one perceptron. They are composed of an
input layer to receive the signal, an output layer that makes a decision or prediction, and in
between those two, an arbitrary number of hidden layers that are the true computational
engine of the network. MLPs can approximate any continuous function.
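A minimal sketch of a forward pass through such a network, assuming a ReLU hidden activation and arbitrary layer sizes (both are illustrative choices, not from the lecture):

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input layer (3) -> hidden layer (4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # hidden layer (4) -> output layer (1)

    def mlp(x):
        h = np.maximum(0, W1 @ x + b1)  # hidden layer with ReLU activation
        return W2 @ h + b2              # output layer makes the prediction

    print(mlp(np.array([1.0, 0.5, -1.0])))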

Training MLP: Training involves adjusting the weights and biases in order to minimize error.
Backpropagation is used to make those weight and bias changes relative to the error, and
the error itself can be measured in several ways (loss functions).



