These are the notes I made during the whole course; they are not a summary! As long as you understand this document you do not need to watch any lecture. With these notes I passed the course with a 7.5.
this output is the representation of the input object, and so it is the embedding of it
some embedding spaces have a useful structure in which you can navigate in certain directions,
with features changing according to the direction = navigable space, so a direction corresponds
to a certain semantic feature of the object
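The "navigable space" idea can be sketched with made-up toy vectors: moving along a direction (here a hypothetical "gender" axis) lands you on another meaningful word. The words and the 2-D values below are purely illustrative assumptions, not real embeddings.

```python
import numpy as np

# Toy 2-D embeddings (made-up values, only for illustration):
# dimension 0 ~ "royalty", dimension 1 ~ "gender"
emb = {
    "king":  np.array([0.9, 0.9]),
    "queen": np.array([0.9, 0.1]),
    "man":   np.array([0.1, 0.9]),
    "woman": np.array([0.1, 0.1]),
}

# Navigate along the "gender" direction: king - man + woman
target = emb["king"] - emb["man"] + emb["woman"]

# Nearest word to the resulting point, by Euclidean distance
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(nearest)  # queen
```

With real trained embeddings (e.g. word2vec) the same arithmetic only holds approximately, but the principle is identical.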
Deep Learning 98
embeddings of images
embeddings of words
distributional hypothesis
"you shall know a word by the company it keeps"
"if units of text have similar vectors in a text freq matrix, then they tend to have similar meaning"
⇒ we can use context information to define embeddings for words,
one vector representation for each word
generally, we can train them (see lecture on word2vec)
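The distributional hypothesis above can be demonstrated without any training: build a word-context co-occurrence matrix from a tiny corpus and compare row vectors with cosine similarity. The corpus sentences below are invented for illustration.

```python
import numpy as np

# Tiny toy corpus; "cat" and "dog" occur in similar contexts
corpus = [
    "the cat sat on the mat",
    "the dog sat on the mat",
    "the cat chased the mouse",
    "the dog chased the mouse",
    "stocks fell on the market",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a +/-1 word window
M = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                M[idx[w], idx[sent[j]]] += 1

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that keep the same "company" get similar row vectors
print(cos(M[idx["cat"]], M[idx["dog"]]))     # high
print(cos(M[idx["cat"]], M[idx["stocks"]]))  # low
```

word2vec improves on this by learning dense, low-dimensional vectors instead of raw counts, but the signal it exploits is the same context information.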
why use graphs as input to ML?
classification, regression, clustering of nodes/edges/whole graphs
recommender systems
document modeling: entity and document similarity (use concepts from a graph)
alignment of graphs (which nodes are similar)
link prediction and error detection
linking text and semi-structured knowledge to graphs
graph embedding techniques
one big challenge with embedding graphs
we cannot straightforwardly put a graph into a neural network because of its structure: a graph is
not a linear structure
there are traditional ML methods on graphs
they often have problems with scalability
they often need manual, task-specific feature engineering
embedding knowledge graphs in vector space
for each node in the graph → create a vector in a vector space → easily feed it into an ML algo
preserve info
unsupervised: task and dataset independent
efficient computation
low dimensional representation
2 major visions on how this should be done
Preserve topology
keeping neighboring nodes close in the embedding space: e.g. europe-germany (directly connected in the graph)
Preserve similarity
keeping similar nodes close together: e.g. europe-africa (both continents, but not neighbors)
2 major targets
improve original data
knowledge graph completion
link prediction
anomaly detection
downstream ML tasks
classification/regression/clustering/K-NN
then used as part of a larger process
QA/dialog systems
Translation
image segmentation
how to go from graph to embedding? 3 major approaches for propositionalization
translation: any relation in the knowledge graph is directly mapped to a translation in the
embedded space, so we want to see the same relations in the embedded space
has a lot of approaches, but they all come down to the same thing
TransE is the basic one, and all the other ones are more complex versions of it
= translation embedding
take every edge of the graph → if a triple (h, l, t) exists in the knowledge graph → we want to
have it in our embedded space too, with the edge embedding l acting as a translation from head to tail
so adding the relation embedding to the head has to give the tail: h + l ≈ t
we want to minimize the distance between h + l (as close as possible) and t, for every triple in the
knowledge graph S
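The TransE scoring idea can be written down in a few lines. The entities, relation, and 3-D values below are hand-picked assumptions so that head + relation ≈ tail holds for the true triple; a real model would learn these vectors.

```python
import numpy as np

# Hypothetical learned embeddings (hand-picked for the example)
entity = {
    "germany": np.array([0.2, 0.5, 0.1]),
    "europe":  np.array([0.7, 0.6, 0.4]),
    "tokyo":   np.array([0.9, 0.1, 0.8]),
}
relation = {"located_in": np.array([0.5, 0.1, 0.3])}

def transe_score(h, r, t):
    """TransE distance d(h + r, t); lower means more plausible."""
    return np.linalg.norm(entity[h] + relation[r] - entity[t])

print(transe_score("germany", "located_in", "europe"))  # small: triple in S
print(transe_score("germany", "located_in", "tokyo"))   # large: triple not in S
```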
this alone is not completely sufficient; it seems to work, but the problem is that we only get positive
information ⇒ the model over-optimizes
so not only minimizing, but also penalizing when wrong relations are given: maximize the distance
for bad (corrupted) triples
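Minimizing the positive distance while maximizing the negative one is usually combined into a margin-based ranking loss: max(0, γ + d(h + r, t) − d(h' + r, t')), where (h', r, t') is a corrupted triple. A minimal sketch with made-up 2-D vectors:

```python
import numpy as np

def margin_loss(h, r, t, h_neg, t_neg, gamma=1.0):
    """Margin ranking loss for one true triple and one corrupted triple:
    max(0, gamma + d(h + r, t) - d(h' + r, t'))."""
    d_pos = np.linalg.norm(h + r - t)
    d_neg = np.linalg.norm(h_neg + r - t_neg)
    return max(0.0, gamma + d_pos - d_neg)

# Toy vectors; the corrupted triple swaps in a random wrong tail
h = np.array([0.2, 0.5]); r = np.array([0.5, 0.1]); t = np.array([0.7, 0.6])
t_neg = np.array([0.9, 0.1])

# Positive loss: the corrupted triple is not yet a full margin away
print(margin_loss(h, r, t, h, t_neg))
```

Training pushes this loss to zero, which simultaneously keeps true triples close and wrong triples at least γ further away.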
the more complex models become less scalable because they do more and more work
one hop away from each other = current entity → one hop → next entity
tensor/matrix factorization
build a 3D tensor of all relations in the graph (one adjacency matrix per relation) and factorize it
[factorizing = approximating the tensor with a lower-dimensional representation, with lower matrix
dimensionality]
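A RESCAL-style sketch of this idea (the sizes and random values are assumptions): each entity gets a k-vector, each relation a k×k matrix, and the full relation tensor is reconstructed from these far smaller factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, k = 5, 2, 3  # k = embedding dimensionality

# Factorized form: one k-vector per entity, one k x k matrix per relation
E = rng.normal(size=(n_entities, k))
R = rng.normal(size=(n_relations, k, k))

def score(h, r, t):
    """RESCAL-style score e_h^T R_r e_t for the triple (h, r, t)."""
    return E[h] @ R[r] @ E[t]

# Reconstruct the full n_relations x n_entities x n_entities tensor
X = np.einsum("hk,rkl,tl->rht", E, R, E)
print(X.shape)  # (2, 5, 5)
```

Instead of storing all n_relations · n_entities² tensor entries, the factorized form needs only n_entities · k + n_relations · k² parameters, which is the dimensionality reduction the notes refer to.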