This is a summary of the paper "A Metric Learning Reality Check" for the course Seminar of Computer Vision by Deep Learning at TU Delft.

July 5, 2024
A Metric Learning Reality
Check
Deep metric learning papers from the past four years have consistently claimed
great advances in accuracy, often more than doubling the performance of
decade-old methods. This paper demonstrates the flaws in those claims.


Why metric learning is important
Metric learning attempts to map data to an embedding space, where similar
data are close together and dissimilar data are far apart.
This can be achieved by means of embedding and classification losses.
Embedding losses operate on the relationships between samples in a batch,
ensuring that similar samples are close together in the embedding space.
Classification losses involve a weight matrix that converts the embedding space
into class logits (scores), which are used to predict the class of the samples.
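
The "similar close, dissimilar far" property can be illustrated with a toy sketch. The embedding values below are made up for illustration, not taken from the paper:

```python
import numpy as np

# Toy 2-D embeddings: two samples of one class near each other,
# one sample of another class far away. Values are illustrative.
emb = np.array([
    [0.0, 1.0],   # cat A
    [0.1, 0.9],   # cat B
    [5.0, 0.0],   # dog
])

def pairwise_dist(x):
    # Euclidean distance between every pair of embeddings
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

d = pairwise_dist(emb)
# Similar samples (cat A, cat B) end up close; dissimilar ones far apart.
assert d[0, 1] < d[0, 2]
```

An embedding loss would operate directly on distances like `d`, while a classification loss never sees them.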

Use of embeddings during test time:
During testing, embeddings are preferred over logits or softmax values,
especially in tasks like information retrieval (e.g. image search), where the
goal is to find the data most similar to a query. This is because embeddings
capture the similarity between data points directly.
Open-Set Classification:
In scenarios where the test-set classes differ from the training-set classes,
embeddings are useful for nearest-neighbors voting or distance thresholding,
e.g. in face verification and person re-identification.
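
Nearest-neighbors voting over embeddings can be sketched as follows. The gallery, labels, and query values are hypothetical, chosen only to make the example self-contained:

```python
import numpy as np

# Hypothetical gallery of embeddings for identities never seen in training.
gallery = np.array([[0.0, 1.0], [0.1, 0.9], [5.0, 0.0]])
labels = ["alice", "alice", "bob"]

def nn_vote(query, gallery, labels, k=1):
    # Rank gallery items by Euclidean distance to the query,
    # then vote among the k nearest neighbors.
    d = np.linalg.norm(gallery - query, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

print(nn_vote(np.array([0.05, 0.95]), gallery, labels))  # "alice"
```

No weight matrix or training-time class list is needed, which is exactly why this works in the open-set setting.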

Cases Where Classification Loss is Not Applicable
In some settings there are no explicit labels; instead, relative similarities
between samples are used. This is where embedding losses come in, since
classification losses cannot be applied without explicit labels.


Embedding Losses




A classic pair-based method is the contrastive loss, which attempts to push the
distance between positive pairs below some threshold and the distance
between negative pairs above some threshold.
The theoretical downside is that the same distance thresholds are applied to all
pairs, even though there may be a large variance in their similarities and
dissimilarities.
The triplet margin loss addresses this issue. It uses an anchor, a positive, and a
negative sample, where the anchor is more similar to the positive than to the
negative. The triplet margin loss attempts to make the anchor-positive
distance smaller than the anchor-negative distance, which implicitly accounts
for this variance.
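
The difference between the two losses can be sketched on single distances. The margin values below are common defaults, not values prescribed by the paper:

```python
def contrastive_loss(d_pos, d_neg, pos_margin=0.0, neg_margin=1.0):
    # Penalize positive pairs farther than pos_margin and negative
    # pairs closer than neg_margin: fixed thresholds for ALL pairs.
    return max(d_pos - pos_margin, 0.0) + max(neg_margin - d_neg, 0.0)

def triplet_margin_loss(d_ap, d_an, margin=0.2):
    # Only requires the anchor-positive distance to be smaller than
    # the anchor-negative distance by `margin` -- a relative constraint.
    return max(d_ap - d_an + margin, 0.0)

# A triplet that satisfies the relative constraint incurs no loss,
# even though its absolute distances violate the fixed thresholds.
assert triplet_margin_loss(d_ap=1.5, d_an=2.0) == 0.0
assert contrastive_loss(d_pos=1.5, d_neg=2.0) > 0.0
```

The assertions show why the triplet formulation tolerates per-class variance that the fixed contrastive thresholds penalize.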


Classification Losses
Based on the inclusion of a weight matrix, where each column corresponds to a
particular class. Training consists of matrix multiplying the weights with
embedding vectors to obtain logits, and then applying a loss function to the
logits.
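
A minimal sketch of this pipeline, with randomly initialized weights and an arbitrarily chosen true class purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 8-dim embeddings, 3 classes: each column of W corresponds to one class.
W = rng.normal(size=(8, 3))
embedding = rng.normal(size=(8,))

logits = embedding @ W            # matrix-multiply weights with the embedding
# Softmax cross-entropy against the true class (class 1 here, arbitrary).
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[1])
assert loss > 0.0
```

At test time the weight matrix `W` is discarded and only the embedding is kept, which is why these losses still produce usable embedding spaces.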

Pair and Triplet Mining
Mining is the process of finding the best pairs or triplets to train on. There are
two broad approaches to mining: offline and online. Offline is performed before
batch construction, so that each batch is made to contain the most informative
samples. This might be accomplished by storing lists of hard negatives, doing
nearest neighbors search before each epoch.
In contrast online mining finds hard pairs or triplets within each randomly
selected batch. Using all possible pairs or triplets is an alternative but has two
weaknesses.

1. Practically, it can consume a lot of memory

2. Theoretically, it tends to include a large number of easy negatives and
positives, causing performance to plateau quickly.
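
Online hard mining can be sketched as follows: for each anchor, search the current batch for the closest sample with a different label. The batch values are made up for illustration:

```python
import numpy as np

# Toy batch of embeddings and their labels. Values are illustrative.
emb = np.array([[0.0, 1.0], [0.2, 0.8], [0.9, 0.1], [1.0, 0.0]])
labels = np.array([0, 0, 1, 1])

def hardest_negative(i, emb, labels):
    # Online mining: within the batch, find the closest sample with a
    # *different* label -- the hardest negative for anchor i.
    d = np.linalg.norm(emb - emb[i], axis=1)
    d[labels == labels[i]] = np.inf   # mask out same-class samples
    return int(np.argmin(d))

assert hardest_negative(0, emb, labels) == 2
```

Compared with using all pairs, this keeps memory proportional to the batch size and skips the easy negatives that would dominate the loss.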


Advanced Training methods
To obtain higher accuracy, many recent papers have gone beyond loss
functions or mining techniques. For example, several recent methods
incorporate generator networks in their training procedure.




