Computer Vision & 3D Image Processing
5LSH0 Lecture Summary
2020-2021




Jarl Lemmens
j.l.a.lemmens@student.tue.nl

Module 0 – Introduction
Some cool applications of the VCA group:

- Create a 3D model of complex spaces, with multiple levels and hallways.
- Create 3D models of extremely large spaces such as a cathedral.
- Person / object re-identification with multiple cameras so the trajectory can be estimated.
- Synthesizing traffic signs from street-view imagery, by generating realistic examples and
retraining detectors.
- Accurate localization by image matching (take a picture somewhere, compare it with a
database of city pictures, and match features to an exact location).

A computer receives an image as a large matrix of (RGB) values. The goal of computer
vision is to make the computer understand what can be seen in these values: which sets of
values correspond to a certain object or activity, and from which position the camera
captured that environment.
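As a minimal sketch of "an image is just a matrix of values" (assuming OpenCV is installed; the file name example.jpg is only a placeholder):

    import cv2  # OpenCV, used here only to load the image as a matrix

    img = cv2.imread("example.jpg")  # NumPy array of shape (height, width, 3), in BGR order
    print(img.shape)                 # e.g. (480, 640, 3)
    print(img.dtype)                 # typically uint8, values 0..255
    print(img[0, 0])                 # the three colour values of the top-left pixel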

Some computer vision application examples:
- Optical character recognition (OCR), which converts scanned documents to text. Think of
the scanning option in Google Translate.
- Object detection. Detect faces, humans, cars or any other object of interest in an image.
- Activity detection and classification. Detect when fights, burglary or other abnormal
behavior occurs in surveillance images.
- Guiding doctors in diagnosis, therapy and surgery. For example, a network that ‘reads’ an
image of a melanoma and outputs whether it is malignant (bad) or benign (not so bad).
- Allow robots to see and interpret their surroundings.
- Enable autonomous driving by detecting driving lanes, traffic signs, other vehicles, etc.
- Special effects: motion capture (use a human face to capture facial motions and translate
these to an animal’s face).
- 3D modeling of the environment. For example, the 3D option in Google Maps.

Overview of the topics:
- Module 1: Feature Extraction and Matching p.3
- Module 2: Classification – clustering p.13
- Module 3: Classification – supervised p.17
- Module 4: Introduction to Deep Learning p.22
- Module 5: Classification Using Convolutional Neural Networks p.28
- Module 6: Object Detection using Deep Learning p.44
- Module 7: Object tracking p.47
- Module 8: Person re-id p.52
- Module 9: Camera model, Projection matrix and 3D Geometry p.57
- Module 10: 3D Reconstruction, data fusion and SLAM p.64
- Module 11: Structure from Motion p.77
- Module 12: Segmentation using Convolutional Neural Networks p.80
- Module 13: Behavior Analysis p.84

Module 1 – Feature Extraction and Matching
Color spaces

The most common color system is the RGB system (red-green-blue); however, other
systems exist as well.

HSV – Hue, Saturation, Value (also called intensity)
CMYK – Cyan, Magenta, Yellow, Key (which is black)
YUV – Luma (Y, brightness) and two chrominance components (U and V, color)




These different color spaces can be converted from one to another. For example, RGB to
HSV (probably the most used conversion of them all):

V = max(R, G, B)

S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)

H = 60 × (0 + (G − B) / (max(R, G, B) − min(R, G, B))),   if max(R, G, B) = R
H = 60 × (2 + (B − R) / (max(R, G, B) − min(R, G, B))),   if max(R, G, B) = G
H = 60 × (4 + (R − G) / (max(R, G, B) − min(R, G, B))),   if max(R, G, B) = B

Hue is a value between 0 and 360, so if the equation returns a value greater than 360 or
smaller than 0, add or subtract 360 until it lies within that range.
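These formulas translate almost line-for-line into code. A minimal sketch, assuming R, G and B are given as floats in the range [0, 1] (the grey case max = min, where hue is undefined, is mapped to 0 here):

    def rgb_to_hsv(r, g, b):
        mx, mn = max(r, g, b), min(r, g, b)
        v = mx                                    # Value = max(R, G, B)
        s = 0.0 if mx == 0 else (mx - mn) / mx    # Saturation; 0 for a black pixel
        if mx == mn:                              # grey pixel: hue undefined, use 0
            h = 0.0
        elif mx == r:
            h = 60 * (0 + (g - b) / (mx - mn))
        elif mx == g:
            h = 60 * (2 + (b - r) / (mx - mn))
        else:                                     # mx == b
            h = 60 * (4 + (r - g) / (mx - mn))
        if h < 0:                                 # wrap hue into [0, 360)
            h += 360
        return h, s, v

    print(rgb_to_hsv(1.0, 0.5, 0.0))              # an orange tone: (30.0, 1.0, 1.0)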

Feature points

Feature points are used for image alignment (think of panorama images), 3D
reconstruction, motion tracking, object recognition, image matching and retrieval, robot
navigation, and more.

Feature points, feature descriptors, or simply features, are small pieces of information about
an image. A feature can be a mathematical operation applied to the image, or a structure
such as an edge or shape. Features convert an image into an efficient vector description. A
good feature is invariant to transformations (invariance = the property of remaining
unchanged regardless of changes in the conditions of measurement).
Geometric invariance: translation, rotation, scale, ...
Photometric invariance: brightness, exposure, ...




Features should be/have:
- Discriminative: should capture important nuances.
- Descriptive power: allow rich descriptions.
- Sufficient in quantity: hundreds or thousands in one image.
- Relatively low computation cost: real-time performance should be achievable.
- Generality: exploitable in various image types.
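To make this concrete, a small sketch that extracts feature points and descriptors with OpenCV's ORB detector (ORB is just one possible detector, chosen here for illustration; the file name is a placeholder):

    import cv2

    img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)                     # ask for up to ~1000 keypoints
    keypoints, descriptors = orb.detectAndCompute(img, None)

    print(len(keypoints))       # typically hundreds to thousands of points per image
    print(descriptors.shape)    # (number of keypoints, 32): one binary descriptor per point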

Canny Edge Detector

The 3 main objectives for edge detection:
- Optimal edge pixel detection without false edges
(reduce noise responses).
- Good localization of the edges (minimal error distance).
- Single response per edge (one pixel (width) per edge).

A Canny detector consists of 4 steps.

1) Remove noise by filtering the image with a Gaussian filter.

Take a box filter (for example a 3x3 box with all coefficients equal to 1 and a factor of 1/9)
to get the average value per neighborhood. Slide this box over the image; the center cell of
the box gets the new averaged value.


Instead of the box filter, a Gaussian filter is often used, which results in a blurred image.
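A minimal sketch of this noise-removal step with OpenCV (the image file and parameter values are placeholders): the 3x3 box filter described above, the Gaussian alternative, and OpenCV's built-in Canny detector for comparison.

    import cv2
    import numpy as np

    img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

    # 3x3 box filter: all coefficients 1, scaled by 1/9, slid over the image so
    # every pixel becomes the average of its 3x3 neighbourhood.
    box_kernel = np.ones((3, 3), dtype=np.float32) / 9.0
    box_blurred = cv2.filter2D(img, -1, box_kernel)

    # Gaussian filter: the weights follow a 2D Gaussian, giving a smoother blur.
    gauss_blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)

    # OpenCV's complete Canny detector (the two thresholds are example values).
    edges = cv2.Canny(gauss_blurred, 100, 200)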
