Summary of NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Course: CS4245
Institution: Technische Universiteit Delft (TU Delft)
This is a summary of the paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, written for the course Seminar Computer Vision by Deep Learning at TU Delft.
NeRF: Representing Scenes as
Neural Radiance Fields for
View Synthesis
Introduction
Represent a static scene as a continuous 5D function that outputs the radiance
emitted in each direction (theta, phi) at each point (x,y,z) in space. This paper’s
method optimizes a deep fully-connected neural network without any
convolutional layers to represent this function by regressing from a single 5D
coordinate (x, y, z, theta, phi) to a single volume density and view-dependent
RGB color.
To render this neural radiance field (NeRF) from a particular viewpoint we:
1. March camera rays through the scene to generate a sampled set of 3D points.
2. Use those points and their corresponding 2D viewing directions as input to the neural network to produce an output set of colors and densities.
3. Use classical volume rendering techniques to accumulate those colors and densities into a 2D image.
This process is naturally differentiable, so gradient descent can be used to optimize the representation directly from captured images.
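Step 3 above can be sketched with the classical volume-rendering quadrature: each sample's color is weighted by its opacity and by the transmittance (the probability the ray reaches it unoccluded). This is an illustrative NumPy sketch, not the paper's code; the function name and toy inputs are assumptions.

```python
import numpy as np

def render_ray(colors, sigmas, deltas):
    """Accumulate per-sample colors and densities along one ray into a pixel color.

    colors: (N, 3) RGB predicted at each sample point along the ray
    sigmas: (N,)   volume densities at those points
    deltas: (N,)   distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each ray segment
    # Transmittance: probability the ray reaches sample i without being blocked.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)  # final (3,) pixel color

# Toy example: three samples along a ray; the first lies in empty space.
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
sigmas = np.array([0.0, 10.0, 10.0])
deltas = np.array([0.5, 0.5, 0.5])
pixel = render_ray(colors, sigmas, deltas)
```

Because the dense second sample absorbs almost all the light, the rendered pixel is dominated by its green color, and the third sample contributes almost nothing; every operation here is differentiable, which is what lets gradients flow from pixel errors back to the predicted colors and densities.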
Technical Contributions:
An approach for representing continuous scenes with complex geometry
and materials as 5D neural radiance fields, parametrized as basic MLP
networks.
A differentiable rendering procedure based on classical volume rendering
techniques, which we use to optimize these representations from standard
RGB images.
Neural Radiance Field Scene Representation
Represent a continuous scene as a 5D vector-valued function whose input is a
3D location (x,y,z) and 2D viewing direction (theta, phi), and whose output is an
emitted color c = (r,g,b) and volume density sigma.
This continuous 5D scene representation is approximated with an MLP network
whose weights are optimized to map each input 5D coordinate to its
corresponding volume density and directional emitted color.
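A minimal sketch of this mapping, assuming a tiny fully-connected network with illustrative layer sizes (not the paper's actual architecture) and random weights standing in for the optimized ones: a 5D coordinate goes in, an RGB color and a density come out.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Illustrative weights; in NeRF these would be optimized per scene.
W1, b1 = rng.normal(size=(5, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)), np.zeros(4)

def radiance_field(xyz, view_dir):
    """Map a 5D input (x, y, z, theta, phi) to an RGB color and a density sigma."""
    inp = np.concatenate([xyz, view_dir])   # the single 5D coordinate
    h = relu(inp @ W1 + b1)                 # fully connected, no convolutions
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))    # sigmoid keeps color in [0, 1]
    sigma = relu(out[3])                    # density must be non-negative
    return rgb, sigma

rgb, sigma = radiance_field(np.array([0.1, 0.2, 0.3]),  # location (x, y, z)
                            np.array([0.5, 1.0]))       # direction (theta, phi)
```

Note that the density depends only implicitly on the full 5D input here; in the paper, sigma is predicted from position alone while color also depends on viewing direction, which is what makes the emitted radiance view-dependent.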
Figure (caption from the paper): A visualization of view-dependent emitted radiance. Our neural radiance field representation outputs RGB color as a 5D function of both spatial position x and viewing direction d. Here, we visualize example directional color distributions for two spatial locations in our neural representation of the Ship scene. In (a) and (b), we show the appearance of two fixed 3D points from two different camera positions: one on the side of the ship (orange insets) and one on the surface of the water (blue insets). Our method predicts the changing specular appearance of these two 3D points.