NeRF represents a scene with a learned, continuous volumetric radiance field $F_\theta$ defined over a bounded 3D volume. In a NeRF, $F_\theta$ is a multilayer perceptron (MLP) that takes as input a 3D position $\mathbf{x} = (x, y, z)$ and a unit-norm viewing direction $\mathbf{d} = (d_x, d_y, d_z)$, and produces as output a volume density $\sigma$ and a view-dependent color $c = (r, g, b)$. The MLP weights $\theta$ that parameterize $F_\theta$ are optimized so that the network encodes the radiance field of the scene. Volume rendering along a camera ray is then used to compute the color of each pixel.
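A minimal sketch of the volume-rendering step described above, using NumPy rather than the paper's TensorFlow implementation: given per-sample densities $\sigma_i$ and colors $c_i$ that $F_\theta$ would produce along one ray, the pixel color is the alpha-composited sum $\hat{C} = \sum_i T_i\,(1 - e^{-\sigma_i \delta_i})\,c_i$, where $\delta_i$ is the spacing between samples and $T_i = \prod_{j<i}(1 - \alpha_j)$ is the accumulated transmittance. The function and variable names here are illustrative, not from the released code.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite N samples along one ray into a single pixel color
    using NeRF's discrete volume-rendering quadrature:
        alpha_i = 1 - exp(-sigma_i * delta_i)
        T_i     = prod_{j < i} (1 - alpha_j)
        C       = sum_i T_i * alpha_i * c_i
    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) spacings.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-segment opacity
    trans = np.cumprod(1.0 - alphas)               # transmittance after each sample
    trans = np.concatenate(([1.0], trans[:-1]))    # shift: T_i sees only earlier samples
    weights = trans * alphas                       # contribution weight of each sample
    return weights @ colors                        # (3,) composited pixel color

# Toy example: four samples along one ray (hypothetical values).
sigmas = np.array([0.0, 5.0, 1.0, 0.2])                      # densities from F_theta
deltas = np.full(4, 0.1)                                     # uniform sample spacing
colors = np.array([[1, 0, 0], [0, 1, 0],
                   [0, 0, 1], [1, 1, 1]], dtype=float)       # RGB from F_theta
pixel = render_ray(sigmas, colors, deltas)
```

Because the weights are differentiable in $\sigma_i$ and $c_i$, the same expression lets gradients from a photometric loss flow back into the MLP during training.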
Source: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Task | Papers | Share |
---|---|---
Novel View Synthesis | 145 | 22.59% |
3D Reconstruction | 52 | 8.10% |
Neural Rendering | 31 | 4.83% |
Text to 3D | 27 | 4.21% |
3D Generation | 25 | 3.89% |
Depth Estimation | 19 | 2.96% |
Pose Estimation | 18 | 2.80% |
Semantic Segmentation | 18 | 2.80% |
Image Generation | 17 | 2.65% |