RGB-D Reconstruction
8 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction
We propose a novel approach to reconstructing indoor RGB-D scenes based on plane primitives.
DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data
Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus.
Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation
Online semantic 3D segmentation performed alongside real-time RGB-D reconstruction poses special challenges, such as how to perform 3D convolution directly over progressively fused 3D geometric data, and how to smartly fuse information from frame to frame.
Depth-supervised NeRF: Fewer Views and Faster Training for Free
Crucially, SFM also produces sparse 3D points that can be used as "free" depth supervision during training: we add a loss that encourages the distribution of a ray's terminating depth to match a given 3D keypoint, incorporating depth uncertainty.
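The depth-supervision idea can be sketched as a per-ray penalty: rendering weights that place mass far from the SfM keypoint depth are penalized, scaled by the keypoint's uncertainty. This is a minimal illustrative sketch, not the paper's exact loss; all names (`depth_supervision_loss`, `depth_sigma`) are assumptions.

```python
import numpy as np

def depth_supervision_loss(weights, t_vals, keypoint_depth, depth_sigma):
    """Hedged sketch of a depth-supervision term for a NeRF-style renderer.

    weights:        (n_samples,) volume-rendering weights along one ray
    t_vals:         (n_samples,) sample depths along the ray
    keypoint_depth: scalar depth of the matched SfM 3D point
    depth_sigma:    scalar uncertainty of that keypoint depth
    """
    # Penalize weight mass in proportion to its squared distance from the
    # keypoint depth; larger depth uncertainty softens the penalty.
    sq_dist = (t_vals - keypoint_depth) ** 2
    return float(np.sum(weights * sq_dist) / (2.0 * depth_sigma ** 2))
```

A ray whose termination distribution concentrates near the keypoint depth incurs a small loss, while one terminating elsewhere is pushed back toward it.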
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera.
High-Quality RGB-D Reconstruction via Multi-View Uncalibrated Photometric Stereo and Gradient-SDF
Fine-detailed reconstructions are in high demand in many applications.
LiveNVS: Neural View Synthesis on Live RGB-D Streams
Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image-space to a target feature map.
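The projection step described above (lifting source-view features to 3D via a depth map, then reprojecting them into the target view) can be sketched with a simple z-buffered nearest-pixel splat. This is an illustrative sketch under pinhole-camera assumptions, not LiveNVS's implementation; the function and parameter names are invented for the example.

```python
import numpy as np

def warp_features_to_target(feat_src, depth_src, K, T_src_to_tgt):
    """Hedged sketch: reproject per-pixel neural features into a target
    view using a depth map, keeping the nearest surface per target pixel.

    feat_src:     (H, W, C) per-pixel features in the source view
    depth_src:    (H, W) depth map for the source view
    K:            (3, 3) shared pinhole intrinsics
    T_src_to_tgt: (4, 4) rigid transform, source to target camera frame
    """
    H, W, C = feat_src.shape
    feat_tgt = np.zeros((H, W, C))
    zbuf = np.full((H, W), np.inf)          # z-buffer for occlusion handling
    ys, xs = np.mgrid[0:H, 0:W]
    z = depth_src
    # Back-project every source pixel to a homogeneous 3D point.
    pts = np.stack([(xs - K[0, 2]) * z / K[0, 0],
                    (ys - K[1, 2]) * z / K[1, 1],
                    z, np.ones_like(z)], axis=-1)          # (H, W, 4)
    pts_tgt = pts @ T_src_to_tgt.T                          # (H, W, 4)
    valid = pts_tgt[..., 2] > 1e-6                          # in front of camera
    # Project into the target image plane.
    u = K[0, 0] * pts_tgt[..., 0] / np.where(valid, pts_tgt[..., 2], 1.0) + K[0, 2]
    v = K[1, 1] * pts_tgt[..., 1] / np.where(valid, pts_tgt[..., 2], 1.0) + K[1, 2]
    for y, x in zip(*np.nonzero(valid)):
        ui, vi = int(round(u[y, x])), int(round(v[y, x]))
        d = pts_tgt[y, x, 2]
        # Splat to the nearest pixel only if it is the closest surface so far.
        if 0 <= ui < W and 0 <= vi < H and d < zbuf[vi, ui]:
            zbuf[vi, ui] = d
            feat_tgt[vi, ui] = feat_src[y, x]
    return feat_tgt
```

A real pipeline would aggregate features from multiple source frames into the target feature map (e.g. with learned blend weights) rather than keeping a single nearest sample, but the geometric warp is the same.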