
Neural Implicit 3D Shapes from Single Images with Spatial Patterns

Neural implicit functions have achieved impressive results for reconstructing 3D shapes from single images. However, the image features used to describe the 3D point samples of implicit functions become less effective under significant variations in occlusion, view, and appearance. To better encode image features, we study a geometry-aware convolutional kernel that leverages the geometric relationships of point samples through a proposed \emph{spatial pattern}, i.e., a structured point set. Specifically, the kernel operates on the 2D projections of the 3D points in the spatial pattern. Supported by the spatial pattern, the 2D kernel encodes geometric information that is crucial for 3D reconstruction tasks, whereas traditional kernels mainly capture appearance information. Furthermore, to let the network discover more adaptive spatial patterns that capture non-local contextual information, the kernel is made deformable and is manipulated by a spatial pattern generator. Experimental results on both synthetic and real datasets demonstrate the superiority of the proposed method. Pre-trained models, code, and data are available at https://github.com/yixin26/SVR-SP.
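
To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a generator predicts a structured set of 3D offsets (the spatial pattern) around each query point, the pattern points are projected to the image plane, and image features sampled at those projections are mixed by learned kernel weights. Names such as `SpatialPatternKernel`, `pattern_gen`, and the normalized pinhole projection are illustrative assumptions, not the authors' released implementation; see the linked repository for the official code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPatternKernel(nn.Module):
    """Illustrative sketch: encode a 3D query point by sampling image
    features at the 2D projections of a learned spatial pattern
    (a structured point set around the query point)."""

    def __init__(self, feat_dim, num_pattern_points=8):
        super().__init__()
        self.k = num_pattern_points
        # Spatial pattern generator (assumed architecture): predicts 3D
        # offsets around the query point, making the kernel deformable.
        self.pattern_gen = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 3 * num_pattern_points),
        )
        # Kernel weights that mix the features gathered at the projected
        # pattern points, analogous to a convolutional kernel.
        self.mix = nn.Linear(feat_dim * num_pattern_points, feat_dim)

    def project(self, pts, intrinsics):
        # Pinhole projection of (B, N, 3) camera-space points; assumes
        # intrinsics are normalized so (u, v) lie in [0, 1], then maps
        # to [-1, 1] as required by grid_sample.
        fx, fy, cx, cy = intrinsics
        u = fx * pts[..., 0] / pts[..., 2] + cx
        v = fy * pts[..., 1] / pts[..., 2] + cy
        return torch.stack([u, v], dim=-1) * 2.0 - 1.0

    def forward(self, query_pts, feat_map, intrinsics):
        # query_pts: (B, N, 3) 3D point samples in camera space
        # feat_map:  (B, C, H, W) image feature map from a 2D encoder
        B, N, _ = query_pts.shape
        offsets = self.pattern_gen(query_pts).view(B, N, self.k, 3)
        pattern = query_pts.unsqueeze(2) + offsets          # (B, N, K, 3)
        uv = self.project(pattern.view(B, N * self.k, 3), intrinsics)
        # grid_sample expects a (B, H_out, W_out, 2) grid; use one row.
        sampled = F.grid_sample(
            feat_map, uv.unsqueeze(1), align_corners=True
        )                                                   # (B, C, 1, N*K)
        C = feat_map.shape[1]
        sampled = sampled.view(B, C, N, self.k).permute(0, 2, 3, 1)
        return self.mix(sampled.reshape(B, N, self.k * C))  # (B, N, C)

In this sketch the per-point feature returned by `forward` would feed an implicit decoder that predicts occupancy or signed distance for each query point; the deformable offsets let the pattern reach beyond the query's own projection to gather non-local context.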
