3260 papers • 126 benchmarks • 313 datasets
Finding a meaningful correspondence between two or more shapes is one of the most fundamental shape analysis tasks. The problem can be generally stated as: given input shapes S1,S2,...,SN, find a meaningful relation (or mapping) between their elements. Under different contexts, the problem has also been referred to as registration, alignment, or simply, matching. Shape correspondence is a key algorithmic component in tasks such as 3D scan alignment and space-time reconstruction, as well as an indispensable prerequisite in diverse applications including attribute transfer, shape interpolation, and statistical modeling.
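As a minimal illustration of the problem statement above (a toy baseline, not the method of any paper listed below), a correspondence between two point clouds can be sketched as a nearest-neighbor assignment; the function name and the example shapes are hypothetical:

```python
import numpy as np

def nearest_neighbor_correspondence(S1, S2):
    """Map each point of S1 to its closest point in S2 (Euclidean).

    S1: (n, 3) array, S2: (m, 3) array. Returns an index array of
    length n giving, for each point in S1, its match in S2.
    """
    # Pairwise squared distances, shape (n, m), via broadcasting.
    d2 = ((S1[:, None, :] - S2[None, :, :]) ** 2).sum(-1)
    # Each source point maps to its nearest target point.
    return d2.argmin(axis=1)

S1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
S2 = np.array([[1.1, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(nearest_neighbor_correspondence(S1, S2))  # → [1 0]
```

Such purely geometric nearest-neighbor matching fails under non-rigid deformation, which is what motivates the learned approaches summarized below.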
(Image credit: Papersgraph)
This work presents two complementary approaches for learning elementary structures: (i) patch deformation learning and (ii) point translation learning, which can be extended to abstract structures of higher dimensions for improved results.
This paper demonstrates that learning the basis from data can both improve robustness and lead to better accuracy in challenging settings, and proposes the first end-to-end trainable functional map-based correspondence approach in which both the basis and the descriptors are learned from data.
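For context on what the functional map framework computes (independent of the learned variant above), the classic pipeline solves a least-squares problem for a small matrix C that maps descriptor coefficients expressed in a source basis to coefficients in a target basis. A minimal numpy sketch, with assumed shapes and a hypothetical function name:

```python
import numpy as np

def fit_functional_map(A, B):
    """Least-squares functional map C such that C @ A ≈ B.

    A: (k1, d) descriptor coefficients in the source basis,
    B: (k2, d) descriptor coefficients in the target basis.
    Solves min_C ||C A - B||_F columnwise via lstsq.
    """
    # C A = B  <=>  A.T C.T = B.T, which lstsq solves for C.T.
    X, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return X.T  # shape (k2, k1)
```

Learned approaches replace the fixed (e.g., Laplace-Beltrami) basis and handcrafted descriptors that produce A and B with data-driven ones, while keeping this linear-algebraic core.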
This work presents a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences, and shows that this method is robust to many types of perturbations, and generalizes to non-human shapes.
The key of the approach is an orientation estimation module with a domain-adaptive discriminator that aligns the orientations of point cloud pairs, which significantly alleviates mispredictions on symmetric parts; on top of this, a self-ensembling framework is designed for unsupervised point cloud shape correspondence.
We present Diff3F as a simple, robust, and class-agnostic feature descriptor that can be computed for untextured input shapes (meshes or point clouds). Our method distills diffusion features from image foundational models onto input shapes. Specifically, we use the input shapes to produce depth and normal maps as guidance for conditional image synthesis. In the process, we produce (diffusion) features in 2D that we subsequently lift and aggregate on the original surface. Our key observation is that even if the conditional image generations obtained from multi-view rendering of the input shapes are inconsistent, the associated image features are robust and, hence, can be directly aggregated across views. This produces semantic features on the input shapes, without requiring additional data or training. We perform extensive experiments on multiple benchmarks (SHREC'19, SHREC'20, FAUST, and TOSCA) and demonstrate that our features, being semantic instead of geometric, produce reliable correspondence across both isometric and non-isometrically related shape families. Code is available at https://github.com/niladridutt/Diffusion-3D-Features.
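The aggregation step described in the abstract (lifting per-view 2D features back onto the surface and averaging them across views) can be sketched as follows; the array shapes, visibility masks, and function name are assumptions for illustration, not the released implementation:

```python
import numpy as np

def aggregate_view_features(feat_maps, uv, visible):
    """Average per-view image features onto surface points.

    feat_maps: (V, H, W, C) feature map per rendered view
        (hypothetical output of a 2D foundation model).
    uv: (V, N, 2) integer pixel coordinates of each of N surface
        points when projected into each view.
    visible: (V, N) boolean visibility of each point per view.
    Returns (N, C) averaged point features (zero if never visible).
    """
    V, N = visible.shape
    C = feat_maps.shape[-1]
    acc = np.zeros((N, C))
    cnt = np.zeros((N, 1))
    for v in range(V):
        idx = np.where(visible[v])[0]
        px = uv[v, idx]
        # Index feature maps as (row, col) = (y, x).
        acc[idx] += feat_maps[v, px[:, 1], px[:, 0]]
        cnt[idx] += 1.0
    return acc / np.maximum(cnt, 1.0)  # avoid divide-by-zero
```

The key observation quoted above is exactly why this plain averaging works: the per-view features are robust enough that no cross-view consistency enforcement is needed before aggregation.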
CorrNet3D is the first unsupervised, end-to-end deep learning framework that learns dense correspondence between 3D shapes via deformation-like reconstruction, removing the need for annotated data.
Experiments on challenging, real-world imagery from ScanNet show that ROCA significantly improves on state of the art, from 9.5% to 17.6% in retrieval-aware CAD alignment accuracy.
Deep Point Correspondence’s novelty lies in omitting the decoder component: instead of regressing coordinates with a decoder, it uses latent similarity together with the input coordinates themselves to construct the output point cloud and determine correspondence.
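The decoder-free idea of deriving correspondence from latent similarity can be sketched as a soft assignment matrix built from per-point features; this is a generic illustration under assumed shapes, not the paper's exact formulation:

```python
import numpy as np

def soft_correspondence(Z1, Z2, tau=0.1):
    """Soft correspondence from latent similarity (decoder-free sketch).

    Z1: (n, d), Z2: (m, d) per-point latent features. Returns a
    row-stochastic (n, m) matrix P; P @ X2 "reconstructs" shape 1
    from shape 2's input coordinates X2, with no coordinate-
    regressing decoder involved.
    """
    sim = (Z1 @ Z2.T) / tau           # (n, m) scaled latent similarity
    sim -= sim.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(sim)
    return P / P.sum(axis=1, keepdims=True)  # row-wise softmax
```

Hard correspondences follow from `P.argmax(axis=1)`, while the soft matrix keeps the construction differentiable for training.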