Training a linear classifier (e.g., an SVM) on representations learned in an unsupervised manner on a pretraining dataset (e.g., ShapeNet).
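As a concrete illustration, the protocol can be sketched with scikit-learn. The feature dimensionality, dataset sizes, and the choice of `LinearSVC` below are illustrative assumptions, and random arrays stand in for the outputs of a pretrained encoder:

```python
# Minimal sketch of the linear-evaluation protocol. A frozen,
# unsupervised point-cloud encoder is assumed to have already produced
# the feature vectors; the encoder itself is out of scope here.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for embeddings from an encoder pretrained without labels
# (e.g., on ShapeNet) and evaluated on a labeled set (e.g., ModelNet40):
# 1024-D features, 40 classes. Real pipelines would load actual features.
train_feats = rng.normal(size=(200, 1024))
train_labels = rng.integers(0, 40, size=200)
test_feats = rng.normal(size=(50, 1024))
test_labels = rng.integers(0, 40, size=50)

# The encoder stays frozen; only this linear classifier is trained.
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(train_feats, train_labels)
acc = accuracy_score(test_labels, clf.predict(test_feats))
print(f"linear evaluation accuracy: {acc:.3f}")
```

The reported metric is simply the test accuracy of this linear probe, so it measures how linearly separable the frozen representations are.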
These leaderboards are used to track progress in Unsupervised 3D Point Cloud Linear Evaluation.
A novel framework, the 3D Generative Adversarial Network (3D-GAN), generates 3D objects from a probabilistic latent space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets, and yields a powerful 3D shape descriptor with wide applications in 3D object recognition.
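A minimal sketch of a 3D-GAN-style volumetric generator in PyTorch. The latent size and channel widths below follow the paper's general design but are assumptions, not a faithful reproduction of the authors' implementation:

```python
# Sketch of a volumetric GAN generator: a latent vector is mapped to a
# 64x64x64 occupancy grid by a stack of 3D transposed convolutions.
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    def __init__(self, z_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            # 1^3 -> 4^3
            nn.ConvTranspose3d(z_dim, 512, 4, stride=1, padding=0),
            nn.BatchNorm3d(512), nn.ReLU(),
            # 4^3 -> 8^3
            nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm3d(256), nn.ReLU(),
            # 8^3 -> 16^3
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(),
            # 16^3 -> 32^3
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(),
            # 32^3 -> 64^3, occupancy probabilities in [0, 1]
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z):
        # Reshape the latent vector to a 1x1x1 volume with z_dim channels.
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

gen = VoxelGenerator()
voxels = gen(torch.randn(2, 200))  # (2, 1, 64, 64, 64)
```

For linear evaluation, the corresponding discriminator's intermediate activations serve as the shape descriptor fed to the linear classifier.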
The proposed SO-Net, a permutation-invariant architecture for deep learning on unordered point clouds, achieves performance similar to or better than state-of-the-art approaches on tasks such as point cloud reconstruction, classification, object part segmentation, and shape retrieval.
Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder uses only about 7% of the parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet
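The folding-based decoder described above can be sketched in PyTorch. The two-fold structure (codeword + 2D grid point → 3D point, then codeword + intermediate 3D point → refined 3D point) follows the paper's general idea, but the layer widths, grid size, and codeword dimension here are assumptions:

```python
# Illustrative sketch of a folding-based decoder: a shared MLP "folds" a
# fixed 2D grid onto a 3D surface, conditioned on a per-shape codeword.
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        # Fixed 2D grid of grid_size**2 points covering [-1, 1]^2.
        lin = torch.linspace(-1.0, 1.0, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))
        # First fold: (codeword, 2D grid point) -> intermediate 3D point.
        self.fold1 = nn.Sequential(
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )
        # Second fold: (codeword, intermediate 3D point) -> refined 3D point.
        self.fold2 = nn.Sequential(
            nn.Linear(code_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, codeword):
        # codeword: (batch, code_dim); replicate it for every grid point.
        b, m = codeword.shape[0], self.grid.shape[0]
        code = codeword.unsqueeze(1).expand(b, m, -1)
        grid = self.grid.unsqueeze(0).expand(b, m, -1)
        mid = self.fold1(torch.cat([code, grid], dim=-1))
        return self.fold2(torch.cat([code, mid], dim=-1))

decoder = FoldingDecoder()
points = decoder(torch.randn(2, 512))  # (2, 2025, 3) reconstructed points
```

Because both folds are small shared MLPs applied per grid point, the parameter count is tiny compared to a fully-connected decoder that emits all output coordinates at once, which is the source of the roughly 7% figure cited above.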
This paper leverages 3D self-supervision for learning downstream tasks on point clouds with fewer labels and demonstrates that its approach outperforms the state-of-the-art.
This paper shows that the method outperforms previous pre-training methods on object classification and on both part-based and semantic segmentation tasks, and that even when pre-trained on a single dataset (ModelNet40), it improves accuracy across different datasets and encoders.
Experimental results demonstrate that, compared with supervised learning methods, the learned self-supervised representation enables various models to attain comparable or even better performance, while the pre-trained models generalize to downstream tasks including 3D shape classification, 3D object detection, and 3D semantic segmentation.