3260 papers • 126 benchmarks • 313 datasets
This task aims to solve inherent problems in raw point clouds: sparsity, noise, and irregularity.
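Because reconstructed point sets are unordered and irregular, losses in this task typically avoid assuming any point correspondence. A minimal sketch (assuming NumPy; illustrative, not any particular paper's implementation) of the Chamfer distance, a metric commonly used for point cloud reconstruction:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two unordered point sets.

    p: (N, 3) array, q: (M, 3) array. No ordering or one-to-one
    correspondence is assumed, which suits irregular point clouds.
    """
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Each point is matched to its nearest neighbour in the other set.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds give zero distance even when the points are reordered.
cloud = np.random.rand(128, 3)
shuffled = cloud[np.random.permutation(128)]
```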
The main contribution is extending the YOLO v2 loss function to regress the yaw angle, the 3D box center in Cartesian coordinates, and the box height as a direct regression problem, which enables the real-time performance essential for automated driving.
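The extended regression above can be sketched roughly as follows (a hypothetical loss combining the extra terms; the names, the unit weighting, and the sin/cos yaw encoding are illustrative assumptions, not the paper's code):

```python
import numpy as np

def oriented_box_loss(pred, target):
    """Illustrative regression loss over extended 3D box parameters.

    pred/target: dicts with 'center' ((x, y, z) array), 'height'
    (scalar) and 'yaw' (radians). All terms are direct regressions,
    keeping the loss cheap enough for real-time detection.
    """
    center_err = np.sum((pred["center"] - target["center"]) ** 2)
    height_err = (pred["height"] - target["height"]) ** 2
    # Regress yaw via sin/cos to avoid the 2*pi wrap-around discontinuity.
    yaw_err = (np.sin(pred["yaw"]) - np.sin(target["yaw"])) ** 2 \
            + (np.cos(pred["yaw"]) - np.cos(target["yaw"])) ** 2
    return center_err + height_err + yaw_err
```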
The proposed SO-Net, a permutation-invariant architecture for deep learning with orderless point clouds, demonstrates performance similar to or better than state-of-the-art approaches in recognition tasks such as point cloud reconstruction, classification, object part segmentation, and shape retrieval.
This paper proposes PointMixer, a universal point set operator that facilitates information sharing among unstructured 3D points by simply replacing token-mixing MLPs with a softmax function, which can be broadly used in the network as inter-set mixing, intra-set mixing, and pyramid mixing.
3D-LMNet, a latent embedding matching approach for 3D reconstruction, is proposed, which outperforms state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of the approach.
An efficient and effective dense hybrid recurrent multi-view stereo net with dynamic consistency checking, namely D^2HC-RMVSNet, is proposed for accurate dense point cloud reconstruction; it dynamically aggregates the geometric consistency matching error among all the views.
It is demonstrated that jointly training for both reconstruction and segmentation leads to improved performance in both the tasks, when compared to training for each task individually.
A novel differentiable projection module, called 'CAPNet', is introduced to obtain 2D masks from a predicted 3D point cloud reconstruction, which significantly outperforms existing projection-based approaches on a large-scale synthetic dataset.
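A projection of this kind can be sketched as follows (a hard nearest-pixel rasterisation for clarity, assuming a pinhole camera with NumPy; a differentiable module such as the one above would replace the rounding with a smooth kernel):

```python
import numpy as np

def project_to_mask(points, K, h, w):
    """Project a 3D point cloud to a binary 2D mask.

    points: (N, 3) in camera coordinates with z > 0; K: 3x3 intrinsics.
    This hard rasterisation is illustrative only: rounding to the
    nearest pixel is not differentiable with respect to the points.
    """
    uv = (K @ points.T).T            # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]      # perspective divide
    px = np.round(uv).astype(int)    # nearest-pixel rasterisation
    mask = np.zeros((h, w), dtype=bool)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) \
           & (px[:, 1] >= 0) & (px[:, 1] < h)
    mask[px[inside, 1], px[inside, 0]] = True
    return mask
```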
This work introduces DensePCR, a deep pyramidal network for point cloud reconstruction that hierarchically predicts point clouds of increasing resolution, and proposes an architecture that first predicts a low-resolution point cloud, and then hierarchically increases the resolution by aggregating local and global point features to deform a grid.
This paper proposes a method to reconstruct the complete 3D shape of an object from a single RGB image, with robustness to occlusion, and shows improvements for reconstruction of non-occluded and partially occluded objects by providing the predicted complete silhouette as guidance.
This paper proposes an effective and efficient pyramid multi-view stereo (MVS) net with self-adaptive view aggregation for accurate and complete dense point cloud reconstruction and establishes a new state-of-the-art on the DTU dataset with significant improvements in the completeness and overall quality.