3260 papers • 126 benchmarks • 313 datasets
Point cloud super-resolution is a fundamental problem for 3D reconstruction and 3D data understanding. It takes a low-resolution (LR) point cloud as input and generates a high-resolution (HR) point cloud with rich details
These leaderboards are used to track progress in Point Cloud Super-Resolution.
Use these libraries to find Point Cloud Super-Resolution models and implementations.
No subtasks available.
A data-driven point cloud upsampling technique that learns multi-level features per point and expands the point set via a multi-branch convolution unit implicitly in feature space; the upsampled points show better uniformity and lie closer to the underlying surfaces.
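The feature-space expansion described above can be sketched with a toy numpy example: each per-point feature vector is pushed through several independent branches, and the branch outputs become the features of the new, denser point set. The random branch weights here are a stand-in for the learned multi-branch convolution unit, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_features(feats: np.ndarray, r: int) -> np.ndarray:
    """Expand N per-point features into r*N features using r independent
    linear branches (a toy stand-in for the multi-branch convolution unit;
    the branch weights are random here, not learned)."""
    n, c = feats.shape
    branches = rng.standard_normal((r, c, c)) / np.sqrt(c)  # one weight matrix per branch
    expanded = np.einsum('nc,rcd->rnd', feats, branches)    # (r, N, C)
    return expanded.reshape(r * n, c)

feats = rng.standard_normal((128, 32))   # 128 points, 32-dim features
up = expand_features(feats, r=4)
print(up.shape)  # (512, 32)
```

In the actual method a final regression layer maps each expanded feature back to 3D coordinates; the sketch stops at the feature expansion itself.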
A new point cloud upsampling network called PU-GAN, which is formulated based on a generative adversarial network (GAN), to learn a rich variety of point distributions from the latent space and upsample points over patches on object surfaces.
Qualitative and quantitative experiments show that the method significantly outperforms state-of-the-art learning-based and optimization-based approaches, both in handling low-resolution inputs and in revealing high-fidelity details.
A novel model called NodeShuffle is proposed, which uses a Graph Convolutional Network (GCN) to better encode local point information from point neighborhoods to improve state-of-the-art upsampling methods.
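The "shuffle" in NodeShuffle is the point-cloud analogue of pixel shuffle from image super-resolution: a feature map of shape (N, r·C) is rearranged into r·N points with C channels each. A minimal sketch of that rearrangement, assuming the GCN encoder has already produced the (N, r·C) features (the encoder itself is not reproduced here):

```python
import numpy as np

def node_shuffle(feats: np.ndarray, r: int) -> np.ndarray:
    """Rearrange channel groups into new points, analogous to pixel shuffle:
    (N, r*C) -> (r*N, C). In NodeShuffle the input features come from a
    Graph Convolutional Network; here they are an arbitrary array."""
    n, rc = feats.shape
    assert rc % r == 0, "channel count must be divisible by the upsampling ratio"
    c = rc // r
    return feats.reshape(n, r, c).transpose(1, 0, 2).reshape(r * n, c)

x = np.arange(12, dtype=float).reshape(2, 6)  # N=2 points, r*C=6 channels
y = node_shuffle(x, r=3)
print(y.shape)  # (6, 2)
```

Each original point contributes one new point per channel group, so the upsampling ratio is fixed by how many channel groups the encoder emits.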
A novel method called “Meta-PU” is proposed; it is the first to support point cloud upsampling of arbitrary scale factors with a single model, and it even outperforms existing methods trained for a specific scale factor only.
This paper addresses the problem of generating uniform dense point clouds to describe the underlying geometric structures from given sparse point clouds. Due to the irregular and unordered nature, point cloud densification as a generative task is challenging. To tackle the challenge, we propose a novel deep neural network based method, called PUGeo-Net, that learns a $3\times 3$ linear transformation matrix $\mathbf{T}$ for each input point. Matrix $\mathbf{T}$ approximates the augmented Jacobian matrix of a local parameterization and builds a one-to-one correspondence between the 2D parametric domain and the 3D tangent plane so that we can lift the adaptively distributed 2D samples (which are also learned from data) to 3D space. After that, we project the samples to the curved surface by computing a displacement along the normal of the tangent plane. PUGeo-Net is fundamentally different from the existing deep learning methods that are largely motivated by the image super-resolution techniques and generate new points in the abstract feature space. Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details. Moreover, PUGeo-Net can compute the normal for the original and generated points, which is highly desired by the surface reconstruction algorithms. Computational results show that PUGeo-Net, the first neural network that can jointly generate vertex coordinates and normals, consistently outperforms the state-of-the-art in terms of accuracy and efficiency for upsampling factors $4\sim 16$.
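The geometric sampling step in the abstract can be illustrated with a small numpy sketch: 2D parametric samples are mapped by the learned matrix $\mathbf{T}$ into the tangent plane at a point, then offset along the normal to land on the curved surface. In PUGeo-Net, $\mathbf{T}$, the 2D samples, the normal, and the displacements are all predicted by the network; here they are supplied by hand.

```python
import numpy as np

def lift_samples(point, T, uv_samples, normal, displacements):
    """Lift 2D parametric samples to 3D via the linear map T, then offset
    each lifted sample along the tangent-plane normal (a hand-constructed
    sketch of PUGeo-Net's geometry-centric sampling step)."""
    uvw = np.hstack([uv_samples, np.zeros((len(uv_samples), 1))])  # embed (u, v) as (u, v, 0)
    tangent_pts = point + uvw @ T.T                 # map samples into the tangent plane
    return tangent_pts + displacements[:, None] * normal  # push toward the surface

p = np.array([0.0, 0.0, 1.0])                # input point
T = np.eye(3)                                # identity map: tangent plane = xy-plane
uv = np.array([[0.1, 0.0], [0.0, 0.2]])      # learned 2D samples (here fixed)
n = np.array([0.0, 0.0, 1.0])                # tangent-plane normal
d = np.array([0.05, -0.05])                  # per-sample normal displacements
new_pts = lift_samples(p, T, uv, n, d)
print(new_pts)  # [[0.1  0.   1.05]
                #  [0.   0.2  0.95]]
```

Because the new points are constructed in 3D around each input point rather than decoded from an abstract feature space, the method can also return per-point normals essentially for free.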
In this paper, we propose TP-NoDe, a novel Topology-aware Progressive Noising and Denoising technique for 3D point cloud upsampling. TP-NoDe revisits point cloud upsampling from a new perspective: it adds local topological noise using a Density-Aware k-nearest-neighbour (DA-kNN) algorithm and then denoises, mapping the noisy perturbations onto the topology of the point cloud. Unlike previous methods, we progressively upsample the point cloud, starting at a 2× upsampling ratio and advancing to the desired ratio. TP-NoDe generates intermediate upsampling resolutions for free, obviating the need to train different models for varying upsampling ratios. TP-NoDe mitigates the need for task-specific training of upsampling networks for a specific upsampling ratio by reusing a point cloud denoising framework. We demonstrate the effectiveness of TP-NoDe on the PU-GAN dataset and compare it with state-of-the-art upsampling methods. The code is publicly available at https://github.com/Akash-Kumbar/TP-NoDe.
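The progressive 2×-at-a-time scheme can be sketched as a loop of noise-then-denoise steps. The "denoiser" below is a trivial averaging placeholder rather than the paper's learned denoising network, and the noise is plain Gaussian rather than DA-kNN-based; the sketch only shows the control flow, assuming the target ratio is a power of two.

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_then_denoise(points: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """One progressive 2x step: perturb a copy of every point, 'denoise'
    the copies (here: a toy averaging step, not a learned network), and
    append them to the originals."""
    noisy = points + rng.normal(scale=sigma, size=points.shape)
    denoised = 0.5 * (noisy + points)   # placeholder for the denoising framework
    return np.vstack([points, denoised])

def progressive_upsample(points: np.ndarray, target_ratio: int) -> np.ndarray:
    """Repeat 2x steps until the requested ratio is reached; intermediate
    resolutions fall out of the loop for free."""
    r = 1
    while r < target_ratio:
        points = noise_then_denoise(points)
        r *= 2
    return points

pts = rng.standard_normal((100, 3))
up4 = progressive_upsample(pts, 4)
print(up4.shape)  # (400, 3)
```

Swapping the placeholder denoiser for a pretrained point cloud denoising model is what lets the method reach any power-of-two ratio without ratio-specific training.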
In this paper, we introduce ASUR3D, a novel methodology for the arbitrary-scale upsampling of 3D point clouds employing Local Occupancy Representation. Our proposed implicit occupancy representation enables efficient point classification, effectively discerning points belonging to the surface from non-surface points. Learning an implicit representation of open surfaces enables us to capture a better local neighbourhood representation, leading to finer refinement and reconstruction with enhanced preservation of intricate geometric details. Leveraging this capability, we can accurately sample an arbitrary number of points on the surface, facilitating precise and flexible upsampling. We demonstrate the effectiveness of ASUR3D on the PUGAN and PU1K benchmark datasets. Our proposed method achieves state-of-the-art results on all benchmarks and for all evaluation metrics. Additionally, we demonstrate the efficacy of our methodology on our own heritage data generated through photogrammetry, further confirming its effectiveness in diverse scenarios. The code is publicly available at https://github.com/Akash-Kumbar/ASUR3D.
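The occupancy-based accept/reject pattern behind this kind of arbitrary-scale upsampling can be illustrated with a toy implicit surface: scatter candidate samples, score each with an occupancy/distance function, and keep however many lie closest to the surface level set. ASUR3D learns a local implicit occupancy instead of the hand-written sphere used below; the sketch shows only the sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(2)

def upsample_by_occupancy(candidates, occupancy, k=64):
    """Keep the k candidate samples whose occupancy score is closest to the
    surface level set. Because k is a free parameter, any output resolution
    can be requested from the same occupancy function."""
    scores = np.abs(occupancy(candidates))
    keep = np.argsort(scores)[:k]
    return candidates[keep]

# toy implicit surface: the unit sphere, scored by signed distance
unit_sphere = lambda p: np.linalg.norm(p, axis=1) - 1.0

cands = rng.uniform(-1.2, 1.2, size=(5000, 3))
pts = upsample_by_occupancy(cands, unit_sphere, k=64)
print(pts.shape)  # (64, 3)
```

The arbitrary-scale property comes from decoupling the representation (the occupancy field) from the number of points sampled against it.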
This work presents an approach for compressing point cloud geometry by leveraging a lightweight super-resolution network, allowing it to obtain more accurate interpolation patterns by accessing a broader range of neighboring voxels at an acceptable computational cost.