3260 papers • 126 benchmarks • 313 datasets
3D point cloud segmentation is the process of classifying a point cloud into multiple homogeneous regions, such that points in the same region share the same properties. Segmentation is challenging because of the high redundancy, uneven sampling density, and lack of explicit structure in point cloud data. The problem has many applications in robotics, such as intelligent vehicles and autonomous mapping and navigation. Source: 3D point cloud segmentation: A survey
(Image credit: Papersgraph)
These leaderboards are used to track progress in 3D point cloud segmentation.
Use these libraries to find 3D point cloud segmentation models and implementations.
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
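The core idea of directly consuming point clouds while respecting permutation invariance can be illustrated with a minimal numpy sketch: a shared per-point MLP followed by a symmetric max-pool. The weights here are random placeholders, not the paper's trained parameters, and the layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random weights for a tiny shared MLP (sizes are assumptions).
W1 = rng.standard_normal((3, 16))
W2 = rng.standard_normal((16, 32))

def shared_mlp(points):
    """Apply the same MLP to every point independently: (N, 3) -> (N, 32)."""
    h = np.maximum(points @ W1, 0.0)   # per-point dense layer + ReLU
    return np.maximum(h @ W2, 0.0)

def pointnet_global_feature(points):
    """Max-pool over points: a symmetric function, so the descriptor is
    invariant to the order of the input points."""
    return shared_mlp(points).max(axis=0)

cloud = rng.standard_normal((128, 3))
shuffled = cloud[rng.permutation(128)]
assert np.allclose(pointnet_global_feature(cloud),
                   pointnet_global_feature(shuffled))
```

Because the pooling is a symmetric function, shuffling the input points leaves the global feature unchanged, which is exactly the permutation invariance the architecture is built around.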
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
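The nested partitioning this hierarchy relies on is typically built from farthest point sampling (to pick well-spread centroids) and ball grouping (to collect each centroid's local region). A minimal numpy sketch of those two steps, with function names chosen for illustration:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS: pick k well-spread centroid indices from an (N, 3) cloud."""
    n = points.shape[0]
    chosen = [0]                       # start from an arbitrary point
    dist = np.full(n, np.inf)          # distance to nearest chosen centroid
    for _ in range(k - 1):
        dist = np.minimum(
            dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return np.array(chosen)

def ball_group(points, center, radius):
    """Indices of points within `radius` of `center` (one local region)."""
    return np.where(np.linalg.norm(points - center, axis=1) <= radius)[0]

cloud = np.random.default_rng(1).standard_normal((256, 3))
centroids = farthest_point_sampling(cloud, 16)
region = ball_group(cloud, cloud[centroids[0]], radius=0.5)
```

A small point network can then be applied within each region, and the procedure repeated on the centroids to build the hierarchy described above.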
This work proposes SortNet, as part of the Point Transformer, which induces input permutation invariance by selecting points based on a learned score, to extract local and global features and relate both representations by introducing the local-global attention mechanism.
This work proposes EdgeConv, a new neural network module suitable for CNN-based high-level tasks on point clouds, including classification and segmentation, which acts on graphs dynamically computed in each layer of the network.
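The mechanism can be sketched in numpy: build a kNN graph from the current features, form edge features from each point and its neighbour offsets, apply an edge function, and max-aggregate. The linear map here is an illustrative placeholder for the learned edge MLP, not the paper's trained network.

```python
import numpy as np

def knn_indices(points, k):
    """For each point, indices of its k nearest neighbours (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def edge_conv(features, k=8, weight=None):
    """One EdgeConv step: an edge function of (x_i, x_j - x_i), max-pooled
    over neighbours. `weight` stands in for the learned edge MLP."""
    n, c = features.shape
    if weight is None:
        weight = np.eye(2 * c, c)      # placeholder linear map, not learned
    idx = knn_indices(features, k)
    x_i = np.repeat(features[:, None, :], k, axis=1)   # (N, k, C)
    x_j = features[idx]                                # (N, k, C)
    edge = np.concatenate([x_i, x_j - x_i], axis=-1)   # (N, k, 2C)
    return np.maximum(edge @ weight, 0.0).max(axis=1)  # (N, C)

pts = np.random.default_rng(2).standard_normal((32, 3))
out = edge_conv(pts, k=4)
```

Recomputing `knn_indices` from the current features at every layer, rather than fixing the graph once from the input coordinates, is what makes the graph "dynamic".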
Inspired by the outstanding 2D shape descriptor SIFT, a module called PointSIFT is designed that encodes information from different orientations and is adaptive to the scale of shapes; it outperforms state-of-the-art methods on standard benchmark datasets.
This paper presents a very simple but efficient algorithm for 3D line segment detection from large-scale unorganized point clouds, based on point cloud segmentation and 2D line detection.
Stratified Transformer is proposed, which captures long-range contexts and demonstrates strong generalization ability and high performance; a first-layer point embedding aggregates local information, which facilitates convergence and boosts performance.
This paper presents a comprehensive review of recent progress in deep learning methods for point clouds, covering three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation.
GDANet introduces a Geometry-Disentangle Module that dynamically disentangles point clouds into the contour and flat parts of 3D objects, denoted respectively by sharp and gentle variation components, and exploits a Sharp-Gentle Complementary Attention Module that regards the features from the sharp and gentle variation components as two holistic representations.
Spatially-Adaptive Convolution (SAC) is proposed to adopt different filters for different locations according to the input image to improve LiDAR point-cloud segmentation and outperform all previous published methods by at least 3.7% mIoU on the SemanticKITTI benchmark with comparable inference speed.