Detect line segments and their connecting junctions in a single perspective image.
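The task output is typically a vectorized wireframe: a set of junction coordinates plus the line segments connecting them. A minimal sketch of such a container (a hypothetical structure for illustration, not any particular parser's output format):

```python
from dataclasses import dataclass

@dataclass
class Wireframe:
    """Vectorized wireframe: junction coordinates plus the line
    segments that connect them (illustrative container only)."""
    junctions: list  # [(x, y), ...] sub-pixel junction positions
    segments: list   # [(i, j), ...] index pairs into `junctions`

    def segment_endpoints(self, k):
        """Return the two junction coordinates of segment k."""
        i, j = self.segments[k]
        return self.junctions[i], self.junctions[j]

    def degree(self, i):
        """Number of segments meeting at junction i."""
        return sum(i in pair for pair in self.segments)

# A toy wireframe: three junctions forming an "L".
wf = Wireframe(junctions=[(0.0, 0.0), (10.0, 0.0), (10.0, 8.0)],
               segments=[(0, 1), (1, 2)])
print(wf.degree(1))             # junction 1 joins both segments -> 2
print(wf.segment_endpoints(0))  # ((0.0, 0.0), (10.0, 0.0))
```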
(Image credit: Papersgraph)
This work presents a conceptually simple yet effective algorithm that significantly outperforms previous state-of-the-art wireframe and line-extraction algorithms, and proposes a new metric for wireframe evaluation that penalizes overlapping line segments and incorrect line connectivity.
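One way such a metric can penalize overlaps is one-to-one matching: each ground-truth segment may be claimed at most once, so a duplicated or overlapping prediction becomes a false positive. The sketch below is a simplified greedy matcher in that spirit; the cost function, threshold, and matching rule are illustrative assumptions, not the paper's exact metric definition.

```python
def match_segments(preds, gts, thresh=5.0):
    """Greedy one-to-one matching of predicted segments to ground
    truth by summed squared endpoint distance (illustrative sketch,
    not the paper's exact metric). Each GT segment is matched at
    most once, so overlapping duplicates count as false positives.
    Returns (true positives, false positives)."""
    def cost(a, b):
        # squared endpoint distance, min over the two endpoint orderings
        d = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        return min(d(a[0], b[0]) + d(a[1], b[1]),
                   d(a[0], b[1]) + d(a[1], b[0]))
    used, tp = set(), 0
    for p in preds:
        best = min(((cost(p, g), k) for k, g in enumerate(gts)
                    if k not in used), default=(None, None))
        if best[0] is not None and best[0] <= thresh:
            used.add(best[1])
            tp += 1
    return tp, len(preds) - tp

gts = [((0, 0), (10, 0)), ((10, 0), (10, 8))]
# one near-duplicate of a GT segment, one exact duplicate, one spurious
preds = [((0.2, 0), (10, 0.1)), ((0, 0), (10, 0)), ((5, 5), (6, 6))]
print(match_segments(preds, gts))  # (1, 2): the duplicate is penalized
```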
This paper presents a fast and parsimonious parsing method to accurately and robustly detect a vectorized wireframe in an input image with a single forward pass, and is thus called Holistically-Attracted Wireframe Parser (HAWP).
This work proposes the first joint detection and description of line segments in a single deep network, which is highly discriminative, while remaining robust to viewpoint changes and occlusions.
It is shown qualitatively that SRW-Net handles complex room geometries better than previous Room Layout Estimation algorithms, while quantitatively outperforming the baseline in non-semantic Wireframe Detection.
This article presents Holistically-Attracted Wireframe Parsing (HAWP), a method for geometric analysis of 2D images containing wireframes formed by line segments and junctions. HAWP utilizes a parsimonious Holistic Attraction (HAT) field representation that encodes line segments using a closed-form 4D geometric vector field. The proposed HAWP consists of three sequential components empowered by end-to-end and HAT-driven designs: 1) generating a dense set of line segments from HAT fields and endpoint proposals from heatmaps, 2) binding the dense line segments to sparse endpoint proposals to produce initial wireframes, and 3) filtering false positive proposals through a novel endpoint-decoupled line-of-interest aligning (EPD LOIAlign) module that captures the co-occurrence between endpoint proposals and HAT fields for better verification. Thanks to our novel designs, HAWPv2 shows strong performance in fully supervised learning, while HAWPv3 excels in self-supervised learning, achieving superior repeatability scores and efficient training (24 GPU hours on a single GPU). Furthermore, HAWPv3 exhibits a promising potential for wireframe parsing in out-of-distribution images without providing ground truth labels of wireframes.
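The abstract describes the HAT field as a closed-form 4D descriptor of a line segment as seen from each pixel. One plausible reading (distance to the supporting line, line orientation, and the angles subtended by the two endpoints about the foot of the perpendicular) can be sketched as below; the exact parameterization is defined in the HAWP paper, so treat this as an illustrative assumption.

```python
import math

def hat_attraction(p, e1, e2):
    """Sketch of a 4D attraction descriptor of segment (e1, e2)
    as seen from pixel p. Illustrative reading of a closed-form
    4D field, not HAWP's exact HAT parameterization."""
    dx, dy = e2[0] - e1[0], e2[1] - e1[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length              # unit direction of the line
    t = (p[0] - e1[0]) * ux + (p[1] - e1[1]) * uy  # projection parameter
    f = (e1[0] + t * ux, e1[1] + t * uy)           # foot of the perpendicular
    d = math.hypot(p[0] - f[0], p[1] - f[1])       # pixel-to-line distance
    theta = math.atan2(dy, dx)                     # line orientation
    # angles from the perpendicular direction toward each endpoint
    a1 = math.atan2(-t, d)            # toward e1 (negative side of f)
    a2 = math.atan2(length - t, d)    # toward e2 (positive side of f)
    return d, theta, a1, a2

# pixel at (0, 5) looking at a horizontal segment from (-3, 0) to (4, 0)
print(hat_attraction((0, 5), (-3, 0), (4, 0)))
```

Because the map from segment to descriptor has a closed-form inverse, a dense per-pixel field of such vectors can be decoded back into line segment proposals, which is what makes the representation parsimonious.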
This work designs a general frame-event feature fusion network to extract and fuse the detailed image textures and low-latency event edges and utilizes the state-of-the-art wireframe parsing networks to detect line segments on the fused feature map.