3260 papers • 126 benchmarks • 313 datasets
A novel and simple neural network module is proposed, termed OrigamiNet, that can augment any CTC-trained, fully convolutional single-line text recognizer, converting it into a multi-line version by providing the model with enough spatial capacity to properly collapse a 2D input signal into 1D without losing information.
An end-to-end-trainable attention module for convolutional neural network (CNN) architectures is proposed that can bootstrap standard CNNs for the task of image classification, demonstrating superior generalisation across 6 unseen benchmark datasets.
Object attention maps generated by image classifiers are usually used as priors for weakly-supervised segmentation approaches. However, normal image classifiers produce attention only at the most discriminative object parts, which limits the performance of the weakly-supervised segmentation task. Therefore, how to effectively identify entire object regions in a weakly-supervised manner has always been a challenging and meaningful problem. We observe that the attention maps produced by a classification network continuously focus on different object parts during training. In order to accumulate the discovered different object parts, we propose an online attention accumulation (OAA) strategy which maintains a cumulative attention map for each target category in each training image, so that the integral object regions can be gradually promoted as training proceeds. These cumulative attention maps, in turn, serve as pixel-level supervision, which can further assist the network in discovering more integral object regions. Our method (OAA) can be plugged into any classification network and progressively accumulates the discriminative regions into integral objects as training proceeds. Despite its simplicity, when applying the resulting attention maps to the weakly-supervised semantic segmentation task, our approach improves on the existing state-of-the-art methods on the PASCAL VOC 2012 segmentation benchmark, achieving a mIoU score of 66.4% on the test set. Code is available at https://mmcheng.net/oaa/.
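The core of OAA as summarized above is maintaining a cumulative attention map per target category and merging in the map from each training step. A minimal sketch of that accumulation step, assuming an element-wise maximum as the combination rule (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def accumulate_attention(cumulative, new_map):
    """Merge the attention map from the current training step into the
    cumulative map for one target category. An element-wise maximum keeps
    every object part that was highlighted at any point during training
    (assumed combination rule for this sketch)."""
    if cumulative is None:  # first attention map seen for this category
        return new_map.copy()
    return np.maximum(cumulative, new_map)

# toy example: maps from two training stages highlight different parts
step1 = np.array([[0.9, 0.1],
                  [0.0, 0.0]])  # one part is discriminative early on
step2 = np.array([[0.1, 0.0],
                  [0.0, 0.8]])  # another part is discovered later
cum = accumulate_attention(None, step1)
cum = accumulate_attention(cum, step2)
print(cum)  # covers both parts: [[0.9, 0.1], [0.0, 0.8]]
```

The cumulative map grows monotonically, which matches the summary's claim that integral object regions are "gradually promoted" rather than flickering between parts.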
High uncertainty is introduced as a criterion to localize non-discriminative regions that do not affect the classifier's decision, and is modeled with novel Kullback-Leibler (KL) divergence losses that evaluate the deviation of posterior predictions from the uniform distribution.
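The criterion above rests on a simple quantity: the KL divergence between a pixel's posterior class distribution and the uniform distribution, which is near zero exactly where the classifier is uncertain. A small sketch of that measurement (the function name and vectorization are assumptions of this illustration, not the paper's implementation):

```python
import numpy as np

def kl_from_uniform(probs, eps=1e-12):
    """KL(p || u) for a probability vector p and the uniform distribution u
    over k classes: sum_i p_i * log(p_i * k). Values near zero indicate a
    near-uniform (high-uncertainty) prediction, which the summarized method
    uses to flag non-discriminative regions."""
    probs = np.asarray(probs, dtype=float)
    k = probs.shape[-1]
    return np.sum(probs * np.log(probs * k + eps), axis=-1)

confident = kl_from_uniform([0.97, 0.01, 0.01, 0.01])  # large divergence
uncertain = kl_from_uniform([0.25, 0.25, 0.25, 0.25])  # essentially zero
print(confident > uncertain)  # True
```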
This paper proposes SegGini, a weakly supervised segmentation method using graphs that can utilize weak multiplex annotations, i.e. inexact and incomplete annotations, to segment arbitrary and large images, scaling from tissue microarray to whole slide image (WSI).
This work studies the learning dynamics of deep segmentation networks trained on inaccurately annotated data and proposes a new method, built on two key elements, for segmentation from noisy annotations; it outperforms standard approaches on a medical-imaging segmentation task where noise is synthesized to mimic human annotation errors.
SPADA is introduced, a framework for fuel map delineation that addresses the challenges associated with LC segmentation using sparse annotations and domain adaptation techniques for semantic segmentation, and it outperforms state-of-the-art semantic segmentation approaches as well as third-party products.
This work proposes Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space of a CNN, and demonstrates the generality of this new learning framework.
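The idea of optimizing a loss over linear constraints on a CNN's output can be sketched with a simple hinge penalty on a constraint of the form a·p ≥ b over the summed class probabilities (e.g. a lower bound on expected foreground size). Note this is a simplified surrogate: the actual CCNN formulation fits a latent label distribution under the constraints, and the names and hinge form here are assumptions of this sketch:

```python
import numpy as np

def linear_constraint_penalty(probs, a, b):
    """Squared-hinge penalty for one linear constraint a . p >= b on the
    per-class expected pixel counts of a predicted probability map
    `probs` (shape: pixels x classes). Zero when the constraint holds."""
    totals = probs.sum(axis=0)                 # expected pixel count per class
    violation = b - float(np.dot(a, totals))   # positive if constraint violated
    return max(violation, 0.0) ** 2

# example: require at least 2 expected foreground pixels out of 4
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.7, 0.3],
                  [0.6, 0.4]])   # columns: [background, foreground]
a = np.array([0.0, 1.0])         # selects the foreground class count
print(linear_constraint_penalty(probs, a, b=2.0))  # 1.0, since (2.0 - 1.0)^2
```

Because the penalty is expressed over network outputs, any set of such linear constraints (size bounds, presence/absence from image-level labels) can be added to the training objective, which is the generality the summary refers to.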