These leaderboards are used to track progress in Weakly Supervised Instance Segmentation.
A self-ensembling framework in which instance segmentation and semantic correspondence are jointly guided by a structured teacher in addition to bounding-box supervision, revealing a symbiotic relationship in which the two tasks mutually benefit from each other.
The core idea is to redesign the mask-learning loss in instance segmentation, with no modification to the segmentation network itself; the redesigned mask loss is shown to yield surprisingly high-quality instance masks from box annotations alone.
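One way such a box-only mask loss can be built is a projection term: the predicted mask, max-projected onto the x and y axes, should match the ground-truth box's indicator projections. Below is a minimal NumPy sketch under that idea; the function name, the (H, W) probability-mask input, and the (x0, y0, x1, y1) box format are my assumptions, and the actual loss in such methods typically includes additional terms (e.g., a pairwise affinity term).

```python
import numpy as np

def projection_loss(pred_mask, box):
    """Dice loss between the mask's x/y max-projections and the box's.

    pred_mask: (H, W) array of predicted foreground probabilities.
    box: (x0, y0, x1, y1) ground-truth box in pixels (illustrative format).
    """
    H, W = pred_mask.shape
    x0, y0, x1, y1 = box
    # Indicator projections of the ground-truth box onto each axis.
    gt_x = np.zeros(W); gt_x[x0:x1] = 1.0
    gt_y = np.zeros(H); gt_y[y0:y1] = 1.0
    # Max-project the predicted mask onto the same axes.
    pr_x = pred_mask.max(axis=0)
    pr_y = pred_mask.max(axis=1)

    def dice(p, g):
        return 1.0 - 2.0 * (p * g).sum() / (p ** 2 + g ** 2).sum()

    return dice(pr_x, gt_x) + dice(pr_y, gt_y)
```

A mask that exactly fills the box incurs zero projection loss, while a mask that leaks outside the box (or misses rows/columns inside it) is penalized, which is why box supervision alone constrains the mask's extent.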
Existing instance segmentation models developed for full mask supervision can be seamlessly trained with point-based supervision collected via the proposed point annotation scheme, which is approximately five times faster than annotating full object masks, making high-quality instance segmentation more accessible in practice.
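Training with point annotations amounts to evaluating the mask loss only at the labeled points instead of over the full mask. A minimal sketch of that idea, assuming per-point binary labels; the helper name and interface are illustrative, not the paper's API.

```python
import numpy as np

def point_supervised_bce(pred_mask, point_coords, point_labels, eps=1e-6):
    """Binary cross-entropy evaluated only at annotated points.

    pred_mask: (H, W) array of predicted foreground probabilities.
    point_coords: list of (row, col) annotated locations.
    point_labels: list of 0/1 labels (background/foreground) per point.
    """
    total = 0.0
    for (r, c), y in zip(point_coords, point_labels):
        p = min(max(pred_mask[r, c], eps), 1 - eps)  # clamp for log stability
        total += -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return total / len(point_coords)
```

Because the loss touches only a handful of pixels per instance, the same network architecture used for full-mask training can consume these sparse labels unchanged.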
This paper addresses the challenging image-level supervised instance segmentation task by exploiting class peak responses to enable a classification network to extract instance masks, and reports state-of-the-art results on popular benchmarks, including PASCAL VOC 2012 and MS COCO.
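Extracting class peak responses starts from the local maxima of a class response map produced by the classification network. A minimal sketch of that peak detection step, assuming a 2-D response map; the window size and threshold are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def find_class_peaks(response, win=1, thresh=0.5):
    """Return (row, col) coordinates of local maxima in a class response map.

    A pixel is a peak if it equals the maximum of its (2*win+1)^2
    neighborhood and exceeds `thresh`.
    """
    H, W = response.shape
    peaks = []
    for i in range(H):
        for j in range(W):
            v = response[i, j]
            if v < thresh:
                continue
            i0, i1 = max(0, i - win), min(H, i + win + 1)
            j0, j1 = max(0, j - win), min(W, j + win + 1)
            if v >= response[i0:i1, j0:j1].max():
                peaks.append((i, j))
    return peaks
```

Each detected peak then serves as a seed for one object instance, which is what lets a purely image-level-trained classifier separate multiple instances of the same class.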
A weakly supervised model that jointly performs semantic and instance segmentation and is able to segment both “thing” and “stuff” classes, thus explaining all the pixels in the image.
The proposed deep model integrates MIL into a fully supervised instance segmentation network and is trained with an objective consisting of two terms: a unary term and a pairwise term.
Weakly supervised semantic instance segmentation with only image-level supervision, instead of relying on expensive pixel-wise masks or bounding box annotations, is an important problem to alleviate the data-hungry nature of deep learning. In this article, we tackle this challenging problem by aggregating the image-level information of all training images into a large knowledge graph and exploiting semantic relationships from this graph. Specifically, our effort starts with some generic segment-based object proposals (SOP) without category priors. We propose a multiple instance learning (MIL) framework, which can be trained in an end-to-end manner using training images with image-level labels. For each proposal, this MIL framework can simultaneously compute probability distributions and category-aware semantic features, with which we can formulate a large undirected graph. The category of background is also included in this graph to remove the massive noisy object proposals. An optimal multi-way cut of this graph can thus assign a reliable category label to each proposal. The denoised SOP with assigned category labels can be viewed as pseudo instance segmentation of training images, which are used to train fully supervised models. The proposed approach achieves state-of-the-art performance for both weakly supervised instance segmentation and semantic segmentation. The code is available at https://github.com/yun-liu/LIID.
This work utilizes higher-level information from the behavior of a trained object detector by seeking the smallest areas of the image from which the detector produces almost the same result as it does from the whole image.
A semantic knowledge transfer is proposed that obtains pseudo instance labels by transferring knowledge from weakly supervised semantic segmentation (WSSS) to weakly supervised instance segmentation (WSIS), eliminating the need for off-the-shelf proposals, along with a self-refinement method that mitigates the semantic drift problem.
This paper proposes generalized multiple instance learning (MIL) and smooth maximum approximation to integrate the bounding box tightness prior into the deep neural network in an end-to-end manner.
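A smooth maximum approximation lets the MIL bag score, normally a hard max over the pixel probabilities in a bag, stay differentiable so the whole network trains end to end. One common surrogate is the softmax-weighted average sketched below; the paper may use a different approximation, and the sharpness value here is illustrative.

```python
import math

def smooth_max(values, alpha=8.0):
    """Softmax-weighted average: a differentiable surrogate for max().

    As alpha -> infinity this approaches max(values); alpha controls
    how sharply the largest elements dominate.
    """
    m = max(values)  # subtract the max for numerical stability
    weights = [math.exp(alpha * (v - m)) for v in values]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total
```

Under the box tightness prior, every row or column crossing a ground-truth box is a positive bag whose smooth max is pushed toward 1, while bags outside any box are pushed toward 0.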