Multi-label learning with missing labels (MLML) addresses the setting where training data carries incomplete label information. Annotating multi-label datasets is a manual process that often depends on external sources, so typically only a subset of the relevant labels is collected. The usual assumption of complete label information therefore fails to hold, especially when the label space is large, and methods that rely on it can capture inaccurate label-label and label-feature relationships, leading to suboptimal solutions.
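A common baseline across this literature is to compute the training loss only over label entries that were actually annotated. Below is a minimal sketch in PyTorch, assuming an `observed_mask` tensor marking which entries were collected; all names here are illustrative, not from any specific paper.

```python
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, targets, observed_mask):
    """Binary cross-entropy computed only over observed label entries.

    logits:        (batch, num_labels) raw model outputs
    targets:       (batch, num_labels) float 0/1 labels; entries where
                   observed_mask is 0 are ignored (missing labels)
    observed_mask: (batch, num_labels) float 1/0 observation indicator
    """
    per_entry = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    # Average over observed entries only, guarding against an empty mask.
    return (per_entry * observed_mask).sum() / observed_mask.sum().clamp(min=1)
```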
This work considers the hardest version of this problem, where annotators provide only one relevant label for each image, and shows that in some cases it is possible to approach the performance of fully labeled classifiers despite training with significantly fewer confirmed labels.
It is proved that there is one and only one way to convert a classical loss function for fully segmented images into a proper label-set loss function, and the leaf-Dice loss is defined: a label-set generalization of the Dice loss particularly suited for partial supervision with only missing labels.
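For reference, the classical soft Dice loss that the leaf-Dice generalizes can be written, for predicted probabilities $p_i$ and ground-truth indicators $g_i$ over voxels $i$, with a small $\epsilon$ for numerical stability, as:

```latex
\mathcal{L}_{\mathrm{Dice}}(p, g) \;=\; 1 - \frac{2\sum_i p_i\, g_i + \epsilon}{\sum_i p_i + \sum_i g_i + \epsilon}
```

The leaf-Dice itself is the paper's label-set generalization of this quantity; its exact form is given there.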
This work proposes two simple yet effective methods based on robust loss design, building on the observation that a model can identify missing labels during training with high precision, to fulfill the potential of the loss function in MLML without adding extra procedures or complexity.
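One simple instantiation of that observation, sketched here as an illustration rather than the paper's exact losses: drop assumed-negative entries that the model confidently predicts as positive, treating them as likely missing labels.

```python
import torch
import torch.nn.functional as F

def rejection_bce_loss(logits, targets, reject_threshold=0.9):
    """BCE where assumed-negative entries predicted positive with
    probability above `reject_threshold` are treated as missing
    labels and excluded from the loss.

    targets: float 0/1 tensor where 0 may mean "missing" rather
    than "truly negative" (the MLML setting).
    """
    probs = torch.sigmoid(logits)
    # Likely-missing labels: marked negative but confidently predicted positive.
    likely_missing = (targets == 0) & (probs > reject_threshold)
    weights = (~likely_missing).float()
    per_entry = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    return (per_entry * weights).sum() / weights.sum().clamp(min=1)
```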
This work proposes a deep instance-level contrastive network, DICNet, that is adept at capturing consistent discriminative representations of multi-view multi-label data while avoiding the negative effects of missing views and missing labels.
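The instance-level contrastive component can be illustrated with a generic InfoNCE objective between two views of the same instances; this is a simplified sketch, and the full DICNet additionally handles missing views and missing labels.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Instance-level contrastive loss between two views.

    z1, z2: (batch, dim) embeddings of the same instances under two
    views; row i of z1 and row i of z2 form the positive pair.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Each instance must pick out its own counterpart in the other view.
    return F.cross_entropy(logits, labels)
```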
This work explores the class-level guidance information obtained by the Markov random walk, which is modeled on a dynamically created graph built over the class tracking matrix, and unifies the historical information of class distribution and class transitions caused by the pseudo-rectifying procedure to maintain the model’s unbiased enthusiasm towards assigning pseudo-labels to all classes.
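The random-walk mechanics behind that class-level guidance can be sketched generically: row-normalize a class-transition count matrix into a stochastic matrix and propagate a class distribution for a few steps. How the graph is built from the class tracking matrix and how the walk output rectifies pseudo-labels are specific to the paper.

```python
import numpy as np

def class_walk(transition_counts, start_dist, steps=3):
    """k-step Markov random walk over a class graph.

    transition_counts: (C, C) nonnegative matrix counting observed
    class transitions (e.g., from a pseudo-label tracking matrix).
    start_dist: (C,) initial class distribution.
    """
    # Row-normalize counts into a stochastic transition matrix.
    row_sums = transition_counts.sum(axis=1, keepdims=True)
    P = transition_counts / np.maximum(row_sums, 1e-12)
    dist = start_dist.astype(float)
    for _ in range(steps):
        dist = dist @ P
    return dist
```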
This paper analyzes generalized metrics budgeted at k in the expected test utility (ETU) framework, derives optimal prediction rules, and constructs computationally efficient approximations with provable regret guarantees and robustness against model misspecification.
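For the special case of precision@k, the ETU-optimal rule reduces to predicting the k labels with the highest estimated marginal probabilities; other budgeted metrics require the paper's derived rules. A minimal sketch:

```python
import numpy as np

def top_k_prediction(marginal_probs, k=5):
    """Predict the k labels with the highest estimated marginal
    probability -- the ETU-optimal rule for precision@k."""
    return np.argsort(-marginal_probs)[:k]

# Example: marginals over 6 labels, budget k=3.
probs = np.array([0.1, 0.8, 0.3, 0.7, 0.05, 0.6])
print(top_k_prediction(probs, k=3))   # -> [1 3 5]
```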
Max-margin deep generative models (mmDGMs) and a class-conditional variant (mmDCGMs) are proposed, which exploit the strongly discriminative principle of max-margin learning to improve the predictive performance of DGMs in both supervised and semi-supervised learning, while retaining their generative capability.
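The max-margin principle at the core of mmDGMs can be sketched as a Crammer-Singer-style multiclass hinge loss applied to class scores computed from the generative model's latent representation; this is a generic sketch, and the papers combine such a term with the variational objective.

```python
import torch

def multiclass_hinge(scores, labels, margin=1.0):
    """Crammer-Singer style max-margin loss on class scores,
    e.g. computed from a generative model's latent features.

    scores: (batch, num_classes) class scores
    labels: (batch,) int64 true class indices
    """
    true = scores.gather(1, labels.unsqueeze(1))        # (batch, 1)
    margins = (margin + scores - true).clamp(min=0)
    # Exclude the true class from the hinge terms.
    margins.scatter_(1, labels.unsqueeze(1), 0.0)
    return margins.max(dim=1).values.mean()
```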
A novel deep neural network-based model, the Canonical Correlated AutoEncoder (C2AE), is proposed, which allows end-to-end learning and prediction with the ability to exploit label dependency, and which can be easily extended to address learning with missing labels.
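A stripped-down sketch of the C2AE structure, assuming simple MLP components: the original couples the feature and label encoders with a CCA-style correlation objective, for which plain L2 alignment is substituted here for brevity.

```python
import torch
import torch.nn as nn

class C2AESketch(nn.Module):
    """Simplified C2AE-style model: a feature encoder Fx and a label
    encoder Fe share a latent space; a decoder Fd reconstructs labels."""

    def __init__(self, feat_dim, num_labels, latent_dim=64):
        super().__init__()
        self.Fx = nn.Sequential(nn.Linear(feat_dim, latent_dim), nn.ReLU(),
                                nn.Linear(latent_dim, latent_dim))
        self.Fe = nn.Sequential(nn.Linear(num_labels, latent_dim), nn.ReLU(),
                                nn.Linear(latent_dim, latent_dim))
        self.Fd = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                nn.Linear(latent_dim, num_labels))

    def loss(self, x, y):
        """x: (batch, feat_dim) features; y: (batch, num_labels) float 0/1."""
        zx, zy = self.Fx(x), self.Fe(y)
        align = ((zx - zy) ** 2).mean()                 # latent alignment
        recon = nn.functional.binary_cross_entropy_with_logits(
            self.Fd(zy), y)                             # label reconstruction
        return align + recon

    def predict(self, x):
        # At test time labels are unknown: decode from the feature latent.
        return torch.sigmoid(self.Fd(self.Fx(x)))
```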