3260 papers • 126 benchmarks • 313 datasets
This paper proposes the class activation value (CAV), which shares similar properties with CAM while being applicable to various types of inputs and models, and introduces a method named CAV Learning (CAVL) that selects, as the true label for model training, the candidate class with the maximum CAV.
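As a loose illustration of that label-selection step, the sketch below picks, for each example, the candidate class with the largest activation value. It assumes the model's pre-softmax logits stand in for the CAV (the paper defines CAV differently), and the function name `cavl_select_labels` and the candidate-mask encoding are hypothetical.

```python
import torch

def cavl_select_labels(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """Pick, per example, the candidate class with the largest activation value.

    logits:         (batch, num_classes) model outputs, used here as a stand-in for CAV
    candidate_mask: (batch, num_classes) 1 where a class is in the candidate set, else 0
    returns:        (batch,) indices of the selected pseudo-labels
    """
    # Mask out non-candidate classes so the argmax is restricted to the candidate set.
    masked = logits.masked_fill(candidate_mask == 0, float("-inf"))
    return masked.argmax(dim=1)

# Example: 2 examples, 4 classes.
logits = torch.tensor([[0.2, 1.5, -0.3, 0.9], [2.0, 0.1, 0.4, -1.0]])
mask = torch.tensor([[1, 1, 0, 0], [0, 1, 1, 1]])
print(cavl_select_labels(logits, mask))  # tensor([1, 2])
```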
Extensive experiments on artificial as well as real-world PL data sets show that CIMAP serves as an effective data-level approach to mitigate the class-imbalance problem for partial label learning.
It is shown that, with deep neural networks, the naive model achieves competitive performance against other state-of-the-art methods, suggesting it as a strong baseline for PLL, and an alternative theory is proposed on how deep learning generalizes in PLL problems.
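For concreteness, here is a minimal sketch of one common "naive" PLL baseline: cross-entropy against a uniform target distribution over the candidate labels. This is an assumption about what the naive model looks like; the paper's exact baseline may differ, and `naive_pll_loss` is a hypothetical name.

```python
import torch
import torch.nn.functional as F

def naive_pll_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against a uniform distribution over each example's candidate labels."""
    targets = candidate_mask.float()
    targets = targets / targets.sum(dim=1, keepdim=True)  # uniform over candidates
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()
```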
The proposed system analyzes session- and user-journey-level purchasing behavior to identify customer categories/clusters useful for targeted consumer insights at scale, and demonstrates the robustness of each user cluster to new/unlabelled events.
A novel estimator of the classification risk is proposed, its classifier-consistency is theoretically analyzed, and an estimation error bound is established; a progressive identification algorithm for approximately minimizing the proposed risk estimator is also presented.
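A hedged sketch of how progressive identification is often implemented (PRODEN-style): candidate-label weights are re-estimated from the model's current confidences and plugged into a weighted cross-entropy risk. The function names and the specific update rule are illustrative, not necessarily the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def update_candidate_weights(logits, candidate_mask):
    """Re-weight each example's candidate labels in proportion to current model confidence."""
    with torch.no_grad():
        probs = F.softmax(logits, dim=1) * candidate_mask.float()
        return probs / probs.sum(dim=1, keepdim=True).clamp_min(1e-12)

def weighted_pll_loss(logits, weights):
    """Empirical risk estimate: weighted cross-entropy over the candidate labels."""
    return -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```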
A family of loss functions is proposed which, for the first time, introduces a leverage parameter $\beta$ to trade off losses on partial labels against those on non-partial ones, and a generalized risk-consistency result is derived for the LW loss in learning from partial labels.
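As a rough reading of such a leveraged loss, the sketch below applies a binary loss to every candidate (partial) label and scales the loss on non-candidate labels by $\beta$. This is a simplified stand-in, not the paper's exact LW formulation; `lw_style_loss` and the choice of a sigmoid binary loss are assumptions.

```python
import torch
import torch.nn.functional as F

def lw_style_loss(logits, candidate_mask, beta: float = 1.0):
    """Simplified leveraged loss: binary loss on candidate (partial) labels
    plus beta times the binary loss on non-candidate labels."""
    pos = F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits), reduction="none")
    neg = F.binary_cross_entropy_with_logits(
        logits, torch.zeros_like(logits), reduction="none")
    cand = candidate_mask.float()
    loss = (cand * pos).sum(dim=1) + beta * ((1 - cand) * neg).sum(dim=1)
    return loss.mean()
```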
This paper considers instance-dependent PLL and assumes that the candidate labels are generated by a two-stage process: the correct label first emerges in the mind of the annotator, and incorrect labels related to the instance's features are then selected alongside the correct label due to labeling uncertainty.
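To make that assumed two-stage generation process concrete, here is a small simulation sketch: the correct label is always included, and each incorrect label enters the candidate set with an instance-dependent probability (supplied here as `confusion_probs`, a hypothetical stand-in for the annotator's feature-driven confusion).

```python
import numpy as np

def generate_instance_dependent_candidates(true_label, confusion_probs, rng):
    """Two-stage candidate-set generation:
    1) the correct label is always included;
    2) each incorrect label joins the set with an instance-dependent probability."""
    num_classes = confusion_probs.shape[0]
    candidates = np.zeros(num_classes, dtype=bool)
    candidates[true_label] = True
    for c in range(num_classes):
        if c != true_label and rng.random() < confusion_probs[c]:
            candidates[c] = True
    return candidates

rng = np.random.default_rng(0)
# confusion_probs could come from, e.g., a weak classifier's softmax on this instance
print(generate_instance_dependent_candidates(2, np.array([0.1, 0.6, 0.9, 0.05]), rng))
```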
This work develops PiCO, a novel label disambiguation framework that combines a contrastive learning module with a class prototype-based disambiguation method, which can be rigorously justified from an expectation-maximization (EM) perspective.
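The following is a loose sketch of the prototype-based disambiguation idea only (PiCO also includes a contrastive module and momentum embeddings not shown here): each example is re-labeled with the candidate class whose prototype is most similar, and prototypes are updated by a moving average. Function names are illustrative.

```python
import torch
import torch.nn.functional as F

def prototype_disambiguate(embeddings, prototypes, candidate_mask):
    """Assign each example the candidate class whose prototype is most similar."""
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(prototypes, dim=1)
    sims = z @ p.t()                                    # (batch, num_classes)
    sims = sims.masked_fill(candidate_mask == 0, float("-inf"))
    return sims.argmax(dim=1)

def update_prototypes(prototypes, embeddings, labels, momentum: float = 0.99):
    """Moving-average prototype update from the current pseudo-labels."""
    protos = prototypes.clone()
    for c in labels.unique():
        mean_c = embeddings[labels == c].mean(dim=0)
        protos[c] = momentum * protos[c] + (1 - momentum) * mean_c
    return F.normalize(protos, dim=1)
```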
This paper utilizes a pre-trained deep model to perform deep descriptor transformation and estimate the positive correlation between web images, detects open-set noise based on the correlation values, and develops a top-k recall optimization loss that first assigns a label set to each web image to reduce the impact of hard label assignment for closed-set noise.
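The paper's top-k recall optimization loss is not reproduced here; as a generic illustration of optimizing recall@k, the sketch below uses a hinge surrogate that pushes each relevant class's score above the k-th largest score. All names and the margin formulation are assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def topk_recall_surrogate(scores, relevant_mask, k: int = 5, margin: float = 0.1):
    """Smooth surrogate for recall@k: penalize relevant classes whose scores fall
    below the k-th largest score (plus a margin), so they land in the top-k."""
    kth = scores.topk(k, dim=1).values[:, -1:]          # k-th largest score per example
    hinge = F.relu(kth + margin - scores)               # penalty if below the threshold
    rel = relevant_mask.float()
    return (rel * hinge).sum(dim=1).div(rel.sum(dim=1).clamp_min(1)).mean()
```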