3260 papers • 126 benchmarks • 313 datasets
Multi-Label Classification is the supervised learning problem where an instance may be associated with multiple labels. This is an extension of single-label classification (i.e., multi-class or binary classification), where each instance is associated with exactly one class label. Source: Deep Learning for Multi-label Classification
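To make the distinction concrete, here is a minimal sketch (the label names and threshold are illustrative, not from any paper): multi-label targets are binary indicator vectors rather than single class indices, and a common model head applies one sigmoid per label, thresholded independently.

```python
import numpy as np

# Hypothetical label space; in multi-label classification each instance
# maps to a *set* of labels, encoded as a binary indicator vector.
LABELS = ["cat", "dog", "outdoor"]

def encode(labels, label_space=LABELS):
    """Binary indicator vector: 1 if the label applies, else 0."""
    return np.array([1 if l in labels else 0 for l in label_space])

# Single-label (multi-class): exactly one 1 per instance.
y_single = encode({"dog"})            # -> [0, 1, 0]

# Multi-label: any number of 1s per instance.
y_multi = encode({"dog", "outdoor"})  # -> [0, 1, 1]

# A common model head: one sigmoid per label, thresholded independently,
# so predicted label sets can have any size (including empty).
def predict(logits, threshold=0.5):
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)
```

Because each label gets its own independent decision, the model can output zero, one, or several labels per instance, which is exactly what separates this setting from multi-class classification.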
These leaderboards are used to track progress in Multi-Label Classification
Use these libraries to find Multi-Label Classification models and implementations
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
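The connectivity pattern can be sketched in a few lines (the toy layers and dimensions below are illustrative assumptions, not the paper's architecture): layer l receives the concatenation of all earlier outputs, x_l = H_l([x_0, ..., x_{l-1}]).

```python
import numpy as np

def dense_block(x0, layers):
    """DenseNet-style connectivity: each layer H_l sees the concatenation
    of all earlier feature maps (x_0, ..., x_{l-1}) along the feature
    axis, and its own output is appended for the layers after it."""
    features = [x0]
    for h in layers:
        x_l = h(np.concatenate(features, axis=-1))
        features.append(x_l)
    return np.concatenate(features, axis=-1)

# Toy layers: each maps its (growing) input to k=2 new feature channels,
# mimicking DenseNet's small per-layer "growth rate".
rng = np.random.default_rng(0)
def make_layer(in_dim, growth=2):
    w = rng.standard_normal((in_dim, growth))
    return lambda x: np.maximum(x @ w, 0.0)  # linear + ReLU

x0 = rng.standard_normal((1, 4))                        # 4 input channels
layers = [make_layer(4), make_layer(6), make_layer(8)]  # inputs grow by k=2
out = dense_block(x0, layers)
# Output channels: 4 + 2 + 2 + 2 = 10
```

Because each layer only adds a small number of new channels while reusing all earlier ones, the parameter count stays low relative to networks that recompute wide feature maps at every layer.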
A partial solution to the constraints of using LSTMs to leverage interdependencies among target labels is presented for predicting 14 pathologic patterns from chest x-rays, establishing state-of-the-art results on the largest publicly available chest x-ray dataset from the NIH without pre-training.
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
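The biased random walk at the heart of node2vec can be sketched as follows (the toy graph and seed are illustrative; the real implementation precomputes alias tables for efficiency). Given the previous node t and current node v, the unnormalized transition weight to neighbor x is 1/p if x = t (return), 1 if x is also a neighbor of t (BFS-like), and 1/q otherwise (DFS-like).

```python
import random

def next_step(graph, prev, cur, p, q):
    """One biased node2vec step. `p` controls the chance of returning to
    the previous node; `q` interpolates between BFS-like (q > 1) and
    DFS-like (q < 1) exploration of the neighborhood."""
    neighbors = sorted(graph[cur])
    weights = []
    for x in neighbors:
        if x == prev:
            weights.append(1.0 / p)        # return to where we came from
        elif x in graph[prev]:
            weights.append(1.0)            # stays close to prev (BFS-like)
        else:
            weights.append(1.0 / q)        # moves outward (DFS-like)
    return random.choices(neighbors, weights=weights, k=1)[0]

def walk(graph, start, length, p=1.0, q=1.0):
    """Generate one biased walk; these walks are then fed to a
    skip-gram-style model to learn node embeddings."""
    path = [start, random.choice(sorted(graph[start]))]
    while len(path) < length:
        path.append(next_step(graph, path[-2], path[-1], p, q))
    return path

# Toy graph as an adjacency dict (hypothetical example).
G = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
random.seed(0)
w = walk(G, 0, length=6, p=2.0, q=0.5)
```

With p = 1 and q = 1 this reduces to a uniform random walk; tuning p and q is what lets node2vec trade off homophily against structural equivalence in the learned representations.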
Sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities, is proposed, and an unexpected connection between this new loss and the Huber classification loss is revealed.
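Sparsemax has a simple closed form: it is the Euclidean projection of the score vector onto the probability simplex, computed by sorting the scores and finding a threshold tau below which probabilities are clipped to exactly zero. A minimal NumPy sketch of that projection (following the Martins & Astudillo formulation):

```python
import numpy as np

def sparsemax(z):
    """Projection of z onto the probability simplex. Unlike softmax,
    sparsemax can assign exactly zero probability to low-scoring
    entries, yielding sparse output distributions."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # Support size: the largest k with 1 + k * z_(k) > sum of top-k scores.
    support = k[1.0 + k * z_sorted > cumsum]
    k_z = support[-1]
    # Threshold tau such that the clipped probabilities sum to 1.
    tau = (cumsum[k_z - 1] - 1.0) / k_z
    return np.maximum(z - tau, 0.0)

p = sparsemax([3.0, 1.0, 0.1])  # sparse: low-scoring entries get exactly 0
```

For well-separated scores like the example above, sparsemax concentrates all mass on the top entry, whereas softmax would still assign small nonzero probability everywhere; for ties it falls back to a uniform split over the tied entries.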
This paper proposes an upper bound for the multi-objective loss and shows that it can be optimized efficiently, and proves that optimizing this upper bound yields a Pareto optimal solution under realistic assumptions.
This paper introduces a novel asymmetric loss ("ASL") that dynamically down-weights and hard-thresholds easy negative samples while also discarding possibly mislabeled samples, and demonstrates ASL's applicability to other tasks, such as single-label classification and object detection.
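A sketch of the core mechanism, assuming the formulation from the ASL paper (the default hyperparameter values here are illustrative): negatives get a stronger focal exponent than positives, and the negative probability is hard-shifted by a margin so that very easy negatives contribute zero loss.

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
    """Asymmetric loss sketch for multi-label targets. gamma_neg >
    gamma_pos down-weights easy negatives more aggressively than
    positives, and the probability shift p_m = max(p - clip, 0)
    hard-thresholds negatives with p < clip to exactly zero loss."""
    targets = np.asarray(targets, dtype=float)
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    eps = 1e-8
    p_m = np.maximum(p - clip, 0.0)  # shifted probability for negatives
    loss_pos = targets * (1.0 - p) ** gamma_pos * np.log(p + eps)
    loss_neg = (1.0 - targets) * p_m ** gamma_neg * np.log(1.0 - p_m + eps)
    return -np.sum(loss_pos + loss_neg)
```

The asymmetry addresses the extreme positive/negative imbalance in multi-label data, where most labels are absent for any given instance: confidently absent labels are silenced entirely instead of dominating the gradient.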
Sentiment Knowledge Enhanced Pre-training (SKEP) is introduced to learn a unified sentiment representation for multiple sentiment analysis tasks; it significantly outperforms strong pre-training baselines and achieves new state-of-the-art results on most of the test datasets.
Through extensive experiments on multi-class and multi-label classification tasks, this work outperforms the previous state-of-the-art method, NTSG, and achieves a significant reduction in training and prediction times compared to other representation methods.
OBJECTIVE
In multi-label text classification, each textual document is assigned one or more labels. As an important task with broad applications in biomedicine, a number of different computational methods have been proposed. Many of these methods, however, have only modest accuracy or efficiency and limited success in practical use. We propose ML-Net, a novel end-to-end deep learning framework, for multi-label classification of biomedical texts.

MATERIALS AND METHODS
ML-Net combines a label prediction network with an automated label count prediction mechanism to provide an optimal set of labels. This is accomplished by leveraging both the predicted confidence score of each label and the deep contextual information (modeled by ELMo) in the target document. We evaluate ML-Net on 3 independent corpora in 2 text genres: biomedical literature and clinical notes. For evaluation, we use example-based measures, such as precision, recall, and the F measure. We also compare ML-Net with several competitive machine learning and deep learning baseline models.

RESULTS
Our benchmarking results show that ML-Net compares favorably to state-of-the-art methods in multi-label classification of biomedical text. ML-Net is also shown to be robust when evaluated on different text genres in biomedicine.

CONCLUSION
ML-Net is able to accurately represent biomedical document context and dynamically estimate the label count in a more systematic and accurate manner. Unlike traditional machine learning methods, ML-Net does not require human effort for feature engineering and is a highly efficient and scalable approach to tasks with a large set of labels, so there is no need to build individual classifiers for each separate label.
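The label-decision idea behind ML-Net can be sketched in a few lines (a simplification: here the label count k is passed in, whereas ML-Net predicts it per document from the learned representation): rank labels by confidence and keep the top-k, rather than thresholding each label at a fixed cutoff.

```python
import numpy as np

def top_k_labels(scores, k):
    """Rank labels by predicted confidence and keep the top-k.
    In ML-Net, k itself is predicted per document by the label count
    prediction mechanism; here it is supplied directly for illustration."""
    order = np.argsort(scores)[::-1]          # labels by descending score
    chosen = np.zeros(len(scores), dtype=int)
    chosen[order[:k]] = 1
    return chosen

scores = np.array([0.9, 0.2, 0.7, 0.1])
labels = top_k_labels(scores, k=2)  # keeps the two highest-scoring labels
```

Predicting k per document adapts the decision to each instance's label cardinality, which a single global threshold cannot do when some documents carry one label and others carry many.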
The recently introduced Extremely Randomized CNets (XCNets) reduce structure-learning complexity, making it possible to learn ensembles of XCNets that outperform state-of-the-art density estimators.