This work develops a computational approach based on deep convolutional neural networks for breast cancer histology image classification that outperforms other common methods in automated histopathological image classification.
This work leverages both task-agnostic and task-specific unlabeled data through two novel strategies built on a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images, providing a powerful supervisory signal for unsupervised representation learning.
This work proposes a methodology that represents continuous concept measures as Regression Concept Vectors (RCVs) in the activation space of a layer, quantifying the network's sensitivity to increasing values of a given concept measure.
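The RCV idea can be sketched as follows: fit a linear regression from a layer's activations to a continuous concept measure, take the normalized coefficients as the concept direction, and measure sensitivity as the directional derivative of a downstream score along that direction. The data, the toy score function, and all names below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer activations" (200 samples x 8 units) and a continuous
# concept measure that is, by construction, linear in the activations.
acts = rng.normal(size=(200, 8))
true_dir = np.array([3.0, -1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
concept = acts @ true_dir + 0.1 * rng.normal(size=200)

# Regression Concept Vector: least-squares coefficients, normalized,
# pointing in the direction of increasing concept measure.
coef, *_ = np.linalg.lstsq(acts, concept, rcond=None)
rcv = coef / np.linalg.norm(coef)

# Sensitivity of a (toy) downstream score: directional derivative of
# the score along the RCV, approximated by central finite differences.
def score(a):
    return np.tanh(a @ np.ones(8))

eps = 1e-4
a0 = acts[0]
sensitivity = (score(a0 + eps * rcv) - score(a0 - eps * rcv)) / (2 * eps)
```

A real application would replace `acts` with activations extracted from a trained network and `concept` with a measured quantity such as nuclear area or staining intensity.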
The FDT and FDC loss functions are designed around the statistical formulation of Fisher Discriminant Analysis (FDA), a linear subspace learning method, and experiments show the effectiveness of the proposed losses.
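The FDA criterion underlying such losses balances within-class scatter against between-class scatter of the learned embeddings. Below is a minimal generic sketch of that ratio as a batch loss; it illustrates the criterion only and is not the exact FDT/FDC formulation from the paper.

```python
import numpy as np

def fisher_ratio_loss(embeddings, labels):
    """Toy Fisher-style loss: within-class scatter divided by
    between-class scatter (smaller is better). Generic illustration
    of the FDA criterion, not the paper's FDT/FDC losses."""
    overall_mean = embeddings.mean(axis=0)
    s_w = 0.0  # within-class scatter (trace)
    s_b = 0.0  # between-class scatter (trace)
    for c in np.unique(labels):
        cls = embeddings[labels == c]
        mu = cls.mean(axis=0)
        s_w += ((cls - mu) ** 2).sum()
        s_b += len(cls) * ((mu - overall_mean) ** 2).sum()
    return s_w / (s_b + 1e-12)

rng = np.random.default_rng(1)
# Two well-separated toy classes: the loss should be near zero.
a = rng.normal(loc=0.0, scale=0.1, size=(50, 4))
b = rng.normal(loc=5.0, scale=0.1, size=(50, 4))
emb = np.vstack([a, b])
lab = np.array([0] * 50 + [1] * 50)
loss = fisher_ratio_loss(emb, lab)
```

In a deep-learning setting the same ratio would be computed on the network's embedding layer with a differentiable framework so it can be minimized by backpropagation.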
This work explored the performance of deep neural networks with triplet loss for representation learning, investigated the notion of similarity and dissimilarity in pathology whole-slide images, and compared setups ranging from unsupervised and semi-supervised to supervised learning.
It is found that offline and online triplet mining approaches perform comparably for a given architecture, ResNet-18 in this study.
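Online mining of the kind compared above is often implemented as "batch-hard" triplet loss: for each anchor, pick the farthest same-class sample and the closest other-class sample within the batch. The sketch below shows that standard formulation on toy embeddings; it is a generic illustration, not the study's exact setup.

```python
import numpy as np

def pairwise_dists(x):
    # Squared Euclidean distances between all rows of x.
    sq = (x ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * x @ x.T

def batch_hard_triplet_loss(emb, labels, margin=1.0):
    """Online 'batch-hard' mining: per anchor, hardest positive
    (farthest same-class) minus hardest negative (closest other-class),
    hinged at the margin. Standard formulation, shown generically."""
    d = pairwise_dists(emb)
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    # Exclude the anchor itself when searching for the hardest positive.
    pos = np.where(same & ~np.eye(n, dtype=bool), d, -np.inf).max(axis=1)
    neg = np.where(~same, d, np.inf).min(axis=1)
    return np.maximum(pos - neg + margin, 0.0).mean()

rng = np.random.default_rng(2)
# Two tight, well-separated clusters: every triplet already satisfies
# the margin, so the loss collapses to zero.
emb = np.vstack([rng.normal(0, 0.1, (8, 4)), rng.normal(4, 0.1, (8, 4))])
labels = np.array([0] * 8 + [1] * 8)
loss = batch_hard_triplet_loss(emb, labels)
```

Offline mining would instead precompute triplets over the whole dataset between epochs; the loss term itself is unchanged.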
Experimental results on two public datasets, MNIST and a colorectal cancer histopathology dataset, substantiate the effectiveness of the proposed triplet mining method with Bayesian updating and conjugate priors.
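Conjugate priors make the Bayesian updating cheap: the posterior has the same form as the prior and is refreshed in closed form as new distances arrive. The sketch below uses the textbook Normal-with-known-variance conjugate update to track a belief about the mean same-class pair distance; the model, numbers, and names are illustrative only, not the paper's specific formulation.

```python
import numpy as np

def normal_posterior(prior_mu, prior_var, obs, obs_var):
    """Conjugate update for the mean of a Normal likelihood with known
    variance under a Normal prior. Textbook closed-form update, used
    here only to illustrate Bayesian bookkeeping over pair distances."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
    return post_mu, post_var

rng = np.random.default_rng(3)
mu, var = 0.0, 10.0  # vague prior over the mean same-class distance
for _ in range(5):
    # Each batch contributes 20 observed distances (true mean = 2.0);
    # the posterior mean converges and its variance shrinks.
    batch = rng.normal(2.0, 0.5, size=20)
    mu, var = normal_posterior(mu, var, batch, obs_var=0.25)
```

A mining strategy could then use such posteriors, one for similar and one for dissimilar pairs, to decide which candidate triplets are informative.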
This paper introduces the Holistic ATtention Network (HATNet), a novel attention-based network that outperforms the previous best network, Y-Net. HATNet uses self-attention to encode global information, allowing it to learn representations of clinically relevant tissue structures without any explicit supervision.
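The global-information mechanism referred to here is standard scaled dot-product self-attention: every token (e.g. an image-patch embedding) attends to every other, so each output mixes context from the whole input. Below is a generic single-head sketch, not the HATNet architecture itself.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a sequence
    of token embeddings x (tokens x dim). Generic sketch."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(4)
tokens = rng.normal(size=(6, 16))  # e.g. 6 patch tokens of dim 16
wq, wk, wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out, attn = self_attention(tokens, wq, wk, wv)
```

Because the attention matrix couples all token pairs, the receptive field is global from the first layer, unlike the local windows of a convolution.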
This work proposes a MIL-based method that jointly learns instance- and bag-level embeddings in a single framework, accurately predicting instance labels while leveraging robust hierarchical pooling of features to obtain bag-level features without sacrificing accuracy.
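A common way to pool instance embeddings into a bag embedding in MIL is attention pooling: score each instance, softmax the scores into weights, and take the weighted sum. The sketch below shows that widely used scheme; the paper's exact hierarchical pooling may differ, and all weights and shapes here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mil_bag_embedding(instances, w_attn, v_attn):
    """Attention-based MIL pooling: one score per instance, softmaxed
    into weights, weighted sum as the bag-level embedding. A common
    MIL pooling scheme, sketched generically."""
    scores = np.tanh(instances @ v_attn) @ w_attn  # (n_instances,)
    weights = softmax(scores)
    return weights @ instances, weights

rng = np.random.default_rng(5)
bag = rng.normal(size=(12, 8))   # 12 instance embeddings (e.g. patches)
v_attn = rng.normal(size=(8, 4))
w_attn = rng.normal(size=4)
bag_emb, weights = mil_bag_embedding(bag, w_attn, v_attn)
```

The same instance embeddings can feed an instance-level classifier, which is how a single framework can produce both instance and bag predictions.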
A novel convolutional neural network architecture composed of a concatenation of multiple networks, called C-Net, is proposed to classify biomedical images; it outperforms all other models on the individual metrics for both datasets and achieves zero misclassification.