[1] Boosting Co-teaching with Compression Regularization for Label Noise
[2] Learning with Feature-Dependent Label Noise: A Progressive Approach
[3] Augmentation Strategies for Learning with Noisy Labels
[4] Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
[5] A Survey of Label-noise Representation Learning: Past, Present and Future
[6] A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?
[7] Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels
[8] Learning From Noisy Labels With Deep Neural Networks: A Survey
[9] Early-Learning Regularization Prevents Memorization of Noisy Labels
[10] Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning
[11] Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
[12] DivideMix: Learning with Noisy Labels as Semi-supervised Learning
[13] Identifying Mislabeled Data using the Area Under the Margin Ranking
[14] Learning Sparse Networks Using Targeted Dropout
[15] SELFIE: Refurbishing Unclean Samples for Robust Deep Learning
[16] Recycling: Semi-Supervised Learning With Noisy Labels in Deep Neural Networks
[17] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
[18] Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels
[19] MixMatch: A Holistic Approach to Semi-Supervised Learning
[20] Unsupervised label noise modeling and loss correction
[21] Probabilistic End-To-End Noise Correction for Learning With Noisy Labels
[22] How does Disagreement Help Generalization against Label Corruption?
[23] Learning to Learn From Noisy Labeled Data
[24] Dimensionality-Driven Learning with Noisy Labels
[25] Exploring the Limits of Weakly Supervised Pretraining
[26] Co-teaching: Robust training of deep neural networks with extremely noisy labels
[27] Joint Optimization Framework for Learning with Noisy Labels
[28] A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels
[29] MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
[30] mixup: Beyond Empirical Risk Minimization
[31] WebVision Database: Visual Learning and Understanding from Web Data
[32] A Closer Look at Memorization in Deep Networks
[33] Decoupling "when to update" from "how to update"
[34] Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples
[35] Robust Loss Functions under Label Noise for Deep Neural Networks
[36] Understanding deep learning requires rethinking generalization
[37] Training deep neural-networks using a noise adaptation layer
[38] Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
[39] SGDR: Stochastic Gradient Descent with Warm Restarts
[40] Identity Mappings in Deep Residual Networks
[41] Deep Residual Learning for Image Recognition
[42] Learning from massive noisy labeled data for image classification
[43] Learning with Symmetric Label Noise: The Importance of Being Unhinged
[44] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
[45] Training Deep Neural Networks on Noisy Labels with Bootstrapping
[46] Very Deep Convolutional Networks for Large-Scale Image Recognition
[47] Learning Ordered Representations with Nested Dropout
[48] Learning with Noisy Labels
[49] ImageNet classification with deep convolutional neural networks
[50] Support Vector Machines with the Ramp Loss and the Hard Margin Loss
[51] Self-Paced Learning for Latent Variable Models
[52] ImageNet: A large-scale hierarchical image database
[53] The information bottleneck method
[54] Least Squares Support Vector Machine Classifiers
[55] Learning From Noisy Examples
[56] De Finetti's Theorem on Exchangeable Variables
[57] A Bias-Variance Decomposition for Bayesian Deep Learning
[59] Deep Variational Information Bottleneck
[60] Dropout: a simple way to prevent neural networks from overfitting
[62] Learning Multiple Layers of Features from Tiny Images
[63] A Mathematical Theory of Communication
[64] Induction of decision trees
[65] Exchangeability and related topics
[66] Least Squares Support Vector Machines