Confidence Adaptive Regularization for Deep Learning with Noisy Labels

18 Aug 2021 · Yangdi Lu, Yang Bo, Wenbo He

Recent studies on the memorization effects of deep neural networks trained on noisy labels show that the networks first fit the correctly-labeled training samples before memorizing the mislabeled samples. Motivated by this early-learning phenomenon, we propose a novel method to prevent memorization of the mislabeled samples. Unlike existing approaches, which use the model output to identify or ignore mislabeled samples, we introduce an indicator branch into the original model that enables it to produce a confidence value for each sample. These confidence values are incorporated into our loss function, which is designed to assign large confidence values to correctly-labeled samples and small confidence values to mislabeled samples. We also propose an auxiliary regularization term to further improve the robustness of the model. To improve performance, we gradually correct the noisy labels with a well-designed target estimation strategy. We provide a theoretical analysis and conduct experiments on synthetic and real-world datasets, demonstrating that our approach achieves results comparable to state-of-the-art methods.
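To make the core mechanism concrete, the sketch below shows one plausible way to attach an indicator branch to a backbone and weight the per-sample loss by its output. This is a minimal illustration under assumptions, not the paper's actual CAR implementation: the class and function names (`ConfidenceBranchModel`, `confidence_weighted_loss`) and the hyperparameter `lam` are hypothetical, and the paper's exact loss, auxiliary regularizer, and target-estimation schedule are specified only in the full text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceBranchModel(nn.Module):
    """Backbone with an extra 'indicator' head that outputs a per-sample
    confidence in (0, 1) alongside the usual class logits.
    (Hypothetical sketch; the paper's architecture may differ.)"""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                  # any feature extractor
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.indicator = nn.Linear(feat_dim, 1)   # added confidence branch

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.classifier(feats)
        confidence = torch.sigmoid(self.indicator(feats)).squeeze(1)
        return logits, confidence

def confidence_weighted_loss(logits, confidence, targets, lam=0.1):
    """Weight each sample's cross-entropy by its predicted confidence,
    and penalize low confidence so the model cannot trivially ignore
    every sample. Correctly-labeled samples keep high confidence;
    mislabeled ones are down-weighted."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    weighted = (confidence * ce).mean()
    # -log(confidence) grows as confidence -> 0, discouraging collapse
    reg = -torch.log(confidence.clamp_min(1e-8)).mean()
    return weighted + lam * reg
```

The `-log(confidence)` penalty balances the weighted loss: without it, driving all confidence values to zero would minimize the first term, so the regularizer forces the model to stay confident except on samples whose labels it cannot fit early in training.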

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Image Classification | mini WebVision 1.0 | CAR | Top-1 Accuracy | 77.41 | #28 |
| Image Classification | mini WebVision 1.0 | CAR | Top-5 Accuracy | 92.25 | #18 |
| Image Classification | mini WebVision 1.0 | CAR | ImageNet Top-1 Accuracy | 74.09 | #23 |
| Image Classification | mini WebVision 1.0 | CAR | ImageNet Top-5 Accuracy | 92.09 | #16 |
