Robust Feature Learning Against Noisy Labels

Tsung Ming Tai*, Yun Jie Jhang, Wen Jyi Hwang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Supervised learning of deep neural networks relies heavily on large-scale datasets with high-quality label annotations. Mislabeled samples can significantly degrade model generalization and cause memorization, whereby the model learns erroneous associations between data content and incorrect annotations. To this end, this paper proposes an efficient approach to tackling noisy labels by learning robust feature representations based on unsupervised augmentation restoration and cluster regularization. In addition, progressive self-bootstrapping is introduced to minimize the negative impact of supervision from noisy labels. Our proposed design is generic and flexible, applying to existing classification architectures with minimal overhead. Experimental results show that our proposed method can efficiently and effectively enhance model robustness under severely noisy labels.
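The abstract does not give the paper's exact loss formulation, but the "self-bootstrapping" idea it names is commonly realized as a bootstrapped cross-entropy in the spirit of Reed et al., where the training target is a convex mix of the (possibly noisy) label and the model's own prediction. A minimal sketch, assuming a soft-bootstrapping form with a mixing weight `beta` (the function names and the schedule suggestion are illustrative, not the paper's implementation):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def bootstrapped_ce(logits, noisy_labels, beta):
    """Soft-bootstrapping cross-entropy (illustrative, not the paper's loss).

    The target is beta * one_hot(noisy_label) + (1 - beta) * model_prediction.
    beta = 1 recovers standard cross-entropy; lowering beta shifts trust
    from the possibly-mislabeled annotation toward the model's prediction,
    dampening the gradient signal from noisy labels.
    """
    p = softmax(logits)
    n, k = p.shape
    one_hot = np.eye(k)[noisy_labels]
    target = beta * one_hot + (1.0 - beta) * p
    return -np.mean(np.sum(target * np.log(p + 1e-12), axis=1))
```

A "progressive" variant would anneal `beta` from 1 toward a smaller value over training, so supervision starts label-driven and gradually leans on the model's own (hopefully cleaner) predictions.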

Original language: English
Title of host publication: 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
Publisher: IEEE Computer Society
Number of pages: 5
ISBN (Electronic): 9781728198354
Publication status: Published - 2023
Event: 30th IEEE International Conference on Image Processing, ICIP 2023 - Kuala Lumpur, Malaysia
Duration: 2023 Oct 8 - 2023 Oct 11

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880


Conference: 30th IEEE International Conference on Image Processing, ICIP 2023
City: Kuala Lumpur


Keywords

  • Image classification
  • noisy labels
  • robust feature learning

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing


