TY - JOUR
T1 - Margin-aware rectified augmentation for long-tailed recognition
AU - Xiang, Liuyu
AU - Han, Jungong
AU - Ding, Guiguang
N1 - Funding Information:
This work was supported by the National Key R&D Program of China (Grant No. 2022YFF1202400), the National Natural Science Foundation of China (Nos. 61925107, U1936202), and the Science Fund for Creative Research Groups of the National Natural Science Funds of China (No. 62021002).
Publisher Copyright:
© 2023
PY - 2023/9/30
Y1 - 2023/9/30
N2 - The long-tailed data distribution is prevalent in the real world and poses a great challenge for deep neural network training. In this paper, we propose Margin-aware Rectified Augmentation (MRA) to tackle this problem. Specifically, MRA consists of two parts. From the data perspective, we show that data imbalance biases the decision boundary, and we propose a novel Margin-aware Rectified mixup (MR-mixup) that adaptively rectifies the biased decision boundary. Furthermore, from the model perspective, we show that the imbalance also leads to consistent ‘gradient suppression’ on minority-class logits. We therefore propose Reweighted Mutual Learning (RML), which provides an extra ‘soft target’ as a supervision signal and augments the ‘encouraging gradients’ on the minority classes. We conduct extensive experiments on the benchmark datasets CIFAR-LT, ImageNet-LT, and iNaturalist18. The results demonstrate that the proposed MRA not only achieves state-of-the-art performance but also yields better-calibrated predictions.
AB - The long-tailed data distribution is prevalent in the real world and poses a great challenge for deep neural network training. In this paper, we propose Margin-aware Rectified Augmentation (MRA) to tackle this problem. Specifically, MRA consists of two parts. From the data perspective, we show that data imbalance biases the decision boundary, and we propose a novel Margin-aware Rectified mixup (MR-mixup) that adaptively rectifies the biased decision boundary. Furthermore, from the model perspective, we show that the imbalance also leads to consistent ‘gradient suppression’ on minority-class logits. We therefore propose Reweighted Mutual Learning (RML), which provides an extra ‘soft target’ as a supervision signal and augments the ‘encouraging gradients’ on the minority classes. We conduct extensive experiments on the benchmark datasets CIFAR-LT, ImageNet-LT, and iNaturalist18. The results demonstrate that the proposed MRA not only achieves state-of-the-art performance but also yields better-calibrated predictions.
KW - Data augmentation
KW - Long-tailed recognition
KW - Mixup
UR - http://www.scopus.com/inward/record.url?scp=85153677812&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2023.109608
DO - 10.1016/j.patcog.2023.109608
M3 - Article
AN - SCOPUS:85153677812
SN - 0031-3203
VL - 141
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 109608
ER -