TY - GEN
T1 - Learning From Multiple Experts
T2 - 16th European Conference on Computer Vision, ECCV 2020
AU - Xiang, Liuyu
AU - Ding, Guiguang
AU - Han, Jungong
N1 - Funding Information:
Acknowledgement. This work is supported by the National Natural Science Foundation of China (No. U1936202, No. 61925107). We also thank anonymous reviewers for their constructive comments.
Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020/10/29
Y1 - 2020/10/29
AB - In real-world scenarios, data tends to exhibit a long-tailed distribution, which increases the difficulty of training deep networks. In this paper, we propose a novel self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME). Our method is inspired by the observation that networks trained on less imbalanced subsets of the distribution often yield better performances than their jointly-trained counterparts. We refer to these models as ‘Experts’, and the proposed LFME framework aggregates the knowledge from multiple ‘Experts’ to learn a unified student model. Specifically, the proposed framework involves two levels of adaptive learning schedules: Self-paced Expert Selection and Curriculum Instance Selection, so that the knowledge is adaptively transferred to the ‘Student’. We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods. We also show that our method can be easily plugged into state-of-the-art long-tailed classification algorithms for further improvements.
UR - http://www.scopus.com/inward/record.url?scp=85097370933&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58558-7_15
DO - 10.1007/978-3-030-58558-7_15
M3 - Conference Proceeding (Non-Journal item)
AN - SCOPUS:85097370933
SN - 9783030585570
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 247
EP - 263
BT - Computer Vision – ECCV 2020 - 16th European Conference, Proceedings
A2 - Vedaldi, Andrea
A2 - Bischof, Horst
A2 - Brox, Thomas
A2 - Frahm, Jan-Michael
PB - Springer Nature
Y2 - 23 August 2020 through 28 August 2020
ER -