TY - GEN
T1 - Scaling Up Your Kernels to 31×31: Revisiting Large Kernel Design in CNNs
AU - Ding, Xiaohan
AU - Zhang, Xiangyu
AU - Han, Jungong
AU - Ding, Guiguang
N1 - Funding Information:
This work is supported by the National Natural Science Foundation of China (Nos. 61925107, U1936202, 62021002) and the Beijing Academy of Artificial Intelligence (BAAI). This work was done during Xiaohan Ding’s internship at MEGVII Technology.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent advances in vision transformers (ViTs), we demonstrate that using a few large convolutional kernels instead of a stack of small kernels can be a more powerful paradigm. We suggest five guidelines, e.g., applying re-parameterized large depthwise convolutions, for designing efficient, high-performance large-kernel CNNs. Following these guidelines, we propose RepLKNet, a pure CNN architecture whose kernel sizes are as large as 31×31, in contrast to the commonly used 3×3. RepLKNet greatly closes the performance gap between CNNs and ViTs, e.g., achieving results comparable or superior to the Swin Transformer on ImageNet and several typical downstream tasks, with lower latency. RepLKNet also scales well to big data and large models, obtaining 87.8% top-1 accuracy on ImageNet and 56.0% mIoU on ADE20K, which is very competitive with the state of the art among models of similar size. Our study further reveals that, in contrast to small-kernel CNNs, large-kernel CNNs have much larger effective receptive fields and a higher shape bias rather than texture bias. Code & models at https://github.com/megvii-research/RepLKNet.
KW - Deep learning architectures and techniques
UR - http://www.scopus.com/inward/record.url?scp=85130484389&partnerID=8YFLogxK
U2 - 10.1109/CVPR52688.2022.01166
DO - 10.1109/CVPR52688.2022.01166
M3 - Conference contribution
SN - 1063-6919
SN - 978-1-6654-6946-3
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
PB - IEEE Press
ER -