TY - GEN
T1 - Enhancing Single Image Super Resolution
T2 - 23rd UK Workshop on Computational Intelligence (UKCI) 2024
AU - Chen, Zheyi
AU - Chang, Xiang
AU - Chao, Fei
AU - Chen, Yanjie
AU - Shang, Changjing
AU - Shen, Qiang
N1 - 23rd UK Workshop on Computational Intelligence, Belfast, Northern Ireland, SEP 02-04, 2024
PY - 2024
Y1 - 2024
N2 - Deep learning networks effectively address the challenge of transforming low-resolution images into high-resolution images by learning from a series of LR-HR sample pairs. However, most network models are specifically trained for certain scales, and each set of network parameters is only applicable to a particular scale of super-resolution problems. To address this issue, this study introduces an arbitrary-scale super-resolution neural operator network based on a Galerkin attention mechanism, integrating Residual Channel Attention Networks as a replacement for the original feature extraction module. Furthermore, it investigates the impact of different loss functions, training epochs, and feature extraction modules on the performance of the super-resolution neural operator. Experimental results validate the performance of the proposed feature extraction module. The findings indicate that, under the same loss functions and training epochs, the improved module exhibits smaller losses on the training set compared to the original module, demonstrating enhancements. Even with significantly more training epochs, the visual effects of the original network using EDSR-Baseline as the feature extraction module still fall short of those achieved by the improved network.
KW - Super-resolution
KW - Galerkin-type attention mechanism
KW - Low-level vision processing
KW - Deep neural network
U2 - 10.1007/978-3-031-78857-4_18
DO - 10.1007/978-3-031-78857-4_18
M3 - Conference Proceeding (Non-Journal item)
SN - 9783031788567
SN - 9783031788574
VL - 1462
T3 - Advances in Intelligent Systems and Computing
SP - 221
EP - 232
BT - Advances in Computational Intelligence Systems, UKCI 2024
A2 - Zheng, Huiru
A2 - Glass, David
A2 - Mulvenna, Maurice
A2 - Liu, Jun
A2 - Wang, Hui
PB - Springer Nature
CY - Cham, Switzerland
Y2 - 2 September 2024 through 4 September 2024
ER -