Abstract
Deep learning networks effectively address the challenge of transforming low-resolution (LR) images into high-resolution (HR) images by learning from a series of LR-HR sample pairs. However, most network models are trained for specific scales, so each set of network parameters applies only to a single super-resolution scale. To address this issue, this study introduces an arbitrary-scale super-resolution neural operator network based on a Galerkin-type attention mechanism, integrating a Residual Channel Attention Network (RCAN) as a replacement for the original feature extraction module. It further investigates the impact of different loss functions, training epochs, and feature extraction modules on the performance of the super-resolution neural operator. Experimental results validate the performance of the proposed feature extraction module: under the same loss function and number of training epochs, the improved module achieves lower training loss than the original module. Even with substantially more training epochs, the visual quality of the original network using EDSR-Baseline as the feature extraction module still falls short of that achieved by the improved network.
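The two components the abstract names can be sketched in a few lines of PyTorch. The following is a minimal, hypothetical illustration rather than the paper's implementation: a Galerkin-type attention layer in the softmax-free, linear-complexity form used in super-resolution neural operators (layer normalization applied to keys and values instead of a softmax over attention scores), and the channel attention block from RCAN. All module names, tensor shapes, and the reduction ratio are assumptions for illustration.

```python
import torch
import torch.nn as nn


class GalerkinAttention(nn.Module):
    """Softmax-free attention: out = Q (LN(K)^T LN(V)) / n, linear in token count."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Galerkin-type attention normalizes keys and values rather than
        # applying a softmax to the query-key scores.
        self.norm_k = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim), where n is the number of pixel/feature tokens.
        q = self.to_q(x)
        k = self.norm_k(self.to_k(x))
        v = self.norm_v(self.to_v(x))
        n = x.shape[1]
        # The (dim x dim) product K^T V is formed first, so the cost is
        # O(n * dim^2) instead of the O(n^2 * dim) of standard attention.
        return q @ (k.transpose(-2, -1) @ v) / n


class ChannelAttention(nn.Module):
    """RCAN-style channel attention: squeeze spatial dims, reweight channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average pool
            nn.Conv2d(channels, channels // reduction, 1),  # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel excite
            nn.Sigmoid(),                                   # weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); rescale each channel.
        return x * self.body(x)


# Example (hypothetical sizes): tokens from a 48x48 feature map, 64 channels.
attn = GalerkinAttention(dim=64)
tokens = torch.randn(1, 48 * 48, 64)
out = attn(tokens)  # shape (1, 2304, 64)
```

Because the small dim-by-dim product is computed before multiplying by Q, the cost grows linearly with the number of pixel tokens, which is what makes this style of attention practical for dense, arbitrary-scale upsampling; the RCAN channel attention block, by contrast, reweights feature channels and stands in here for the feature-extractor change the abstract describes.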
Original language | English |
---|---|
Title of host publication | UKCI 2024: 23rd UK Workshop on Computational Intelligence |
Publication status | Published - 24 Jul 2024 |
Event | 23rd UK Workshop on Computational Intelligence and 8th International Conference on Belief Functions (BELIEF 2024), Belfast Campus, Ulster University, Belfast, United Kingdom of Great Britain and Northern Ireland |
Duration | 04 Sept 2024 → 06 Sept 2024 |
Conference
Conference | 23rd UK Workshop on Computational Intelligence and 8th International Conference on Belief Functions (BELIEF 2024) |
---|---|
Abbreviated title | UKCI 2024 |
Country/Territory | United Kingdom of Great Britain and Northern Ireland |
City | Belfast |
Period | 04 Sept 2024 → 06 Sept 2024 |
Keywords
- super-resolution
- Galerkin-type attention mechanism
- low-level vision processing
- deep neural network