TY - JOUR
T1 - Unsupervised multimodal thick cloud removal for optical remote sensing images via adversarial learning
AU - Han, Qizhuo
AU - Huang, Bo
AU - Li, Ying
AU - Shang, Changjing
AU - Shen, Qiang
N1 - Publisher Copyright:
© 2025 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2025/12/2
Y1 - 2025/12/2
N2 - Cloud contamination is a common form of degradation in optical remote sensing images, adversely affecting their downstream applications. Deep-learning-based cloud removal algorithms that exploit auxiliary information have received increasing attention in recent years. Most of these methods rely on georeferenced, cloud-free optical images from other periods as references. However, the inherent gap between the reference and the target images often leads to inaccurate reconstruction. Unsupervised methods have also been proposed, mitigating the gap issue by eliminating the need for reference images. Yet they typically rely solely on reconstruction loss during training, often resulting in unnatural outcomes. To tackle these limitations, we propose ALM-CR (Adversarial Learning-based Multimodal Cloud Removal), an unsupervised two-stage framework that leverages synthetic aperture radar (SAR) as auxiliary input. The first stage performs SAR-to-optical translation for structural and approximate spectral recovery, followed by SAR-optical fusion to restore fine-grained spectral details. The proposed adversarial learning strategy removes the need for temporal reference images, enabling precise reconstruction of cloud-covered images while preventing overfitting. Experimental results demonstrate that our method surpasses existing unsupervised methods on both reference and no-reference metrics, and reconstructs spectral information more consistently than supervised methods.
AB - Cloud contamination is a common form of degradation in optical remote sensing images, adversely affecting their downstream applications. Deep-learning-based cloud removal algorithms that exploit auxiliary information have received increasing attention in recent years. Most of these methods rely on georeferenced, cloud-free optical images from other periods as references. However, the inherent gap between the reference and the target images often leads to inaccurate reconstruction. Unsupervised methods have also been proposed, mitigating the gap issue by eliminating the need for reference images. Yet they typically rely solely on reconstruction loss during training, often resulting in unnatural outcomes. To tackle these limitations, we propose ALM-CR (Adversarial Learning-based Multimodal Cloud Removal), an unsupervised two-stage framework that leverages synthetic aperture radar (SAR) as auxiliary input. The first stage performs SAR-to-optical translation for structural and approximate spectral recovery, followed by SAR-optical fusion to restore fine-grained spectral details. The proposed adversarial learning strategy removes the need for temporal reference images, enabling precise reconstruction of cloud-covered images while preventing overfitting. Experimental results demonstrate that our method surpasses existing unsupervised methods on both reference and no-reference metrics, and reconstructs spectral information more consistently than supervised methods.
KW - adversarial learning
KW - optical remote sensing
KW - SAR image
KW - unsupervised cloud removal
UR - https://www.scopus.com/pages/publications/105019961475
U2 - 10.1080/2150704X.2025.2577282
DO - 10.1080/2150704X.2025.2577282
M3 - Article
AN - SCOPUS:105019961475
SN - 2150-704X
VL - 16
SP - 1336
EP - 1347
JO - Remote Sensing Letters
JF - Remote Sensing Letters
IS - 12
ER -