Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion

2 Jul 2020  ·  Andrea Meraner, Patrick Ebel, Xiao Xiang Zhu, Michael Schmitt

Optical remote sensing imagery is at the core of many Earth observation activities. The regular, consistent, and global-scale nature of satellite data is exploited in many applications, such as cropland monitoring, climate change assessment, land-cover and land-use classification, and disaster assessment. However, one main problem severely affects the temporal and spatial availability of surface observations: cloud cover. The task of removing clouds from optical images has been the subject of study for decades. The advent of the Big Data era in satellite remote sensing opens new possibilities for tackling the problem with powerful data-driven deep learning methods. In this paper, a deep residual neural network architecture is designed to remove clouds from multispectral Sentinel-2 imagery. SAR-optical data fusion is used to exploit the synergistic properties of the two imaging systems to guide the image reconstruction. Additionally, a novel cloud-adaptive loss is proposed to maximize the retention of original information. The network is trained and tested on a globally sampled dataset comprising real cloudy and cloud-free images. The proposed setup can remove even optically thick clouds by reconstructing an optical representation of the underlying land surface structure.
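The cloud-adaptive loss described above can be illustrated with a minimal NumPy sketch. This is a hedged approximation, not the paper's exact formulation: the function name `cloud_adaptive_loss`, the argument names, and the regularization weight `lam` are assumptions made here for illustration. The idea is that pixels flagged as cloudy are pushed toward the cloud-free target, while clear pixels are encouraged to retain their original input values, so as much original information as possible is preserved.

```python
import numpy as np

def cloud_adaptive_loss(pred, target, cloudy_input, cloud_mask, lam=1.0):
    """Sketch of a cloud-adaptive L1 loss (illustrative, not the paper's exact form).

    pred         -- network output image
    target       -- cloud-free reference image
    cloudy_input -- original (cloud-affected) input image
    cloud_mask   -- 1.0 where a pixel is cloud-covered, 0.0 where clear
    lam          -- weight of a global regularization term toward the target
    """
    # Adaptive term: cloudy pixels are compared against the cloud-free target,
    # clear pixels against the original input (retaining original information).
    adaptive = cloud_mask * np.abs(pred - target) \
             + (1.0 - cloud_mask) * np.abs(pred - cloudy_input)
    # Regularization term: a mild pull toward the target over the whole image.
    regular = lam * np.abs(pred - target)
    return float((adaptive + regular).mean())
```

As a sanity check, a prediction identical to the target under a fully cloudy mask yields zero loss, while a fully clear mask makes the loss compare the prediction against the original input instead.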


Results from the Paper


| Task          | Dataset       | Model    | Metric | Value  | Global Rank |
|---------------|---------------|----------|--------|--------|-------------|
| Cloud Removal | SEN12MS-CR    | DSen2-CR | MAE    | 0.031  | #3          |
| Cloud Removal | SEN12MS-CR    | DSen2-CR | PSNR   | 27.76  | #3          |
| Cloud Removal | SEN12MS-CR    | DSen2-CR | SAM    | 9.472  | #3          |
| Cloud Removal | SEN12MS-CR    | DSen2-CR | SSIM   | 0.874  | #3          |
| Cloud Removal | SEN12MS-CR-TS | DSen2-CR | RMSE   | 0.060  | #7          |
| Cloud Removal | SEN12MS-CR-TS | DSen2-CR | PSNR   | 26.04  | #6          |
| Cloud Removal | SEN12MS-CR-TS | DSen2-CR | SSIM   | 0.810  | #7          |
| Cloud Removal | SEN12MS-CR-TS | DSen2-CR | SAM    | 12.147 | #5          |
