S3: A Spectral-Spatial Structure Loss for Pan-Sharpening Networks

Cited 1 time in Web of Science; cited 0 times in Scopus
Abstract
Recently, many deep-learning-based pan-sharpening methods have been proposed for generating high-quality pan-sharpened (PS) satellite images. These methods have focused on various convolutional neural network (CNN) architectures, trained by simply minimizing a spectral loss between the network outputs and the corresponding high-resolution (HR) multi-spectral (MS) target images. However, owing to differing sensor characteristics and acquisition times, HR panchromatic (PAN) and low-resolution MS image pairs tend to exhibit large pixel misalignments, especially for moving objects. CNNs trained with only a spectral loss on such satellite data sets often produce PS images of low visual quality, including double-edge artifacts along strong edges and ghosting artifacts on moving objects. In this letter, we propose a novel loss function, called the spectral-spatial structure (S3) loss, based on the correlation maps between the MS targets and the PAN inputs. The proposed S3 loss can be used effectively with various CNN architectures for pan-sharpening, yielding significant visual improvements in PS images with suppressed artifacts.
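The core idea of weighting a spectral loss by local MS-PAN correlation can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's exact S3 formulation: the window size, the use of absolute correlation as the weight, and the L1 base loss are all assumed choices made here for clarity.

```python
import numpy as np

def local_correlation(a, b, win=3):
    """Per-pixel Pearson correlation between two single-channel images,
    computed over a (win x win) sliding window with reflect padding."""
    pad = win // 2
    ap = np.pad(a, pad, mode="reflect")
    bp = np.pad(b, pad, mode="reflect")
    h, w = a.shape
    corr = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            pa = ap[i:i + win, j:j + win].ravel()
            pb = bp[i:i + win, j:j + win].ravel()
            pa = pa - pa.mean()
            pb = pb - pb.mean()
            denom = np.sqrt((pa ** 2).sum() * (pb ** 2).sum())
            corr[i, j] = (pa @ pb) / denom if denom > 1e-12 else 0.0
    return corr

def s3_style_loss(output, target_ms, pan, win=3):
    """Correlation-weighted L1 loss (hypothetical sketch): pixels where the
    MS target and PAN input agree (high |corr|) contribute more, so
    misaligned regions (e.g. moving objects) are down-weighted."""
    weight = np.abs(local_correlation(target_ms, pan, win))
    return float(np.mean(weight * np.abs(output - target_ms)))
```

In this sketch, down-weighting low-correlation pixels keeps the network from being penalized for refusing to reproduce misaligned MS content, which is the intuition behind suppressing double-edge and ghosting artifacts.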
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2020-05
Language
English
Article Type
Article
Citation

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, v.17, no.5, pp. 829-833

ISSN
1545-598X
DOI
10.1109/LGRS.2019.2934493
URI
http://hdl.handle.net/10203/274231
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.