A HVS-inspired attention to improve loss metrics for CNN-based perception-oriented super-resolution

Cited 3 times in Web of Science; cited 0 times in Scopus
Deep Convolutional Neural Network (CNN) features have been demonstrated to be effective perceptual quality features. The perceptual loss, based on feature maps of pre-trained CNNs, has proven to be remarkably effective for CNN-based perceptual image restoration problems. In this work, taking inspiration from the Human Visual System (HVS) and visual perception, we propose a spatial attention mechanism based on the dependency of human contrast sensitivity on spatial frequency. We identify regions in input images, based on the underlying spatial frequency, which are not generally well reconstructed during super-resolution but are most important in terms of visual sensitivity. Based on this prior, we design a spatial attention map that is applied to feature maps in the perceptual loss and its variants, helping them to identify regions that are of more perceptual importance. The results demonstrate that our technique improves the ability of the perceptual loss and the contextual loss to deliver more natural images in CNN-based super-resolution.
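The abstract describes weighting the feature maps inside the perceptual loss with a spatial attention map derived from the contrast sensitivity function (CSF). The snippet below is a minimal PyTorch sketch of that idea, assuming a Mannos-Sakrison CSF, a simple Laplacian-pyramid estimate of local spatial-frequency content, and a VGG-19 feature loss; these modeling choices and all names (csf_weight, attention_map, WeightedPerceptualLoss) are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: CSF-weighted spatial attention applied to a VGG-based perceptual loss.
# Assumptions: Mannos-Sakrison CSF, Laplacian-pyramid band decomposition,
# VGG-19 features up to relu5_4. Not the paper's exact implementation.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19


def csf_weight(freq_cpd):
    # Mannos-Sakrison contrast sensitivity function, normalized across the
    # supplied band frequencies (in cycles per degree).
    s = 2.6 * (0.0192 + 0.114 * freq_cpd) * torch.exp(-(0.114 * freq_cpd) ** 1.1)
    return s / s.max()


def attention_map(img, n_bands=4, max_cpd=32.0):
    # Estimate per-pixel band energy with a Laplacian pyramid and weight each
    # band by its CSF value, yielding a (N, 1, H, W) attention map in [0, 1].
    gray = img.mean(dim=1, keepdim=True)
    h, w = gray.shape[-2:]
    freqs = torch.tensor([max_cpd / (2 ** k) for k in range(n_bands)])
    weights = csf_weight(freqs)          # one CSF weight per pyramid band
    attn = torch.zeros_like(gray)
    current = gray
    for level in range(n_bands):
        blurred = F.avg_pool2d(current, 2)
        up = F.interpolate(blurred, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        band = (current - up).abs()      # band-pass detail at this level
        band = F.interpolate(band, size=(h, w), mode="bilinear",
                             align_corners=False)
        attn = attn + weights[level] * band
        current = blurred
    return attn / (attn.amax(dim=(-2, -1), keepdim=True) + 1e-8)


class WeightedPerceptualLoss(torch.nn.Module):
    # Perceptual (VGG feature) loss with the attention map applied to the
    # feature maps before the distance is taken, as the abstract describes.
    def __init__(self, layer_idx=36):    # slice up to relu5_4 of VGG-19
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:layer_idx].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, sr, hr):
        attn = attention_map(hr)         # frequency prior from the target image
        f_sr, f_hr = self.features(sr), self.features(hr)
        attn = F.interpolate(attn, size=f_sr.shape[-2:], mode="bilinear",
                             align_corners=False)
        return F.mse_loss(attn * f_sr, attn * f_hr)
```

In this sketch the attention map is computed on the high-resolution target and down-sampled to the feature resolution, so regions whose spatial frequencies the HVS is most sensitive to contribute more to the feature-space distance.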
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2019-10
Language
English
Citation

17th IEEE/CVF International Conference on Computer Vision Workshop (ICCVW 2019), pp. 3904-3912

ISSN
2473-9936
DOI
10.1109/ICCVW.2019.00484
URI
http://hdl.handle.net/10203/311826
Appears in Collection
EE-Conference Papers