Visual Comfort Assessment of Stereoscopic Images using Deep Visual and Disparity Features Based on Human Attention

This paper proposes a novel visual comfort assessment (VCA) method for stereoscopic images using deep learning. To predict the visual discomfort experienced by the human visual system (HVS) during stereoscopic viewing, we devise VCA deep networks that latently encode perceptual cues: the visual differences between the left and right views, and human attention-based disparity magnitude and gradient information. A Siamese network is employed to extract the visual-difference features from the left and right views. In addition, human attention region-based disparity magnitude and gradient maps are fed to two separate deep convolutional neural networks (DCNNs) to extract disparity-related features grounded in the HVS. Finally, by aggregating these perceptual features, the proposed method directly predicts the final visual comfort score. Extensive comparative experiments were conducted on the IEEE-SA dataset. The results show that the proposed method achieves excellent correlation performance compared with existing methods.
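
The abstract outlines a three-stream architecture: a Siamese branch over the left and right views plus two DCNN branches over attention-weighted disparity magnitude and gradient maps, whose features are aggregated to regress a single comfort score. Below is a minimal PyTorch-style sketch of that structure; the module names, layer sizes, and fusion-by-concatenation are assumptions made for illustration and are not the configuration reported in the paper.

# Minimal sketch of the described three-stream VCA network.
# All layer sizes and the concatenation-based fusion are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """Small convolutional feature extractor producing a fixed-length vector."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 64, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (N, 64)

class VCANet(nn.Module):
    """Siamese branch for the left/right views plus two disparity branches,
    aggregated into one predicted visual comfort score."""
    def __init__(self):
        super().__init__()
        self.view_branch = ConvBranch(3)      # shared weights (Siamese) for both views
        self.disp_mag_branch = ConvBranch(1)  # attention-weighted disparity magnitude map
        self.disp_grad_branch = ConvBranch(1) # attention-weighted disparity gradient map
        self.regressor = nn.Sequential(
            nn.Linear(64 * 3, 128), nn.ReLU(),
            nn.Linear(128, 1),                # final comfort score
        )

    def forward(self, left, right, disp_mag, disp_grad):
        # Siamese: identical weights applied to both views; their difference
        # encodes the binocular (visual-difference) cue.
        view_diff = self.view_branch(left) - self.view_branch(right)
        f_mag = self.disp_mag_branch(disp_mag)
        f_grad = self.disp_grad_branch(disp_grad)
        fused = torch.cat([view_diff, f_mag, f_grad], dim=1)
        return self.regressor(fused)

In such a sketch the network would be trained with a regression loss (e.g., mean squared error against subjective comfort ratings); this, too, is a generic choice rather than the training setup described in the paper.
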
Publisher
IEEE Signal Processing Society
Issue Date
2017-09-17
Language
English
Citation

24th IEEE International Conference on Image Processing (ICIP), pp. 715-719

ISSN
1522-4880
DOI
10.1109/ICIP.2017.8296374
URI
http://hdl.handle.net/10203/225686
Appears in Collection
EE - Conference Papers