Video Super-Resolution Based on 3D-CNNs with Consideration of Scene Change

Cited 27 times in Web of Science; cited 17 times in Scopus
In video super-resolution, the spatio-temporal coherence between and among frames must be exploited appropriately for accurate prediction of the high-resolution frames. Although 2D-CNNs are powerful in modelling images, 3D-CNNs are more suitable for spatio-temporal feature extraction because they preserve temporal information. To this end, we propose an effective 3D-CNN for video super-resolution that does not require motion alignment as preprocessing. The proposed 3DSRnet maintains the temporal depth of the spatio-temporal feature maps to maximally capture the temporally nonlinear characteristics between low- and high-resolution frames, and adopts residual learning in conjunction with sub-pixel outputs. It outperforms the state-of-the-art method by an average of 0.45 dB and 0.36 dB in PSNR for scales 3 and 4, respectively, on the Vid4 benchmark. Our 3DSRnet is also the first to address the performance drop caused by scene change, which is important in practice but had not been previously considered.
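The sub-pixel output mentioned in the abstract refers to the pixel-shuffle (depth-to-space) operation commonly used at the end of super-resolution networks: the network predicts r*r feature channels per output channel at low resolution, and these are interleaved spatially to form the high-resolution frame. A minimal NumPy sketch of that rearrangement follows; it is an illustration of the general technique, not code from the paper, and the function name is our own.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the sub-pixel upsampling step: instead of a learned
    deconvolution, the r*r channel groups are interleaved into the
    spatial dimensions to upscale by factor r.
    """
    c_rr, h, w = x.shape
    assert c_rr % (r * r) == 0, "channel count must be divisible by r*r"
    c = c_rr // (r * r)
    # Split channels into (c, r, r), then interleave into spatial dims.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Example: a single-channel 2x2 feature map upscaled by factor 3
# requires 3*3 = 9 input channels.
features = np.arange(9 * 2 * 2, dtype=np.float32).reshape(9, 2, 2)
hr = pixel_shuffle(features, r=3)
print(hr.shape)  # (1, 6, 6)
```

In a residual-learning setup like the one the abstract describes, this high-resolution output would be added to an upsampled copy of the input frame, so the network only has to predict the missing high-frequency detail.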
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Issue Date
2019-09-25
Language
English
Citation

26th IEEE International Conference on Image Processing (ICIP 2019), pp. 2381-2384

ISSN
1522-4880
DOI
10.1109/ICIP.2019.8803297
URI
http://hdl.handle.net/10203/269285
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.