A Unified Approach of Multi-scale Deep and Hand-crafted Features for Defocus Estimation

Cited 55 times in Web of Science; cited 0 times in Scopus
In this paper, we introduce robust, synergetic hand-crafted features and a simple yet efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. We systematically analyze the effectiveness of each feature and show how the features compensate for one another's weaknesses when concatenated. For full defocus map estimation, we sparsely extract image patches on strong edges and use them for deep and hand-crafted feature extraction. To reduce patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated by a neural network classifier followed by a probability-joint bilateral filter, and the final defocus map is obtained from the sparse map with guidance from an edge-preserving filtered input image. Experimental results show that our algorithm outperforms state-of-the-art defocus estimation methods. Our work can be used in applications such as segmentation, blur magnification, all-in-focus image generation, and 3-D estimation.
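The multi-scale patch extraction step described in the abstract can be illustrated with a minimal sketch. Everything below is a toy illustration, not the authors' implementation: the edge detector, the patch sizes, and the gradient-statistics "hand-crafted" descriptor are all stand-in assumptions chosen for brevity (the paper uses a CNN feature and richer hand-crafted features, fused by a neural network classifier).

```python
import numpy as np

def strong_edge_points(img, thresh=0.2, max_points=100):
    # Stand-in edge selector: keep the pixels with the largest
    # gradient magnitude (the paper's actual edge selection differs).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.where(mag > thresh * mag.max())
    order = np.argsort(mag[ys, xs])[::-1][:max_points]
    return list(zip(ys[order], xs[order]))

def extract_multiscale_patches(img, point, scales=(15, 27, 39)):
    # Crop square patches of several sizes centered on the same edge
    # point; points too close to the image border are skipped (None).
    y, x = point
    patches = []
    for s in scales:
        h = s // 2
        if y - h < 0 or x - h < 0 or y + h + 1 > img.shape[0] or x + h + 1 > img.shape[1]:
            return None
        patches.append(img[y - h:y + h + 1, x - h:x + h + 1])
    return patches

def handcrafted_feature(patch):
    # Toy hand-crafted descriptor: gradient-magnitude statistics
    # as a crude sharpness proxy (an assumption for illustration).
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std(), mag.max()])

def concat_features(patches):
    # Concatenate per-scale descriptors into one vector, mirroring the
    # paper's idea that concatenated features compensate for each
    # other's weaknesses and reduce patch-scale dependency.
    return np.concatenate([handcrafted_feature(p) for p in patches])
```

On a synthetic image with a single vertical step edge, `strong_edge_points` picks pixels along the edge, and `concat_features` yields one 9-dimensional vector (3 statistics × 3 scales) per edge point, which a classifier could then map to a defocus level.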
Publisher
IEEE Computer Society and the Computer Vision Foundation (CVF)
Issue Date
2017-07
Language
English
Citation

30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.2760 - 2769

ISSN
1063-6919
DOI
10.1109/CVPR.2017.295
URI
http://hdl.handle.net/10203/227676
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.