[International Conference] A Unified Approach of Multi-scale Deep and Hand-crafted Features for Defocus Estimation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017
Download
JSPark_CVPR17.pdf (7.2 MB)
Abstract
In this paper, we introduce robust and synergetic hand-crafted features and a simple but efficient deep feature from a convolutional neural network (CNN) architecture for defocus estimation. We systematically analyze the effectiveness of the different features and show how each feature compensates for the weaknesses of the others when they are concatenated. For full defocus map estimation, we sparsely extract image patches on strong edges and use them for deep and hand-crafted feature extraction. To reduce patch-scale dependency, we also propose a multi-scale patch extraction strategy. A sparse defocus map is generated by a neural network classifier followed by a probability-joint bilateral filter, and the final defocus map is obtained from the sparse defocus map with the guidance of an edge-preserving filtered input image. Experimental results show that our algorithm outperforms state-of-the-art defocus estimation methods. Our work can be applied to segmentation, blur magnification, all-in-focus image generation, and 3-D estimation.
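To make the patch-based pipeline described above more concrete, the sketch below illustrates one way to sparsely sample strong-edge locations and extract multi-scale patches whose features are concatenated per location. It is not the authors' implementation: the Canny edge detector, the patch sizes, the gradient/variance/DCT statistics, and the optional cnn_feature callable are all illustrative placeholders, and the paper's actual hand-crafted features, CNN, classifier, and probability-joint bilateral filter are not reproduced here.

import cv2
import numpy as np

PATCH_SIZES = (16, 28, 40)  # hypothetical multi-scale patch sizes (even, for cv2.dct)

def strong_edge_points(gray, max_points=2000):
    """Pick a sparse set of pixels lying on strong edges (gray: uint8 image)."""
    edges = cv2.Canny(gray, 100, 200)
    ys, xs = np.nonzero(edges)
    if len(xs) > max_points:
        idx = np.random.choice(len(xs), max_points, replace=False)
        ys, xs = ys[idx], xs[idx]
    return list(zip(ys, xs))

def handcrafted_features(patch):
    """Stand-in statistics: gradient energy, intensity variance, high-frequency DCT energy."""
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    dct = cv2.dct(patch.astype(np.float32))
    return np.array([np.mean(gx ** 2 + gy ** 2),
                     np.var(patch),
                     np.mean(np.abs(dct[8:, 8:]))])

def extract_multiscale_features(gray, cnn_feature=None):
    """Concatenate per-scale hand-crafted features (and, optionally, a CNN
    feature from a user-supplied callable) at each sampled strong-edge pixel."""
    h, w = gray.shape
    samples = []
    for y, x in strong_edge_points(gray):
        feats = []
        for s in PATCH_SIZES:
            r = s // 2
            if y - r < 0 or x - r < 0 or y + r > h or x + r > w:
                break  # patch would fall outside the image; skip this location
            patch = gray[y - r:y + r, x - r:x + r]
            feats.append(handcrafted_features(patch))
        else:
            vec = np.concatenate(feats)
            if cnn_feature is not None:
                vec = np.concatenate([vec, cnn_feature(gray, y, x)])
            samples.append(((y, x), vec))
    return samples

# Example usage (hypothetical input path):
# gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
# samples = extract_multiscale_features(gray)

In the paper's pipeline, feature vectors of this kind would then be fed to a classifier to produce a sparse defocus map before the filtering and propagation steps; that stage is omitted here.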
Notes
This work was supported by the Technology Innovation Program (No. 2017-10069072) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).