Compared to image representations based on low-level local descriptors, deep neural activations of Convolutional Neural Networks (CNNs) are richer in mid-level representation but weaker in geometric invariance. In this paper, we present a straightforward framework for better image representation by combining the two approaches. To take advantage of both representations, we extract a large number of multi-scale dense local activations from a pre-trained CNN. We then aggregate the activations with the Fisher kernel framework, modified with a simple scale-wise normalization that is essential for CNN activations. Our representation achieves new state-of-the-art results on three public datasets: 80.78% (Acc.) on MIT Indoor 67, 83.20% (mAP) on PASCAL VOC 2007, and 91.28% (Acc.) on Oxford 102 Flowers. The results suggest that our proposal can serve as a primary image representation for better performance across a wide range of visual recognition tasks.
Multi-scale Pyramid Pooling for Deep Convolutional Representation
| Authors | Donggeun Yoo, Sunggyun Park, Joon-Young Lee, In So Kweon |
| --- | --- |
| Venue | IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshop - DeepVision) |
| Year | 2015 |
| Month | 06 |
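
The abstract above describes extracting multi-scale dense local activations from a pre-trained CNN and aggregating them with a Fisher kernel modified by a scale-wise normalization. Below is a minimal sketch of that general idea, not the authors' implementation: it assumes a hypothetical `cnn_activations(image, scale)` helper returning dense local activations for an image resized to a given scale, and a diagonal-covariance GMM fitted beforehand on training activations; the pooling and normalization details are simplified.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # GMM codebook for the Fisher vector


def fisher_vector(X, gmm):
    """First-order Fisher vector (gradients w.r.t. the GMM means) of local features X.

    `gmm` is assumed to be a fitted GaussianMixture with covariance_type="diag".
    """
    q = gmm.predict_proba(X)                         # (N, K) soft assignments
    diff = X[:, None, :] - gmm.means_[None, :, :]    # (N, K, D) centered features
    sigma = np.sqrt(gmm.covariances_)                # (K, D) per-component std devs
    fv = (q[:, :, None] * diff / sigma).sum(axis=0)  # (K, D) accumulated gradients
    fv /= X.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                                # (K * D,) encoding


def multi_scale_fisher(image, gmm, cnn_activations, scales=(1.0, 1.4, 2.0)):
    """Encode dense CNN activations at several scales with scale-wise normalization."""
    per_scale = []
    for s in scales:
        X = cnn_activations(image, s)                # (N_s, D) dense local activations
        fv = fisher_vector(X, gmm)
        fv /= np.linalg.norm(fv) + 1e-12             # scale-wise L2 normalization
        per_scale.append(fv)
    fv = np.mean(per_scale, axis=0)                  # pool the per-scale encodings
    fv = np.sign(fv) * np.sqrt(np.abs(fv))           # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)         # final L2 normalization


# Hypothetical usage: fit the codebook on training activations, then encode an image.
# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(train_activations)
# descriptor = multi_scale_fisher(image, gmm, cnn_activations)
```

The per-scale L2 normalization before pooling mirrors the scale-wise normalization highlighted in the abstract; the specific scales, GMM size, and pooling choice here are illustrative assumptions rather than values from the paper.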