https://arxiv.org/abs/1708.02731

This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. The network takes a source image and a target aspect ratio, and directly outputs a retargeted image. Retargeting is performed through a shift map, a pixel-wise mapping from the source to the target grid. The method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts of an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. The authors demonstrate the effectiveness of the proposed method for the retargeting application with insightful analyses.
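To make the shift-map idea concrete, here is a minimal NumPy sketch of the warping step only: given a per-pixel horizontal shift map over the target grid, each target pixel samples the source at a shifted column. In the paper the network predicts this shift map; the `apply_shift_map` helper, the uniform example shift, and nearest-neighbor sampling below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_shift_map(src, shift):
    """Warp a source image onto a (narrower) target grid using a
    per-pixel horizontal shift map.

    src:   (H, W_src, C) source image
    shift: (H, W_tgt) horizontal offset into the source grid for
           each target pixel (in the paper, predicted by the CNN)
    """
    H, W_tgt = shift.shape
    ys = np.arange(H)[:, None]              # (H, 1) row indices
    xs = np.arange(W_tgt)[None, :] + shift  # (H, W_tgt) shifted columns
    # Nearest-neighbor sampling, clamped to valid source columns
    xs = np.clip(np.round(xs).astype(int), 0, src.shape[1] - 1)
    return src[ys, xs]                      # (H, W_tgt, C)

# Example: retarget a 4x8 horizontal gradient to width 6 with a
# uniform shift of +1 (a content-aware map would vary per pixel,
# compressing background columns while preserving salient ones).
src = np.tile(np.arange(8), (4, 1))[..., None].astype(float)  # (4, 8, 1)
shift = np.ones((4, 6))
out = apply_shift_map(src, shift)
print(out.shape)  # (4, 6, 1)
```

A content-aware shift map concentrates large column displacements in low-attention (background) regions, so salient objects keep their original proportions while the background absorbs the aspect-ratio change.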
|Authors||Donghyeon Cho, Jinsun Park, Tae-Hyun Oh, Yu-Wing Tai, In So Kweon|
|Venue||IEEE International Conference on Computer Vision (ICCV)|