Most research on image decomposition, e.g., image segmentation and image parsing, has predominantly focused on low-level visual cues within a single image and neglected the contextual information across different images. In this paper, we present a new perspective on image decomposition guided by the multiple labels associated with individual images. Observing that contextual information exists across different images (i.e., local label representations of the same label are similar, while those of different labels are dissimilar), we propose to perform image decomposition in a collective way, formulating the problem as an optimization that maximizes the inter-label difference while minimizing the intra-label difference of the target label representations. Such contextual image decomposition has a wide variety of applications; two exemplary ones are: 1) multi-label image annotation, in which the sparse coding of a query image over the bases consisting of all learned label representations naturally produces the multi-label annotation, and 2) label ranking, in which the annotated labels are re-ordered according to the sparse-coding coefficients on those learned label representations. It is worth noting that these two applications can be performed simultaneously via the label propagation process in sparse coding.
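To make the annotation-and-ranking idea concrete, the following is a minimal numpy sketch (not the paper's actual formulation or features): a query image's descriptor is sparsely coded over a dictionary whose columns are learned label representations, and labels are then ranked by the magnitudes of the resulting coefficients. The label names, dimensions, and data here are hypothetical, and the Lasso problem is solved with plain ISTA for self-containedness.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, y, lam=0.05, n_iter=300):
    """ISTA for min_x 0.5 * ||y - D x||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy setup: three hypothetical label representations as unit columns of D.
rng = np.random.default_rng(0)
labels = ["sky", "grass", "water"]
D = rng.standard_normal((4, 3))
D /= np.linalg.norm(D, axis=0)

# A query descriptor built mostly from "sky" plus some "water".
y = 0.9 * D[:, 0] + 0.4 * D[:, 2]

x = sparse_code(D, y)
# Annotation: labels with non-negligible coefficients; ranking: by magnitude.
ranking = [labels[i] for i in np.argsort(-np.abs(x)) if abs(x[i]) > 1e-3]
print(ranking)
```

Because the sparse coefficients both select labels (annotation) and order them (ranking), the two applications fall out of a single coding step, mirroring the simultaneity noted in the abstract.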
|Authors||Teng Li, Tao Mei, Shuicheng Yan, In So Kweon, Chilwoo Lee|
|Venue||Accepted to IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009)|