Robotics and Computer Vision Lab

Publications

Author: Tae-Hyun Oh
Institution: KAIST
Year: 2017
Month: 5

Title: Robust Low-rank Optimization with Priors

 

Committee: 

In So Kweon (Dept. of EE)

Jinwoo Shin (Dept. of EE)

Jong Chul Ye (Dept. of Bio and Brain Engineering, Dept. of Mathematical Sciences)

Junmo Kim (Dept. of EE)

Yasuyuki Matsushita (Osaka University)

 

Abstract:

Low-rank matrix recovery arises in many engineering and applied science problems. Rank minimization is a crucial regularizer for deriving a low-rank solution and has attracted much attention. Since rank minimization is NP-hard to solve directly, its tightest convex surrogate, the nuclear norm, is solved instead. The literature has proven that, under some mild conditions, the convex relaxation guarantees exact recovery, i.e., the global optimum of the relaxed problem coincides with that of the original NP-hard problem; however, many real-world problems do not satisfy these conditions. In such cases, the optimal solution of the convex surrogate departs from the true solution.

This failure is caused by the approximation gap. Although many non-convex approaches have been proposed to narrow the gap, the improvements have been limited. In this regard, I focus on the fact that these approaches have not exploited the prior information implied by the data generation procedure of each problem. In this dissertation, I leverage prior information that naturally arises from each problem definition itself, so that the performance degradation caused by the gap can be mitigated. The contributions of this dissertation are as follows.
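The convex surrogate mentioned above is the nuclear norm, whose proximal operator is singular value thresholding (SVT): shrink every singular value by a constant and drop whatever falls below zero. A minimal NumPy sketch (matrix sizes, threshold, and noise level are illustrative choices, not values from the dissertation):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, prox_{tau * ||.||_*}(M) = U diag(max(s - tau, 0)) V^T."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# Ground-truth rank-2 matrix corrupted by small dense noise.
L_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
X = L_true + 0.01 * rng.standard_normal((20, 20))

L_hat = svt(X, tau=0.5)  # one shrinkage step restores a low-rank estimate
print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(L_hat))
```

Note that the uniform shrinkage also biases the large (signal) singular values downward by tau, which is one concrete face of the approximation gap the abstract refers to.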

(1) I propose a soft rank constraint that encourages the rank of the low-rank solution to be close to a target rank. By virtue of this simple additional information, the method properly handles data-deficient regimes where the convex nuclear norm approach fails.
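One way to realize a soft rank constraint in this spirit (the dissertation's exact formulation may differ) is to shrink only the singular values beyond the target rank, leaving the leading ones untouched so the known-rank part of the signal is not biased by the regularizer:

```python
import numpy as np

def partial_svt(M, tau, r):
    """Shrink only the singular values beyond the target rank r; the r
    leading singular values pass through unchanged."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = s.copy()
    s[r:] = np.maximum(s[r:] - tau, 0.0)  # tail only; s[:r] untouched
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
X += 0.01 * rng.standard_normal((20, 20))

L_hat = partial_svt(X, tau=0.5, r=2)
# The top-2 singular values survive exactly; the noise tail is removed.
print(np.linalg.matrix_rank(L_hat))
```

Compare with plain SVT, which would also shrink the two leading singular values by tau; here the target-rank prior removes that bias.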

(2) I propose a method that learns priors from data in an empirical Bayesian manner, achieving state-of-the-art performance. Surprisingly, without any prior knowledge of outlier locations, the proposed method outperforms a matrix completion method that assumes perfect knowledge of the exact outlier locations.
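The dissertation's method is empirical Bayesian; as a loose, simplified illustration of the same underlying idea — letting the data determine how strongly each singular value is shrunk, rather than fixing a uniform threshold — here is an iteratively reweighted shrinkage sketch (the weighting rule and all constants are illustrative, not the dissertation's algorithm):

```python
import numpy as np

def reweighted_shrinkage(X, tau=0.5, eps=1e-2, iters=20):
    """Data-adaptive singular value shrinkage: the weight on each
    singular value, tau / (t_i + eps), is re-estimated from the current
    iterate, so large (signal) values are shrunk less than small ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    t = s.copy()
    for _ in range(iters):
        t = np.maximum(s - tau / (t + eps), 0.0)  # adaptive per-value weight
    return U @ np.diag(t) @ Vt

rng = np.random.default_rng(2)
X = 3.0 * rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
X += 0.01 * rng.standard_normal((20, 20))

L_hat = reweighted_shrinkage(X)
```

Large singular values receive a near-zero penalty weight while small (noise) values are driven to zero, mimicking how a learned prior can avoid the uniform bias of the nuclear norm.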

(3) I extend the prior-learning approach to leverage prior information on both the rank and the fraction of outlier locations, i.e., robust matrix completion with a rank prior. This further widens the success regimes of the algorithm.
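A hedged sketch of how rank and outlier priors can enter robust matrix completion (this alternating scheme and its constants are illustrative, not the dissertation's algorithm): keep a hard rank-r projection for the low-rank part and soft-threshold the observed residual to absorb sparse outliers.

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def robust_complete(X, mask, r, lam=1.0, iters=50):
    """Alternate between (a) soft-thresholding the observed residual,
    which absorbs sparse outliers into S, and (b) a rank-r truncated SVD
    of the matrix whose unobserved entries are filled from the current L."""
    L = np.zeros_like(X)
    for _ in range(iters):
        S = soft(mask * (X - L), lam)            # sparse outlier estimate
        Y = mask * (X - S) + (1.0 - mask) * L    # impute missing entries
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        L = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # rank prior: hard projection
    return L, S

rng = np.random.default_rng(3)
mask = (rng.random((30, 30)) < 0.8).astype(float)           # 80% observed
L_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
X = L_true + 8.0 * (rng.random((30, 30)) < 0.05)            # sparse outliers

L_hat, S_hat = robust_complete(X, mask, r=2)
```

By construction, L_hat has rank at most r and S_hat is supported only on the observed entries, reflecting the two priors the paragraph describes.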

The proposed methods are applied to various real computer vision problems to demonstrate their practicality in terms of both quality and efficiency. The three contributions above yield fundamental performance improvements, implying that the range of applicability extends well beyond the already vast range of applications of the existing formulations, e.g., PCA and matrix completion. In short, the practicality of the low-rank approach is improved dramatically.

