Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, most of these methods focus on retrieving the database image that corresponds to a query image. Consequently, when the environment changes, for example when objects move, a robot is unlikely to find consistent point correspondences with any of the database images. To handle this problem, we propose a novel motion-based navigation method, in contrast to appearance-based approaches. The algorithm combines motion estimation from a camera, used to plan the robot's next movement, with robust feature matching, used to recognize the home and destination locations. Experimental results demonstrate that the vision-based autonomous navigation is robust against environmental changes.
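The abstract's "robust feature matching" step is not detailed here; as a rough illustration only (not the authors' implementation), nearest-neighbour descriptor matching with Lowe's ratio test is one common way to reject the ambiguous correspondences that moving objects introduce:

```python
import math

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    Returns (i, j) index pairs where the best candidate in desc_b is
    clearly closer than the second-best, suppressing ambiguous matches
    caused by repeated structure or objects that have moved.
    Descriptors are plain numeric sequences; `ratio` is illustrative.
    """
    matches = []
    for i, a in enumerate(desc_a):
        # Distances from descriptor a to every candidate in desc_b.
        dists = sorted((math.dist(a, b), j) for j, b in enumerate(desc_b))
        if len(dists) >= 2:
            (d1, j1), (d2, _) = dists[0], dists[1]
            if d1 < ratio * d2:  # ratio test: keep only unambiguous matches
                matches.append((i, j1))
    return matches
```

Surviving matches between the query view and a stored view can then feed the motion-estimation stage described above.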
|Authors||Jungho Kim, Yunsu Bok, In So Kweon|
|Venue||IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)|