In this paper, we present an accurate and robust 6D SLAM method that uses multiple 2D sensors, i.e., perspective cameras and planar laser scanners. We investigated the strengths and weaknesses of the two sensors for 6D SLAM through specifically designed experiments and found that they complement each other. To take full advantage of each sensor, we fuse the correspondences from the two sensors rather than their individually estimated motions. Although the correspondences obtained by the two sensors have different characteristics, they can be expressed in a common 2D-3D relational form and used within a single structure-from-motion framework. In the initial motion estimation step, we propose a RANSAC-based method that generates and tests multiple motion hypotheses from multiple pools of correspondences, avoiding the potential bias of either sensor's data. In the subsequent motion refinement step, we introduce a variant of bundle adjustment that handles the different types of constraints from the two sensors. The performance of the proposed method is demonstrated quantitatively through experiments on closed-loop sequences and qualitatively through large-scale experiments compared against a DGPS trajectory. The proposed method successfully closes a loop of 320 meters over twenty thousand frames using incremental processing alone.
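As a rough illustration of the initial motion estimation step, the sketch below draws minimal samples from separate camera- and laser-derived pools of 2D-3D correspondences and scores each pose hypothesis against the combined pool, so that neither sensor alone biases the estimate. This is not the authors' implementation: the function name, the round-robin sampling scheme, the use of OpenCV's PnP solver, and all thresholds are illustrative assumptions, and the laser pool is assumed to have already been converted into the common 2D-3D form described in the abstract.

```python
# Hypothetical sketch of RANSAC-based motion estimation over multiple
# correspondence pools (camera- and laser-derived), not the paper's code.
import numpy as np
import cv2


def ransac_pose_from_pools(pools, K, iters=200, reproj_thresh=2.0):
    """Estimate a camera pose from multiple pools of 2D-3D correspondences.

    pools : list of (pts3d, pts2d) pairs, one pool per sensor
            (pts3d: Nx3 float, pts2d: Nx2 float, pixel coordinates)
    K     : 3x3 camera intrinsic matrix
    """
    all3d = np.vstack([p3 for p3, _ in pools]).astype(np.float64)
    all2d = np.vstack([p2 for _, p2 in pools]).astype(np.float64)

    best_pose, best_inliers = None, -1
    rng = np.random.default_rng(0)

    for it in range(iters):
        # Draw the minimal sample from one pool at a time (round-robin),
        # so hypotheses originating from each sensor are both explored.
        p3, p2 = pools[it % len(pools)]
        if len(p3) < 4:
            continue
        idx = rng.choice(len(p3), size=4, replace=False)

        try:
            ok, rvec, tvec = cv2.solvePnP(
                p3[idx].astype(np.float64), p2[idx].astype(np.float64),
                K, None, flags=cv2.SOLVEPNP_P3P)
        except cv2.error:
            continue  # degenerate minimal sample
        if not ok:
            continue

        # Score the hypothesis on the union of both pools.
        proj, _ = cv2.projectPoints(all3d, rvec, tvec, K, None)
        err = np.linalg.norm(proj.reshape(-1, 2) - all2d, axis=1)
        inliers = int(np.sum(err < reproj_thresh))

        if inliers > best_inliers:
            best_inliers, best_pose = inliers, (rvec, tvec)

    return best_pose, best_inliers
```

The winning hypothesis would then serve as the initial estimate for the refinement step, where a bundle-adjustment variant jointly handles the constraint types contributed by the two sensors.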
Complementation of Cameras and Lasers for Accurate 6D SLAM: From Correspondences To Bundle Adjustment
| Authors | Yekeun Jeong, Yunsu Bok, Jun-sik Kim, In So Kweon |
| --- | --- |
| Venue | IEEE International Conference on Robotics and Automation (ICRA) |
| Year | 2011 |
| Month | 05 |