Robotics and Computer Vision Lab

Research Area


 

Members: Jungho Kim, Seunghak Shin, Jaesik Park

Associated center: Samsung Techwin Co.

 

Overview

1. Purpose of the project.
Developing self-localization technology for a mobile surveillance robot.

To track and follow people or objects, a mobile robot needs self-localization and autonomous-driving algorithms. In this research, we propose a sensor-fusion-based self-localization system that can be applied to intelligent robots operating outdoors. Outdoors, a vision-based localization approach typically suffers from many obstacles: abrupt luminance changes, ambiguous scene images, and motion blur (Fig. 1, 2).

 

 


 

< Fig 1. Abrupt illumination changes >

 


 

< Fig 2. Limitations of the vision-based approach: ambiguous scene images and motion blur >

 

To solve this problem, we propose a sensor-fusion system. The system, composed of an LRF (Laser Range Finder), multiple cameras, and DGPS (Differential Global Positioning System), is applied to the position-estimation process. As a result, the robot can estimate its location correctly even in bad weather or at night.
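
Because the LRF and DGPS keep working when the cameras cannot, one simple way to realize this robustness is to gate each camera frame before it enters the localization pipeline. The sketch below is a hypothetical illustration, not part of the published system; the two thresholds and the helper name are assumed.

import cv2

BLUR_THRESHOLD = 100.0   # assumed cutoff: variance of Laplacian below this => motion blur
DARK_THRESHOLD = 40.0    # assumed cutoff: mean intensity below this => night / low light

def frame_is_usable(gray):
    """Reject frames too blurred or too dark for vision-based localization,
    so the fusion stage can fall back on the LRF and DGPS instead."""
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance => blurry image
    brightness = gray.mean()                           # low mean => dark scene
    return sharpness >= BLUR_THRESHOLD and brightness >= DARK_THRESHOLD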


2. Self-localization techniques.
We have developed several techniques to achieve this goal (Fig. 3, 4, 5).

 


< Fig 3. Stereo-camera-based real-time localization technique >
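
As a rough illustration of how a stereo-camera-based localization technique of this kind generally works (a sketch of generic stereo visual odometry, not the lab's actual implementation; the intrinsics K and BASELINE are assumed values):

import numpy as np
import cv2

K = np.array([[700.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
BASELINE = 0.12                        # assumed stereo baseline in metres

def triangulate(left, right):
    """Find features in the left image and recover their 3D positions from disparity."""
    pts = cv2.goodFeaturesToTrack(left, maxCorners=500, qualityLevel=0.01,
                                  minDistance=10).reshape(-1, 2)
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(left, right, pts, None)
    good = status.ravel() == 1
    pts, pts_r = pts[good], pts_r[good]
    disparity = pts[:, 0] - pts_r[:, 0]        # rectified pair: x_left - x_right
    valid = disparity > 1.0
    pts, disparity = pts[valid], disparity[valid]
    z = K[0, 0] * BASELINE / disparity         # depth from disparity
    x = (pts[:, 0] - K[0, 2]) * z / K[0, 0]
    y = (pts[:, 1] - K[1, 2]) * z / K[1, 1]
    return pts, np.stack([x, y, z], axis=1)

def frame_to_frame_motion(prev_left, cur_left, pts2d, pts3d):
    """Track the triangulated features into the next frame and solve PnP."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_left, cur_left, pts2d, None)
    good = status.ravel() == 1
    _, rvec, tvec, _ = cv2.solvePnPRansac(pts3d[good], cur_pts[good], K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # previous-frame coordinates expressed in the current frame

Chaining these frame-to-frame motions yields the camera trajectory in real time; the drift that inevitably accumulates is what the global correction in Section 3 addresses.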

 


 

< Fig 4. Single-camera-based localization algorithm >

 


< Fig 5. Camera-laser fusion system for location estimation >

 

 

3. Sensor fusion system.
When location information is gathered only by camera or laser sensors, error accumulates because these sensors have no global perspective. A sensor-fusion algorithm solves this problem. To be specific, the local sensors, i.e., the cameras and the laser range finder, estimate the mobile robot's motion and heading, while the global sensor, DGPS, provides accurate absolute position measurements. Overall performance therefore improves when these two kinds of data are used alternately.
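
A minimal sketch of this alternating scheme follows. It assumes the fusion is realized as an extended Kalman filter (the text above does not name the filter, so this is an illustration, not the lab's exact method): local-sensor odometry drives the prediction step, and DGPS fixes correct the accumulated drift whenever they arrive.

import numpy as np

class FusionEKF:
    """Planar pose filter: local odometry predicts, DGPS corrects."""

    def __init__(self):
        self.x = np.zeros(3)        # state [x, y, heading]
        self.P = np.eye(3) * 1e-3   # state covariance

    def predict(self, d, dtheta, Q):
        """Dead-reckon with relative motion (d, dtheta) from camera/LRF odometry."""
        th = self.x[2]
        self.x = self.x + np.array([d * np.cos(th), d * np.sin(th), dtheta])
        F = np.array([[1.0, 0.0, -d * np.sin(th)],   # Jacobian of the motion model
                      [0.0, 1.0,  d * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + Q                # uncertainty grows between fixes

    def update_dgps(self, z, R):
        """Correct the drift with an absolute (x, y) position fix from DGPS."""
        H = np.array([[1.0, 0.0, 0.0],               # DGPS observes position only
                      [0.0, 1.0, 0.0]])
        S = H @ self.P @ H.T + R
        Kg = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + Kg @ (z - H @ self.x)
        self.P = (np.eye(3) - Kg @ H) @ self.P

predict() runs every control cycle from the local sensors, while update_dgps() runs only when a DGPS fix arrives, so the two kinds of data are literally used alternately; the covariance P makes the trade-off between them explicit.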

 
 

 

 

