Robotics and Computer Vision Lab

Research Area



 

 Members : Jungho Kim, Seunghak Shin, Jaesik Park

 Associated center : Samsung Techwin Co.

 

Overview

1. Purpose of project
Developing self-localization technology for a mobile surveillance robot.

To track and follow people or objects, a mobile robot needs self-localization and autonomous driving algorithms. In this research, we propose a sensor-fusion-based self-localization system that can be applied to intelligent robots operating outdoors. Outdoors, vision-based localization approaches typically suffer from many obstacles: abrupt illumination changes, ambiguous scene images, and motion blur (Fig 1, 2).

 

 


 

< Fig 1. Abrupt illumination changes >

 


 

< Fig 2. Limitations of the vision-based approach: ambiguous scene images and motion blur >

 

To solve this problem, we propose a sensor-fusion system. The system combines an LRF (Laser Range Finder), multiple cameras, and DGPS (Differential Global Positioning System) in the position-estimation process. As a result, the robot can estimate its location correctly even in bad weather or at night.
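The idea of degrading gracefully across sensors can be sketched as follows. This is a minimal illustration, not the project's actual system: the class fields, helper names, and fallback order are all hypothetical, standing in for camera ego-motion, laser scan-matching, and DGPS fixes.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec2 = Tuple[float, float]

@dataclass
class SensorReadings:
    """Hypothetical readings available at one time step; None means unusable."""
    visual_odom: Optional[Vec2]  # camera ego-motion; fails at night or under motion blur
    lrf_odom: Optional[Vec2]     # laser scan-matching; independent of lighting
    dgps_fix: Optional[Vec2]     # absolute position; available weather permitting

def localize(pose: Vec2, r: SensorReadings) -> Vec2:
    """Advance the pose with whichever sensors are usable this step."""
    # Prefer visual odometry, fall back to the LRF when vision fails.
    step = r.visual_odom if r.visual_odom is not None else r.lrf_odom
    if step is not None:
        pose = (pose[0] + step[0], pose[1] + step[1])
    # An absolute DGPS fix resets whatever drift has accumulated.
    if r.dgps_fix is not None:
        pose = r.dgps_fix
    return pose
```

For example, at night `visual_odom` would be `None` while the LRF still reports motion, so the pose keeps updating; when a DGPS fix arrives it overrides the dead-reckoned estimate.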


2. Self-localization techniques
We have developed several techniques to achieve this goal (Fig 3, 4, 5).

 


< Fig 3. Stereo camera based real-time localization technique >

 


 

< Fig 4. Single camera based localization algorithm >

 


< Fig 5. Camera-Laser fusion system for estimating location >

 

 

3. Sensor fusion system
When local information is gathered by camera or laser sensors alone, error accumulates because these sensors lack a global perspective. The sensor-fusion algorithm solves this problem. To be specific, using local sensors such as the camera or laser, we estimate the mobile robot's motion and heading; in addition, the global sensor, DGPS, provides accurate absolute location information. Overall performance therefore improves when these two kinds of data are used in combination.
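The local/global combination described above is the classic predict/update pattern of a Kalman filter. The following is a minimal sketch under assumed linear dynamics; the function name and the noise values are illustrative, not taken from the project.

```python
import numpy as np

def fuse_step(x, P, u, Q, z=None, R=None):
    """One predict/update cycle of a linear Kalman filter on 2D position.
    x: position estimate, P: its covariance,
    u: relative motion from camera/laser odometry, Q: odometry noise,
    z: absolute DGPS fix (None when unavailable), R: DGPS noise."""
    # Predict: integrate the relative motion; uncertainty grows.
    x = x + u
    P = P + Q
    # Update: an absolute DGPS fix pulls the estimate back and shrinks P.
    if z is not None:
        K = P @ np.linalg.inv(P + R)   # Kalman gain
        x = x + K @ (z - x)
        P = (np.eye(2) - K) @ P
    return x, P

# Illustrative run: 20 odometry-only steps let uncertainty accumulate,
# then a single DGPS fix restores confidence in the estimate.
x, P = np.zeros(2), np.eye(2) * 0.01
Q, R = np.eye(2) * 0.05, np.eye(2) * 1.0
for _ in range(20):
    x, P = fuse_step(x, P, np.array([1.0, 0.0]), Q)
x, P = fuse_step(x, P, np.zeros(2), Q, z=np.array([20.0, 0.0]), R=R)
```

Note how the covariance `P` grows during the odometry-only stretch (the accumulated-error effect described above) and contracts as soon as the global measurement is fused in.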

 
 

 

 

