Robotics and Computer Vision Lab

Research Area


Members: Rockhun Do, JoonYoung Lee, Eunji Song

Associated center: Samsung Techwin



Developing Detection and Tracking Algorithms for a Mobile Robot


Our goal is to develop algorithms that can run on mobile robots, but we currently focus on algorithms for a static camera in order to build the fundamental framework.



When detecting objects, we must account for noise caused by environmental changes, illumination changes, and many other factors.




Detection in a dynamic environment



Detection under illumination change

Current techniques for background subtraction have limitations, as the images above show. Many false alarms are caused by noise sources such as waving trees and color similarity between background and foreground. In addition, when the illumination changes, pixel intensities shift over the whole image, and it is hard to decide whether a change comes from the lighting or from foreground motion. To solve these problems, we are trying to model the noise precisely so that we can correctly identify where the false alarms come from. We are also developing a detection algorithm that is robust to weather conditions such as rain, snow, and wind.
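To make the background-subtraction setting concrete, here is a minimal per-pixel running-average baseline. This is a generic illustration of the technique, not the lab's actual noise model: class and parameter names are ours, and the fixed threshold is exactly the kind of simplification that produces the false alarms discussed above.

```python
import numpy as np

class RunningAverageBackground:
    """Per-pixel running-average background model.

    A simple baseline for illustration only; it has no explicit noise
    model, so waving trees or global illumination changes will trigger
    false alarms, as described in the text.
    """

    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # learning rate for the background update
        self.threshold = threshold  # intensity difference marking foreground
        self.background = None

    def apply(self, frame):
        """Return a boolean foreground mask for a grayscale frame."""
        frame = frame.astype(np.float64)
        if self.background is None:
            # First frame initializes the model; nothing is foreground yet.
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Update the background only at background pixels, so moving
        # objects are not absorbed into the model.
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask
```

A global brightness shift larger than `threshold` flips every pixel to foreground at once, which is why the paragraph above argues for modeling illumination change separately from object motion.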



Detecting objects at a long distance


In addition, we also consider objects at a long distance.




The tracking algorithm needs to follow objects even when an object is occluded or intersects with another object. Although the cameras are static at the current stage, we are developing an algorithm that remains applicable when the camera unit can pan and tilt, so that the camera can be controlled to keep the object at the center of the image.
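The pan-and-tilt centering step can be sketched as a small geometry computation: given the tracked object's image position, the angular correction that re-centers it follows from a pinhole-camera model. The function below is an illustrative sketch under that assumption; the field-of-view parameters and sign conventions are ours, not the lab's actual control interface.

```python
import math

def pan_tilt_correction(cx, cy, width, height, hfov_deg, vfov_deg):
    """Angular corrections (degrees) that re-center a tracked object.

    Pinhole-camera sketch: positive pan turns the camera right,
    positive tilt turns it up. All names are illustrative.
    """
    # Focal lengths in pixels, derived from the fields of view.
    fx = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(vfov_deg) / 2)
    # Angle between the optical axis and the ray through (cx, cy).
    pan = math.degrees(math.atan((cx - width / 2) / fx))
    tilt = math.degrees(math.atan((height / 2 - cy) / fy))
    return pan, tilt
```

For an object already at the image center the correction is zero; an object at the right image edge yields a pan of half the horizontal field of view, as expected.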



Object tracking under occlusion



Tracking multiple objects when their paths intersect


For the tracking algorithm to succeed under these conditions and limitations, we must use not only color and texture information but also motion models of the objects. Moreover, we extract features from the objects and store them. The motion model then gives a robust estimate of where each object should be, and the stored descriptors let us verify and adjust the result.
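The two-stage idea above can be sketched as: a motion model predicts and gates, then a stored appearance descriptor verifies. The sketch below uses a constant-velocity prediction in place of a full Kalman filter and a normalized color histogram as the stored descriptor; the data layout, gate radius, and similarity measure (Bhattacharyya coefficient) are our illustrative choices, not the lab's actual design.

```python
import numpy as np

def associate(track, candidates, gate=20.0, dt=1.0):
    """Match a track to one of several candidate detections.

    track      = (x, y, vx, vy, histogram)
    candidates = [(x, y, histogram), ...]
    Returns the index of the best candidate, or None if all fall
    outside the motion-model gate.
    """
    x, y, vx, vy, hist = track
    # Stage 1: constant-velocity motion model predicts the position.
    pred = np.array([x + vx * dt, y + vy * dt])
    best, best_sim = None, -1.0
    for i, (cx, cy, chist) in enumerate(candidates):
        if np.hypot(cx - pred[0], cy - pred[1]) > gate:
            continue  # motion model rules this detection out
        # Stage 2: appearance verification via the Bhattacharyya
        # coefficient (1.0 for identical normalized histograms).
        sim = np.sum(np.sqrt(np.asarray(hist) * np.asarray(chist)))
        if sim > best_sim:
            best, best_sim = i, sim
    return best
```

During an occlusion no candidate passes the gate, so the track coasts on its motion model; when the object reappears, the appearance check distinguishes it from any nearby object it intersected with.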

