DSpace Repository

Model-based Target Tracking From a Moving Monocular Camera


dc.contributor.advisor Dailey, Matthew N.
dc.contributor.author Basit, Abdul
dc.contributor.other Ekpanyapong, Mongkol
dc.contributor.other Luong, Huynh Trung
dc.date.accessioned 2015-05-13T15:15:46Z
dc.date.available 2015-05-13T15:15:46Z
dc.date.issued 2014-07
dc.identifier.other AIT Diss no. CS-14-03
dc.identifier.uri http://www.cs.ait.ac.th/xmlui/handle/123456789/782
dc.description 93 p. en_US
dc.description.abstract Small unmanned ground vehicles (SUGVs) and unmanned aerial vehicles (UAVs) are useful for gathering information about environments where access by human beings is impossible or dangerous. They are portable, lightweight, and inexpensive, and they are becoming increasingly common in military applications and disaster areas, where they are generally teleoperated. One intriguing application of the small autonomous vehicle is pursuit. Robot pursuit applications include following and monitoring important people and pursuing suspicious people in security or military contexts. One possibility for the main sensor of a pursuit robot is a monocular camera. Although a single camera simplifies the design and lowers the cost of the robot, it also presents challenges. First, tracking an object during target pursuit requires a tracker that is both sufficiently accurate and sufficiently fast to keep track of the target in real time. Second, since depth estimates based on monocular cues are necessarily extremely noisy, sensor modeling and state filtering are required to obtain usable target position estimates.

In this thesis, I focus on the use of noisy sensor measurements while tracking a target with the pursuit robot's monocular camera. Noise makes target tracking difficult: noisy data may lead the pursuit robot to track a false target or abruptly shut down the visual tracking process. In addition to camera data, however, we also receive noisy odometry data from the robot's encoders. I propose a method to obtain smooth joint estimates of the robot's and target's trajectories from a moving monocular camera by coupling the robot's kinematics and the target's dynamics in a joint state-space model. This novel joint localization model reduces the integrated robot and target position estimation error caused by noisy monocular depth cues. The method fuses information from the 2D visual tracker and the SUGV's wheel encoders with knowledge of the robot's kinematics in an extended Kalman filter to obtain superior state estimation accuracy. The model maintains an estimate of the state of the target, assuming a simple linear dynamical model, as well as an estimate of the pursuit robot's state, assuming differential drive kinematics. The joint localization model significantly improves estimation accuracy compared to simple sensor-based position estimates and to filters that do not incorporate the pursuit robot's kinematics. Throughout this thesis, I refer to the proposed method as joint localization or joint state estimation.

Additionally, I propose a fast visual tracking method using color histogram back projection and an adaptive histogram similarity threshold. In the first phase, I use a CAMSHIFT tracker for monocular target tracking and suspend the tracking process when the target is occluded or lost in a cluttered environment. The suspension decision uses an adaptive histogram similarity threshold, which helps prevent the visual tracker from tracking an incorrect object. Once the target is reported absent from the scene, a fast method is needed to correctly reinitialize the CAMSHIFT tracker and restart the tracking process. The second phase is therefore redetection: the proposed redetection method swiftly searches the entire image in real time for the target, reducing false detections and correctly reinitializing the visual tracker.

The results show that the proposed visual tracking method is fast and easily recovers the target once it reappears after being occluded in a cluttered environment. Furthermore, the proposed estimation model produces a smoother trajectory and is more robust to noise than alternative methods.
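To make the joint state-space idea concrete, the sketch below stacks a differential drive robot's pose with a constant-velocity target model in a single extended Kalman filter. It is a minimal illustration under stated assumptions, not the dissertation's implementation: the time step, noise covariances, and the bearing-plus-depth measurement model are all placeholders chosen for the example.

```python
# Minimal EKF sketch of a joint state-space model coupling differential
# drive robot kinematics with a linear (constant-velocity) target model.
# All parameters below are illustrative assumptions, not the thesis values.
import numpy as np

DT = 0.1  # assumed time step (s)

# Joint state: [xr, yr, theta, xt, yt, vxt, vyt]
x = np.zeros(7)
P = np.eye(7)
Q = np.diag([0.01, 0.01, 0.005, 0.05, 0.05, 0.1, 0.1])  # process noise (assumed)
R = np.diag([0.02, 0.5])  # bearing noise, noisy monocular depth noise (assumed)

def predict(x, P, v, w):
    """Propagate the robot pose with differential drive kinematics
    (odometry inputs v, w) and the target with constant velocity."""
    xr, yr, th, xt, yt, vx, vy = x
    x_pred = np.array([
        xr + v * DT * np.cos(th),
        yr + v * DT * np.sin(th),
        th + w * DT,
        xt + vx * DT,
        yt + vy * DT,
        vx,
        vy,
    ])
    F = np.eye(7)
    F[0, 2] = -v * DT * np.sin(th)  # d(xr)/d(theta)
    F[1, 2] =  v * DT * np.cos(th)  # d(yr)/d(theta)
    F[3, 5] = DT                    # target position <- velocity
    F[4, 6] = DT
    return x_pred, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse one camera measurement z = [bearing, depth] of the target,
    expressed relative to the robot's current pose."""
    xr, yr, th, xt, yt, _, _ = x
    dx, dy = xt - xr, yt - yr
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    h = np.array([np.arctan2(dy, dx) - th, r])  # predicted measurement
    H = np.zeros((2, 7))
    H[0, 0], H[0, 1], H[0, 2] =  dy / r2, -dx / r2, -1.0
    H[0, 3], H[0, 4]          = -dy / r2,  dx / r2
    H[1, 0], H[1, 1]          = -dx / r,  -dy / r
    H[1, 3], H[1, 4]          =  dx / r,   dy / r
    y = z - h
    y[0] = (y[0] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(7) - K @ H) @ P
```

A pursuit loop would call predict with each odometry reading (v, w) from the wheel encoders and update with each bearing/depth measurement derived from the visual tracker, which is how the filter fuses the two noisy sources.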
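The visual tracking phase can likewise be sketched with OpenCV's CAMSHIFT and hue histogram back projection. The similarity measure (one minus the Bhattacharyya distance), the initial threshold, and the threshold adaptation rule below are illustrative assumptions rather than the dissertation's exact design, and the redetection phase is only indicated by a comment.

```python
# Sketch of CAMSHIFT tracking with histogram back projection and an
# adaptive histogram similarity threshold for suspending the tracker.
import cv2
import numpy as np

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def hue_hist(hsv, window):
    """Hue histogram of the region inside an (x, y, w, h) window."""
    x, y, w, h = window
    roi = hsv[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track(frames, init_window):
    window = init_window
    target_hist = None
    threshold = 0.6  # assumed initial similarity threshold
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        if target_hist is None:
            target_hist = hue_hist(hsv, window)  # target model, first frame
        backproj = cv2.calcBackProject([hsv], [0], target_hist, [0, 180], 1)
        _, window = cv2.CamShift(backproj, window, term_crit)
        # Similarity of the tracked window to the target model;
        # Bhattacharyya distance is 0 for identical histograms.
        sim = 1.0 - cv2.compareHist(
            target_hist, hue_hist(hsv, window), cv2.HISTCMP_BHATTACHARYYA)
        if sim < threshold:
            # Target occluded or lost: suspend tracking. The redetection
            # phase would scan the whole image and reset `window` here.
            yield None
        else:
            # Assumed adaptation rule: drift toward 80% of recent similarity.
            threshold = 0.9 * threshold + 0.1 * (0.8 * sim)
            yield window
```

Suspending on low histogram similarity is what keeps the tracker from locking onto a false target; the redetection search over the full back projection then reinitializes the window once the target reappears.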
dc.description.sponsorship University of Balochistan, Quetta, Pakistan en_US
dc.publisher AIT en_US
dc.title Model-based Target Tracking From a Moving Monocular Camera en_US
dc.type Dissertation en_US

