Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16516
Title: MULTIPLE OBJECT TRACKING BY DECISION MAKING USING MEMORYLESS STATE TRANSITIONS
Authors: TYAGI, ADITYA
Keywords: OBJECT TRACKING
MARKOV DECISION PROCESS
STATE TRANSITIONS
TRACKED TARGET
Issue Date: Jul-2018
Series/Report no.: TD-4341;
Abstract: This thesis formulates tracking as a decision-making process in which a tracker must follow an object despite ambiguous frames and a limited computational budget. In tracking-by-detection, data association is the main challenge: noisy or ambiguous detections in the current frame must be accurately associated with objects tracked in previous frames and linked to form target trajectories. In this work, object tracking is performed by making decisions about transitions between memoryless states. The lifetime of an object across video frames is modeled with memoryless states, specifically a Markov decision process (MDP) with four states: active, tracked, lost and inactive. Every newly detected object enters the active state. An active target transitions to either tracked or inactive: a true positive from the object detector should move to the tracked state, while a false alarm goes to the inactive state. A tracked target either remains tracked or transitions to lost if the target is lost for some reason, such as occlusion or disappearance from the camera's field of view. A lost target can stay lost if it is not seen for some frames, return to the tracked state if it reappears, or transition to the inactive state if it remains lost for too long. Finally, the inactive state is terminal: an inactive target stays inactive forever. This models a single object; to track multiple objects, several Markov decision processes are assembled. The object template is tracked with an iterative Lucas-Kanade tracker, which works by computing optical flow. The template is updated only when the tracker fails due to an appearance change and the current state transitions to lost. A history of previous templates is stored, and the tracking template is the mean of the past templates in the tracked target's history.
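The per-target state machine described above can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; the state and function names are my own.

```python
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    TRACKED = "tracked"
    LOST = "lost"
    INACTIVE = "inactive"

# Legal transitions for a single target's MDP, as described in the abstract.
TRANSITIONS = {
    State.ACTIVE:   {State.TRACKED, State.INACTIVE},   # true positive vs. false alarm
    State.TRACKED:  {State.TRACKED, State.LOST},       # keep tracking or lose (occlusion etc.)
    State.LOST:     {State.LOST, State.TRACKED, State.INACTIVE},  # wait, reappear, or give up
    State.INACTIVE: set(),                              # terminal: inactive forever
}

def step(state: State, target: State) -> State:
    """Move a target to a new state, enforcing the MDP's legal transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.value} -> {target.value}")
    return target
```

For multiple-object tracking, one such state machine is instantiated per target, so births, deaths, appearances and disappearances all reduce to state transitions.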
For data association between tracked targets and current detections, I have used the deterministic Hungarian algorithm and Murty's k-best assignments. A similarity function must be learned, which is equivalent to learning a policy for the Markov decision process's data association; the policy determines which action to take at each state transition. Reinforcement learning is used for policy learning, combining the advantages of both offline and online learning in data association. Given the ground-truth trajectory of a target and an initial similarity function, the Markov decision process attempts to track the target and receives feedback from the ground truth. Based on this feedback, the decision process updates the similarity function to improve tracking; the similarity function is updated only when the decision process makes a mistake in data association. Training finishes when the Markov decision process can successfully track the target. This framework handles the birth/death and appearance/disappearance of objects by treating them as state transitions, and it is robust in handling occlusions.
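The assignment step above can be illustrated with a minimal sketch. For readability this uses exhaustive search over permutations rather than the Hungarian algorithm proper (they return the same optimal assignment on small problems), and the gating threshold is a hypothetical parameter of my own, not a value from the thesis.

```python
from itertools import permutations

def associate(cost, gate=1.0):
    """Find the minimum-cost one-to-one assignment of tracks (rows) to
    detections (columns) by exhaustive search, then discard pairs whose
    cost exceeds the gating threshold. Assumes len(tracks) <= len(detections).
    Returns a list of (track_index, detection_index) pairs."""
    n_tracks = len(cost)
    n_dets = len(cost[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(n_dets), n_tracks):
        c = sum(cost[t][d] for t, d in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    # Gating: a track whose best match is still too costly stays unmatched,
    # which in the MDP corresponds to a transition toward the lost state.
    return [(t, d) for t, d in enumerate(best) if cost[t][d] <= gate]
```

In practice the cost would be the negated learned similarity score between a tracked target's template and each detection; a production system would use an O(n^3) Hungarian solver instead of brute force.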
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16516
Appears in Collections:M.E./M.Tech. Electronics & Communication Engineering

Files in This Item:
File: Aditya MTech Thesis.pdf
Size: 2.1 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.