Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/15987
Title: SUBSPACE BASED ADAPTATION OF DETECTORS FOR VIDEO
Authors: BHAN, SANGEETA
Keywords: ADAPTATION SUBSPACE DETECTORS FOR VIDEO
Issue Date: Jul-2017
Series/Report no.: TD-2966
Abstract: Object detection in videos has always been a challenging problem. Detecting objects of a particular class plays an important role in many real-world applications. Since the domains of the source and target videos differ significantly, a classifier trained on the source video does not give the expected results on the target video. Domain adaptation techniques are therefore used, one of which is subspace-based adaptation. In this technique, we first compute the source and target subspaces from the collected features. Since we do not have target data directly, we use different ways to obtain data from the target video, and compute the subspaces once data from both videos has been collected. The generated source and target subspaces are described by eigenvectors; these d-dimensional subspaces are created independently by PCA for the source and target videos. From these subspaces a transformation function is generated, which maps the source coordinate system into the target-aligned source coordinate system; using this new coordinate system, we project the source data into the target-aligned source subspace (a minimal sketch of this alignment step is given below the record). We also learn from online samples: a sliding-window method categorizes online data into true positives (TP) and false positives (FP), and this weakly labeled data is used to modify the model iteratively. We run our domain adaptation method in several configurations, some of which perform fairly well.
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/15987
Appears in Collections: M.E./M.Tech. Computer Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
sangeetathesis.pdf | | 993.13 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
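
The subspace-based adaptation described in the abstract follows the standard subspace-alignment recipe: compute d-dimensional PCA bases for the source and target features, derive an alignment matrix between the two bases, and project the source data into the target-aligned source subspace. Below is a minimal NumPy sketch of that step, assuming feature matrices with one sample per row and d no larger than the smaller of the sample count and feature dimension; the function names `pca_basis` and `subspace_align` are illustrative, not taken from the thesis.

```python
import numpy as np

def pca_basis(X, d):
    """Return the top-d principal directions of X (rows = samples) as a D x d matrix."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the eigenvectors of its covariance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def subspace_align(S, T, d):
    """Align the source PCA subspace to the target PCA subspace.

    S: n_s x D source feature matrix; T: n_t x D target feature matrix.
    Returns source and target features in comparable d-dimensional coordinates.
    """
    Xs = pca_basis(S, d)               # source subspace (eigenvector basis)
    Xt = pca_basis(T, d)               # target subspace (eigenvector basis)
    M = Xs.T @ Xt                      # transformation aligning source to target
    Xa = Xs @ M                        # target-aligned source coordinate system
    S_aligned = (S - S.mean(axis=0)) @ Xa   # source data in the aligned subspace
    T_projected = (T - T.mean(axis=0)) @ Xt # target data in its own subspace
    return S_aligned, T_projected
```

In this sketch, a detector would be trained on `S_aligned` with the source labels and scored on `T_projected`; the sliding-window TP/FP harvesting mentioned in the abstract would then supply the weakly labeled target samples used to update the model iteratively.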