Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/15480
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | DHIMAN, ASHISH | - |
dc.date.accessioned | 2017-01-18T08:52:05Z | - |
dc.date.available | 2017-01-18T08:52:05Z | - |
dc.date.issued | 2014-07 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/15480 | - |
dc.description.abstract | This thesis presents a framework that combines the motion and shape information produced by different actions for human action recognition. The motion information is obtained from the Radon Transform, computed on binary silhouettes extracted from the video sequence. The Radon Transform of an image gives line projections in all directions and therefore captures the pixel variation inside the shape. We use the properties of the Radon Transform, namely its invariance to scaling and translation, to build a noise-free and robust model, and its rotational variance to distinguish between actions. For shape information we generate a set of static shape templates such as MHI/MEI and AEI. Features are extracted from these models using the Pyramid of Histograms of Oriented Gradients (PHOG), directional pixels, and the 2-D DFT. PHOG concatenates Histograms of Oriented Gradients (HOG) over sub-regions and gives global spatial information about the shape representation. Shape feature vectors alone do not discriminate between actions such as “run” and “walk”, or “jumping in place” and “jumping forward”; we therefore integrate them with a motion descriptor that captures the angular variations that occur while performing actions. We also present another integrated model with the motion descriptor, in which single action images represent the pose of an action; single action images are extracted from the video sequences using a fuzzy inference system. Finally, with all the extracted features, we train the system using Support Vector Machine and K-Nearest Neighbour classifiers to recognize the various actions. The Weizmann dataset is used for evaluation. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD NO.1586; | - |
dc.subject | R-TRANSFORM | en_US |
dc.subject | SPATIAL TEMPORAL DESCRIPTOR | en_US |
dc.subject | MHI/MEI | en_US |
dc.subject | PHOG | en_US |
dc.subject | AEI | en_US |
dc.title | HUMAN ACTION RECOGNITION BASED ON R-TRANSFORM AND SPATIAL TEMPORAL DESCRIPTOR | en_US |
dc.type | Thesis | en_US |
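As a rough illustration of the translation invariance the abstract relies on: the R-transform sums the squared Radon projection over the radial coordinate at each angle, so shifting a silhouette only shifts its projections, leaving the R-transform unchanged. The sketch below is not the thesis's implementation; it uses only two axis-aligned projections (θ = 0° and 90°) as a minimal stand-in for the full Radon transform, and the toy silhouette array and shift amounts are made up for illustration.

```python
import numpy as np

def r_transform_axes(img):
    # Projections at theta = 0 and 90 degrees (column and row sums);
    # a minimal stand-in for Radon projections over all angles.
    proj0 = img.sum(axis=0)
    proj90 = img.sum(axis=1)
    # R-transform: sum of squared projection values at each angle.
    return np.array([np.sum(proj0 ** 2), np.sum(proj90 ** 2)], dtype=float)

# Toy binary silhouette and a translated copy (shift stays inside the frame).
sil = np.zeros((32, 32), dtype=int)
sil[8:20, 10:16] = 1
shifted = np.roll(np.roll(sil, 5, axis=0), 3, axis=1)

# Translation reorders projection bins but not their values,
# so the sum of squares per angle is identical.
assert np.allclose(r_transform_axes(sil), r_transform_axes(shifted))
```

The same argument extends to every projection angle of the full Radon transform, which is why the descriptor is robust to where the actor stands in the frame.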
Appears in Collections: | M.E./M.Tech. Electronics & Communication Engineering |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Major thesis Report.pdf | | 4.1 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
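The abstract's final classification step feeds the combined feature vectors to SVM and K-Nearest Neighbour classifiers. As a hedged sketch of the k-NN half only (not the thesis code), the following pure-numpy example classifies hypothetical 2-D feature vectors for two made-up action classes; the feature values and labels are invented for illustration.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    # Euclidean distance from the query feature vector to every training vector.
    d = np.linalg.norm(train_X - x, axis=1)
    # Labels of the k closest training samples.
    nearest = train_y[np.argsort(d)[:k]]
    # Majority vote among those labels.
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Hypothetical feature vectors for two action classes (0 = "walk", 1 = "run").
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])

assert knn_predict(X, y, np.array([0.12, 0.18])) == 0  # near the "walk" cluster
assert knn_predict(X, y, np.array([0.88, 0.82])) == 1  # near the "run" cluster
```

In practice the feature vectors would be the concatenated R-transform, PHOG, and DFT descriptors rather than these toy 2-D points, and the real thesis uses actual Weizmann-dataset features.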