Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/15272
Full metadata record
DC Field | Value | Language
dc.contributor.author | GAUTAM, JAYA | -
dc.date.accessioned | 2016-10-26T11:53:38Z | -
dc.date.available | 2016-10-26T11:53:38Z | -
dc.date.issued | 2016-10 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/15272 | -
dc.description.abstract | Human activity recognition (HAR) is a formidable topic in machine learning and computer vision research. The aim of action recognition is to analyse the events occurring during an ongoing activity from video data. A dependable HAR system recognizes human actions based on the distinctive characteristics of the activities, and it has several applications, including video surveillance systems, human-computer interaction (communication between humans and machines), content-based video annotation and retrieval, video summarization, biometrics, and the health-care domain. In the past decade, the rapid proliferation of video cameras has produced an enormous outburst of video content. The analysis of human activity from video data is growing quickly and has gained importance for surveillance, security, entertainment and personal logging. Activity recognition poses several challenges at each level of processing: low-level processing involves pre-processing challenges and robustness against errors; mid-level processing involves space- and time-invariant representation challenges; and high-level processing involves semantic representation problems. In this work, a new hybrid technique is proposed for human action and activity recognition in video sequences. The work is demonstrated on widely used databases, i.e. KTH, Weizmann, Ballet and the multi-view IXMAS dataset, to show the accuracy of the adopted method. The videos are segmented using texture-based segmentation, followed by computing the average energy image (AEI). Extreme points are calculated from difference-of-Gaussians images to find the key points of the AEI images. A vocabulary of these points, unique for each class of the dataset, is created using vector quantization. Then spatial distribution gradients are calculated and combined with the key-point descriptors to form a unique feature vector. These features are classified using a support vector machine (SVM) and a hidden Markov model (HMM) for accurate recognition. | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartofseries | TD NO.2543; | -
dc.subject | HUMAN ACTIVITY RECOGNITION | en_US
dc.subject | AVERAGE ENERGY IMAGE | en_US
dc.subject | SPATIAL DISTRIBUTION GRADIENTS | en_US
dc.subject | SPATIO TEMPORAL | en_US
dc.title | SPATIO TEMPORAL INTEREST KEYPOINTS AND SPATIAL DISTRIBUTION GRADIENTS BASED HAR | en_US
dc.type | Thesis | en_US
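The abstract describes a pipeline of computing an average energy image (AEI) from segmented frames and then locating key points via difference-of-Gaussians (DoG) images. A minimal sketch of those two steps is below; it assumes the AEI is the pixel-wise mean of binary silhouette frames and replaces full scale-space extrema detection with a simple response threshold. All function names are illustrative, not taken from the thesis.

```python
import numpy as np

def average_energy_image(silhouettes):
    """Pixel-wise mean of binary silhouette frames.

    One common reading of an "average energy image" (AEI); the
    thesis's exact definition may differ.
    """
    return np.asarray(silhouettes, dtype=np.float64).mean(axis=0)

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur built from 1-D convolutions (NumPy only)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, blurred, kernel, mode="same")

def dog_keypoints(img, sigma1=1.0, sigma2=2.0, threshold=0.01):
    """Difference-of-Gaussians response with crude key-point picking.

    Returns the DoG image and the (row, col) positions whose response
    exceeds `threshold` -- a simplified stand-in for proper DoG extrema
    detection across scales.
    """
    dog = _gaussian_blur(img, sigma1) - _gaussian_blur(img, sigma2)
    rows, cols = np.where(dog > threshold)
    return dog, list(zip(rows.tolist(), cols.tolist()))
```

In use, the AEI of a silhouette sequence would be passed to `dog_keypoints`, and the resulting points would then feed the vector-quantized vocabulary and spatial distribution gradients described in the abstract.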
Appears in Collections:M.E./M.Tech. Electronics & Communication Engineering

Files in This Item:
File | Description | Size | Format
starting_thesis_jaya.pdf | | 271.1 kB | Adobe PDF
thesis.pdf | | 2.48 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.