Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/14709
Full metadata record
DC Field: Value (Language)
dc.contributor.author: MEENU
dc.date.accessioned: 2016-05-04T10:09:48Z
dc.date.available: 2016-05-04T10:09:48Z
dc.date.issued: 2016-05
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/14709
dc.description.abstract: Music has been an inherent part of human life for recreation and entertainment, and more recently even as a therapeutic medium. The way music is composed, played, and listened to has witnessed an enormous transition from the age of magnetic tape recorders to the present age of digital music players streaming from the cloud. What has remained intact is the special relation that music shares with human emotions: we most often choose to listen to the song that best suits our mood at that instant. In spite of this strong correlation, most music software available today still lacks mood-aware playlist generation. This increases the time listeners spend manually assembling a list of songs to suit a particular mood or occasion, which can be avoided by annotating songs with the emotion category they convey. The task is to automatically label a song with affective tags from an emotion set specified by psychologists; in the last few years it has attracted growing attention, and a wide range of related research has been carried out. We take this inspiration forward and contribute a system for automatically identifying the mood underlying audio songs by mining their spectral and temporal audio features. We build a hybrid system that predicts the mood of a song from its lyrics as well as its music, focusing specifically on Indian popular Hindi songs. We have analyzed various classification algorithms to learn, train, and test the model representing the moods of these audio songs, and have developed an open-source framework for the same. We have been successful in achieving a satisfactory precision of 70% to 75% in identifying the mood. (en_US)
dc.language.iso: en_US (en_US)
dc.relation.ispartofseries: TD NO.2141;
dc.subject: MOOD IDENTIFICATION (en_US)
dc.subject: AUDIO CLIPS (en_US)
dc.subject: THERAPEUTIC MEDIUM (en_US)
dc.subject: MUSIC (en_US)
dc.title: MOOD IDENTIFICATION ON THE BASIS OF LYRICS AND AUDIO CLIPS (en_US)
dc.type: Thesis (en_US)
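
The abstract above outlines a two-stage pipeline: mine spectral and temporal features from each song, then train a classifier over mood labels. As a minimal sketch of what the audio half of such a pipeline could look like (this is not the thesis's actual framework; librosa, the SVM classifier, the four-mood label set, and the dataset format are all illustrative assumptions):

    # Sketch: per-song spectral/temporal feature mining + mood classification.
    # Assumptions: librosa for feature extraction, scikit-learn SVM as the
    # classifier, and a hypothetical `dataset` of (wav_path, mood_label) pairs.
    import numpy as np
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    MOODS = ["happy", "sad", "calm", "energetic"]  # assumed emotion set

    def song_features(path):
        """Aggregate frame-level features into one fixed-length vector per song."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        feats = np.vstack([
            librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),     # timbre (spectral)
            librosa.feature.spectral_centroid(y=y, sr=sr),   # brightness
            librosa.feature.spectral_rolloff(y=y, sr=sr),    # energy distribution
            librosa.feature.zero_crossing_rate(y),           # noisiness (temporal)
        ])
        # Mean and standard deviation over frames -> one descriptor per song.
        return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

    def train_mood_classifier(dataset):
        X = np.array([song_features(path) for path, _ in dataset])
        y = np.array([MOODS.index(mood) for _, mood in dataset])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te),
                                    labels=list(range(len(MOODS))),
                                    target_names=MOODS))
        return clf

The lyrics channel described in the abstract would be a parallel text classifier whose prediction is fused with this audio model; that fusion step is omitted from the sketch.
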
Appears in Collections: M.E./M.Tech. Computer Engineering

Files in This Item:
File: final_thesis.pdf
Size: 1.46 MB
Format: Adobe PDF

