Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20824
Full metadata record
DC Field | Value | Language
dc.contributor.author | MEHER, HEMSAGAR | -
dc.date.accessioned | 2024-08-05T09:00:21Z | -
dc.date.available | 2024-08-05T09:00:21Z | -
dc.date.issued | 2024-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20824 | -
dc.description.abstract | It is essential in social situations and daily life to be able to read someone's emotions simply by looking at them, and computers with this competence would interact with people more successfully. No machine today, however, is capable of fully understanding human emotion. Here we use the EMOTIC dataset, which consists of pictures of individuals in diverse, unconstrained settings, each with a label indicating the subject's apparent emotion. Facial expressions that are consistent from person to person reveal a universal and fundamental set of emotions that all people experience, and analyzing these expressions makes it possible to build algorithms for the detection, extraction, and automatic identification of human emotion in images and videos. By combining features from the bounding box containing the person with contextual features extracted from the surrounding scene, we train multiple CNN models for emotion identification. Our findings highlight the significance of scene context for automatically identifying emotional states. The dataset comprises images of real people in real-world settings, in which each person is annotated with 26 discrete emotional categories as well as the continuous dimensions of valence, arousal, and dominance. In this study, we demonstrate the successful application of transfer learning for recognizing emotions in context using the EMOTIC dataset. Among the evaluated models, ResNet-50 proved the most effective, leveraging residual learning to capture emotional nuances: it achieved the highest accuracy of 75.6% for discrete emotion classification, with precision, recall, and F1-score all around 75%, and it excelled at predicting valence, arousal, and dominance (VAD) scores with a low mean absolute error (MAE) of 0.060. DenseNet-169 also performed robustly, underscoring the value of dense connectivity. The residual connections in ResNet-50 enabled improved feature extraction and deeper network training without vanishing-gradient issues. Future research could explore the integration of multi-modal data and advanced pre-processing techniques to further enhance the accuracy and reliability of emotion recognition. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-7349; | -
dc.subject | EMOTION RECOGNITION | en_US
dc.subject | EMOTIC DATASET | en_US
dc.subject | CNN MODEL | en_US
dc.title | CONTEXT BASED EMOTION RECOGNITION USING EMOTIC DATASET | en_US
dc.type | Thesis | en_US
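
The abstract above describes a two-stream CNN that fuses features from the person's bounding box with contextual features from the whole scene, trained via transfer learning with a ResNet-50 backbone to predict 26 discrete emotion categories plus continuous VAD values. The following PyTorch code is a minimal illustrative sketch of that idea, not the thesis's actual implementation: the class name, fusion-by-concatenation design, head layout, and all hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamEmotionNet(nn.Module):
    """Hypothetical two-stream model: person-body branch + scene-context branch."""

    def __init__(self, num_discrete=26, num_continuous=3):
        super().__init__()
        # Transfer learning: ImageNet-pre-trained ResNet-50 backbones with the
        # final classification layer removed (weights download on first use).
        body = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.body_branch = nn.Sequential(*list(body.children())[:-1])
        context = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.context_branch = nn.Sequential(*list(context.children())[:-1])
        fused_dim = 2048 * 2  # concatenated pooled features from both branches
        # Two heads: 26-way multi-label categories and 3-value VAD regression.
        self.discrete_head = nn.Linear(fused_dim, num_discrete)
        self.vad_head = nn.Linear(fused_dim, num_continuous)

    def forward(self, body_crop, scene_image):
        b = self.body_branch(body_crop).flatten(1)       # (B, 2048)
        c = self.context_branch(scene_image).flatten(1)  # (B, 2048)
        f = torch.cat([b, c], dim=1)                     # fusion by concatenation
        # Raw logits for categories (sigmoid at inference); unbounded VAD values.
        return self.discrete_head(f), self.vad_head(f)


# Example usage with dummy inputs (batch of 4, standard 224x224 crops):
model = TwoStreamEmotionNet()
body = torch.randn(4, 3, 224, 224)   # crops of the person bounding boxes
scene = torch.randn(4, 3, 224, 224)  # the corresponding full scene images
cat_logits, vad = model(body, scene)
print(cat_logits.shape, vad.shape)   # torch.Size([4, 26]) torch.Size([4, 3])
```

Under these assumptions, a typical training objective would combine a multi-label binary cross-entropy loss (e.g. `nn.BCEWithLogitsLoss`) on the category head with an L1 loss (`nn.L1Loss`) on the VAD head, which is consistent with the MAE figure the abstract reports for VAD prediction.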
Appears in Collections: M.E./M.Tech. Information Technology

Files in This Item:
File | Description | Size | Format
Hemsagar Meher M.Tech..pdf | - | 2.6 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.