Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20464
Full metadata record
DC Field: Value (Language)
dc.contributor.author: KAUSTUBH, AMULYA
dc.date.accessioned: 2024-01-18T05:50:53Z
dc.date.available: 2024-01-18T05:50:53Z
dc.date.issued: 2023-05
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20464
dc.description.abstract (en_US): Facial emotion recognition is a fundamental task in computer vision and human-computer interaction that aims to automatically detect and classify the emotions individuals express through their facial expressions. It is an expanding field with applications in mental health monitoring, marketing, social robotics, sentiment analysis, education, security, and gaming. Human-computer communication relies on nonverbal channels such as facial expressions, eye movements, and body gestures, with facial expression being particularly informative because it effectively conveys a person's emotions and feelings. However, recognizing facial expressions is challenging for machine learning methods because individuals vary significantly in how they express emotion. Differences in brightness, background, and pose, together with subject characteristics such as face shape and ethnicity, add to this complexity, so facial emotion recognition remains a difficult problem in deep learning and computer vision. This project introduces a simple approach that combines a convolutional neural network (CNN) with image preprocessing. The proposed model comprises four convolutional layers followed by max pooling, and the FER2013 dataset is used to train the network. The network uses a single-component architecture to detect and classify facial images into one of the seven fundamental human facial expressions. Trained for 50 epochs, the model achieves a training accuracy of 86.13% and a validation accuracy of 62.39%, with corresponding training and validation losses of 0.38 and 1.19.
dc.language.iso (en_US): en
dc.relation.ispartofseries: TD-6992
dc.subject (en_US): FACIAL EMOTION RECOGNITION
dc.subject (en_US): VALIDATION
dc.subject (en_US): HUMAN COMPUTER COMMUNICATION
dc.subject (en_US): CNN
dc.title (en_US): FACIAL EMOTION RECOGNITION USING CNN
dc.type (en_US): Thesis
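
The abstract above describes a four-convolutional-layer CNN with max pooling, trained on FER2013 (48x48 grayscale face images, seven expression classes) for 50 epochs. Below is a minimal Keras sketch of such an architecture, not the thesis code itself: the filter counts, kernel sizes, dense-layer width, dropout rate, and optimizer settings are illustrative assumptions, as the record does not specify them.

# Minimal sketch of a four-conv-layer CNN for FER2013, assuming details
# (filter counts, kernel sizes, dropout, optimizer) the record leaves open.
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # the seven fundamental facial expressions
INPUT_SHAPE = (48, 48, 1)  # FER2013 images are 48x48 grayscale

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    # Four convolutional layers; pooling after each one is one plausible
    # reading of "four convolutional layers followed by max pooling".
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(256, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    # Single-component head: the same network detects and classifies.
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),   # assumed regularization; not stated in the record
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training as reported in the abstract (50 epochs); x_train, y_train, x_val,
# y_val are placeholders for the preprocessed FER2013 splits.
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
#                     epochs=50, batch_size=64)

After four 2x2 pooling steps the 48x48 input reduces to a 3x3 feature map before the dense classifier, which is why four convolution/pooling stages fit this input size naturally.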
Appears in Collections: MTech Data Science

Files in This Item:
File | Description | Size | Format
Amulya Kaustubh M.Tech..pdf | | 2.62 MB | Adobe PDF

