Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20464
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | KAUSTUBH, AMULYA | - |
dc.date.accessioned | 2024-01-18T05:50:53Z | - |
dc.date.available | 2024-01-18T05:50:53Z | - |
dc.date.issued | 2023-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20464 | - |
dc.description.abstract | Facial emotion recognition is a fundamental task in computer vision and human-computer interaction that aims to automatically detect and classify the emotions individuals express through their facial expressions. It is an expanding field with applications in domains including mental health monitoring, marketing, social robotics, sentiment analysis, education, security, and gaming. Human-computer communication draws on nonverbal cues such as facial expressions, eye movements, and body gestures, with facial expression being particularly prominent because it effectively conveys people's emotions and feelings. However, recognizing facial expressions is challenging for machine learning methods owing to the significant variation in how individuals express emotion. Factors such as differences in illumination, background, and pose, together with subject characteristics such as face shape and ethnicity, add to the complexity of facial emotion recognition, and it remains a challenging problem in deep learning and computer vision. In this project, a simple approach is introduced that combines a CNN with several image preprocessing procedures. The proposed model comprises four convolutional layers followed by max pooling. The FER2013 dataset was used to train the network, which employs a single-component architecture to detect and classify facial images into one of the seven fundamental human facial expressions. The model was trained for a total of 50 epochs, achieving training and validation accuracies of 86.13% and 62.39%, respectively, with corresponding training and validation losses of 0.38 and 1.19. (A minimal, illustrative sketch of the described architecture follows this record.) | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-6992; | - |
dc.subject | FACIAL EMOTION RECOGNITION | en_US |
dc.subject | VALIDATION | en_US |
dc.subject | HUMAN COMPUTER COMMUNICATION | en_US |
dc.subject | CNN | en_US |
dc.title | FACIAL EMOTION RECOGNITION USING CNN | en_US |
dc.type | Thesis | en_US |
Appears in Collections: MTech Data Science
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Amulya Kaustubh M.Tech..pdf | | 2.62 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
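The abstract describes a single-component CNN with four convolutional layers and max pooling, trained on the FER2013 dataset (48x48 grayscale face images) for 50 epochs over seven expression classes. The sketch below illustrates one way such a network could look, assuming a TensorFlow/Keras implementation; the filter counts, kernel sizes, per-block pooling placement, dense-layer width, dropout, and optimizer are all illustrative assumptions, since the abstract does not specify them.

```python
# Illustrative sketch of the CNN described in the abstract: four convolutional
# layers with max pooling, classifying 48x48 grayscale FER2013 faces into the
# seven basic expressions. All hyperparameters below are assumptions, not the
# thesis's actual configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # angry, disgust, fear, happy, sad, surprise, neutral
INPUT_SHAPE = (48, 48, 1)  # FER2013 images are 48x48 grayscale

def build_model() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        # Four convolutional layers; pooling after each layer is an assumption,
        # as the abstract only states "four convolutional layers, followed by
        # max pooling". Filter counts (32-256) are likewise assumed.
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # assumed regularization; not stated in the abstract
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",  # assumed optimizer
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    model = build_model()
    model.summary()
    # Training would follow the abstract's setup of 50 epochs on FER2013, e.g.:
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50)
```

The gap between the reported training accuracy (86.13%) and validation accuracy (62.39%) suggests some overfitting, which is why a regularization layer such as the assumed dropout is a plausible part of this kind of architecture.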