Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/19127
Full metadata record
DC Field                     Value                                                        Language
dc.contributor.author        SHARMA, ANKIT                                                -
dc.date.accessioned          2022-06-07T06:13:10Z                                         -
dc.date.available            2022-06-07T06:13:10Z                                         -
dc.date.issued               2022-05                                                      -
dc.identifier.uri            http://dspace.dtu.ac.in:8080/jspui/handle/repository/19127   -
dc.description.abstract      Facial expressions are the primary way humans express intentions and emotions, and computers can understand human emotions by analysing them. Facial Expression Recognition (FER) therefore plays an important role in human-computer interaction and in the medical field. In the past, facial features were extracted manually to recognize expressions; today FER is an important research hotspot in computer vision, the Internet of Things, and artificial intelligence. Recognizing facial expressions efficiently involves several processes:
• Pre-Processing
• Segmentation
• Feature Extraction
• Classification
Many feature extraction algorithms are designed manually, such as Local Binary Pattern (LBP), Gabor wavelets, and Histogram of Oriented Gradients (HOG). FER involves various challenges in recognizing the expressions of facial images accurately. For robust classification of facial expressions, the illumination and pose of the facial image must be considered, and learning the pose and facial identity is essential for accurate results. Several existing works faced challenges regarding identity, pose variation, and inter-subject variation. To estimate the pose of facial images, existing methods used hand-crafted features, performing pose normalization based on the viewing angle. Previous works also considered pixel-based normalization to increase recognition accuracy. The general illumination of the images likewise affects accuracy, through contrast, occlusion, etc. Segmentation is one of the major processes, partitioning the facial images so that features can be extracted.
Various existing methods use different techniques to segment facial images, such as bounding-box-based, region-based, and cluster-based segmentation, relying on algorithms such as the Discrete Cosine Transform (DCT) and the K-means clustering algorithm. After segmentation, features are extracted using various Machine Learning (ML) algorithms such as Support Vector Machines (SVM), Naive Bayes, and K-Nearest Neighbours (KNN), and Deep Learning (DL) algorithms such as Convolutional Neural Networks (CNN), Generative Adversarial Networks (GAN), and Long Short-Term Memory networks (LSTM). After the features are extracted from the facial images, classification is performed to identify the facial expression (happy, sad, anger, etc.). Previous works use various classifiers, such as VGG-16, VGG-19, and ResNet, which take the extracted features as input and classify the expressions. Various datasets are used for training and testing, but FER-2013 and CK+ are the most common. en_US
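The LBP descriptor named in the abstract can be sketched compactly. Below is an illustrative pure-Python version on a tiny grayscale grid; the toy image values and the clockwise neighbour ordering are assumptions for demonstration (real pipelines use optimized libraries such as scikit-image), but the core idea — thresholding each pixel's 8 neighbours against the centre and packing the bits into a code — is exactly the LBP operator:

```python
# Minimal Local Binary Pattern (LBP) sketch: each interior pixel is
# compared with its 8 neighbours; a neighbour >= centre contributes a
# 1-bit, and the 8 bits are packed into a code in [0, 255].

def lbp_codes(image):
    """Compute the 8-neighbour LBP code for every interior pixel.

    `image` is a list of lists of grayscale intensities.
    Returns a (rows-2) x (cols-2) grid of LBP codes.
    """
    # Clockwise neighbour offsets starting at the top-left neighbour
    # (a common convention; the bit ordering is an assumption here).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    rows, cols = len(image), len(image[0])
    codes = []
    for r in range(1, rows - 1):
        row_codes = []
        for c in range(1, cols - 1):
            centre = image[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if image[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            row_codes.append(code)
        codes.append(row_codes)
    return codes


def lbp_histogram(image):
    """256-bin histogram of LBP codes: the feature vector fed to a classifier."""
    hist = [0] * 256
    for row in lbp_codes(image):
        for code in row:
            hist[code] += 1
    return hist


if __name__ == "__main__":
    face_patch = [  # toy 4x4 "image" (illustrative values, not real data)
        [10, 20, 30, 40],
        [15, 25, 35, 45],
        [20, 30, 40, 50],
        [25, 35, 45, 55],
    ]
    print(lbp_codes(face_patch))        # one code per interior pixel
    print(sum(lbp_histogram(face_patch)))
```

In practice the image is divided into cells, one histogram is computed per cell, and the concatenated histograms form the final feature vector.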
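The classification stage can likewise be illustrated with the simplest classifier the abstract lists, K-Nearest Neighbours. This is a minimal sketch, assuming feature vectors (e.g. LBP histograms) have already been extracted, and assuming Euclidean distance with majority voting — both common but unstated choices; the toy feature vectors and labels are invented for demonstration:

```python
# Minimal K-Nearest Neighbours (KNN) classifier sketch for the
# classification stage: a query feature vector is assigned the majority
# label among its k closest training vectors (Euclidean distance).

import math
from collections import Counter


def knn_classify(train_features, train_labels, query, k=3):
    """Return the majority label among the k nearest training samples."""
    distances = sorted(
        (math.dist(feat, query), label)
        for feat, label in zip(train_features, train_labels)
    )
    nearest_labels = [label for _, label in distances[:k]]
    # Majority vote over the k nearest neighbours.
    return Counter(nearest_labels).most_common(1)[0][0]


if __name__ == "__main__":
    # Toy 2-D feature vectors standing in for real extracted features.
    features = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8), (0.2, 0.1)]
    labels = ["sad", "sad", "happy", "happy", "sad"]
    print(knn_classify(features, labels, (0.95, 0.9), k=3))  # prints "happy"
```

Deep classifiers such as VGG-16 or ResNet replace both the hand-crafted feature extraction and this voting step, learning features and decision boundaries jointly from the training data.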
dc.language.iso              en                                                           en_US
dc.relation.ispartofseries   TD-5714;                                                     -
dc.subject                   FACIAL EXPRESSION RECOGNITION                                en_US
dc.subject                   DEEP LEARNING                                                en_US
dc.subject                   GEOMETRY                                                     en_US
dc.subject                   LBP                                                          en_US
dc.title                     MULTI-FEATURE AWARE POSE AND GEOMETRY BASED FACIAL EXPRESSION RECOGNITION USING DEEP LEARNING   en_US
dc.type                      Thesis                                                       en_US
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File                                           Description   Size      Format
ankitsharma_2K20CSE05-1-dat-FINAL_M.TecH.pdf                 1.01 MB   Adobe PDF   View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.