Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18768
Title: CONTEXTUAL FRAMEWORKS FOR SENTIMENT ANALYSIS
Authors: YADAV, ASHIMA
Keywords: CONTEXTUAL FRAMEWORKS
SENTIMENT ANALYSIS
HANDCRAFTED FEATURES
EMOTION-BASED GENRE DETECTION FOR BOLLYWOOD (EMOGDB)
Issue Date: Apr-2021
Publisher: DELHI TECHNOLOGICAL UNIVERSITY
Series/Report no.: TD - 5263;
Abstract: Social media is a powerful medium for people to share their sentiments, opinions, and views about any topic or article, resulting in an enormous amount of unstructured information. Business organizations need to process and study these sentiments to mine the data and gain business insights. Previous research in sentiment analysis has largely focused on extracting sentiments from textual data alone, and various machine learning and natural language processing approaches have been used to analyze them. Text-based sentiment classification suffers from several challenges, such as domain adaptation, sarcasm detection, and multilingual sentiment classification [1][2]. However, with the evolution of the web and smartphones, sentiments can now be extracted from varied multimedia content found on social media networks, including text, images, videos, emoticons, GIFs, and audio. Such multimodal data can, for instance, help detect sarcastic posts by analyzing facial expressions in the accompanying images. Most earlier works in this area rely on handcrafted features that fail to capture the high-level semantics of the data. These approaches cannot handle massive amounts of data and require considerable time and effort to extract features manually, which limits the classifier's performance. Hence, the success of deep learning, especially in computer vision, has motivated us to apply it to sentiment analysis. Deep approaches can automatically learn complex features from the data, thus improving the sentiment analysis process. Nevertheless, this area still faces many challenges. Visual sentiment analysis is abstract in nature because of the high degree of bias in human perception. Similarly, affective video content analysis has emerged as one of the most challenging research tasks, as it aims to analyze the emotions elicited by videos automatically; little progress has been achieved in this field owing to the enigmatic nature of emotions, which widens the gap between the human affective state and the structure of the video. Multimodal sentiment analysis likewise remains an open problem because each modality has its own characteristics and is expressed differently by the human cognitive system, making it difficult to handle such heterogeneous content.

Our work aims to address the issues mentioned above by designing effective frameworks. Chapter 1 gives the background of sentiment analysis and outlines the motivation behind the research. Chapter 2 is dedicated to the literature review, where existing state-of-the-art approaches to sentiment classification are reviewed for the textual and visual (image and video) modalities. The prevalent approaches in each modality are grouped into a taxonomy, which helps identify the research gaps in this area, and the research objectives are briefly stated. In Chapter 3, we discuss two approaches, corresponding to the image and video modalities. To address visual sentiment analysis, we apply transfer learning, using pre-trained models with visual attention to learn high-level discriminative features from images.
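As a rough illustration of the transfer-learning approach just described, the following minimal sketch builds an image sentiment classifier from a pre-trained backbone with a simple spatial-attention head. The ResNet-50 backbone, the one-layer attention scoring, and the layer sizes are illustrative assumptions, not the exact architecture proposed in the thesis.

# Minimal sketch: transfer learning with visual (spatial) attention for image
# sentiment classification. Backbone, attention form, and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class AttentiveSentimentNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Keep the convolutional layers; discard the ImageNet classifier head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # A 1x1 conv scores each spatial location of the feature map.
        self.att_score = nn.Conv2d(2048, 1, kernel_size=1)
        self.classifier = nn.Linear(2048, num_classes)

    def forward(self, x):
        f = self.features(x)                     # (B, 2048, H, W)
        b, c, h, w = f.shape
        scores = self.att_score(f).view(b, -1)   # (B, H*W)
        weights = torch.softmax(scores, dim=1).view(b, 1, h, w)
        pooled = (f * weights).sum(dim=(2, 3))   # attention-weighted pooling
        return self.classifier(pooled)

model = AttentiveSentimentNet(num_classes=3)     # e.g. negative/neutral/positive
print(model(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 3])

Freezing the backbone for the first few epochs and fine-tuning it afterwards is a common way to apply such transfer learning on modest-sized sentiment datasets.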
For affective video classification, we propose a deep affect-based movie genre classification framework that studies the relationship between the emotions induced by a movie trailer and its corresponding genre. To this end, we develop the Emotion-based Genre Detection for Bollywood (EmoGDB) dataset, which helps formulate an emotion-genre theory. To address the challenges of multimodal sentiment classification, we propose in Chapter 4 a network that generates discriminative features from visual images and their textual descriptions by introducing attention at multiple levels. We exploit the channel dimension to generate robust visual features, enhancing the crucial channels of a given image; further, we employ semantic attention to extract the essential sentiment words corresponding to the image features, which boosts the network's overall performance (illustrative sketches of the channel-attention and Conv-BiGRU components appear after the abstract). One significant application of sentiment analysis is analyzing public opinion. With the outbreak of the COVID-19 pandemic, an enormous volume of sentiment-laden posts was generated on Twitter, which can help assess people's attitudes and behavior related to the pandemic. In Chapter 5, we design a Multilevel Attention-based Conv-BiGRU Network to classify opinions posted on Twitter from the countries worst affected by the pandemic, so that the analysis can serve as feedback to government agencies on their mitigation plans. Finally, in Chapter 6, we summarize the conclusions drawn from our research and highlight directions for future work.
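As a rough illustration of the channel-attention idea described for Chapter 4, the sketch below re-weights feature-map channels in the squeeze-and-excitation style, one common way to enhance the crucial channels of an image; the reduction ratio and sigmoid gating are assumptions, not the thesis's exact formulation.

# Minimal sketch: channel attention (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, H, W)
        squeezed = x.mean(dim=(2, 3))         # global average pool -> (B, C)
        weights = self.gate(squeezed)         # per-channel importance in (0, 1)
        return x * weights[:, :, None, None]  # emphasize informative channels

feat = torch.randn(2, 512, 14, 14)
print(ChannelAttention(512)(feat).shape)      # torch.Size([2, 512, 14, 14])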
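Similarly, a minimal sketch of the Conv-BiGRU building block underlying the Chapter 5 tweet classifier is given below: a 1-D convolution extracts local n-gram features from word embeddings and a bidirectional GRU models their order, with a single attention layer over time steps. The vocabulary size, dimensions, and single attention level are illustrative assumptions (the proposed network applies attention at multiple levels).

# Minimal sketch: Conv-BiGRU with attention for tweet sentiment classification.
import torch
import torch.nn as nn

class ConvBiGRU(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        self.bigru = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)           # scores each time step
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens):                        # tokens: (B, T) word ids
        e = self.embed(tokens).transpose(1, 2)        # (B, emb_dim, T)
        c = torch.relu(self.conv(e)).transpose(1, 2)  # (B, T, 128) n-gram features
        h, _ = self.bigru(c)                          # (B, T, 2*hidden)
        w = torch.softmax(self.att(h).squeeze(-1), dim=1)  # attention weights
        pooled = (h * w.unsqueeze(-1)).sum(dim=1)     # weighted sum over time
        return self.out(pooled)

model = ConvBiGRU()
print(model(torch.randint(0, 30000, (4, 40))).shape)  # torch.Size([4, 3])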
Appears in Collections: Ph.D. Information Technology

Files in This Item:
File: Thesis_Ashima.pdf
Size: 4.47 MB
Format: Adobe PDF

