Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/19854
Title: | DETECTION OF HATE SPEECH USING DEEP LEARNING |
Authors: | SHARMA, SHIVAM |
Keywords: | HATE SPEECH DEEP LEARNING LSTM NETWORK |
Issue Date: | May-2023 |
Series/Report no.: | TD-6412; |
Abstract: | Identifying hate speech has been one of the most significant challenges of recent decades, and recent years have seen growing research on detecting it on social networking platforms. Hate speech is a divisive mode of communication that expresses a hateful ideology through misrepresentation, and it targets protected characteristics such as gender, religion, race, and disability. For some it may be mere fun, but for others it can lead to depression, anxiety, and other harms. It is therefore crucial to monitor user posts and filter out those related to hate speech before they spread. On Twitter, however, over 600 tweets are sent every second and over 500 million each day; incoming traffic of this volume makes manual detection of hate speech nearly impossible. Hence, a model is proposed to detect hate speech using transformers, deep convolutional networks, and long short-term memory (LSTM). The proposed model takes raw tweets as input; these pass through transformer layers followed by deep convolutional networks, and the output is fed to an LSTM network. A final softmax layer classifies each tweet as hateful or not. The proposed model has been trained and tested on five state-of-the-art datasets and compared against previous models. |
URI: | http://dspace.dtu.ac.in:8080/jspui/handle/repository/19854 |
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
SHIVAM SHARMA M.Tech.pdf | | 2.18 MB | Adobe PDF
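The pipeline named in the abstract (transformer layers, then convolutions, then an LSTM, then a softmax classifier) can be sketched roughly as below. This is a hypothetical PyTorch illustration, not the thesis's actual model: all layer sizes, the vocabulary size, and the class count are illustrative assumptions.

```python
# Hypothetical sketch of the described pipeline: embedding -> transformer
# encoder -> 1-D convolution -> LSTM -> softmax over {not hateful, hateful}.
# All hyperparameters here are illustrative, not taken from the thesis.
import torch
import torch.nn as nn

class HateSpeechNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64,
                 conv_channels=32, lstm_hidden=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Conv1d expects (batch, channels, seq_len), so we transpose around it.
        self.conv = nn.Conv1d(embed_dim, conv_channels,
                              kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                     # (B, T, E)
        x = self.transformer(x)                       # (B, T, E)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # (B, C, T)
        x = x.transpose(1, 2)                         # (B, T, C)
        _, (h_n, _) = self.lstm(x)                    # h_n: (1, B, H)
        # Probability distribution over the two classes per tweet.
        return torch.softmax(self.fc(h_n[-1]), dim=-1)

model = HateSpeechNet()
tokens = torch.randint(0, 10000, (2, 20))  # two dummy "tweets", 20 tokens each
probs = model(tokens)
print(probs.shape)  # torch.Size([2, 2])
```

Each row of `probs` sums to 1; in practice the tweets would first be tokenized (e.g. with a subword tokenizer) rather than fed as random ids, and the model trained with cross-entropy loss on labelled data.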