Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/21813
Full metadata record
DC Field | Value | Language
dc.contributor.author | KUMAR, JAYBARDHAN | -
dc.date.accessioned | 2025-07-08T08:44:41Z | -
dc.date.available | 2025-07-08T08:44:41Z | -
dc.date.issued | 2025-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/21813 | -
dc.description.abstract | In the era of technology, social media has become the flag bearer of freedom of speech and expression. However, under the guise of free expression, hate and offensive speech are increasing day by day, posing significant risks to societal harmony. Hate speech detection on social media has become increasingly important due to the rise of online platforms and their potential to amplify harmful content [1], [2]. While traditional text-based hate speech detection is well researched, the unique challenges of spoken-language transcription, particularly on platforms like YouTube, require specialized approaches. This study investigates the effectiveness of hate speech detection models trained on datasets from Facebook and Twitter when applied to the distinct context of YouTube transcriptions. Several machine learning models are explored, comparing the performance of traditional classifiers (Naive Bayes, Support Vector Machines (SVM), Logistic Regression, and Random Forest), all using TF-IDF features, to assess their ability to generalize to the complexities of YouTube transcriptions. Each model is evaluated on accuracy, precision, recall, and F1-score to determine its effectiveness in capturing both explicit and implicit hate speech. Findings reveal that models trained on Facebook and Twitter data struggle to generalize to the more nuanced and context-rich environment of YouTube transcriptions, although Support Vector Machines and Logistic Regression show relatively better adaptability. This work highlights the importance of contextual and linguistic adaptability in hate speech detection on multimedia platforms and discusses implications for ethical content moderation and policy development. It underscores the need for continued research into models that address platform-specific language, cultural nuances, and code-mixing, particularly in low-resource languages. These findings provide a foundation for researchers and practitioners seeking to develop or refine hate speech detection systems for real-world application. | en_US
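The cross-platform setup the abstract describes (train a TF-IDF classifier on one platform's posts, evaluate on another's, report accuracy/precision/recall/F1) can be sketched in a few lines of scikit-learn. This is a minimal illustration only: the in-line texts are hypothetical placeholders, not the Facebook/Twitter/YouTube corpora used in the thesis, and Logistic Regression stands in for any of the four classifiers compared.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.pipeline import make_pipeline

# Hypothetical training posts from a "source" platform (e.g. tweets);
# label 1 marks hateful/offensive text, 0 marks benign text.
train_texts = [
    "I hate you people, get out of here",
    "you are all worthless trash",
    "what a lovely sunny day",
    "great match last night, well played",
]
train_labels = [1, 1, 0, 0]

# Hypothetical "target" text (e.g. YouTube transcriptions), which tends to
# be noisier and more conversational than the training domain.
test_texts = [
    "uh so yeah these people are worthless trash honestly",
    "thanks for watching, see you in the next video",
]
test_labels = [1, 0]

# TF-IDF features feeding a traditional linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Cross-platform evaluation on the held-out target-domain examples.
pred = model.predict(test_texts)
acc = accuracy_score(test_labels, pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    test_labels, pred, average="binary", zero_division=0
)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

In the thesis setting, the train/test splits would come from different platforms' full datasets, which is exactly what exposes the generalization gap the abstract reports.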
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-8024; | -
dc.subject | CROSS-PLATFORM ANALYSIS | en_US
dc.subject | HATE SPEECH DETECTION | en_US
dc.subject | SVM | en_US
dc.title | CROSS-PLATFORM ANALYSIS OF HATE SPEECH DETECTION | en_US
dc.type | Thesis | en_US
Appears in Collections:M.E./M.Tech. Information Technology

Files in This Item:
File | Description | Size | Format
JAYBARDHAN KUMAR M.Tech.pdf | | 3.07 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.