Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20103
Title: DEEPFAKE VIDEO DETECTION: A MULTI-MODEL APPROACH USING CNN, RNN & LSTM
Authors: PODDAR, RAUNAK
Keywords: LSTM; CNN model; RNN; Video detection
Issue Date: May-2023
Series/Report no.: TD-6658
Abstract: This thesis presents a comprehensive study of deepfake video detection that leverages the combined strengths of LSTM and CNN models. The objective is to accurately predict the authenticity of videos by analysing both spatial and temporal features. To support the development and evaluation of the proposed approach, a novel dataset is created by augmenting the existing FaceForensics++, Celeb-DF, and DFDC datasets; it encompasses a diverse range of deepfake manipulation techniques and captures varied visual characteristics. Extensive experimentation demonstrates the effectiveness of the LSTM-CNN fusion model in distinguishing deepfake from authentic videos: the model captures the subtle visual artifacts and temporal patterns associated with deepfake manipulations, enabling high-performance detection. The research also acknowledges the prevalence of deepfake videos in domains such as politics and pornography, underscoring the need to ensure the integrity and trustworthiness of multimedia content. By providing an advanced framework for deepfake detection, this thesis contributes to the field of video forensics, addressing growing disinformation threats and safeguarding the authenticity of multimedia content in the digital age.
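The fusion architecture the abstract describes (a CNN extracting per-frame spatial features, followed by an LSTM modelling their temporal sequence) can be sketched as below. This is a minimal illustrative sketch, not the thesis's actual implementation: the class name, layer sizes, frame count, and input resolution are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """Hypothetical CNN+LSTM sketch: spatial features per frame,
    temporal modelling across frames, binary real/fake score."""

    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        # Per-frame CNN: captures spatial artifacts in each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM: captures temporal patterns across the frame sequence.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Head: single sigmoid score (probability the clip is fake).
        self.head = nn.Linear(hidden, 1)

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)          # final hidden state
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = CNNLSTMDetector()
# Two clips of 8 frames at 64x64 resolution (illustrative shapes).
scores = model(torch.randn(2, 8, 3, 64, 64))
```

In this sketch the CNN is applied to every frame independently (by folding the time axis into the batch axis), and only the LSTM sees the frames in order; the sigmoid output is one score per clip.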
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20103
Appears in Collections: M.E./M.Tech. Information Technology
Files in This Item:
File | Description | Size | Format
---|---|---|---
RAUNAK PODDAR M.Tech.pdf | | 2.88 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.