Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/22171
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | PATHAK, HARSHIT | - |
dc.date.accessioned | 2025-09-02T06:38:19Z | - |
dc.date.available | 2025-09-02T06:38:19Z | - |
dc.date.issued | 2025-06 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/22171 | - |
dc.description.abstract | A false or misleading headline can spread faster than a wildfire, and this has made fake news a real threat to public confidence and safety. Inaccurate health claims can endanger lives, and false political updates regularly cause serious harm. I first saw how dangerous fake news can be when a friend repeated a claim that popular products contained a harmful secret ingredient; every part of the claim turned out to be fabricated, but not before it had attracted wide attention. That experience raised a question: can we use technology both to detect fake news and to explain to people why it is suspect? To meet this challenge, the research uses the WELFake dataset, which contains over 72,000 news items from various sources, to build a machine learning model. Where black-box models only output a prediction, this model uses BERT together with tools such as LIME and counterfactual explanations to show users why an article is flagged as fake. Tested on an article about a supposed tech breakthrough, the model highlighted terms such as “unconfirmed” and “allegedly” as the cues that cast doubt on the story. The main reason this study matters is that it puts trust at its center: by explaining its decisions, it helps readers, teachers and journalists learn to recognize suspicious content. Overall, the model bridges what a computer can detect and what a human can grasp, offering effective, transparent help in spotting fake news. When combating misinformation, knowing the reasoning behind a decision matters as much as the decision itself. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-8175; | - |
dc.subject | MISINFORMATION DETECTION | en_US |
dc.subject | DEEP MODELS | en_US |
dc.subject | ATTRIBUTION METHODS | en_US |
dc.title | EXPLAINABLE MISINFORMATION DETECTION: UNRAVELING DEEP MODELS WITH COUNTERFACTUAL AND FEATURE ATTRIBUTION METHODS | en_US |
dc.type | Thesis | en_US |
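The abstract describes attributing a "fake" prediction to individual terms such as "unconfirmed" and "allegedly". A minimal sketch of that idea is word-occlusion attribution, the perturbation principle underlying tools like LIME: remove each word and measure how much the fake score drops. The `fake_score` function below is a hypothetical stand-in for the thesis's BERT classifier, and the hedge-word list is illustrative only.

```python
# Toy perturbation-based word attribution (the idea behind tools like
# LIME): occlude each word and record the drop in the "fake" score.
# fake_score is a HYPOTHETICAL stand-in for a trained BERT classifier.

HEDGE_WORDS = {"unconfirmed", "allegedly", "reportedly"}  # illustrative

def fake_score(text: str) -> float:
    """Hypothetical classifier: more hedge words -> higher fake score."""
    hits = sum(w.strip(".,").lower() in HEDGE_WORDS for w in text.split())
    return min(1.0, 0.2 + 0.3 * hits)

def word_attributions(text: str) -> list[tuple[str, float]]:
    """Attribution of each word = score drop when that word is removed."""
    words = text.split()
    base = fake_score(text)
    drops = []
    for i in range(len(words)):
        occluded = " ".join(words[:i] + words[i + 1:])
        drops.append((words[i], base - fake_score(occluded)))
    return sorted(drops, key=lambda pair: pair[1], reverse=True)

headline = "Allegedly the unconfirmed breakthrough will ship next year"
for word, weight in word_attributions(headline)[:3]:
    print(f"{word}: {weight:+.2f}")
```

With a real model, `fake_score` would be replaced by the classifier's predicted probability for the fake class; LIME additionally fits a local linear surrogate over many random perturbations rather than occluding one word at a time.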
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
HARSHIT PATHAK M.Tech.pdf | | 803.28 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.