Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/19845
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | PANDEY, ALOK | - |
dc.date.accessioned | 2023-06-12T09:33:30Z | - |
dc.date.available | 2023-06-12T09:33:30Z | - |
dc.date.issued | 2023-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/19845 | - |
dc.description.abstract | Question Answering (QA) systems play a crucial role in information retrieval and natural language understanding. These systems aim to provide accurate and relevant answers to user queries, enabling efficient access to information. Over the years, QA approaches have evolved from rule-based systems to deep learning models. Traditional rule-based and information-retrieval-based QA systems have achieved reasonable results, but they often struggle with complex questions and require extensive manual engineering. With advances in deep learning, researchers have shifted their focus to neural network-based QA models that automatically learn patterns and representations from large-scale data, leveraging neural networks to capture complex patterns and semantic relationships within textual data. However, a key challenge faced by these models is their vulnerability to adversarial attacks. Adversarial examples are specifically crafted inputs designed to mislead a model's predictions. In the context of QA, such attacks involve slight modifications to the question or context — word substitutions, syntactic modifications, or context changes — carefully designed to exploit weaknesses in the model's reasoning and to elicit incorrect or misleading answers. These attacks have raised concerns about the reliability and robustness of QA systems in practical applications. Existing QA models, although powerful, remain vulnerable because they cannot handle such subtle manipulations of the input data. To address this challenge, there is a need for robust deep learning approaches that can effectively handle adversarial examples in QA. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-6405; | - |
dc.subject | PERT-QA | en_US |
dc.subject | DEEP LEARNING APPROACH | en_US |
dc.subject | ADVERSARIAL QUESTION ANSWERING | en_US |
dc.title | PERT-QA: A DEEP LEARNING APPROACH TO ADVERSARIAL QUESTION ANSWERING | en_US |
dc.type | Thesis | en_US |
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
ALOK PANDEY M.TEch.pdf | | 598.33 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.