Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/19845
Full metadata record
DC Field | Value | Language
dc.contributor.author | PANDEY, ALOK | -
dc.date.accessioned | 2023-06-12T09:33:30Z | -
dc.date.available | 2023-06-12T09:33:30Z | -
dc.date.issued | 2023-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/19845 | -
dc.description.abstract | Question Answering (QA) systems play a crucial role in information retrieval and natural language understanding, aiming to provide accurate and relevant answers to user queries and thereby enabling efficient access to information. Over the years, QA approaches have ranged from rule-based systems to deep learning models. Traditional rule-based and information-retrieval-based QA systems perform reasonably well, but they often struggle with complex questions and require extensive manual engineering. With the advancements in deep learning, researchers have shifted their focus to neural-network-based QA models that automatically learn patterns, representations, and semantic relationships from large-scale textual data. However, a key challenge faced by QA models is their vulnerability to adversarial attacks. Adversarial examples are specifically crafted inputs designed to mislead a model's predictions; in the context of QA, an attack can involve slight modifications to the question or context, such as word substitutions, syntactic modifications, or context changes, carefully designed to exploit vulnerabilities in the model's reasoning capabilities and leading to incorrect or misleading answers. Such attacks have raised concerns about the reliability and robustness of QA systems in practical applications, and they motivate robust deep learning approaches that can effectively handle adversarial examples in QA. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-6405; | -
dc.subject | PERT-QA | en_US
dc.subject | DEEP LEARNING APPROACH | en_US
dc.subject | ADVERSARIAL QUESTION ANSWERING | en_US
dc.title | PERT-QA: A DEEP LEARNING APPROACH TO ADVERSARIAL QUESTION ANSWERING | en_US
dc.type | Thesis | en_US
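The abstract above mentions adversarial attacks that perturb a question through word substitutions while preserving its meaning. As an illustration only (the function, substitution table, and example question below are hypothetical and not taken from the thesis), a minimal sketch of such a perturbation in Python:

```python
# Illustrative sketch: a toy word-substitution perturbation of a QA question.
# The substitution table and example are hypothetical; real attacks search
# for substitutions that actually flip a target model's prediction.

SUBSTITUTIONS = {
    "founded": "established",
    "largest": "biggest",
    "author": "writer",
}

def perturb_question(question: str, substitutions: dict) -> str:
    """Replace whole words with near-synonyms, preserving everything else.

    Such semantically equivalent rewrites should leave the gold answer
    unchanged, yet can mislead a brittle QA model.
    """
    out = []
    for word in question.split():
        # Strip trailing punctuation so "founded?" still matches "founded".
        core = word.rstrip("?.,!")
        tail = word[len(core):]
        out.append(substitutions.get(core, core) + tail)
    return " ".join(out)

original = "Who founded the largest publishing house?"
adversarial = perturb_question(original, SUBSTITUTIONS)
print(adversarial)  # Who established the biggest publishing house?
```

A robust QA model, as the thesis advocates, should return the same answer for both the original and the perturbed question.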
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File | Description | Size | Format
ALOK PANDEY M.TEch.pdf | | 598.33 kB | Adobe PDF | View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.