Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16696
Full metadata record
DC Field | Value | Language
dc.contributor.author | SINGH, VARUN | -
dc.date.accessioned | 2019-10-24T04:48:14Z | -
dc.date.available | 2019-10-24T04:48:14Z | -
dc.date.issued | 2019-06 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/16696 | -
dc.description.abstract | This thesis studies methods to tackle Visual Question Answering in the Medical Domain (VQA-Med) with a Deep Learning system. As a preliminary step, we investigate Long Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to handle text-based Question Answering. We then adapt this model to accept an image as input in addition to the question. For this purpose, we investigate the Inception-ResNet-v2 network to extract visual features from the image. These are merged with the word embeddings of the question to predict the answer (a minimal sketch of this architecture is given after the metadata record below). This work was part of the Visual Question Answering challenge at CLEF 2018 and uses the dataset recently released for it by Nature. The developed software follows good programming practices and Python code style, providing a consistent baseline in Keras for different architectures. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-4537; | -
dc.subject | VISUAL QUESTION ANSWERING | en_US
dc.subject | MEDICAL DOMAIN | en_US
dc.subject | LSTM | en_US
dc.title | VISUAL QUESTION ANSWERING ON THE MEDICAL DOMAIN | en_US
dc.type | Thesis | en_US
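
The architecture described in the abstract — an LSTM question encoder whose output is merged with Inception-ResNet-v2 image features to classify over a fixed answer set — can be sketched in a few lines of Keras. The snippet below is a minimal illustration, not the thesis code: the vocabulary size, question length, layer widths, and answer-set size are assumed values for demonstration only.

# Minimal sketch (not the thesis code): an LSTM question encoder merged with
# Inception-ResNet-v2 image features in Keras, as described in the abstract.
# Vocabulary size, sequence length, and answer-set size below are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

VOCAB_SIZE = 10000      # assumed question vocabulary size
MAX_Q_LEN = 20          # assumed maximum question length (tokens)
NUM_ANSWERS = 1000      # assumed size of the candidate-answer set

# Image branch: Inception-ResNet-v2 as a frozen visual feature extractor.
cnn = InceptionResNetV2(weights="imagenet", include_top=False, pooling="avg")
cnn.trainable = False
image_in = layers.Input(shape=(299, 299, 3), name="image")
image_feat = layers.Dense(512, activation="relu")(cnn(image_in))

# Question branch: word embedding followed by an LSTM encoder.
question_in = layers.Input(shape=(MAX_Q_LEN,), dtype="int32", name="question")
q_emb = layers.Embedding(VOCAB_SIZE, 300, mask_zero=True)(question_in)
q_feat = layers.LSTM(512)(q_emb)

# Merge the two modalities and classify over the fixed answer set.
merged = layers.concatenate([image_feat, q_feat])
merged = layers.Dense(1024, activation="relu")(merged)
answer = layers.Dense(NUM_ANSWERS, activation="softmax", name="answer")(merged)

model = Model(inputs=[image_in, question_in], outputs=answer)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Treating VQA as classification over a fixed answer vocabulary is a common baseline formulation; freezing the CNN and training only the question encoder and fusion layers keeps such a Keras baseline lightweight.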
Appears in Collections: M.E./M.Tech. Computer Engineering

Files in This Item:
File | Description | Size | Format
6.pdf | | 1.39 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.