Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20466
Full metadata record
DC Field | Value | Language
dc.contributor.author | VERMA, ASHISH | -
dc.date.accessioned | 2024-01-18T05:51:12Z | -
dc.date.available | 2024-01-18T05:51:12Z | -
dc.date.issued | 2023-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20466 | -
dc.description.abstract | Image captioning is the task of transforming an image into a textual description. It is mainly used in applications that automatically require textual information derived from a given image. Attention mechanisms are now widely used in image captioning models; however, attention can distort word generation. In this work, we propose a Task-Adaptive Attention module to address this misleading problem in image captioning. Experiments were carried out on two datasets, Flickr30k and MS COCO. BLEU, METEOR and CIDEr were used to evaluate the likelihood of the target description sentence given the training images. The BLEU score is determined by comparing the generated caption with the original reference caption (a minimal BLEU sketch follows the metadata record below). | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-6994; | -
dc.subject | IMPROVED TECHNIQUE | en_US
dc.subject | IMAGE CAPTION GENERATION | en_US
dc.subject | IMAGE CAPTIONING | en_US
dc.subject | BLEU | en_US
dc.title | AN IMPROVED TECHNIQUE FOR EFFECTIVE IMAGE CAPTION GENERATION | en_US
dc.type | Thesis | en_US
dc.type | Video | en_US
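
As referenced in the abstract above, BLEU scores a generated caption against reference captions by n-gram overlap. The snippet below is a minimal, hypothetical sketch using NLTK's sentence_bleu; NLTK is an assumed dependency, the captions are invented for illustration, and this is not the thesis implementation.

# Minimal sketch: BLEU-4 between one generated caption and its reference
# captions, using NLTK (assumed available; not the thesis code).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical captions, tokenized into word lists.
references = [
    "a man riding a bicycle down a city street".split(),
    "a cyclist rides along a busy road".split(),
]
generated = "a man rides a bicycle on a street".split()

# Smoothing keeps short captions with missing higher-order n-grams
# from collapsing to a zero score.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, generated,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(f"BLEU-4: {score:.3f}")

METEOR and CIDEr are likewise computed from generated/reference caption pairs, but add synonym and stem matching (METEOR) and consensus-based TF-IDF weighting of n-grams (CIDEr).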
Appears in Collections: MTech Data Science

Files in This Item:
File | Description | Size | Format
Ashish Verma M.Tech.pdf | | 2.21 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.