Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20466
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | VERMA, ASHISH | - |
dc.date.accessioned | 2024-01-18T05:51:12Z | - |
dc.date.available | 2024-01-18T05:51:12Z | - |
dc.date.issued | 2023-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20466 | - |
dc.description.abstract | An image is transformed into words through the practice of "image captioning". It is mainly employed in applications that automatically require textual descriptions of a given image. Attention mechanisms are now widely used in image captioning models, but misplaced attention can distort word generation. In this work, we propose a Task-Adaptive Attention module to address this misleading-attention problem in image captioning. The work was carried out on two datasets, Flickr30k and MS COCO. BLEU, METEOR and CIDEr were used to evaluate the likelihood of the target description sentence given the training images. The BLEU score is determined by comparing the generated caption with the original reference caption. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-6994; | - |
dc.subject | IMPROVED TECHNIQUE | en_US |
dc.subject | IMAGE CAPTION GENERATION | en_US |
dc.subject | IMAGE CAPTIONING | en_US |
dc.subject | BLEU | en_US |
dc.title | AN IMPROVED TECHNIQUE FOR EFFECTIVE IMAGE CAPTION GENERATION | en_US |
dc.type | Thesis | en_US |
dc.type | Video | en_US |
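
The abstract above states that the BLEU score is obtained by comparing a generated caption against its reference caption. Below is a minimal sketch of that comparison using NLTK's sentence-level BLEU; this is an assumed tooling choice (the record does not name an evaluation library), and the captions are invented examples rather than Flickr30k or MS COCO data.

```python
# Hypothetical example: sentence-level BLEU between a generated caption
# and reference captions, as described in the abstract.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a dog runs across the grass".split(),
    "a brown dog is running on a field".split(),
]
candidate = "a dog is running on the grass".split()

# Smoothing prevents zero scores when higher-order n-grams have no overlap.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```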
Appears in Collections: MTech Data Science
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Ashish Verma M.Tech.pdf |  | 2.21 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.