Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20401
Full metadata record
DC Field | Value | Language
dc.contributor.author | KAINTH, MANAS | -
dc.date.accessioned | 2024-01-15T05:41:39Z | -
dc.date.available | 2024-01-15T05:41:39Z | -
dc.date.issued | 2023-06 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20401 | -
dc.description.abstract | The traditional image predictors used in reversible data hiding (RDH) schemes are limited by their inability to capture context from a larger set of image pixels. This work proposes a new predictor for reversible data hiding that uses a Self Attention Convolutional Neural Network (SACNN) to improve the prediction process. An image division scheme splits a grayscale image into two separate sets: the first set serves as input to the SACNN predictor, which predicts the second set, and the second set is then used for data embedding. By combining a self-attention mechanism with convolutional layers, the network efficiently captures both local and global dependencies, enabling accurate pixel prediction and improving overall prediction accuracy. The predictor is trained on over 1000 images randomly sampled from the ImageNet dataset. Experiments and analysis show that the predictor produces a sharper prediction error histogram and can be paired with a suitable embedding scheme to achieve better embedding performance in future work (a sketch of this pipeline appears after the metadata record). | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-6865; | -
dc.subject | SELF ATTENTION CONVOLUTIONAL NEURAL NETWORK (SACNN) | en_US
dc.subject | REVERSIBLE DATA HIDING | en_US
dc.subject | PREDICTOR | en_US
dc.subject | IMAGES | en_US
dc.title | SELF ATTENTION CONVOLUTIONAL NEURAL NETWORK (SACNN) BASED PREDICTOR FOR REVERSIBLE DATA HIDING IN IMAGES | en_US
dc.type | Thesis | en_US
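
The abstract describes a two-set image division feeding a conv-plus-attention predictor, but fixes no architectural details. Below is a minimal PyTorch sketch under stated assumptions: the SAGAN-style attention block, the layer widths, the checkerboard division, and all names (SelfAttention2d, SACNNPredictor, checkerboard_split) are illustrative choices, not the thesis's actual design.

    # Minimal sketch of an attention+conv pixel predictor for RDH.
    # Everything below is an assumption filling in details the abstract omits.
    import torch
    import torch.nn as nn

    class SelfAttention2d(nn.Module):
        """SAGAN-style self-attention over spatial positions (one plausible
        way to add the global context the abstract mentions)."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 2, 1)
            self.key = nn.Conv2d(channels, channels // 2, 1)
            self.value = nn.Conv2d(channels, channels, 1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//2)
            k = self.key(x).flatten(2)                     # (b, c//2, hw)
            attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
            v = self.value(x).flatten(2).transpose(1, 2)   # (b, hw, c)
            out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
            return x + self.gamma * out  # global context added to local features

    class SACNNPredictor(nn.Module):
        """Convolutions for local context, self-attention for global context."""
        def __init__(self, channels=32):  # channel width is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                SelfAttention2d(channels),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    def checkerboard_split(img):
        """One common RDH division: alternating pixels form sets A and B.
        The thesis's actual division scheme may differ."""
        h, w = img.shape[-2:]
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        mask = ((yy + xx) % 2 == 0).to(img.dtype)  # 1 on set A, 0 on set B
        return img * mask, img * (1 - mask), mask

    # Usage: predict set B from set A; the prediction errors on set B form
    # the histogram that a later embedding scheme would exploit.
    img = torch.rand(1, 1, 64, 64)            # stand-in for a grayscale image
    set_a, set_b, mask = checkerboard_split(img)
    pred = SACNNPredictor()(set_a)            # network sees only set A
    errors = (img - pred).masked_select(mask == 0)  # errors on set B

In an RDH pipeline, these prediction errors would be expanded or histogram-shifted to embed the payload reversibly; a sharper error histogram concentrates more pixels in the bins usable for embedding, which is the improvement the abstract claims for the SACNN predictor over traditional predictors.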
Appears in Collections: M.E./M.Tech. Computer Engineering

Files in This Item:
File | Description | Size | Format
MANAS KAINTH M.Tech..pdf | | 1.91 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.