Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/22169
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | SINGH, VAIBHAV KUMAR | - |
dc.date.accessioned | 2025-09-02T06:37:42Z | - |
dc.date.available | 2025-09-02T06:37:42Z | - |
dc.date.issued | 2025-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/22169 | - |
dc.description.abstract | The widespread sharing and reuse of pre-trained deep convolutional neural networks (CNNs) have raised serious concerns about model integrity and security. Malicious modifications such as backdoor and model-reuse attacks can compromise the trustworthiness of these models without visibly affecting their performance, making integrity authentication a critical requirement in sensitive domains. While watermarking has emerged as a prominent method for model verification, most existing techniques are irreversible and permanently alter the model’s internal structure, making them unsuitable for integrity validation. This research proposes a reversible watermarking scheme that enables the embedding of authentication information into CNNs without any permanent modification. The method leverages model pruning theory to identify less critical, low-entropy parameters and construct an optimal host sequence for watermark insertion. Using a histogram shifting technique adapted from image watermarking, the watermark is embedded in a manner that ensures full recovery of the original model parameters upon extraction. Experimental validation is conducted across several popular CNN architectures, including AlexNet, VGG19, ResNet152, DenseNet121, and MobileNet. The results show that the reversible watermarking process has a negligible effect on model accuracy (within ±0.5%) and achieves complete reversibility with zero reconstruction error. Finally, embedding cryptographic hash values enables integrity verification: any modification to the model produces a mismatch between the originally embedded hash and the hash generated during verification. This work thus introduces a reversible watermarking approach for verifying that CNNs have not been tampered with, offering a way to secure deep learning models deployed in sensitive domains such as healthcare, finance, and defense. The methodology can be broadened further, for example by extending it to other neural network architectures such as LSTMs and by adding the ability to restore models that have undergone unauthorized alterations. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-8170; | - |
dc.subject | REVERSIBLE WATERMARKING METHODS | en_US |
dc.subject | DEEP NEURAL NETWORKS | en_US |
dc.subject | CNNs | en_US |
dc.title | STUDY AND DEVELOPMENT OF REVERSIBLE WATERMARKING METHODS FOR SECURING DEEP NEURAL NETWORKS | en_US |
dc.type | Thesis | en_US |
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
VAIBHAV KUMAR SINGH M.Tech..pdf | | 2.1 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
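The abstract above describes a histogram-shifting reversible watermarking scheme with hash-based integrity checking. The following is a minimal, self-contained sketch of that general technique, not the thesis's actual implementation: the host-sequence construction via pruning-based parameter selection is only mimicked by quantizing the smallest-magnitude weights, and all function names, the quantization scale, and the payload format are illustrative assumptions.

```python
# Minimal sketch of histogram-shifting reversible watermarking on an integer
# host sequence, with a hash payload for integrity checking. Illustrative only;
# the thesis's pruning-based host selection is approximated by picking
# low-magnitude weights and quantizing them (an assumption for this example).
import hashlib
import numpy as np


def select_host(weights: np.ndarray, k: int, scale: int = 1000) -> np.ndarray:
    """Pick the k smallest-magnitude weights (a stand-in for pruning-based
    selection of less critical parameters) and quantize them to integers so
    classical histogram shifting applies."""
    idx = np.argsort(np.abs(weights))[:k]
    return np.round(weights[idx] * scale).astype(np.int64)


def embed(host: np.ndarray, bits: list) -> tuple:
    """Embed bits by shifting the histogram between the peak bin and an empty
    bin; returns the marked sequence plus (peak, zero) needed for extraction
    and exact recovery."""
    vals, counts = np.unique(host, return_counts=True)
    peak = int(vals[np.argmax(counts)])
    zero = peak + 1
    while zero in vals:          # find an unused integer value above the peak
        zero += 1
    marked = host.copy()
    # Shift values strictly between peak and zero up by one, freeing peak + 1.
    marked[(marked > peak) & (marked < zero)] += 1
    # Encode the payload in the peak bin: bit 1 -> peak + 1, bit 0 -> peak.
    carriers = np.where(host == peak)[0]
    if len(bits) > len(carriers):
        raise ValueError("payload exceeds embedding capacity")
    for pos, bit in zip(carriers, bits):
        marked[pos] = peak + bit
    return marked, peak, zero


def extract_and_restore(marked: np.ndarray, peak: int, zero: int,
                        n_bits: int) -> tuple:
    """Read the payload back and undo the shift, recovering the original
    host sequence bit-exactly (zero reconstruction error)."""
    carriers = np.where((marked == peak) | (marked == peak + 1))[0][:n_bits]
    bits = [int(marked[pos] == peak + 1) for pos in carriers]
    restored = marked.copy()
    restored[(restored > peak) & (restored <= zero)] -= 1
    return bits, restored


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(0, 0.05, size=10_000)   # toy stand-in for CNN weights

    # Payload: first 32 bits of a SHA-256 digest of the weights, so any later
    # tampering changes the expected hash and verification fails.
    digest = hashlib.sha256(weights.tobytes()).digest()
    payload = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(32)]

    host = select_host(weights, k=2_000)
    marked, peak, zero = embed(host, payload)
    bits, restored = extract_and_restore(marked, peak, zero, len(payload))

    assert bits == payload                    # watermark recovered
    assert np.array_equal(restored, host)     # original parameters restored
    print("watermark verified; host sequence restored exactly")
```

Because the shift between the peak and zero bins is invertible, removing the watermark returns every selected parameter to its exact original value, which corresponds to the zero reconstruction error reported in the abstract.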