Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/21732
Full metadata record
DC Field | Value | Language
dc.contributor.author | TANWAR, MOHIT | -
dc.date.accessioned | 2025-06-19T06:25:49Z | -
dc.date.available | 2025-06-19T06:25:49Z | -
dc.date.issued | 2025-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/21732 | -
dc.description.abstract | It is often said that an image is worth more than a thousand words. The details in an image contain both high-level and low-level features that help to distinguish between image classes and aid categorization. Image blurring is an undesired phenomenon caused by the image capture method or equipment. In this research, we compare the feature extraction and classification performance of five modern deep pre-trained models on increasingly blurred images of handwritten digits from the MNIST handwritten digits dataset. The images are progressively blurred, first with a Gaussian blur of Sigma=5 and then with a Gaussian blur of Sigma=8. Sigma is the standard deviation of the Gaussian kernel; the higher the Sigma, the greater the degree of blur. The deep pre-trained models under consideration are VGG-16, DenseNet-121, Xception, ShuffleNet, and SqueezeNet, each pre-trained on the ImageNet dataset and then shallow-tuned on the blurred images. DenseNet-121 achieved the highest accuracy among the deep learning models: 98.77% for Sigma=5 and 98.62% for the heavier blur of Sigma=8. With the exception of ShuffleNet, the accuracy of every model dropped markedly as the degree of blur rose from Sigma=5 to Sigma=8. Comparisons with machine learning models such as Support Vector Machine (SVM), Convolutional Neural Network (CNN), and logistic regression show that logistic regression outperforms the other machine learning models, although it remains less accurate than most of the deep learning models. We find that DenseNet-121, followed by ShuffleNet, is among the strongest modern models for classifying correctly under progressive blur. We also conducted experiments to determine the best deblurring technique for recovering sharp images from blurred observations, followed by computer vision image processing, on the GoPro dataset, which consists of pairs of a realistic blurred image and a corresponding ground-truth sharp image captured by a high-speed camera. The images are deblurred with each method, and PSNR and SSIM values are compared between the blurred input and the deblurred output. The deblurred output is then processed with computer vision techniques that enhance features and sharpen objects and edges, after which PSNR and SSIM are assessed again. These measurements are used to construct a dependable combination of deblurring methods, which ultimately yields a crisp-edged, less distorted image from a blurry one, with a PSNR of 29.84 dB and an SSIM of 0.70. This study suggests that deblurring approaches used in combination can work more efficiently and reliably than individual methods, producing more accurate and less distorted images that give greater insight and aid in correctly identifying objects. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-7964; | -
dc.subject | BLURRED IMAGES | en_US
dc.subject | GAUSSIAN BLUR | en_US
dc.subject | SIGMA | en_US
dc.subject | DEEP LEARNING | en_US
dc.subject | MACHINE LEARNING | en_US
dc.subject | MNIST DATASET | en_US
dc.subject | WIENER FILTER | en_US
dc.subject | LUCY-RICHARDSON FOR DEBLURRING | en_US
dc.subject | COMPUTER VISION | en_US
dc.subject | GOPRO DATASET | en_US
dc.subject | HISTOGRAM EQUALIZATION | en_US
dc.title | PERFORMANCE EVALUATION OF DEEP LEARNING APPROACHES FOR BLURRED IMAGE PROCESSING | en_US
dc.type | Thesis | en_US
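
The progressive Gaussian blurring described in the abstract can be reproduced in a few lines. Below is a minimal sketch, assuming TensorFlow/Keras for loading MNIST and SciPy for the blur; the two Sigma values (5 and 8) are those reported above, while the helper name blur_batch is illustrative.

```python
# Minimal sketch of the progressive-blur setup: MNIST digits blurred with
# Gaussian kernels of Sigma=5 and Sigma=8 (higher Sigma => stronger blur).
import numpy as np
from scipy.ndimage import gaussian_filter
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def blur_batch(images, sigma):
    """Blur each 28x28 digit independently with the given standard deviation."""
    return np.stack([gaussian_filter(img.astype(np.float32), sigma=sigma)
                     for img in images])

x_test_sigma5 = blur_batch(x_test, sigma=5)   # moderate blur
x_test_sigma8 = blur_batch(x_test, sigma=8)   # heavier blur
```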
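"Shallow tuning" an ImageNet-pretrained network, as applied to the five models above, typically means freezing the pretrained convolutional base and training only a small new classification head. The following Keras sketch does this for DenseNet-121; the 224x224 RGB input size, the fully frozen base, and the single Dense head are assumptions, since the record does not state the exact tuning depth, and MNIST digits would need resizing and channel replication to fit this input.

```python
# Minimal sketch of shallow tuning: freeze the ImageNet-pretrained DenseNet-121
# base and train only a new 10-way softmax head on the blurred MNIST digits.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    weights="imagenet",
    include_top=False,          # drop the original 1000-class ImageNet head
    input_shape=(224, 224, 3),  # assumed input size; MNIST must be resized
    pooling="avg",
)
base.trainable = False          # "shallow" tuning: pretrained weights stay fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # one unit per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(resized_blurred_images, labels, epochs=..., validation_split=0.1)
```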
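The deblur-then-evaluate loop on the GoPro pairs can likewise be sketched with scikit-image, which provides the Wiener filter and Lucy-Richardson deconvolution named in the subject keywords, plus the PSNR/SSIM metrics and histogram equalization used for the enhancement step. The synthetic frame and Gaussian point-spread function below are stand-ins; on real GoPro data the true blur kernel is unknown and would have to be estimated.

```python
# Minimal sketch: deblur a (synthetically) blurred frame with two classical
# methods, enhance with histogram equalization, and score with PSNR/SSIM.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure, restoration
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(sharp, candidate):
    """PSNR (dB) and SSIM of a candidate against the ground-truth sharp frame."""
    candidate = np.clip(candidate, 0.0, 1.0)
    return (peak_signal_noise_ratio(sharp, candidate, data_range=1.0),
            structural_similarity(sharp, candidate, data_range=1.0))

# Stand-ins for one GoPro pair: a sharp frame and a Gaussian-blurred copy.
rng = np.random.default_rng(0)
sharp = gaussian_filter(rng.random((128, 128)), sigma=1.5)
blurred = gaussian_filter(sharp, sigma=2.0)

# A Gaussian point-spread function matching the synthetic blur above.
psf = np.zeros((13, 13)); psf[6, 6] = 1.0
psf = gaussian_filter(psf, sigma=2.0); psf /= psf.sum()

wiener = restoration.wiener(blurred, psf, balance=0.05)
lucy = restoration.richardson_lucy(blurred, psf, num_iter=30)
for name, img in [("blurred", blurred), ("Wiener", wiener), ("Lucy-Richardson", lucy)]:
    equalized = exposure.equalize_hist(img)   # post-deblur enhancement step
    print(name, "raw:", score(sharp, img), "equalized:", score(sharp, equalized))
```
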
Appears in Collections: M.E./M.Tech. Information Technology

Files in This Item:
File | Description | Size | Format
MOHIT TANWAR MASTER OF TECHNOLOGY IN IT.pdf | - | 1.63 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.