Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18857
Title: SEMANTIC SEGMENTATION USING CONDITIONAL GAN WITH PERCEPTUAL LOSS
Authors: SOHALIYA, GAURAV
Keywords: SEMANTIC SEGMENTATION
CONDITIONAL GAN
PERCEPTUAL LOSS
CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)
Issue Date: Jul-2021
Publisher: DELHI TECHNOLOGICAL UNIVERSITY
Series/Report no.: TD - 5400;
Abstract: Mapping images to semantic labels is a challenging task in image processing. Convolutional neural networks (CNNs) have achieved state-of-the-art quality in semantic segmentation tasks. Still, the classification capability of such networks is unsatisfactory for images that contain complex object boundaries and very small regions. Recently, Generative Adversarial Networks (GANs) were introduced, which can mitigate overfitting of the generator network through an adversarial loss. In this work, a GAN-based segmentation model is proposed, with the Conditional Generative Adversarial Network (CGAN) as the base architecture. A perceptual loss is introduced into this composite model to improve the identification and classification of visually small elements in images. A pre-trained deep convolutional neural network is used to compute the perceptual loss, yielding improved segmentation masks. The use of the perceptual loss produces high-quality output labels. Evaluation of the proposed model on the Cityscapes dataset shows the effectiveness of GAN-based architectures for semantic segmentation of multi-class images. The proposed model achieved 83.3% accuracy on the test set, surpassing most state-of-the-art semantic segmentation methods.
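The abstract describes combining a CGAN adversarial objective with a perceptual loss computed by a pre-trained deep CNN. The following is a minimal sketch of such a composite generator loss in PyTorch; the record does not specify which pre-trained network, feature layer, or loss weight the thesis uses, so VGG16, the layer cut-off, the L1 feature distance, and the weighting factor below are all assumptions for illustration only.

```python
# Sketch of a perceptual loss for segmentation outputs, assuming a
# pre-trained VGG16 feature extractor (the record only says "a pre-trained
# deep convolutional neural network"; VGG16 and the layer choice are assumptions).
import torch
import torch.nn as nn
from torchvision import models


class PerceptualLoss(nn.Module):
    """L1 distance between feature maps of predicted and target label maps."""

    def __init__(self, feature_layer: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Keep only the first `feature_layer` layers and freeze them.
        self.features = nn.Sequential(*list(vgg.features.children())[:feature_layer])
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred, target: (N, 3, H, W) label maps rendered as 3-channel images.
        return self.criterion(self.features(pred), self.features(target))


# Hypothetical composite generator objective:
# adversarial loss + lambda * perceptual loss (the weight lam is an assumption).
def generator_loss(disc_out_fake, pred_mask, true_mask, perc_loss, lam=10.0):
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_out_fake, torch.ones_like(disc_out_fake))
    return adv + lam * perc_loss(pred_mask, true_mask)
```

In this sketch the discriminator output on generated masks drives the adversarial term, while the frozen feature extractor penalizes differences between predicted and ground-truth label maps, which is one common way to make small regions contribute more strongly to the loss.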
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18857
Appears in Collections:M.E./M.Tech. Information Technology

Files in This Item:
File: Thesis_PDF.pdf (3.94 MB, Adobe PDF)

