Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20781
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | KUMAR, BHAVISHYA | - |
dc.date.accessioned | 2024-08-05T08:51:24Z | - |
dc.date.available | 2024-08-05T08:51:24Z | - |
dc.date.issued | 2024-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20781 | - |
dc.description.abstract | This thesis presents a method for super-resolution of remote sensing images using a hybrid architecture, SwinIPTHybrid, integrated into a Generative Adversarial Network (GAN) framework. With this model, low-resolution aerial images from the UCMerced dataset are enhanced at both local and global scales by combining the structural strengths of Swin Transformers with the global modelling capabilities of Image Processing Transformers (IPT). High resolution in remote sensing imagery is extremely important for applications such as disaster response, urban planning, and environmental monitoring, where better image clarity can significantly increase the dependability and accuracy of analyses. In the remote sensing context, traditional super-resolution techniques, such as interpolation methods, often fall short because they introduce unwanted blurring and artefacts, especially around important features like roads, waterways, and buildings. Advanced deep learning techniques have made progress in recovering detail while suppressing artefacts, but they often fail to strike a balance between the two. In the SwinIPTHybrid model, convolutional layers first expand the features of interest, after which Swin Transformer blocks refine local textural details and structural subtleties; IPT blocks complement this process by synthesising the enhanced features globally and capturing long-range dependencies across the image. Extensive experiments on the UCMerced dataset confirm that the SwinIPTHybrid model delivers the expected improvements in reconstruction quality. Beyond the architectural integration of Swin Transformers and IPT within a GAN framework, the thesis examines the scalability of the approach to a broader range of remote sensing applications. This work advances aerial image restoration by providing a stable solution that can be extended and adapted to different types of remote sensing tasks. (A schematic sketch of the generator pipeline described here follows the metadata fields below.) | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-7299; | - |
dc.subject | SUPER-RESOLUTION | en_US |
dc.subject | GENERATIVE ADVERSARIAL NETWORKS | en_US |
dc.subject | IMAGE QUALITY | en_US |
dc.subject | DEEP LEARNING | en_US |
dc.title | SUPER-RESOLUTION USING GENERATIVE ADVERSARIAL NETWORKS: ENHANCING IMAGE QUALITY WITH DEEP LEARNING | en_US |
dc.type | Thesis | en_US |
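The abstract above describes the generator pipeline only at a high level: convolutional feature expansion, Swin-Transformer-style local refinement, IPT-style global aggregation, and upsampling inside a GAN. The following is a minimal PyTorch sketch of such a pipeline, not the thesis implementation: the class name `SwinIPTHybridGenerator`, the channel widths, the ×4 upscale factor, and the use of `nn.TransformerEncoderLayer` as a stand-in for the actual Swin and IPT blocks are all illustrative assumptions.

```python
# Hypothetical sketch of the generator pipeline summarised in the abstract.
# nn.TransformerEncoderLayer stands in for the real Swin and IPT blocks;
# layer counts, channel widths, and the x4 scale are assumptions, not the thesis code.
import torch
import torch.nn as nn


class SwinIPTHybridGenerator(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 4):
        super().__init__()
        # 1. Convolutional layers expand the low-resolution input into feature maps.
        self.head = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # 2. "Local" transformer stage (Swin-style refinement of textures and structure).
        self.local_stage = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, dim_feedforward=channels * 2, batch_first=True
        )
        # 3. "Global" transformer stage (IPT-style long-range aggregation).
        self.global_stage = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4, dim_feedforward=channels * 2, batch_first=True
        )
        # 4. Pixel-shuffle upsampling to the super-resolved output.
        self.tail = nn.Sequential(
            nn.Conv2d(channels, channels * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, lr_image: torch.Tensor) -> torch.Tensor:
        feats = self.head(lr_image)                # (B, C, H, W)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.local_stage(tokens)          # local refinement
        tokens = self.global_stage(tokens)         # global aggregation
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.tail(feats)                    # (B, 3, H*scale, W*scale)


if __name__ == "__main__":
    # Smoke test on a random 64x64 "low-resolution" patch.
    gen = SwinIPTHybridGenerator()
    sr = gen(torch.randn(1, 3, 64, 64))
    print(sr.shape)  # torch.Size([1, 3, 256, 256])
```

In the full GAN setup described in the abstract, a discriminator and a combination of adversarial and content losses would train this generator; only the forward pass is sketched here.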
Appears in Collections: M.E./M.Tech. Computer Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
Bhavishya Kumar M.Tech.pdf | | 2.69 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.