Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/22537
Title: PREDICTION AND DETECTION OF FLOOD USING ARTIFICIAL INTELLIGENCE
Authors: DUBEY, VINAY
Keywords: PREDICTION
DETECTION OF FLOOD
ARTIFICIAL INTELLIGENCE
CNN
Issue Date: Dec-2025
Series/Report no.: TD-8442;
Abstract: Floods remain one of the most catastrophic and recurrent natural disasters worldwide, causing immense economic losses, displacement of populations, infrastructure degradation, and significant environmental disruption. As climate variability intensifies and urbanization accelerates, the need for intelligent, efficient, and timely flood prediction and management solutions becomes ever more critical. This research addresses these challenges by designing a series of novel, AI-driven models for flood detection, classification, forecasting, and image enhancement, thereby supporting real-time disaster response and long-term urban resilience planning.

The research proposes four key contributions, each targeting a specific aspect of the flood problem. The first model, Flood-FireNet, uses the Adaptive Firefly Algorithm (AFA) to optimize feature selection and combines it with a Transformer-based architecture to improve satellite image-based flood classification; it achieved an accuracy of 97.85%, precision of 98.21%, and F1-score of 97.65%, outperforming conventional deep learning models. The second contribution, MoSWIN, integrates Monkey Search Optimization (MSO) with a SWIN Transformer to enhance classification by capturing hierarchical spatial relationships in flood images, achieving a classification accuracy of 96.53% with strong robustness under noisy conditions. The third contribution, the FloodCNN-BiLSTM model, is a hybrid deep learning framework for flood forecasting from environmental sensor data: CNN layers extract spatial features while BiLSTM layers capture temporal dependencies, enabling accurate urban flood prediction with an F1-score exceeding 96.5% on benchmark datasets. The fourth model, SSR-GAN, introduces a super-resolution GAN framework that enhances low-quality SAR images; by raising PSNR and SSIM and reducing MSE, it enables clearer flood-zone delineation, especially in disaster-struck regions where high-resolution data may be unavailable.

These models were rigorously evaluated using multiple performance metrics, including accuracy, recall, precision, F1-score, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Squared Error (MSE). Ablation studies and statistical validation tests, such as paired t-tests and ANOVA, further confirmed the effectiveness and generalizability of the proposed frameworks. The results indicate that the integration of evolutionary optimization algorithms, Transformer-based architectures, and GANs substantially improves the system's ability to detect and predict floods across diverse scenarios.

Beyond its academic contribution, the research offers substantial industrial and societal impact. Applications include integration into smart city surveillance systems, automated flood insurance damage assessment, early warning and disaster response platforms, urban planning for flood mitigation, and real-time remote sensing analysis. The work also directly supports several United Nations Sustainable Development Goals (SDGs), notably SDG 11 (Sustainable Cities), SDG 13 (Climate Action), SDG 9 (Industry and Innovation), SDG 6 (Clean Water), and SDG 3 (Good Health). By bridging the gap between advanced AI methodologies and real-world flood disaster management, this thesis contributes a comprehensive, scalable, and intelligent solution for building climate-resilient infrastructure in flood-prone regions.
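For reference, the image-quality metrics reported for SSR-GAN (MSE, PSNR, SSIM) can be computed as in the minimal Python sketch below. This is an illustrative assumption rather than the thesis code: an 8-bit data range is assumed and SSIM is delegated to scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity  # standard SSIM implementation

def mse(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Mean Squared Error between a reference image and an enhanced image."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher is better."""
    err = mse(reference, estimate)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / err)

def ssim(reference: np.ndarray, estimate: np.ndarray, data_range: float = 255.0) -> float:
    """Structural Similarity Index; values near 1 indicate similar structure."""
    return float(structural_similarity(reference, estimate, data_range=data_range))

if __name__ == "__main__":
    # Synthetic 8-bit grayscale example standing in for a SAR flood scene.
    rng = np.random.default_rng(0)
    ground_truth = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    noisy = np.clip(ground_truth + rng.normal(0, 10, size=(128, 128)), 0, 255).astype(np.uint8)
    print(f"MSE:  {mse(ground_truth, noisy):.2f}")
    print(f"PSNR: {psnr(ground_truth, noisy):.2f} dB")
    print(f"SSIM: {ssim(ground_truth, noisy):.4f}")
```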
Overall, this study presents a unified, intelligent system that combines environmental data analysis, image processing, and artificial intelligence to predict and detect floods more accurately and efficiently. The proposed approach not only strengthens disaster response mechanisms but also contributes to sustainable risk management practices. The framework developed in this research has the potential to be adapted for other natural disaster applications, marking a significant step forward in the use of AI for environmental monitoring and public safety.

Objectives: The objectives of this study are structured into four key segments:
• The first objective is to develop a model for flood assessment from environmental parameters, aiming at an efficient flood classification model.
• The second objective is to design a flood detection technique using Artificial Intelligence, aiming to improve the accuracy and efficiency of flood detection.
• The third objective is to improve flood detection by enhancing flood images, so that better image quality supports a stronger flood detection model.
• The final objective is to perform a comparative analysis of the proposed work against existing work.

Methodology: To accomplish the stated objectives, this study leverages advanced machine learning and deep learning methods, such as nature-inspired algorithms, neural networks, attention mechanisms, and transformer-based architectures, owing to their significant potential for addressing complex challenges in flood assessment on datasets of environmental parameters and flood images. A Generative Adversarial Network (GAN)-based super-resolution technique is employed to enhance low-quality flood images, improving their clarity and detail for more accurate detection and analysis. The strategies employed to meet these objectives are as follows:
• To accomplish the first objective, the proposed hybrid model integrates Convolutional Neural Networks (CNN) with Bidirectional Long Short-Term Memory (BiLSTM) networks within a transfer learning framework. This combination captures both spatial and temporal features from environmental data, enabling accurate multi-class classification, and outperforms advanced existing benchmark methods (an illustrative sketch of this arrangement follows this section).
• For the second objective, two flood detection models were developed, each combining a different nature-inspired approach with a transformer. The first, Flood-FireNet, is a transformer-based model enhanced by a nature-inspired optimization strategy for distinguishing flooded from non-flooded regions. The second, MoSWIN, classifies flooded and non-flooded regions by integrating the Monkey Search Optimization (MSO) algorithm for effective feature extraction with the SWIN Transformer for deep learning-based classification.
• To address the third objective, a novel super-resolution approach using generative adversarial networks (GAN) is proposed to enhance satellite flood images. Image generation is optimized with a perceptual loss computed from VGG Net's intermediate feature maps, guiding the model to minimize perceptual differences between generated and target images and yielding more visually accurate enhancements.
• For the fourth objective, a comparative analysis was conducted, evaluating the performance of the above models against existing flood assessment and detection techniques. Key performance metrics, such as accuracy, sensitivity, specificity, F1-score, MSE, RMSE, PSNR, and SSIM, were used to compare the effectiveness of the proposed models with current state-of-the-art methods.
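The CNN + BiLSTM arrangement described in the first methodology point can be sketched as follows. This Keras example is only an illustration under assumed settings: the window length (48 time steps), the number of environmental parameters (8), the number of flood-risk classes (3), and all layer sizes are placeholders, and the transfer learning aspect mentioned above is not shown.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed input: sliding windows of 48 time steps over 8 environmental
# parameters (e.g. rainfall, water level, humidity); 3 flood-risk classes.
TIME_STEPS, NUM_FEATURES, NUM_CLASSES = 48, 8, 3

def build_flood_cnn_bilstm() -> tf.keras.Model:
    """Illustrative CNN + BiLSTM classifier for windowed sensor readings."""
    inputs = layers.Input(shape=(TIME_STEPS, NUM_FEATURES))
    # 1D convolutions extract local patterns across neighbouring time steps.
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
    # A bidirectional LSTM models longer-range temporal dependencies in both directions.
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_flood_cnn_bilstm()
model.summary()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)  # with your own data
```

In practice, the convolutional front end could be pre-trained on a related sensor dataset and then fine-tuned on the target region, which is one common way to realise the transfer learning framework the methodology refers to.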
Results: The outcomes of the study are as follows:
• The integration of CNN and BiLSTM within a transfer learning framework achieves high accuracy for multi-class flood classification from environmental parameters, outperforming several advanced benchmark models.
• The Flood-FireNet model, which combines a transformer architecture, attention mechanisms, and the AFA, demonstrates superior performance on flood image datasets compared to existing deep learning models.
• The Adaptive Firefly Algorithm (AFA) successfully extracts rich, high-level features from flood images, significantly improving classification performance and generalization by minimizing overfitting.
• The proposed MoSWIN model effectively integrates Monkey Search Optimization with the SWIN Transformer, enabling the extraction of hierarchical and discriminative features and achieving significantly better accuracy, precision, and recall than models such as ResNet and the Vision Transformer.
• Across all proposed models, the use of nature-inspired optimization (MSO, AFA), attention mechanisms, and image enhancement techniques collectively reduces overfitting, leading to improved generalization on unseen data.
• The integration of deep learning and optimization algorithms enables the models to uncover complex spatial and visual patterns in flood-affected regions, aiding more reliable classification and prediction.
• The GAN-based super-resolution approach improves the visual and structural quality of satellite flood images, restoring fine details and outperforming traditional upscaling methods through perceptual loss optimization.
• Image enhancement techniques such as Histogram Equalization (HE) and Adaptive Histogram Equalization (AHE) effectively improve the visibility and quality of SAR flood images, aiding more accurate flood detection and classification (a minimal sketch of both operations follows this list).
• Experimental results and performance comparisons show that the proposed models consistently outperform standard architectures such as ResNet, the Vision Transformer, and baseline CNNs across all key metrics.
• The proposed techniques, especially MoSWIN and Flood-FireNet, offer a scalable framework adaptable to different flood datasets and regions, making them suitable for real-world deployment in disaster management systems.
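The Histogram Equalization (HE) and Adaptive Histogram Equalization (AHE) steps credited above with improving SAR image visibility can be illustrated with OpenCV as below. The file name, clip limit, and tile size are assumed values, and CLAHE is used here as a representative AHE variant rather than the exact routine from the thesis.

```python
import cv2

# Load a SAR flood image as a single-channel (grayscale) 8-bit array.
sar = cv2.imread("sar_flood_scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Global Histogram Equalization: spreads the intensity histogram over the full range.
he_enhanced = cv2.equalizeHist(sar)

# Contrast-Limited Adaptive Histogram Equalization (CLAHE), a common AHE variant:
# equalizes small tiles independently so local contrast is boosted without
# over-amplifying noise in homogeneous (e.g. open-water) regions.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
ahe_enhanced = clahe.apply(sar)

cv2.imwrite("sar_he.png", he_enhanced)
cv2.imwrite("sar_ahe.png", ahe_enhanced)
```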
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/22537
Appears in Collections:Ph.D. Computer Engineering

Files in This Item:
File                    Size      Format
VINAY DUBEY Ph.D..pdf   7.08 MB   Adobe PDF
VINAY DUBEY Plag..pdf   7.17 MB   Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.