Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/20963
Title: | EFFICIENT TECHNIQUES FOR CLASSIFICATION OF MEDICAL IMAGES |
Authors: | RAUTELA, KAMAKSHI |
Keywords: | MEDICAL IMAGES; BREAST CANCER; CLASSIFICATION; DARTS; ULTRASOUND IMAGES; CNN; AI |
Issue Date: | Aug-2024 |
Series/Report no.: | TD-7498; |
Abstract: | Breast cancer is a major global health concern, and early detection can dramatically improve patient prognosis. Recent advances in screening methods and technologies have boosted the precision and efficacy of detecting and characterizing breast cancer. This study provides a thorough examination of reliable screening methods, such as ultrasound, mammography, MRI, and thermography, for the detection and classification of breast cancer. Numerous artificial intelligence (AI) and computational techniques are investigated to improve the efficiency of screening procedures. Convolutional neural networks (CNNs), a type of deep learning algorithm, have shown remarkable promise for automatically classifying and detecting breast cancer in medical images. Combining AI with screening methods can decrease human error, increase diagnostic precision, and speed up the detection of malignant cases. This work highlights the development of new strategies for detecting and classifying breast cancer, the significance of ongoing research and collaboration between the medical and technical communities to enhance existing screening methods, and the importance of stringent validation and regulatory compliance to ensure the safe and efficient adoption of these technologies in clinical practice. Together, these considerations emphasize the need for an efficient preprocessing and enhancement strategy.

To better detect cancer, multimodal image fusion is explored: it offers a wide range of visual qualities for precise medical diagnosis, but it requires precise registration of all the image modalities involved. To address this, a new method is proposed for building synthetic mammograms from a multimodal medical dataset. Image quality is first improved using an image enhancement technique, and the thermal image segment is then converted into a mammogram using a mapping function based on dual-modality structural features (DMSF). This study also proposes a modified Differentiable ARchiTecture Search (DARTS), termed U-DARTS, to further aid in the detection and classification of breast lesions; U-DARTS makes use of a stochastic gradient descent optimizer. The proposed method is evaluated on both the DMR and INbreast datasets and outperforms the currently used methods by a wide margin, attaining accuracy levels of 98% in validation and 91% in testing. The proposed approach is unrivaled for creating mammograms and subsequently detecting lesions.

The results of synthetic mammograms created from a mammography-thermography multimodal dataset inspire the concept of fusing datasets from two different modalities: by combining the images, more specific information about the tumor's location can be gleaned. However, when two images from different modalities are fused, the output image may exhibit spectral variations, which makes it challenging to use in the medical field. Multimodal image fusion therefore remains a crucial research topic: it has been demonstrated to produce high-quality results for healthcare diagnostics and treatment, yet the distortion of spectral information in the fused image has always made it difficult to apply in medicine.
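Before turning to the fusion method itself, the U-DARTS search mentioned above can be made concrete. DARTS relaxes the discrete choice among candidate operations on each edge of a cell into a softmax-weighted mixture, so the architecture weights become differentiable and can be learned by gradient descent. The PyTorch sketch below shows that relaxation with the architecture weights updated by plain SGD, as the abstract indicates; the particular candidate operations, channel count, and learning rate are illustrative assumptions, not details of U-DARTS itself.

```python
# Minimal sketch of the continuous relaxation at the heart of DARTS, on which
# the U-DARTS variant builds. The candidate operations, channel count, and
# learning rate below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Weighted sum of candidate operations on one edge of the cell graph."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 convolution
            nn.MaxPool2d(3, stride=1, padding=1),         # max pooling
            nn.Identity(),                                # skip connection
        ])
        # One architecture weight (alpha) per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Softmax turns the discrete operation choice into a differentiable
        # mixture, so the alphas can be learned jointly with the weights.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=16)
# Per the abstract, the search uses a stochastic gradient descent optimizer;
# here the architecture weights are updated with plain SGD.
arch_opt = torch.optim.SGD([edge.alpha], lr=0.01)
loss = edge(torch.randn(1, 16, 32, 32)).mean()
loss.backward()
arch_opt.step()
```

After the search converges, the strongest operation per edge (the largest alpha) is kept to derive the final discrete architecture.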
In this work, a Super-Pixel Segmentation with Advanced Wavelet Transformation (SPS-AWT) method is proposed to combine breast cancer images taken in different settings and at different times. Discrete wavelet transformation (DWT) is used to combine spectral and spatial information from the mammographic and thermal images, and the obtained coefficients are grouped by super-pixel segmentation into patches of pixels with similar visual characteristics. The effectiveness of the proposed fusion method is measured on a standard dataset, and the images fused using the proposed method are of high quality.

Image enhancement in the ultrasound modality, however, is a challenging task. Ultrasound images contain noise that is visible as dots, and shadows that appear as tissue-like textures, which makes the images hard to interpret. To make breast ultrasound images more informative, a method is proposed that combines Active Contours and Texture Feature Vectors to find discriminative patterns. Combining the two learning models yields a comprehensive set of discriminative features for cancer detection in ultrasound images. The Breast Ultrasound Images dataset is used to evaluate the proposed method against recently developed algorithms, and experimental results reveal that it outperforms the existing algorithms in terms of accuracy, recall, precision, Jaccard index, and F1 score.

Next, a deep learning model with a modified transformer is proposed for breast lesion detection, so that the preprocessed and enhanced medical images can be classified efficiently. The model draws on the benefits of residual convolutional networks and a multilayer perceptron (MLP)-based transformer: a supporting residual deep learning network generates the deep features, and the transformer classifies breast cancer using self- and cross-attention mechanisms. The proposed model is effective at detecting breast cancer in both the basic (3-stage) and multi-classification (5-stage) settings. Data collection, preprocessing, patch creation, and the lesion identification stage all adhere to the same framework. Positive evaluation results are obtained on the INbreast mammograms, with the basic and multi-class approaches achieving accuracies of 98.17% and 96.74%, respectively. The experimental results demonstrate that the proposed model can differentiate between cancerous, noncancerous, and benign breast tissues, and the modified transformer shows promising results in evaluating multiple classes of cancer.

In the next step, deep neural networks and thermographic images are used to create a real-time solution for diagnosing breast cancer, for which two experiments are performed. In the first, thermal imaging is used for breast cancer detection: a memory-efficient network is applied to the entire image to determine where the most relevant information is likely to be found, and the thermal images are then passed through a relatively deep CNN to extract relevant information. This model achieves an accuracy of 92.52%. When it comes to modeling dependencies, particularly long-range ones such as those required for accurately recognizing corresponding breast lesion features, CNNs typically perform poorly due to the inherent locality of the convolution operation.
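To make the wavelet-fusion step concrete, the sketch below performs a one-level DWT fusion of a registered mammogram/thermal pair using the PyWavelets library: the approximation bands are averaged and, per location, the larger-magnitude detail coefficient is kept. The fusion rules and the 'db2' wavelet are illustrative assumptions; the advanced wavelet transformation and super-pixel grouping of SPS-AWT are not reproduced here.

```python
# Minimal sketch of DWT-based image fusion, assuming the two inputs are
# already co-registered and the same size. Fusion rules are illustrative.
import numpy as np
import pywt

def dwt_fuse(mammo, thermal, wavelet="db2"):
    """Fuse two same-sized grayscale images via a one-level 2-D DWT."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(mammo.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(thermal.astype(float), wavelet)
    # Average the approximation band (coarse spectral content); keep the
    # detail coefficient with the larger magnitude (spatial edges).
    fused_A = 0.5 * (cA1 + cA2)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    coeffs = (fused_A, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(coeffs, wavelet)

mammo = np.random.rand(256, 256)    # stand-ins for a registered image pair
thermal = np.random.rand(256, 256)
fused = dwt_fuse(mammo, thermal)
print(fused.shape)                  # (256, 256)
```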
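The ultrasound step similarly pairs a boundary model with texture descriptors. The scikit-image sketch below evolves an active contour from a circular initialization and then extracts GLCM texture properties from the enclosing patch; the random stand-in image, snake parameters, and the four chosen properties are illustrative assumptions rather than the thesis's exact feature set.

```python
# Minimal sketch of active contour + texture features, under the assumptions
# stated above. A real B-mode ultrasound image would replace the stand-in.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image

# 1) Evolve a snake from a circular initialization toward lesion edges.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([64 + 40 * np.sin(theta), 64 + 40 * np.cos(theta)])
snake = active_contour(gaussian(image, sigma=3, preserve_range=True),
                       init, alpha=0.015, beta=10, gamma=0.001)

# 2) Extract GLCM texture features from the patch enclosing the contour.
rmin, cmin = snake.min(axis=0).astype(int).clip(0)
rmax, cmax = snake.max(axis=0).astype(int)
patch = image[rmin:rmax, cmin:cmax]
glcm = graycomatrix(patch, distances=[1], angles=[0],
                    levels=256, symmetric=True)
features = [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]
print(features)  # texture feature vector for a downstream classifier
```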
To overcome the inherent locality of convolutions, a Vision Transformer block is used in conjunction with VGG19, producing a model that integrates global and local features. The model is trained separately on the Database for Mastology Research (DMR) and INbreast datasets using transfer learning, with 80% of each dataset used for training and 20% for testing. The network is trained with a learning rate of 0.01, a batch size of 50, and 100 epochs. Test accuracies of 98% and 89.9% are achieved on the INbreast and DMR datasets, respectively. |
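A minimal PyTorch sketch of such a hybrid is shown below: frozen VGG19 convolutional features supply local detail, and a Transformer encoder layer attends across the resulting feature tokens to capture long-range dependencies. The embedding size, head count, two-class output, and the use of SGD are illustrative assumptions; the learning rate and batch size follow the abstract.

```python
# Minimal sketch of a VGG19 + Transformer hybrid, under the assumptions
# stated above. Hyperparameters lr=0.01 and batch=50 follow the abstract.
import torch
import torch.nn as nn
from torchvision import models

class VGGViTHybrid(nn.Module):
    def __init__(self, num_classes=2, dim=512, heads=8):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        self.backbone = vgg.features          # pretrained local-feature extractor
        for p in self.backbone.parameters():  # frozen for transfer learning
            p.requires_grad = False
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)                   # (B, 512, H/32, W/32)
        tokens = f.flatten(2).transpose(1, 2)  # (B, N, 512): token per location
        g = self.encoder(tokens).mean(dim=1)   # global self-attention, then pool
        return self.head(g)

model = VGGViTHybrid()
opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)
x = torch.randn(50, 3, 224, 224)               # one batch of 50 images
print(model(x).shape)                          # torch.Size([50, 2])
```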
URI: | http://dspace.dtu.ac.in:8080/jspui/handle/repository/20963 |
Appears in Collections: | Ph.D. Electronics & Communication Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Kamakshi Rautela pH.d..pdf | | 4.77 MB | Adobe PDF | View/Open |