<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>DSpace Collection:</title>
  <link rel="alternate" href="http://dspace.dtu.ac.in:8080/jspui/handle/123456789/99" />
  <subtitle />
  <id>http://dspace.dtu.ac.in:8080/jspui/handle/123456789/99</id>
  <updated>2026-04-28T04:03:52Z</updated>
  <dc:date>2026-04-28T04:03:52Z</dc:date>
  <entry>
    <title>ENHANCEMENT OF REVERSIBLE IMAGE STEGANOGRAPHY AND OPTIMIZATION OF QUANTUM IMAGE REPRESENTATION USING THE NEQR MODEL</title>
    <link rel="alternate" href="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22163" />
    <author>
      <name>SINGH, SUMITRA</name>
    </author>
    <id>http://dspace.dtu.ac.in:8080/jspui/handle/repository/22163</id>
    <updated>2025-09-02T06:36:46Z</updated>
    <published>2025-05-01T00:00:00Z</published>
    <summary type="text">Title: ENHANCEMENT OF REVERSIBLE IMAGE STEGANOGRAPHY AND OPTIMIZATION OF QUANTUM IMAGE REPRESENTATION USING THE NEQR MODEL
Authors: SINGH, SUMITRA
Abstract: Reversible steganography allows for exact reconstruction of the cover media after&#xD;
hidden data extraction, making it vital for applications such as content authentication,&#xD;
medical imaging, and military communications. Various reversible steganography&#xD;
techniques include histogram shifting, image interpolation, and difference expansion.&#xD;
Histogram shifting methods apply shifting to pixel-domain histograms or prediction&#xD;
error histograms. Prediction error histogram methods offer higher embedding capacity,&#xD;
but they are more complex, lack a guaranteed lower bound on PSNR, and are more&#xD;
susceptible to histogram-based steganalysis. Pixel-domain histogram shifting&#xD;
techniques, though simpler and more efficient with a theoretical PSNR bound,&#xD;
generally have lower embedding capacity.&#xD;
In this project, experiments are conducted on pixel-domain histogram shifting-&#xD;
based techniques. The capacity and histogram for varying numbers of non-overlapping&#xD;
image blocks and histogram blocks are analyzed. Experimental results show that&#xD;
embedding in image blocks does not significantly enhance the capacity compared to&#xD;
embedding in histogram blocks. Analysis of histogram blocks shows that embedding&#xD;
in two blocks yields the optimal results. A method is developed for making histogram&#xD;
shifting adaptive to payload size, and a two-layer embedding is developed for improved&#xD;
hiding capacity. Compared to previous methods, the two-layer embedding achieves&#xD;
higher capacity, better resistance to steganalysis, and a PSNR that remains acceptable&#xD;
for real-world applications.&#xD;
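As a simplified illustration (an assumption-laden sketch, not the exact method developed in this thesis), pixel-domain histogram shifting and its reversible extraction can be emulated on a flat list of 8-bit pixels:

```python
# Illustrative sketch of pixel-domain histogram shifting (simplified: assumes
# the rarest gray level is truly an empty "zero" bin, so no overflow handling).
def hs_embed(pixels, bits):
    """Embed bits at the histogram peak; returns stego pixels plus (peak, zero)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    peak = max(range(256), key=hist.__getitem__)   # most frequent gray level
    zero = min(range(256), key=hist.__getitem__)   # an empty gray level
    assert hist[zero] == 0 and len(bits) in range(hist[peak] + 1)
    step = (zero - peak) // abs(zero - peak)       # shift direction, toward the zero bin
    bits = list(bits) + [0] * (hist[peak] - len(bits))   # pad to full capacity
    it = iter(bits)
    stego = []
    for p in pixels:
        if p == peak:                              # carrier pixel: encode one bit
            stego.append(p + step * next(it))
        elif p in range(min(peak, zero) + 1, max(peak, zero)):
            stego.append(p + step)                 # shift to free the bin beside the peak
        else:
            stego.append(p)
    return stego, peak, zero

def hs_extract(stego, peak, zero):
    """Recover the embedded bits and restore the exact cover pixels."""
    step = (zero - peak) // abs(zero - peak)
    bits, cover = [], []
    for p in stego:
        if p == peak:
            bits.append(0); cover.append(peak)
        elif p == peak + step:
            bits.append(1); cover.append(peak)
        elif (p - step) in range(min(peak, zero) + 1, max(peak, zero)):
            cover.append(p - step)                 # undo the shift
        else:
            cover.append(p)
    return bits, cover
```

Extraction reverses both the embedded bits and the shift, so the cover is restored exactly, which is the reversibility property described above.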
Quantum computing is an advancing field that offers significant speed advantages for&#xD;
certain computational tasks over classical computing. Notable examples include&#xD;
Shor’s algorithm, which efficiently solves integer factorization and discrete logarithm&#xD;
problems, and Grover’s algorithm, which accelerates the search process in&#xD;
unstructured databases.&#xD;
Quantum computing is based on quantum arithmetic operations where addition forms&#xD;
the core of all operations, as subtraction, multiplication, exponentiation, and division&#xD;
can all be reduced to repeated or modified forms of addition. Experiments are&#xD;
conducted for performance analysis of quantum addition on quantum hardware.&#xD;
Development of quantum circuits for addition and comparison, including half adders,&#xD;
full adders, Toffoli-based adders, QFT-based adders (utilizing the Quantum Fourier&#xD;
Transform), and quantum comparators is carried out using IBM Qiskit. The circuits&#xD;
are first validated on ideal simulators to confirm correctness, followed by testing on&#xD;
noisy simulators to emulate real quantum hardware conditions. Final execution is&#xD;
carried out on IBM's Eagle 127-qubit Quantum Processing Unit (QPU). Results show&#xD;
that computation accuracy on actual hardware is limited by physical constraints such&#xD;
as short qubit coherence times and instability. A performance comparison shows that&#xD;
Toffoli-based adders outperform QFT-based adders in terms of accuracy, making them&#xD;
more reliable for precise arithmetic computations.&#xD;
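As background intuition (a classical bit-level emulation, not the actual Qiskit circuits built in this thesis), the gate logic of a Toffoli-based ripple-carry adder can be traced as follows, with CNOT acting as XOR and Toffoli flipping its target only when both controls are 1:

```python
# Classical emulation of the reversible gates used in a Toffoli-based adder
# (illustrative sketch only; the thesis builds the actual circuits in Qiskit).
def cnot(ctrl, tgt):
    """CNOT: flip the target bit when the control bit is 1 (XOR)."""
    return tgt ^ ctrl

def toffoli(c1, c2, tgt):
    """Toffoli (CCX): flip the target only when both controls are 1."""
    return tgt ^ (c1 * c2)

def full_adder(a, b, cin):
    """One reversible full-adder stage built from Toffoli and CNOT gates."""
    carry = 0
    carry = toffoli(a, b, carry)      # carry picks up a AND b
    s = cnot(a, b)                    # b becomes a XOR b
    carry = toffoli(s, cin, carry)    # carry picks up (a XOR b) AND cin
    s = cnot(cin, s)                  # sum = a XOR b XOR cin
    return s, carry

def add(x, y, n):
    """Ripple-carry addition of two n-bit numbers, least significant bit first."""
    carry, total = 0, 0
    for k in range(n):
        xa = (x // 2**k) % 2          # k-th bit of x
        yb = (y // 2**k) % 2          # k-th bit of y
        s, carry = full_adder(xa, yb, carry)
        total += s * 2**k
    return total + carry * 2**n
```

On real hardware each Toffoli and CNOT is a physical gate, so circuit depth, and with it decoherence, grows with the number of such stages.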
Quantum image representation provides exponential efficiency in image storage and&#xD;
processing. It relies on the fundamental principles of superposition and entanglement.&#xD;
NEQR (Novel Enhanced Quantum Representation) is a lossless encoding method used&#xD;
to represent digital images on a quantum computer. It is widely applicable in domains&#xD;
such as quantum machine learning, image steganography, and quantum image&#xD;
analysis.&#xD;
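For intuition about bit-plane encoding (a classical sketch of the data layout, not the quantum circuit), an 8-bit image can be split into the eight binary planes that NEQR assigns to its intensity qubits:

```python
# Classical sketch of the bit-plane view underlying NEQR: each 8-bit pixel
# value is split into eight binary planes, one per intensity qubit.
def bit_planes(image):
    """image: 2-D list of 8-bit gray values; returns eight planes,
    index 0 holding the least significant bit of every pixel."""
    planes = []
    for k in range(8):
        plane = [[(pix // 2**k) % 2 for pix in row] for row in image]
        planes.append(plane)
    return planes

def reassemble(planes):
    """Inverse: recombine the eight planes into the original gray values."""
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[k][r][c] * 2**k for k in range(8)) for c in range(cols)]
            for r in range(rows)]
```

Because the planes are independent, their encoding circuits can in principle be constructed in parallel, which is the idea behind the depth reduction discussed below.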
This work introduces two enhancements to the NEQR framework: (1) Optimizing the&#xD;
decomposition of Multi-Controlled NOT (MCX) gates into Toffoli gates, and (2)&#xD;
Parallelizing NEQR through parallel bit-plane encoding, where the&#xD;
NEQR circuit is constructed simultaneously for each of the eight bit-planes of an&#xD;
image, thereby reducing overall circuit depth. Experimental results demonstrate that&#xD;
these enhancements lead to reduced circuit depth and faster execution, thereby&#xD;
mitigating decoherence-related errors. Additionally, quantum image processing&#xD;
operations that demonstrate exponential speedup over classical approaches — such as&#xD;
image negation, rotation, and intensity superposition — are also implemented and&#xD;
evaluated as part of this work.</summary>
    <dc:date>2025-05-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>A STUDY ON DEEP LEARNING AND TRANSFORMER BASED MODELS FOR HAND GESTURE AND ACTION RECOGNITION</title>
    <link rel="alternate" href="http://dspace.dtu.ac.in:8080/jspui/handle/repository/21847" />
    <author>
      <name>SUTTY, SAHIL</name>
    </author>
    <id>http://dspace.dtu.ac.in:8080/jspui/handle/repository/21847</id>
    <updated>2025-07-08T08:48:56Z</updated>
    <published>2025-06-01T00:00:00Z</published>
    <summary type="text">Title: A STUDY ON DEEP LEARNING AND TRANSFORMER BASED MODELS FOR HAND GESTURE AND ACTION RECOGNITION
Authors: SUTTY, SAHIL
Abstract: As fundamental technologies in the evolution of human-computer interaction (HCI), hand&#xD;
gesture and human action recognition enable more natural, intuitive, and accessible&#xD;
interfaces across sectors including assistive technologies, robotics, virtual reality, and&#xD;
surveillance. Using the MSRA Hand Gesture Dataset and the UCF101 Dataset, this&#xD;
paper presents a thorough comparative analysis of state-of-the-art deep learning and&#xD;
transformer-based models for hand gesture recognition and human action recognition.&#xD;
Comprising 76,500 depth images distributed over 17 gesture classes, the MSRA Hand&#xD;
Gesture Dataset offers a strong basis for spatial feature extraction. ResNet101 obtained&#xD;
the highest F1-score (0.9978) among all architectures, closely followed by DenseNet169&#xD;
(0.9919) and DenseNet201 (0.9901). MobileNetV2 demonstrated a good balance between&#xD;
computational efficiency and accuracy with an F1-score of 0.9847; VGG variants lagged&#xD;
since they lacked sophisticated architectural elements.&#xD;
Human action recognition using the UCF101 dataset—which consists of over 13,000&#xD;
video clips across 101 action categories—was restricted to the 50 most frequent&#xD;
classes to ensure computational feasibility and class balance. With an F1-score of 0.9997,&#xD;
transformer-based models, particularly ViT Tiny Patch, surpassed even the deepest CNNs.&#xD;
While MobileNetV2 again demonstrated efficiency in resource-constrained settings, VGG16bn’s&#xD;
performance revealed the limits of older CNN architectures for demanding tasks.&#xD;
The results underline how architectural innovations including residual connections,&#xD;
dense connectivity, and attention mechanisms help to raise recognition accuracy and&#xD;
computational efficiency. The paper argues that transformer-based models are redefining&#xD;
benchmarks even as deep CNNs remain strong candidates. In particular, exploring hybrid&#xD;
CNN-transformer designs, explicit temporal modeling, and advanced augmentation&#xD;
techniques can further improve recognition performance in practical settings.</summary>
    <dc:date>2025-06-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>FISH SPECIES CLASSIFICATION USING MACHINE LEARNING AND DEEP LEARNING MODELS</title>
    <link rel="alternate" href="http://dspace.dtu.ac.in:8080/jspui/handle/repository/21833" />
    <author>
      <name>MEHEVI, AZIZ</name>
    </author>
    <id>http://dspace.dtu.ac.in:8080/jspui/handle/repository/21833</id>
    <updated>2025-07-08T08:47:12Z</updated>
    <published>2025-05-01T00:00:00Z</published>
    <summary type="text">Title: FISH SPECIES CLASSIFICATION USING MACHINE LEARNING AND DEEP LEARNING MODELS
Authors: MEHEVI, AZIZ
Abstract: Fish species classification is fundamental to ecological monitoring, biodiversity&#xD;
conservation, and sustainable fishery management. Conventionally, fish species are&#xD;
identified through manual observation by domain experts, which is costly in time and&#xD;
labor and scales poorly. This study provides broad coverage of the fish species&#xD;
classification problem, focusing on both traditional&#xD;
machine learning (ML) models and deep learning (DL) architectures. A custom dataset&#xD;
of underwater fish images was assembled, containing images of nine fish species captured&#xD;
under varying environmental conditions, for the training and testing&#xD;
of several models. The ML models discussed include Support Vector Machines (SVM),&#xD;
Random Forests (RF), Decision Trees (DT), Logistic Regression (LR), and Naive Bayes&#xD;
(NB); each was evaluated with two dimensionality reduction techniques, namely Principal&#xD;
Component Analysis (PCA) and Linear Discriminant Analysis (LDA). These were compared&#xD;
against DL models including VGG-19, DenseNet121, EfficientNet B0, Inception V3,&#xD;
ResNet150 V2, and LSTMs, with evaluations conducted for accuracy, precision, recall,&#xD;
and F1-score. The experimental results demonstrated that DL models significantly&#xD;
outperformed conventional ML algorithms in classification accuracy and in handling&#xD;
variability in images. VGG-19 attained 99.4% overall accuracy; DenseNet121 and&#xD;
EfficientNet B0 followed closely and are considered fit for deployment in real-world fish&#xD;
classification systems. Image preprocessing, normalization, and data augmentation were&#xD;
deemed critical in improving model performance. This study highlights the potential of&#xD;
deep learning to automate fish species recognition with high accuracy under difficult&#xD;
underwater conditions. The present findings open several avenues for real-time marine&#xD;
surveillance, automated ecological data analysis, and smart decision support systems for&#xD;
marine biologists and conservationists.</summary>
    <dc:date>2025-05-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>COMPREHENSIVE STUDY OF DEEP LEARNING-BASED SUPER-RESOLUTION WITH EMPHASIS ON GANS</title>
    <link rel="alternate" href="http://dspace.dtu.ac.in:8080/jspui/handle/repository/21830" />
    <author>
      <name>SHUKLA, GAURAV</name>
    </author>
    <id>http://dspace.dtu.ac.in:8080/jspui/handle/repository/21830</id>
    <updated>2025-07-08T08:46:52Z</updated>
    <published>2025-05-01T00:00:00Z</published>
    <summary type="text">Title: COMPREHENSIVE STUDY OF DEEP LEARNING-BASED SUPER-RESOLUTION WITH EMPHASIS ON GANS
Authors: SHUKLA, GAURAV
Abstract: Image super-resolution using Generative Adversarial Networks (GANs) has been&#xD;
extensively researched in recent years due to its ability to recover high-perceptual-quality&#xD;
high-resolution images from low-resolution inputs. Various GAN-based methods have&#xD;
been proposed over the years, which employ different architectures and loss functions to&#xD;
increase the fidelity and realism of output images. This work integrates these developments&#xD;
and investigates their impact on various categories of images across application&#xD;
domains. Through extensive experimentation, we compare three highly acclaimed GAN-based&#xD;
super-resolution models, SRGAN, ESRGAN, and Real-ESRGAN, on twelve disparate&#xD;
image classes. The results confirm that model performance varies significantly&#xD;
depending on the image features and domain, underscoring the need for domain-specific&#xD;
methods that are capable of learning to generalize across varying image content. To&#xD;
address these findings, we add an orthogonal regularization component to the loss&#xD;
function of Wide Activation SRGAN (WDSR-GAN), which employs wide activation&#xD;
residual blocks to improve feature representation and training stability. Furthermore,&#xD;
in this work we explore how various loss functions impact super-resolution quality and&#xD;
illustrate how various combinations impact image sharpness and perceptual detail. To&#xD;
quantitatively compare model performance, we use a collection of metrics consisting of&#xD;
PSNR and SSIM, which collectively capture pixel-level accuracy and structural integrity.&#xD;
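As a concrete reference for one of these metrics (a minimal sketch; SSIM needs local image statistics and is omitted here), PSNR for 8-bit images follows directly from the mean squared error:

```python
import math

# Minimal PSNR sketch for 8-bit images (illustrative only; the thesis also
# reports SSIM, which captures structural similarity and is not shown).
def psnr(ref, test, peak=255.0):
    """ref, test: equal-length flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR indicates closer pixel-level agreement with the reference, which is why it complements a structural measure like SSIM rather than replacing it.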
The findings of this thesis provide valuable insights into the problems and opportunities&#xD;
of GAN-based image super-resolution. Through extensive analysis of different models&#xD;
and loss functions across domains and metrics, this work lays a strong foundation&#xD;
for the design of more efficient and flexible super-resolution algorithms. Such efforts seek&#xD;
to steer future research towards higher fidelity, improved perceptual quality, and increased&#xD;
adaptability to real-world imaging applications.</summary>
    <dc:date>2025-05-01T00:00:00Z</dc:date>
  </entry>
</feed>

