<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://dspace.dtu.ac.in:8080/jspui/handle/123456789/100">
    <title>DSpace Collection:</title>
    <link>http://dspace.dtu.ac.in:8080/jspui/handle/123456789/100</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22696" />
        <rdf:li rdf:resource="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22673" />
        <rdf:li rdf:resource="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22527" />
        <rdf:li rdf:resource="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22523" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-28T04:03:22Z</dc:date>
  </channel>
  <item rdf:about="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22696">
    <title>FACE DETECTION AND TRACKING</title>
    <link>http://dspace.dtu.ac.in:8080/jspui/handle/repository/22696</link>
    <description>Title: FACE DETECTION AND TRACKING
Authors: MOOL, AKSHAY; Panda, Jeebananda (SUPERVISOR); Sharma, Kapil (CO-SUPERVISOR)
Abstract: With the use of various technological advancements and devices in today’s&#xD;
routine life, the detection and tracking of human faces and facial features have become&#xD;
essential areas of focus, motivating techniques that improve their working efficiency.&#xD;
The field of Computer Vision uses these intermediary processes of face detection and&#xD;
tracking to capture and analyze visual information about humans, their faces and/or&#xD;
body movements, and correspondingly proceed to the desired application.&#xD;
The present thesis work has been taken up for the development of (i) an optimizable&#xD;
face detection and tracking model based on facial landmark localisation and feature&#xD;
tracking, for better and more efficient processing of faces in high-quality video&#xD;
streams, and (ii) a Non-Neighbourhood Background Elimination component using the&#xD;
built model with mathematical and statistical modelling, for reducing the processing&#xD;
time and computations required for finding the target face in a frame when it has&#xD;
already been detected.&#xD;
Many algorithms have been developed to facilitate face detection and tracking&#xD;
applications. In 2004, Viola and Jones developed an algorithm that achieves real-time&#xD;
performance with decent accuracy in detecting faces. It was one of the first algorithms&#xD;
to achieve such efficient performance, which is why it is still used as a standard&#xD;
baseline against which newer algorithms are compared, and it has therefore been&#xD;
specifically discussed in this thesis.&#xD;
There is a lot of visual information generated in various fields, ranging from daily&#xD;
routine to specialized applications. Processing all these types of information&#xD;
efficiently is a valid concern and needs focused research. Most face detection&#xD;
algorithms have to deal with low-quality data in videos, since they are mainly focused&#xD;
on surveillance applications, whose capturing devices record less information per&#xD;
frame. Consequently, this thesis reviews some state-of-the-art face detection&#xD;
algorithms and compares their processing efficiency on low- and high-quality videos.&#xD;
The comparative analysis reveals that these recent algorithms do not work as&#xD;
effectively on high-quality videos as they do on lower-quality videos. Therefore, there&#xD;
is an increasing need to focus research on the efficient analysis of high-quality video&#xD;
information, so that the pace of analysis keeps up with the information being&#xD;
generated.&#xD;
High-quality videos (data generated by current applications like social media,&#xD;
multimedia content, etc.) mostly exist in offline mode and can be used for&#xD;
post-processing by Computer Vision applications. To address this need, an effort has&#xD;
been made to develop an algorithm that gives faster results on high-quality videos, at&#xD;
par with algorithms working on live low-quality video feeds. The proposed algorithm&#xD;
uses Convolutional-MTCNN as its base algorithm and speeds it up for high-definition&#xD;
videos. The proposed model accelerates the face detection process to over 19 FPS while&#xD;
still maintaining above 90% accuracy. This thesis also presents a novel solution to the&#xD;
problem of occlusion, i.e., detecting partially or fully hidden faces in videos. Given&#xD;
that the face has been identified in the first few frames, statistical and&#xD;
probabilistic approaches give the algorithm an estimate of where the face should be in&#xD;
the occluded region.&#xD;
Since the focus of our research is to efficiently process high-quality data, some&#xD;
commercially used face detection algorithms from the open literature have also been&#xD;
considered in our research. Models like FaceNet, HOG, and YuNet, along with the&#xD;
Viola-Jones algorithm and MTCNN, have been discussed and analysed in this thesis. The&#xD;
research done is compared against these models, in an effort to improve their&#xD;
performance in commercial settings.&#xD;
Further analysis led to the conclusion that modern face detection algorithms fail to&#xD;
provide optimal results when they have to deal with larger amounts of data per frame&#xD;
while processing higher-quality videos. This thesis discusses another proposed work&#xD;
that tackles this problem and offers a solution: deploying commercially used&#xD;
state-of-the-art face detection algorithms to process only the regions of interest in a&#xD;
frame and discarding the rest, to decrease the data to be processed. The model&#xD;
maintains the accuracy of the base algorithm while decreasing the processing time per&#xD;
frame, thereby increasing the overall efficiency. The selection of the region of&#xD;
interest depends on the detection of the facial window in the previous frame;&#xD;
therefore, the choice of base algorithm plays an important role in determining the&#xD;
speed of the model. The model achieves processing speeds about 69–76% higher than&#xD;
standalone usage of the detection algorithms for the analyzed frame rates.</description>
    <dc:date>2024-10-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22673">
    <title>ANALYSIS OF EEG SIGNALS IN SPATIAL, SPECTRAL AND TEMPORAL DOMAINS FOR CLASSIFICATION</title>
    <link>http://dspace.dtu.ac.in:8080/jspui/handle/repository/22673</link>
    <description>Title: ANALYSIS OF EEG SIGNALS IN SPATIAL, SPECTRAL AND TEMPORAL DOMAINS FOR CLASSIFICATION
Authors: HOLKER, RUCHI
Abstract: EEG signals serve as a non-invasive, real-time biomarker of brain function, offering&#xD;
sensitive metrics for diagnosing and monitoring various neurological and psychiatric&#xD;
disorders. Their ability to capture subtle changes in electrical brain activity makes EEG an&#xD;
invaluable tool for detecting patterns and dysfunctions underlying conditions like Alcohol&#xD;
Use Disorder (AUD) and Attention-Deficit/Hyperactivity Disorder (ADHD). Unlike&#xD;
subjective behavioral assessments, EEG provides objective, quantifiable metrics that reflect&#xD;
the dynamic interplay of neural networks across temporal, spectral, and spatial domains.&#xD;
This thesis introduces a comprehensive set of twenty-seven Quantitative EEG (QEEG)&#xD;
features to create a detailed and multifaceted representation of brain activity. These&#xD;
neuro-biomarkers are grouped into three main categories. Power features quantify the signal's&#xD;
strength and statistical properties, including total amplitude power, standard deviation,&#xD;
skewness, kurtosis, and both the mean and standard deviation of the signal envelope,&#xD;
reflecting the strength, variability, and asymmetry of neural oscillations. In a similar&#xD;
category, Range EEG features (rEEG) further probe peak-to-peak dynamics with statistics&#xD;
like mean, median, lower and upper percentile margins, width, coefficient of variation,&#xD;
asymmetry, and standard deviation, offering a richly detailed view of voltage fluctuations&#xD;
across windows. Spectral features analyze the frequency components and complexity by&#xD;
measuring spectral absolute power and relative power, Shannon entropy, spectral flatness,&#xD;
spectral difference, spectral edge frequency, permutation entropy and fractal dimension,&#xD;
revealing abnormalities in brain rhythms. The third category is Inter-Hemispherical&#xD;
Connectivity features that measure the interaction between the brain's left and right&#xD;
hemispheres using metrics like the Brain Symmetry Index (BSI), correlation, mean and&#xD;
maximum coherence, and the frequency of maximum coherence, which are crucial for&#xD;
understanding network-level dysfunction.&#xD;
Another significant contribution is the development of a robust, generalized end-to-end&#xD;
signal processing and feature selection pipeline that converts raw EEG recordings into&#xD;
QEEG biomarkers for accurate diagnosis of behavioral and neurological disorders. The&#xD;
process begins with artifact removal and referencing to prepare high-quality, clean EEG data.&#xD;
This is followed by sub-time segmentation using overlapping temporal windows to preserve&#xD;
the continuity of neural dynamics over time. The resulting EEG signal is subjected to a&#xD;
broad-band spectral filter bank, dividing the signal into ten non-overlapping frequency bands&#xD;
covering the full range from 0–100Hz, ensuring that both slower and faster oscillations&#xD;
(including high-gamma activity) are analyzed. Within each spectral band, common spatial&#xD;
pattern (CSP) filtering pinpoints the most class-informative spatial components, maximizing&#xD;
discriminability between healthy and patient subject groups and reducing the influence of&#xD;
irrelevant channels. From these spectrally and spatially filtered signals, the twenty-seven&#xD;
QEEG features are extracted, and subsequently averaged over different temporal windows,&#xD;
producing a high-dimensional feature space that enables comprehensive modelling of neural&#xD;
function. To address redundancy and highlight only the most predictive features, advanced&#xD;
filter-wrapper feature selection is employed to identify the most discriminant features. An&#xD;
ensemble feature selection approach is used: initially, filter-based methods such as ANOVA,&#xD;
Chi-square, Gini Index, and Information Gain Ratio statistically rank the features, the&#xD;
obtained ranks are averaged, and a wrapper technique—typically Sequential Forward&#xD;
Selection (SFS)—iteratively builds the optimal feature subset by maximizing classifier&#xD;
performance with cross-validation. This process reduces the feature set into a compact,&#xD;
highly informative set, supporting models that generalize accurately across independent&#xD;
patient cohorts and multiple brain disorders.&#xD;
Finally, a novel and generalized framework is designed to extract Functional Connectivity&#xD;
features by capturing linear monotonic inter-channel associations, enabling robust&#xD;
identification of functional interactions between distinct brain regions. Functional&#xD;
connectivity in the time domain is quantified using the Pearson correlation coefficient,&#xD;
serving as a quantitative EEG (QEEG) feature that captures inter-electrode synchrony and&#xD;
underlying neural patterns to enhance the accuracy of disorder classification. The result is a&#xD;
fully integrated and computationally efficient pipeline that combines broad-spectrum feature&#xD;
capture, network-level analysis, and rigorous feature reduction to achieve state-of-the-art&#xD;
accuracy in classifying behavioral and neurological disorders.</description>
    <dc:date>2026-02-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22527">
    <title>AUTOMATIC MEDICAL TRANSCRIPTS SUMMARIZATION USING MACHINE LEARNING TECHNIQUES</title>
    <link>http://dspace.dtu.ac.in:8080/jspui/handle/repository/22527</link>
    <description>Title: AUTOMATIC MEDICAL TRANSCRIPTS SUMMARIZATION USING MACHINE LEARNING TECHNIQUES
Authors: BEDI, PARMINDER PAL SINGH
Abstract: The healthcare sector and biomedical domain are essential for public health and medical&#xD;
advancement, providing services from clinical care to research. Healthcare facilities offer&#xD;
crucial services like check-ups and disease management, while the biomedical domain&#xD;
drives medical innovation through research and experimentation. With the increasing&#xD;
volume of biomedical literature, automatic text summarization is vital for efficiently&#xD;
extracting insights. These algorithms, equipped with domain-specific knowledge, simplify&#xD;
complex information, facilitating knowledge dissemination and collaboration. Additionally,&#xD;
in the rapidly evolving field of biomedical research, automatic summarization systems&#xD;
ensure timely access to up-to-date information by monitoring and summarizing the latest&#xD;
literature and databases. There are two main approaches to Automatic Text Summarization:&#xD;
Extractive and Abstractive. Extractive summarization involves selecting and extracting&#xD;
specific sentences or phrases directly from the source text, prioritizing their frequency or&#xD;
relevance to compose the summary. In contrast, Abstractive summarization interprets and&#xD;
paraphrases the content to create new sentences conveying the essential meaning in a concise&#xD;
form.&#xD;
In this research work, extractive text summarization techniques in the biomedical domain are&#xD;
explored, focusing on issues such as redundancy, coherence, and the risk of overlooking&#xD;
crucial information. Extractive summarization techniques in the biomedical domain utilize&#xD;
various algorithms and approaches, including Frequency-based Methods, Graph-based&#xD;
Algorithms, and Machine Learning Approaches, to identify and extract key sentences or&#xD;
phrases from biomedical documents. Hybrid approaches combine multiple techniques to&#xD;
improve accuracy and coverage, effectively summarizing complex biomedical texts while&#xD;
addressing challenges such as redundancy and information loss.&#xD;
To address the identified research gaps, numerous novel approaches have been proposed for&#xD;
biomedical text summarization. Firstly, a novel approach using the Metathesaurus from&#xD;
UMLS to extract named-entity concepts is proposed, which applies the BERT method to&#xD;
generate concise summaries from PubMed and MTSamples. Further, an unsupervised&#xD;
approach focusing on semantic similarity and keyword-phrase extraction for both&#xD;
single-document and multi-document summarization is proposed. Furthermore, to further&#xD;
improve upon the results, a distinctive framework utilizing deep neural networks for&#xD;
contextually aware summarization of biomedical literature is proposed, which employs a&#xD;
binary classifier and a bidirectional long short-term memory recurrent neural network.&#xD;
To validate the proposed approaches, comparisons are made with baseline methods in&#xD;
biomedical text summarization, including a recent graph-based approach with the&#xD;
FP-Growth method. The results indicate that the last proposed approach outperforms&#xD;
state-of-the-art methods, achieving the highest ROUGE score of 0.96, surpassing the&#xD;
scores of the first and second approaches (0.74 and 0.76).&#xD;
The research concludes that the proposed methods demonstrate superior results in the&#xD;
medical domain compared to existing state-of-the-art techniques, highlighting the efficacy&#xD;
of the developed summarization approaches for biomedical literature.</description>
    <dc:date>2024-05-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="http://dspace.dtu.ac.in:8080/jspui/handle/repository/22523">
    <title>DESIGN AND DEVELOPMENT OF HEALTHCARE FRAMEWORK USING DIGITAL TWIN</title>
    <link>http://dspace.dtu.ac.in:8080/jspui/handle/repository/22523</link>
    <description>Title: DESIGN AND DEVELOPMENT OF HEALTHCARE FRAMEWORK USING DIGITAL TWIN
Authors: SHARMA, VIKAS
Abstract: The integration of Digital Twin (DT) technology in healthcare has paved the way for&#xD;
significant advancements in patient care, security, and disease detection. This&#xD;
compilation of four research studies presents a holistic view of the evolving role of DT&#xD;
in healthcare, emphasizing its applications in security, artificial intelligence-driven&#xD;
diagnostics, and personalized treatment frameworks. The studies collectively highlight&#xD;
the importance of secure and efficient healthcare ecosystems leveraging machine&#xD;
learning, blockchain, and deep learning architectures. The first study explores the role&#xD;
of DT in healthcare security through a Metaverse-DT-based framework, addressing&#xD;
privacy concerns and data protection challenges. The study outlines how Internet of&#xD;
Things (IoT) sensors enable real-time data collection for personalized digital models,&#xD;
enhancing patient monitoring and decision-making. Blockchain integration within DT&#xD;
provides an additional layer of security, ensuring reliable simulation environments for&#xD;
healthcare applications. The second study presents an automated DT framework for&#xD;
cervical cancer detection using the CervixNet classifier model. The proposed model,&#xD;
employing machine learning and deep learning techniques, demonstrates exceptional&#xD;
performance in diagnosing cervical abnormalities. Utilizing the SIPaKMeD dataset, the&#xD;
model achieves a classification accuracy of 98.91% with support vector machines&#xD;
(SVM), underscoring the potential of DT in enhancing diagnostic precision and&#xD;
supporting clinical decision-making. The third study investigates the security of IoT&#xD;
networks in healthcare through a DT framework integrating Elliptic Curve&#xD;
Cryptography (ECC) and blockchain. By employing a Genetic Algorithm-Optimized&#xD;
Random Forest (GAO-RF) model for intrusion detection, the system enhances the&#xD;
safety of healthcare data while maintaining scalability and efficiency. The proposed&#xD;
model achieves high accuracy rates (98.4% detection accuracy, 97.3% F1-score),&#xD;
demonstrating its robustness in mitigating cybersecurity threats in healthcare IoT&#xD;
environments. The fourth study introduces the Monkeypox Skin Lesion Detector&#xD;
Network (MxSLDNet) within a DT framework for automated early detection of&#xD;
monkeypox. The model, tested on the "Monkeypox Skin Lesion Dataset," surpasses&#xD;
traditional pre-trained deep-learning architectures such as VGG-19, ResNet-101, and&#xD;
DenseNet-121 in terms of precision, recall, and accuracy. MxSLDNet achieves an&#xD;
accuracy of 95.67%, addressing the critical need for a lightweight, storage-efficient,&#xD;
and scalable solution for infectious disease detection in resource-limited healthcare&#xD;
settings. By synthesizing insights from these studies, this research underscores the&#xD;
transformative potential of DT in various healthcare domains. The integration of AI-driven&#xD;
models, blockchain security mechanisms, and digital simulation frameworks&#xD;
fosters a secure, intelligent, and scalable healthcare ecosystem. Future advancements in&#xD;
DT will likely focus on expanding real-time clinical decision support systems,&#xD;
enhancing interoperability with electronic health records (EHRs), and integrating&#xD;
federated learning for secure, large-scale data processing. The findings provide a strong&#xD;
foundation for the continued exploration of DT in revolutionizing digital healthcare.</description>
    <dc:date>2025-03-01T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

