Title: PRINCIPAL COMPONENT ANALYSIS AND ITS EXTENSIONS IN MACHINE LEARNING: THEORY, APPLICATIONS AND ADVANCES
Authors: JASMINE
Keywords: MACHINE LEARNING
PRINCIPAL COMPONENT ANALYSIS
DISCRIMINANT ANALYSIS
PCA
Issue Date: May-2025
Series/Report no.: TD-7860;
Abstract: In today’s vast and complex data landscapes, every model or prediction is influenced by a multitude of factors, referred to as dimensions in the context of machine learning. Reducing the dimensionality of such data can significantly simplify a model without losing important information: by keeping only the most informative dimensions, the model becomes more efficient and often more accurate. This is where principal component analysis (PCA) comes in. PCA reduces dimensionality while retaining the most important features, making models more efficient, easier to interpret, and computationally feasible, and it has therefore become an essential tool for handling high-dimensional data. To give learners an extensive understanding of PCA’s role in modern machine learning, this dissertation studies the mathematical basis, extensions, and applications of the method. The properties of PCA and of variants such as sparse PCA, kernel PCA, incremental PCA, and robust PCA are presented with detailed derivations and graphical interpretations. We apply PCA to datasets from multiple domains, including bioinformatics, genome analysis, and computer vision, to evaluate its practical effectiveness. In recent years, combining PCA with other methods has resolved many problems: extensions such as PCA with cluster analysis or PCA combined with discriminant analysis noticeably reduce dimensionality, suppress noise, and improve feature extraction, which matters because data in such high-dimensional spaces is very complex. Despite its advantages, PCA is not the right solution for every problem, and researchers face several challenges when applying it in practice. One major limitation is its assumption of linearity. PCA is also highly sensitive to noisy or irrelevant features, which can lead to misleading results. A further challenge is the interpretability of the new components, since they are linear combinations of the original variables rather than directly meaningful features.
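For readers who want a concrete starting point, the workflow the abstract describes (projecting high-dimensional data onto its leading principal components) can be sketched in a few lines of Python. This is a minimal illustration, not code from the dissertation; the synthetic data, scikit-learn's PCA class, and the choice of two components are assumptions made purely for demonstration.

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic high-dimensional data: 200 samples with 50 features
    # (a placeholder for a real dataset such as gene-expression measurements).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))

    # Fit PCA and project onto the two leading principal components.
    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(X)

    print(X_reduced.shape)                # (200, 2)
    print(pca.explained_variance_ratio_)  # fraction of variance each component retains

On real data, the number of components is usually chosen by inspecting the cumulative explained variance rather than fixed in advance.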
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/21659
Appears in Collections: M Sc Applied Maths

Files in This Item:
File: Jasmine Msc.pdf
Size: 937.16 kB
Format: Adobe PDF

