Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20719
Full metadata record
DC Field                    Value                                                        Language
dc.contributor.author       NEGI, ROHIT                                                  -
dc.date.accessioned         2024-08-05T08:40:16Z                                         -
dc.date.available           2024-08-05T08:40:16Z                                         -
dc.date.issued              2024-05                                                      -
dc.identifier.uri           http://dspace.dtu.ac.in:8080/jspui/handle/repository/20719   -
dc.description.abstract     Recent progress in large language models (LLMs) has brought significant improvements to natural language processing (NLP), making it possible to perform tasks such as language translation, text generation, and information classification. To help LLMs work efficiently with limited computational resources on specific tasks, parameter-efficient fine-tuning (PEFT) techniques have played a major role. This thesis explores the history, methods, experiments, and real-world applications of PEFT techniques in detail, with a specific focus on optimizing them for content generation and textual comprehension. It also covers the ethical and social consequences of using these techniques, along with strategies for model adaptation and collaboration. We analyze different aspects of PEFT techniques and compare different methods to understand the changes they bring under different conditions, so that others can draw on these results when applying these methods in their own work.   en_US
dc.language.iso             en                                                           en_US
dc.relation.ispartofseries  TD-7220;                                                     -
dc.subject                  PEFT TECHNIQUES                                              en_US
dc.subject                  TEXTUAL COMPREHENSION                                        en_US
dc.subject                  OPTIMIZATION                                                 en_US
dc.subject                  LLMs                                                         en_US
dc.title                    A STUDY ON PEFT TECHNIQUES FOR OPTIMIZING CONTENT GENERATION AND TEXTUAL COMPREHENSION   en_US
dc.type                     Thesis                                                       en_US
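For readers unfamiliar with the PEFT techniques the abstract refers to, the sketch below illustrates the core idea behind one widely used method, low-rank adaptation (LoRA): the large pretrained weight matrix W is frozen, and only two small matrices A and B are trained, with the effective weight W + (alpha/r)·BA. The matrix sizes, values, and scaling factor here are hypothetical for illustration and are not taken from the thesis itself.

```python
# Minimal illustrative sketch of LoRA-style parameter-efficient fine-tuning.
# The frozen base weight W (d_out x d_in) is adapted via two small trainable
# matrices A (r x d_in) and B (d_out x r), with rank r << min(d_out, d_in):
#     W_eff = W + (alpha / r) * B @ A
# All values below are made up for demonstration.

def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """Combine the frozen weight W with the scaled low-rank update B @ A."""
    r = len(A)                      # rank of the low-rank update
    delta = matmul(B, A)            # d_out x d_in update built from A and B
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 base weight and a rank-1 trainable update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                    # r x d_in = 1 x 2 (trainable)
B = [[0.5], [1.0]]                  # d_out x r = 2 x 1 (trainable)
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
# W_eff is [[1.5, 1.0], [1.0, 3.0]]: the base weight plus the rank-1 update.
```

The efficiency gain comes from the parameter count: for a 4096 x 4096 layer, a rank-8 update trains 2 * 8 * 4096 = 65,536 parameters instead of roughly 16.8 million for full fine-tuning.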
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File                     Description   Size      Format
ROHIT NEGI M.Tech.pdf                  4.96 MB   Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.