Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/21856
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | PAWAR, NIVRUTTI | - |
dc.date.accessioned | 2025-07-08T08:49:56Z | - |
dc.date.available | 2025-07-08T08:49:56Z | - |
dc.date.issued | 2025-05 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/21856 | - |
dc.description.abstract | Large language models (LLMs) have demonstrated remarkable potential for domain-specific tasks, but optimizing them for specialized applications requires efficient tuning strategies. In medical report summarization, where accuracy, brevity, and clinical relevance are critical, fine-tuning pre-trained LLMs offers a pathway to adapt these models while minimizing computational costs. Traditional approaches involving full-model training or instruction tuning with task-specific instruction-completion pairs remain resource-intensive, prompting the need for parameter-efficient fine-tuning (PEFT) techniques. This study explores the integration of instruction-based adaptation and PEFT methods, such as Low-Rank Adaptation (LoRA), to refine pre-trained LLMs for medical text summarization. By leveraging structured datasets of clinical notes, discharge summaries, and radiology reports, we systematically train a subset of model parameters to capture domain-specific terminology, contextual relationships, and summarization patterns. Instruction tuning is employed to align model outputs with clinical guidelines, ensuring summaries prioritize key diagnostic findings, treatment plans, and patient history. Experimental results demonstrate that PEFT methods reduce memory usage by 65–70% and training time by 40% compared to full-model fine-tuning, without compromising summarization quality. Instruction tuning further enhances task-specific performance, improving ROUGE-L scores by 15% and BERTScore by 12% over baseline models. Notably, the fine-tuned LLMs exhibit improved handling of medical jargon, negation detection, and multi-document coherence, addressing common challenges in clinical report generation. This work underscores the viability of combining PEFT and instruction tuning to create resource-efficient, domain-optimized LLMs. For medical applications, these strategies enable scalable adaptation of general-purpose models to specialized workflows, ensuring reliable and contextually precise summarization. The framework proposed here can be extended to other healthcare NLP tasks, such as patient risk stratification or automated diagnosis coding, fostering broader adoption of AI in clinical settings. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-8079; | - |
dc.subject | LARGE LANGUAGE MODELS (LLMS) | en_US |
dc.subject | MEDICAL REPORT SUMMARIZATION | en_US |
dc.subject | PEFT | en_US |
dc.title | FINE TUNING LLMs FOR CONTEXT-AWARE MEDICAL REPORT SUMMARIZATION | en_US |
dc.type | Thesis | en_US |
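The abstract describes combining LoRA-based PEFT with instruction tuning for medical summarization. Below is a minimal sketch of what such a setup could look like using the Hugging Face `transformers` and `peft` libraries; the base model name, LoRA hyperparameters, target module names, and prompt format are illustrative assumptions and are not taken from the thesis itself.

```python
# Illustrative sketch of LoRA-based parameter-efficient fine-tuning for
# instruction-style medical report summarization. Model name, hyperparameters,
# and prompt template are assumptions, not details from the thesis.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of the full weight set.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # module names depend on the chosen architecture
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters is trainable

# Instruction-style example pairing a clinical note with its reference summary.
example = (
    "### Instruction: Summarize the clinical note, keeping key diagnoses, "
    "treatments, and patient history.\n"
    "### Note: <clinical note text>\n"
    "### Summary: <reference summary>"
)
inputs = tokenizer(example, return_tensors="pt")
# The training loop itself (e.g. with transformers.Trainer) is omitted here.
```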
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Nivrutti Pawar M.Tech.pdf | | 726.98 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.