Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16054
Full metadata record
DC Field [Language]: Value
dc.contributor.author: SUDHIR
dc.date.accessioned: 2017-11-17T17:33:22Z
dc.date.available: 2017-11-17T17:33:22Z
dc.date.issued: 2017-07
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16054
dc.description.abstract [en_US]: Within the last decades, substantial progress has been achieved in the imaging sensor field. The improved robustness and increased resolution of modern imaging sensors and, more importantly, cheap fabrication costs have made the use of multiple sensors common in a wide range of imaging applications. This development has led to the availability of a vast amount of data depicting the same scene, coming from multiple sensors. However, the subsequent processing of the gathered sensor information can be cumbersome, since an increase in the number of sensors automatically increases the raw amount of sensor data that needs to be stored and processed. This means that either longer execution times have to be accepted or the number of processing units and storage devices has to be increased, leading to solutions which may be quite expensive. In addition, when imaging systems are operated by humans, presenting several images at once may overwhelm a single observer and lead to a significant performance drop. One solution to these problems is to replace the entire set of sensor information with a single composite representation which incorporates all relevant sensor data. In image-based applications this family of techniques has become generally known as image fusion and is nowadays a promising research area. Image fusion can be summarized as the process of integrating complementary and redundant information from multiple images into one composite image that contains a 'better' description of the underlying scene than any of the individual source images could provide. Hence, the fused image should be more useful for visual inspection or further machine processing. Nevertheless, fusing images is often not a trivial process, since: a) the source images may come from different types of sensors (e.g. with different dynamic range and resolution); b) they tend to exhibit complementary information (e.g. features which appear in some source images but not in all); or c) they may show common information but with reversed contrast, which significantly complicates the fusion process. Furthermore, a fusion approach which is independent of a priori information about the inputs and produces.
dc.language.iso [en_US]: en
dc.relation.ispartofseries: TD-2040;
dc.subject [en_US]: FUSION
dc.subject [en_US]: VISIBLE IMAGE
dc.subject [en_US]: REGION
dc.subject [en_US]: SENSORS
dc.title [en_US]: REGION BASED FUSION OF THERMAL AND VISIBLE IMAGE
dc.type [en_US]: Thesis
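
The abstract above defines image fusion as integrating complementary and redundant information from multiple source images into one composite image. As a rough illustration of the region-based idea named in the title, and not the method developed in the thesis itself, the Python sketch below fuses a registered thermal/visible pair by thresholding the thermal image into 'hot' regions, taking thermal pixels there and visible pixels elsewhere. The file names and the threshold value are assumptions chosen for illustration.

    import numpy as np
    from PIL import Image

    def fuse_region_based(visible_path, thermal_path, threshold=0.6):
        # Load both sources as greyscale arrays normalised to [0, 1].
        vis = np.asarray(Image.open(visible_path).convert("L"), dtype=np.float64) / 255.0
        therm = np.asarray(Image.open(thermal_path).convert("L"), dtype=np.float64) / 255.0
        # The sources must already be registered to the same pixel grid;
        # fusion assumes pixel (i, j) shows the same scene point in both images.
        if vis.shape != therm.shape:
            raise ValueError("source images must be registered to the same size")
        # Crude region map: pixels whose normalised thermal intensity
        # exceeds the threshold are treated as a 'hot' region of interest.
        hot = therm > threshold
        # Composite: thermal data inside hot regions, visible data elsewhere.
        fused = np.where(hot, therm, vis)
        return Image.fromarray((fused * 255.0).astype(np.uint8))

    # Hypothetical file names, for illustration only:
    # fuse_region_based("visible.png", "thermal.png").save("fused.png")
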
Appears in Collections: M.E./M.Tech. Electronics & Communication Engineering

Files in This Item:
sudhir2k15spd17.pdf (1.79 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.