Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/16054
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | SUDHIR | - |
dc.date.accessioned | 2017-11-17T17:33:22Z | - |
dc.date.available | 2017-11-17T17:33:22Z | - |
dc.date.issued | 2017-07 | - |
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/16054 | - |
dc.description.abstract | Over the last decades, substantial progress has been made in the field of imaging sensors. The improved robustness and increased resolution of modern imaging sensors and, more importantly, cheap fabrication costs have made the use of multiple sensors common in a wide range of imaging applications. This development has led to the availability of vast amounts of data depicting the same scene, gathered from multiple sensors. However, the subsequent processing of the gathered sensor information can be cumbersome, since an increase in the number of sensors automatically increases the amount of raw sensor data that must be stored and processed. This means that either longer execution times have to be accepted or the number of processing units and storage devices has to be increased, leading to solutions which may be quite expensive. In addition, when imaging systems are operated by humans, presenting multiple images can overwhelm a single observer and lead to a significant drop in performance. One solution to these problems is to replace the entire set of sensor information with a single composite representation that incorporates all relevant sensor data. In image-based applications, this family of techniques has become generally known as image fusion and is nowadays a promising research area. Image fusion can be summarized as the process of integrating complementary and redundant information from multiple images into one composite image that contains a ‘better’ description of the underlying scene than any of the individual source images could provide. Hence, the fused image should be more useful for visual inspection or further machine processing. Nevertheless, fusing images is often not a trivial process, since: a) the source images may come from different types of sensors (e.g. with different dynamic range and resolution); b) they tend to exhibit complementary information (e.g. features which appear in some source images but not in all); or c) they may show common information but with reversed contrast, which significantly complicates the fusion process. Furthermore, a fusion approach which is independent of a priori information about the inputs and produces… | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TD-2040; | - |
dc.subject | FUSION | en_US |
dc.subject | VISIBLE IMAGE | en_US |
dc.subject | REGION | en_US |
dc.subject | SENSORS | en_US |
dc.title | REGION BASED FUSION OF THERMAL AND VISIBLE IMAGE | en_US |
dc.type | Thesis | en_US |
Appears in Collections: M.E./M.Tech. Electronics & Communication Engineering
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
sudhir2k15spd17.pdf | | 1.79 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
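To make the fusion process described in the abstract concrete, below is a minimal illustrative sketch of one simple region-based fusion rule: split two pre-registered grayscale images into blocks and, per block, keep the source block with the higher local variance (a basic activity measure). This is only a sketch under stated assumptions, not the method developed in this thesis; the function name, block size, and variance-based activity measure are all illustrative choices.

```python
# Minimal sketch: block-wise "choose-max activity" fusion of two
# pre-registered, same-size grayscale images (e.g. thermal and visible).
# NOT the thesis's method; block size and the variance activity
# measure are illustrative assumptions.
import numpy as np

def fuse_block_choose_max(img_a: np.ndarray, img_b: np.ndarray,
                          block: int = 16) -> np.ndarray:
    """Per block, keep the source block with the higher local variance."""
    assert img_a.shape == img_b.shape, "inputs must be registered, same size"
    h, w = img_a.shape
    fused = np.empty_like(img_a)
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]  # slicing clips at borders
            b = img_b[y:y + block, x:x + block]
            # Higher variance ~ more salient detail in this region.
            fused[y:y + block, x:x + block] = a if a.var() >= b.var() else b
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thermal = rng.random((128, 128))  # stand-ins for real registered inputs
    visible = rng.random((128, 128))
    out = fuse_block_choose_max(thermal, visible)
    print(out.shape)  # (128, 128)
```

The choose-max rule illustrates the abstract's point b): a feature present in only one source (high activity there, low in the other) survives into the composite, whereas plain pixel averaging would dilute it.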