Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20437
Full metadata record
DC Field / Value / Language
dc.contributor.author: SHASHANK
dc.date.accessioned: 2024-01-15T05:49:16Z
dc.date.available: 2024-01-15T05:49:16Z
dc.date.issued: 2023-11
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/20437
dc.description.abstract: The efficiency of a computer vision system depends on the accuracy of its information extraction and on its data-processing capabilities. A camera frame that captures an object of interest at the center of the field of view carries the maximum information about that object. To sustain this accuracy, the system must continuously reconfigure the configuration spaces of the camera sensors it deploys. However, identifying the object requires image processing, and the position of an object of interest in the frame can change dynamically, so reconfiguring the configuration space is difficult in real-time applications. A computer vision system capable of reconfiguring its configuration space (i.e., dynamically calibrating its camera sensors) is known as an active vision system. For a better understanding of an event (or scene), an active vision system must further associate the activities detected by its camera sensors over time, which demands high processing capability for accurate spatiotemporal analysis of the image frames captured at different times by different camera sensors. Such systems rely on models of high computational complexity and require enormous resources (e.g., Artificial Intelligence based systems). Systems deployed in mobile environments have limited resources (i.e., limited power supply, storage, and processing capability) and therefore cannot perform tasks of high computational complexity. They consequently lack efficient reconfiguration of the camera sensor parameters (i.e., the configuration space), so the captured images (or frames) carry very little information about the objects of interest, yielding low system performance and accuracy.
To address this problem, this thesis presents a computer-implemented framework, the Spatiotemporal Activity Mapping (SAM) framework, which enables pixel-wise sensitivity allotment based on spatiotemporal activity analysis of frames captured over a flexible time period. The SAM framework introduces filters of very low computational complexity that accurately detect areas of interest for reconfiguring the calibration parameters. It also allows the criticality of the activities detected by the system to be selected flexibly, making it effective in a variety of computer vision applications such as road surveillance, sports analysis, and ambient-living applications. Model-based systems work only under known conditions and fail in unforeseen ones. Systems employing Artificial Intelligence (AI) can handle unforeseen environments; however, such systems require iterative training over a long period to develop an understanding of new events and activities, and are therefore unreliable for real-time applications. Contemporary systems thus lack real-time reconfiguration of the configuration space for adequate scene understanding of a new activity or event. To address this second problem, the thesis presents another computer-implemented framework, Adaptive Self-Reconfiguration (AdapSR), which enables a number of computer vision systems to exchange information and share data, and thus learn to handle unforeseen conditions at a very high rate. The AdapSR framework performs efficiently in applications that employ high computational and storage capability for high accuracy and fast learning, such as driverless navigation and adaptive activity analysis.
The AdapSR framework further introduces a decentralized network of active vision systems, which enables the standardization of protocols for a plurality of computer vision applications associated over a blockchain network in the near future. By developing these novel techniques and framework models, the major issues regarding self-reconfiguration of computer vision systems have been addressed. This thesis presents the developed techniques and their performance evaluation, along with future directions. (en_US)
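As a rough illustration of the pixel-wise sensitivity allotment the abstract describes, the toy sketch below accumulates per-pixel temporal frame differences into a normalized "activity map". This is an illustrative assumption only, not the thesis's actual SAM filters or their complexity-optimized design.

```python
# Hypothetical sketch of a pixel-wise spatiotemporal activity map
# (illustrative only; not the SAM framework's actual filters).
# Absolute per-pixel differences between consecutive frames are
# accumulated and normalized, so pixels with more temporal activity
# receive a higher sensitivity value.

def activity_map(frames):
    """frames: list of equal-sized 2D grids of gray levels (0-255)."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                acc[y][x] += abs(curr[y][x] - prev[y][x])
    peak = max(max(row) for row in acc) or 1.0
    # Normalize to [0, 1]: the per-pixel "sensitivity allotment".
    return [[v / peak for v in row] for row in acc]

# Toy example: a 3x3 scene where only the center pixel changes.
frames = [
    [[10, 10, 10], [10, 0, 10], [10, 10, 10]],
    [[10, 10, 10], [10, 200, 10], [10, 10, 10]],
    [[10, 10, 10], [10, 0, 10], [10, 10, 10]],
]
sens = activity_map(frames)
```

Here `sens[1][1]` is 1.0 (the only active pixel) and all static pixels map to 0.0; in an active vision system such a map could then drive where camera calibration parameters are refined.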
dc.language.iso: en (en_US)
dc.relation.ispartofseries: TD-6978;
dc.subject: ACTIVE VISION SYSTEM (en_US)
dc.subject: SMART CAMERA NETWORK (en_US)
dc.subject: RECONFIGURATION (en_US)
dc.subject: SAM FRAMEWORK (en_US)
dc.title: ACTIVE VISION USING SELF RECONFIGURABLE SMART CAMERA NETWORK (en_US)
dc.type: Thesis (en_US)
Appears in Collections: Ph.D. Electronics & Communication Engineering

Files in This Item:
File | Description | Size | Format
SHASHANK pH.d..pdf | (none) | 3.23 MB | Adobe PDF (View/Open)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.