Please use this identifier to cite or link to this item:
http://dspace.dtu.ac.in:8080/jspui/handle/repository/21790
Title: | MOVING OBJECT DETECTION USING DEEP LEARNING |
Authors: | SONI, LOKESH |
Keywords: | MOVING OBJECT DETECTION; DEEP LEARNING; YOLO |
Issue Date: | May-2025 |
Series/Report no.: | TD-8001; |
Abstract: | Accurately detecting moving objects in video sequences is a core challenge in computer vision, one that underpins smart surveillance, autonomous vehicles, robotics and traffic monitoring. The task becomes considerably harder when the camera itself is moving, as on a drone, a body-worn device or a robot: the shifting background causes false detections and degrades the accuracy of traditional vision systems. Classical background subtraction normally requires a stationary camera, while deep-learning object detectors rely mainly on appearance. Although detectors such as YOLO are highly accurate at recognizing objects, they often struggle to isolate the objects that are actually moving in dynamic video streams, and in noisy or motion-heavy environments appearance cues alone may not be sufficient. This thesis introduces a hybrid moving-object detection system that combines the strengths of deep learning and classical computer vision to improve both accuracy and adaptability in motion-rich scenes. The system uses YOLOv10n, a lightweight real-time object detector, together with MOG2 background subtraction to segment moving regions and Farneback optical flow to match detected objects to actual motion in the scene. Its ability to handle both static and moving cameras is valuable because it delivers consistent results across scene types, and this flexibility suits settings where both the observer and the observed are constantly changing position. The work's practical applicability is another important contribution: because a lightweight detector and inexpensive motion cues are used, the whole pipeline runs in real time on modest hardware, enabling deployment on edge devices, UAVs and robots with limited compute.
Qualitative analysis of the annotated videos and visual overlays shows that the system performs well, is easy to interpret and is usable in real time. |
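The abstract describes fusing detector boxes with a background-subtraction mask and optical-flow magnitudes. The thesis itself does not specify the fusion rule, so the sketch below is a hypothetical version of that step: a box is kept as "moving" only if enough foreground pixels and enough mean flow magnitude fall inside it. The function names and thresholds are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def classify_moving(boxes, fg_mask, flow_mag, fg_thresh=0.2, flow_thresh=1.0):
    """Hypothetical fusion rule for the hybrid pipeline.

    boxes    : list of (x1, y1, x2, y2) detector boxes (e.g. from YOLOv10n)
    fg_mask  : uint8 foreground mask (e.g. from cv2.createBackgroundSubtractorMOG2)
    flow_mag : per-pixel flow magnitude (e.g. from cv2.calcOpticalFlowFarneback)

    A box counts as moving only if both the foreground-pixel ratio and
    the mean flow magnitude inside it exceed the (assumed) thresholds.
    """
    results = []
    for (x1, y1, x2, y2) in boxes:
        roi_mask = fg_mask[y1:y2, x1:x2]
        roi_flow = flow_mag[y1:y2, x1:x2]
        fg_ratio = roi_mask.mean() / 255.0      # fraction of foreground pixels
        mean_flow = roi_flow.mean()             # average motion inside the box
        results.append(fg_ratio >= fg_thresh and mean_flow >= flow_thresh)
    return results

# Synthetic example: one region with motion, one static region.
fg_mask = np.zeros((100, 100), dtype=np.uint8)
fg_mask[10:50, 10:50] = 255                     # foreground blob
flow_mag = np.zeros((100, 100))
flow_mag[10:50, 10:50] = 2.0                    # motion in the same area

boxes = [(10, 10, 50, 50), (60, 60, 90, 90)]
print(classify_moving(boxes, fg_mask, flow_mag))  # [True, False]
```

Requiring agreement between the appearance-free cues (mask and flow) and the appearance-based detector is what lets a scheme like this suppress static detections while remaining cheap enough for real-time use on modest hardware.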
URI: | http://dspace.dtu.ac.in:8080/jspui/handle/repository/21790 |
Appears in Collections: | M.E./M.Tech. Computer Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Lokesh Soni M.Tech..pdf | | 1.94 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.