Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18045
Full metadata record
DC Field | Value | Language
dc.contributor.author | ADARSH, PRANAV | -
dc.date.accessioned | 2020-12-28T06:14:40Z | -
dc.date.available | 2020-12-28T06:14:40Z | -
dc.date.issued | 2020-07 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/18045 | -
dc.description.abstract | Object detection algorithms have changed considerably to improve performance in both speed and accuracy. Through the continuous efforts of many researchers, deep learning algorithms are advancing rapidly and delivering ever-better object detection performance. Popular applications such as pedestrian detection, medical imaging, robotics, self-driving cars, and face detection reduce human effort in many areas. Because the field is vast and includes many state-of-the-art algorithms, covering everything at once is impractical. This work presents a fundamental overview of object detection methods spanning the two classes of object detectors: among two-stage detectors it covers RCNN, Fast RCNN, and Faster RCNN, while among one-stage detectors it covers YOLO v1, v2, v3, and SSD. Two-stage detectors focus more on accuracy, whereas the primary concern of one-stage detectors is speed. We explain an improved YOLO variant called YOLO v3-Tiny and then compare it graphically with previous methods for the detection and recognition of objects (an illustrative inference sketch follows this record). | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-4901; | -
dc.subject | YOLO V3 | en_US
dc.subject | FASTER RCNN | en_US
dc.subject | IMAGE PROCESSING | en_US
dc.subject | DEEP LEARNING | en_US
dc.subject | CONVOLUTIONAL NETWORKS | en_US
dc.title | YOLO V3-TINY: AN IMPROVED ONE STAGE MODEL FOR DETECTION AND RECOGNITION OF OBJECTS | en_US
dc.type | Thesis | en_US
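
The abstract above contrasts one-stage and two-stage detectors and introduces YOLO v3-Tiny; the thesis itself is only summarised here, so the following is a minimal, hypothetical sketch rather than the author's implementation. It runs a pre-trained YOLOv3-tiny network through OpenCV's DNN module; the file names "yolov3-tiny.cfg", "yolov3-tiny.weights", and "input.jpg", the 416x416 input size, and the 0.5/0.4 thresholds are assumptions chosen for the example.

import cv2
import numpy as np

# Load a pre-trained YOLOv3-tiny model (Darknet format); the paths are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
out_layers = net.getUnconnectedOutLayersNames()

# Prepare the input image as a normalised 416x416 blob (BGR -> RGB).
img = cv2.imread("input.jpg")
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Single forward pass: each detection row is [cx, cy, bw, bh, objectness, class scores...].
boxes, confidences, class_ids = [], [], []
for output in net.forward(out_layers):
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:  # assumed confidence threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes for the same object.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)

A two-stage detector such as Faster RCNN would first generate region proposals and then classify each one; the single forward pass above is what gives one-stage models like YOLO v3-Tiny the speed advantage noted in the abstract.
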
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File | Description | Size | Format
M.Tech. by Pranav Adarsh.pdf | | 1.33 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.