Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16710
Full metadata record
dc.contributor.author: ZAKARIYA, ABBAS MUHAMMAD
dc.date.accessioned: 2019-10-24T04:50:42Z
dc.date.available: 2019-10-24T04:50:42Z
dc.date.issued: 2019-06
dc.identifier.uri: http://dspace.dtu.ac.in:8080/jspui/handle/repository/16710
dc.description.abstract: Deaf and other verbally challenged people often face difficulties communicating with the wider society; among themselves they commonly use sign language to express what they want to say, for example numbers, words, or phrases. Bridging this communication barrier calls for an automated system that acts as a translator, converting sign language into text or speech so that communication becomes easier. Much recent research has addressed this area, but most of the developed systems run only on computers, which are difficult and impractical to carry around, and research on sign language recognition for Arabic is relatively scarce compared to other languages. In this study we propose the smartphone as a platform for an Arabic Sign Language recognition system because of its portability and wide availability in society. Since previous studies show that smartphones are constrained in power and computation, we propose a system in which most of the processing is taken off the smartphone: a client-server application is implemented in which the client, a smartphone application, captures an image of the sign to be recognized and sends it to the server, and the server in turn returns the predicted sign. On the server, where most of the recognition takes place, the image background is detected in HSV color space and set to black, the sign gesture is segmented by detecting the largest connected component in the frame, and the binary pixels of the frame are extracted as features. A Support Vector Machine is used to classify the sign images; we are able to classify 10 Arabic Sign Language signs with an experimental accuracy of 92.5%. (en_US)
dc.language.iso: en (en_US)
dc.relation.ispartofseries: TD-4557
dc.subject: ARABIC SIGN LANGUAGE (en_US)
dc.subject: SIGN LANGUAGE RECOGNITION (en_US)
dc.subject: SMARTPHONE (en_US)
dc.title: ARABIC SIGN LANGUAGE RECOGNITION SYSTEM ON SMARTPHONE (en_US)
dc.type: Thesis (en_US)
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File: Arabic Sign Language Recognition System on SmartPhone.pdf (4.94 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.