Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18020
Title: PERSIAN SIGN GESTURE TRANSLATION TO ENGLISH SPOKEN LANGUAGE ON SMARTPHONE
Authors: JAFARI, MUHAMMAD REZA
Keywords: PERSIAN SIGN
ENGLISH SPOKEN LANGUAGE
SMARTPHONE
CNN
Issue Date: Jun-2020
Series/Report no.: TD-4884;
Abstract: Hearing-impaired people and others with verbal challenges face difficulty communicating with society; sign language, which expresses content such as numbers or phrases through gestures, serves as their means of communication. Communication becomes a challenge with people from other countries who use different languages. Moreover, sign language differs from one country to another, so learning one sign language does not mean knowing all of them. Translating a word from a sign language to a spoken language is a challenge, and translating that word into yet another spoken language is an even bigger one. In such cases two interpreters are needed: one from the sign language to the source spoken language, and one from the source language to the target language. While ample research has been done on sign recognition, this work focuses on translating gestures from one language to another. In this study, a smartphone-based approach to sign language recognition is proposed, because smartphones are available worldwide. Since smartphones have limited computational power, a client-server application is proposed in which most of the processing is done on the server side. In this system, the client is a smartphone application that captures images of the sign gestures to be recognized and sends them to a server; the server processes the data and returns the translated sign to the client. On the server side, where most of the sign recognition takes place, the background of the sign image is removed by setting it to black in the Hue, Saturation, Value (HSV) color space. The sign gesture is then segmented by detecting the largest connected component in the frame. The extracted features are binary pixel values, and a Convolutional Neural Network (CNN) is used to classify the sign images. After classification, the letter for each sign is assigned, and the sequence of letters forms a word. The word is then translated into the target language, in this case English, and the result is returned to the client application.
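As a rough illustration of the server-side preprocessing the abstract describes, the sketch below removes the background in HSV space and keeps only the largest connected component before classification. It is a minimal sketch in Python with OpenCV; the HSV thresholds, function names, and file paths are illustrative assumptions, not values taken from the thesis.

import cv2
import numpy as np

def segment_sign(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of the hand gesture with the background set to black."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hypothetical HSV range for skin tones; the thesis does not publish its thresholds.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Keep only the largest connected component (assumed to be the hand),
    # matching the segmentation step described in the abstract.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n_labels <= 1:  # nothing but background was found
        return np.zeros_like(mask)
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip label 0 (background)
    return np.where(labels == biggest, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    # "gesture.jpg" is a hypothetical frame uploaded by the client application.
    frame = cv2.imread("gesture.jpg")
    if frame is not None:
        cv2.imwrite("gesture_mask.png", segment_sign(frame))

The resulting binary mask would then be fed to the CNN classifier mentioned in the abstract, whose architecture is not specified in this record.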
URI: http://dspace.dtu.ac.in:8080/jspui/handle/repository/18020
Appears in Collections:M.E./M.Tech. Computer Engineering

Files in This Item:
File: M.Tech Muhammad Reza Jafari.pdf
Size: 6.44 MB
Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.