Pakistan sign language to Urdu translator using Kinect
DOI:
https://doi.org/10.11591/csit.v3i3.pp186-193

Keywords:
Kinect, Long short-term memory, Model garden, Object detection, OpenCV, Sign language translator

Abstract
The lack of a standardized sign language and the inability to communicate with the hearing community through sign language are the two major issues confronting Pakistan's deaf and mute community. In this research, we propose an approach that addresses the second of these issues: using the proposed framework, deaf users can communicate with hearing people. The purpose of this work is to reduce the struggles of hearing-impaired people in Pakistan. To accomplish this, a Kinect-based Pakistan sign language (PSL) to Urdu translator is developed. The system's dynamic sign language segment works in three phases: acquiring keypoint sequences from the dataset, training a long short-term memory (LSTM) model, and making real-time predictions on those sequences through OpenCV integrated with the Kinect device. The static sign language segment likewise works in three phases: acquiring an image-based dataset, training an object detection model from the Model Garden, and making real-time predictions through OpenCV integrated with the Kinect device. The system also allows a hearing user to speak Urdu into the Kinect microphone. The proposed translator detects the PSL signs performed in front of the Kinect device and produces their Urdu translations.
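To make the dynamic-sign pipeline concrete, the following is a minimal sketch of sequence classification over keypoints with a stacked LSTM and a sliding prediction window. The sequence length, feature count, class count, and layer sizes are hypothetical placeholders, not the paper's reported configuration; tf.keras stands in for whatever training framework the authors used.

```python
# Hypothetical sketch of an LSTM keypoint-sequence classifier for PSL signs.
# Shapes and layer sizes are assumptions, not the paper's actual architecture.
import numpy as np
import tensorflow as tf

SEQ_LEN, NUM_KEYPOINTS, NUM_SIGNS = 30, 126, 10  # hypothetical dataset sizes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, NUM_KEYPOINTS)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),  # one class per sign
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Real-time prediction: keep a sliding window of the last SEQ_LEN keypoint
# frames extracted from the Kinect video stream (e.g., via OpenCV).
window = np.zeros((SEQ_LEN, NUM_KEYPOINTS), dtype=np.float32)

def predict_sign(new_frame_keypoints: np.ndarray) -> int:
    """Push one frame of keypoints into the window and classify the sequence."""
    global window
    window = np.roll(window, -1, axis=0)   # drop oldest frame
    window[-1] = new_frame_keypoints       # append newest frame
    probs = model.predict(window[np.newaxis, ...], verbose=0)[0]
    return int(np.argmax(probs))           # index of the predicted PSL sign
```

A sliding window like this lets the system emit a prediction on every incoming frame once the buffer fills, rather than waiting for explicitly segmented gestures; the window length trades latency against how much of a sign's motion the model sees.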
License
Copyright (c) 2022 Institute of Advanced Engineering and Science

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.