Pakistan sign language to Urdu translator using Kinect

Authors

Saad Ahmed, Hasnain Shafiq, Yamna Raheel, Noor Chishti, Syed Muhammad Asad

DOI:

https://doi.org/10.11591/csit.v3i3.pp186-193

Keywords:

Kinect, Long short-term memory, Model garden, Object detection, OpenCV, Sign language translator

Abstract

The lack of a standardized sign language and the inability to communicate with the hearing community through sign language are the two major issues confronting Pakistan's deaf community. In this research, we have proposed an approach to help eradicate one of these issues: using the proposed framework, the deaf community can communicate with hearing people. The purpose of this work is to reduce the struggles of hearing-impaired people in Pakistan. A Kinect-based Pakistan sign language (PSL) to Urdu language translator is developed to accomplish this. The system’s dynamic sign language segment works in three phases: acquiring key points from the dataset, training a long short-term memory (LSTM) model, and making real-time predictions on keypoint sequences through OpenCV integrated with the Kinect device. The system’s static sign language segment also works in three phases: acquiring an image-based dataset, training an object-detection model with Model Garden, and making real-time predictions using OpenCV integrated with the Kinect device. The system also allows the hearing user to input Urdu audio through the Kinect microphone. The proposed sign language translator can detect and predict the PSL signs performed in front of the Kinect device and produce translations in Urdu.
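The real-time phase of the dynamic segment described above (buffering keypoint frames into fixed-length sequences and emitting a label only when the model is confident) can be sketched as follows. This is a minimal illustration, not the authors' code: the sequence length, keypoint count, confidence threshold, label names, and the `predict_stub` classifier are all placeholder assumptions standing in for the trained LSTM and the Kinect/OpenCV capture loop.

```python
from collections import deque
import numpy as np

SEQUENCE_LENGTH = 30   # frames per sequence (illustrative value)
NUM_KEYPOINTS = 63     # e.g. 21 hand landmarks x (x, y, z); an assumption
CONF_THRESHOLD = 0.8   # only emit a translation when confidence is high

LABELS = ["salaam", "shukriya", "khuda_hafiz"]  # placeholder PSL glosses

def predict_stub(sequence):
    """Stand-in for the trained LSTM: returns one probability per label.

    The real system would call model.predict() on the (30, 63) sequence;
    here a fixed rule is used so the sketch runs without a trained model.
    """
    mean = float(np.mean(sequence))
    probs = np.zeros(len(LABELS))
    probs[int(mean * 10) % len(LABELS)] = 1.0
    return probs

def run_translator(frames):
    """Slide a fixed-length window over incoming keypoint frames and
    collect a label whenever the (stub) model is confident enough."""
    window = deque(maxlen=SEQUENCE_LENGTH)
    outputs = []
    for keypoints in frames:          # in practice: one Kinect frame each
        window.append(keypoints)
        if len(window) == SEQUENCE_LENGTH:
            probs = predict_stub(np.stack(window))
            best = int(np.argmax(probs))
            if probs[best] >= CONF_THRESHOLD:
                outputs.append(LABELS[best])
    return outputs

# Feed 40 synthetic all-zero frames; the window first fills at frame 30.
frames = [np.zeros(NUM_KEYPOINTS) for _ in range(40)]
print(run_translator(frames))
```

In the described system, `frames` would come from OpenCV reading the Kinect stream and a keypoint extractor, and the emitted gloss would be mapped to its Urdu translation.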

Published

2022-11-01

How to Cite

[1]
Saad Ahmed, Hasnain Shafiq, Yamna Raheel, Noor Chishti, and Syed Muhammad Asad, “Pakistan sign language to Urdu translator using Kinect”, Comput Sci Inf Technol, vol. 3, no. 3, pp. 186–193, Nov. 2022.

Issue

Section

Articles
