Open Access
ARTICLE
Sign Language to Sentence Formation: A Real Time Solution for Deaf People
1 Bahauddin Zakariya University, Department of Computer Science, Multan, 60000, Pakistan
2 Air University, Department of Computer Science, Multan, 60000, Pakistan
3 Air University, Department of Computer Science, Islamabad, 44000, Pakistan
4 Centre for Research in Data Science, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia
5 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, 05006, Korea
* Corresponding Author: Muhammad Sanaullah. Email:
Computers, Materials & Continua 2022, 72(2), 2501-2519. https://doi.org/10.32604/cmc.2022.021990
Received 23 July 2021; Accepted 29 September 2021; Issue published 29 March 2022
Abstract
Communication is a basic need of every human being for exchanging thoughts and interacting with society. Hearing people usually converse through spoken languages, whereas deaf people cannot do so. Sign Language (SL) is therefore the medium through which such people converse and interact with society. In SL, every word is expressed by a specific gesture, and a gesture consists of a sequence of performed signs. Hearing people normally observe these signs to distinguish single gestures from multiple gestures, which correspond to singular and plural words respectively. The signs for singular words such as I, eat, drink, and home differ from those for plural words such as schools, cars, and players. Special training is required to gain sufficient knowledge and practice so that every gesture/sign can be differentiated and understood appropriately. Numerous studies have proposed computer-based solutions for recognizing a single gesture performed with a single hand. A complete understanding of such communication, however, is possible only when a computer-based SL solution can differentiate between these gestures in a real-world environment. Hence, there is still a demand for a system that automates this communication and enables interaction with these special people. This research focuses on facilitating the deaf community by capturing gestures in video format, mapping and differentiating them as single or multiple gestures, and finally converting them into the corresponding words/sentences within a reasonable time. This provides a real-time solution for deaf people to communicate and interact with society.
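To illustrate the pipeline summarized above (video capture, gesture windowing, word recognition, sentence assembly), the following is a minimal sketch only, not the authors' implementation. The classify_gesture function, the fixed window_size, and the input file signs.mp4 are placeholders assumed for illustration; the paper's actual recognition model and segmentation logic are described in the article body.

```python
# Minimal sketch (assumptions noted): capture video frames, group them into
# fixed-length gesture windows, pass each window to a placeholder classifier,
# and join the recognized words into a sentence.
import cv2


def classify_gesture(frames):
    """Hypothetical classifier: maps a window of frames to a word or None.

    A real system would run a trained sign-recognition model here.
    """
    return None


def video_to_sentence(source=0, window_size=30):
    cap = cv2.VideoCapture(source)  # webcam index (0) or a video file path
    frames, words = [], []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        if len(frames) == window_size:  # fixed-length window is an assumption
            word = classify_gesture(frames)
            if word:
                words.append(word)
            frames.clear()
    cap.release()
    return " ".join(words)


if __name__ == "__main__":
    print(video_to_sentence("signs.mp4"))  # hypothetical input video
```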
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.