Open Access

ARTICLE


Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People

Mrim M. Alnfiai*

Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif, 21944, Saudi Arabia

* Corresponding Author: Mrim M. Alnfiai.

Intelligent Automation & Soft Computing 2023, 36(2), 1653-1669. https://doi.org/10.32604/iasc.2023.033577

Abstract

Sign language is mainly used to communicate with people who have hearing disabilities, as well as with people who have developmental impairments and limited or no verbal interaction skills. Communication via sign language thus becomes a fruitful means of interaction for hearing- and speech-impaired persons. A hand gesture recognition system, built on a human-computer interface (HCI) and convolutional neural networks (CNNs), can help such users by identifying the static signs of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model for hearing and speaking impaired people. The presented SSODL-ASLR technique concentrates mainly on the recognition and classification of sign language produced by deaf and mute people. The SSODL-ASLR model encompasses a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region-based Convolutional Neural Network (Mask RCNN) model is exploited for sign language detection. In the second stage, the SSO algorithm with a soft-margin support vector machine (SM-SVM) model is utilized for sign language classification. To assess the enhanced classification performance of the SSODL-ASLR model, a brief set of simulations was carried out. The extensive results portrayed the supremacy of the SSODL-ASLR model over other techniques.
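To make the two-stage pipeline concrete, the following is a minimal, hypothetical Python sketch of how such a system could be wired together: a pretrained Mask R-CNN from torchvision proposes hand/sign regions, and a soft-margin SVM (scikit-learn's SVC) classifies features extracted from those regions, with its C and gamma hyperparameters chosen by a search loop standing in for the shark smell optimization (SSO) step. All function names, thresholds, and the random-search stand-in are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the SSODL-ASLR two-stage pipeline described above:
# Stage 1 detects sign/hand regions with Mask R-CNN; Stage 2 classifies them
# with an SSO-tuned soft-margin SVM. Names and parameters are assumptions.
import numpy as np
import torch
import torchvision
from sklearn.svm import SVC

# Stage 1: sign (hand) region detection with a pretrained Mask R-CNN.
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_sign_regions(image_tensor, score_threshold=0.7):
    """Return bounding boxes of detected regions above a confidence threshold."""
    with torch.no_grad():
        outputs = detector([image_tensor])[0]
    keep = outputs["scores"] > score_threshold
    return outputs["boxes"][keep]

# Stage 2: soft-margin SVM classifier whose hyperparameters (C, gamma) would be
# selected by SSO in the paper. A faithful SSO loop is omitted here; a simple
# random search stands in to show where the optimizer would plug in.
def tune_soft_margin_svm(features, labels, n_candidates=20):
    rng = np.random.default_rng(0)
    best_score, best_clf = -np.inf, None
    for _ in range(n_candidates):
        C = 10 ** rng.uniform(-2, 3)      # candidate soft-margin penalty
        gamma = 10 ** rng.uniform(-4, 1)  # candidate RBF kernel width
        clf = SVC(C=C, gamma=gamma, kernel="rbf")
        clf.fit(features[::2], labels[::2])               # toy train split
        score = clf.score(features[1::2], labels[1::2])   # toy validation split
        if score > best_score:
            best_score, best_clf = score, clf
    return best_clf
```

In this sketch the detector and classifier are decoupled, mirroring the abstract's detection-then-classification design; any feature extractor (e.g., pooled CNN features from the detected crops) could feed the SVM stage.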

Keywords


Cite This Article

APA Style
Alnfiai, M.M. (2023). Deep learning-based sign language recognition for hearing and speaking impaired people. Intelligent Automation & Soft Computing, 36(2), 1653-1669. https://doi.org/10.32604/iasc.2023.033577
Vancouver Style
Alnfiai MM. Deep learning-based sign language recognition for hearing and speaking impaired people. Intell Automat Soft Comput. 2023;36(2):1653-1669. https://doi.org/10.32604/iasc.2023.033577
IEEE Style
M.M. Alnfiai, “Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People,” Intell. Automat. Soft Comput., vol. 36, no. 2, pp. 1653-1669, 2023. https://doi.org/10.32604/iasc.2023.033577



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.