Open Access

ARTICLE


A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning

Khalid M. O. Nahar1, Ammar Almomani2,3,*, Nahlah Shatnawi1, Mohammad Alauthman4

1 Department of Computer Sciences, Faculty of Information Technology and Computer Sciences, Yarmouk University–Irbid, 21163, Jordan
2 School of Computing, Skyline University College, Sharjah, P. O. Box 1797, United Arab Emirates
3 IT Department, Al-Huson University College, Al-Balqa Applied University, P. O. Box 50, Irbid, Jordan
4 Department of Information Security, Faculty of Information Technology, University of Petra, Amman, Jordan

* Corresponding Author: Ammar Almomani. Email: email

Intelligent Automation & Soft Computing 2023, 37(2), 2037-2057. https://doi.org/10.32604/iasc.2023.038235

Abstract

This study presents a novel approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution uses deep learning-based classification and transfer learning to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as the machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the field of ATSL and offers a practical solution for improving communication for individuals with special needs, such as the deaf and mute community. The work demonstrates the potential of deep learning techniques for translating sign language into natural language and highlights the importance of ATSL in facilitating communication for individuals with disabilities.
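To make the transfer-learning idea in the abstract concrete, the sketch below shows how one pretrained image recognition backbone could be retrained as a sign-letter classifier. This is not the authors' code: the backbone choice (MobileNetV2), the class count (28 letter signs), and the dataset directory layout are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the published implementation): retraining one
# pretrained ImageNet backbone to classify Arabic sign-language letter images.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 28          # assumption: one class per Arabic letter sign
IMG_SIZE = (224, 224)

# Assumed directory layout: data/train/<letter_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# Pretrained backbone, frozen so only the new classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels to [-1, 1] for MobileNetV2
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Repeating this recipe across several backbones (e.g., swapping MobileNetV2 for other ImageNet models) is one plausible way to obtain the 12 retrained recognition models compared in the paper; the specific architectures and training settings used by the authors are given in the full text.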

Keywords


Cite This Article

APA Style
Nahar, K. M. O., Almomani, A., Shatnawi, N., & Alauthman, M. (2023). A robust model for translating Arabic sign language into spoken Arabic using deep learning. Intelligent Automation & Soft Computing, 37(2), 2037-2057. https://doi.org/10.32604/iasc.2023.038235
Vancouver Style
Nahar KMO, Almomani A, Shatnawi N, Alauthman M. A robust model for translating Arabic sign language into spoken Arabic using deep learning. Intell Automat Soft Comput. 2023;37(2):2037-2057. https://doi.org/10.32604/iasc.2023.038235
IEEE Style
K. M. O. Nahar, A. Almomani, N. Shatnawi, and M. Alauthman, "A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning," Intell. Automat. Soft Comput., vol. 37, no. 2, pp. 2037-2057, 2023. https://doi.org/10.32604/iasc.2023.038235



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.