Open Access

ARTICLE

A Light-Weight Deep Learning-Based Architecture for Sign Language Classification

by M. Daniel Nareshkumar1,*, B. Jaison2

1 Department of Electronics and Communication Engineering, R.M.K. Engineering College, Kavaraipettai, 601206, India
2 Department of Computer Science and Engineering, R.M.K. Engineering College, Kavaraipettai, 601206, India

* Corresponding Author: M. Daniel Nareshkumar. Email: email

Intelligent Automation & Soft Computing 2023, 35(3), 3501-3515. https://doi.org/10.32604/iasc.2023.027848

Abstract

With advancements in computing power and the overall quality of images captured by everyday cameras, a much wider range of possibilities has opened up in various scenarios. This has several implications for deaf and hard-of-hearing people, as they have a chance to communicate with a greater number of people much more easily. More than ever before, there is a wealth of information about sign language usage in the real world. Sign languages, and by extension the datasets available, take two forms: isolated sign language and continuous sign language. The main difference between the two is that in isolated sign language, the hand signs cover individual letters of the alphabet, whereas in continuous sign language, hand signs represent entire words. This paper explores a novel deep learning architecture that uses recently published large pre-trained image models to quickly and accurately recognize the letters of the American Sign Language (ASL) alphabet. The study focuses on isolated sign language to demonstrate that a high level of classification accuracy can be achieved on the data, thereby showing that interpreters can be implemented in the real world. The MobileNetV2 architecture serves as the backbone of this study; it is designed to run on end devices such as mobile phones and to infer signs from images in a relatively short amount of time. With the architecture proposed in this paper, a classification accuracy of 98.77% is achieved on Indian Sign Language (ISL) and American Sign Language (ASL), outperforming existing state-of-the-art systems.
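As a rough illustration of the approach the abstract describes, the sketch below builds a MobileNetV2-backed classifier for the 26 letters of the ASL alphabet using Keras. This is a minimal sketch, not the authors' implementation: the input size, the classification head, and the class count are assumptions, and `weights=None` is used here only to avoid downloading pre-trained weights (the paper relies on a pre-trained backbone, i.e., `weights="imagenet"`).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumption: one class per letter of the ASL alphabet.
NUM_CLASSES = 26

def build_classifier(num_classes=NUM_CLASSES):
    # MobileNetV2 backbone without its ImageNet classification head.
    # Use weights="imagenet" for the pre-trained model described in the paper.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        include_top=False,
        weights=None,
    )
    base.trainable = False  # freeze the backbone for transfer learning

    # Lightweight classification head on top of the frozen features.
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier()
```

Freezing the backbone and training only the small head is what keeps such a model cheap enough to fine-tune and run on end devices like mobile phones.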

Cite This Article

APA Style
Daniel Nareshkumar, M., & Jaison, B. (2023). A light-weight deep learning-based architecture for sign language classification. Intelligent Automation & Soft Computing, 35(3), 3501-3515. https://doi.org/10.32604/iasc.2023.027848
Vancouver Style
Daniel Nareshkumar M, Jaison B. A light-weight deep learning-based architecture for sign language classification. Intell Automat Soft Comput. 2023;35(3):3501-3515. https://doi.org/10.32604/iasc.2023.027848
IEEE Style
M. Daniel Nareshkumar and B. Jaison, “A Light-Weight Deep Learning-Based Architecture for Sign Language Classification,” Intell. Automat. Soft Comput., vol. 35, no. 3, pp. 3501-3515, 2023. https://doi.org/10.32604/iasc.2023.027848



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.