Open Access

ARTICLE

Rotation, Translation and Scale Invariant Sign Word Recognition Using Deep Learning

Abu Saleh Musa Miah1, Jungpil Shin1,*, Md. Al Mehedi Hasan1, Md Abdur Rahim2, Yuichi Okuyama1

1 School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Fukushima, 965-8580, Japan
2 Department of Computer Science and Engineering, Pabna University of Science and Technology, Pabna, Bangladesh

* Corresponding Author: Jungpil Shin. Email: email

Computer Systems Science and Engineering 2023, 44(3), 2521-2536. https://doi.org/10.32604/csse.2023.029336

Abstract

Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is communication through hand gestures, so hand gesture recognition has become an important challenge in sign language recognition. Many existing models achieve good accuracy, but their performance may degrade when they are tested with rotated or translated images. To address these challenges, we propose a Rotation, Translation and Scale-invariant sign word recognition system using a convolutional neural network (CNN). Our work follows three steps: rotated, translated and scaled (RTS) dataset generation, gesture segmentation, and sign word classification. First, we enlarged a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS version of the dataset. We then applied a three-level gesture segmentation technique: i) Otsu thresholding in the YCbCr color space, ii) morphological analysis (dilation following opening), and iii) the watershed algorithm. Finally, our CNN model was trained to classify the segmented hand gestures into sign words. The model was evaluated on the twenty sign word dataset, a five sign word dataset, and the RTS versions of both. We achieved 99.30% accuracy on the twenty sign word dataset, 99.10% on its RTS version, 100% on the five sign word dataset, and 98.00% on its RTS version. Furthermore, our model achieves results competitive with state-of-the-art methods in sign word recognition.
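Two of the building blocks described above — Otsu thresholding and translation augmentation — can be sketched in a few lines of NumPy. This is not the authors' code: the function names are hypothetical, the Otsu step is shown on a plain grayscale histogram rather than the YCbCr skin channel the paper uses, and a practical implementation would likely rely on OpenCV (`cv2.threshold` with `THRESH_OTSU`, `cv2.warpAffine`, `cv2.watershed`) for the full pipeline.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image:
    the 0-255 level that maximizes the between-class variance
    of the two classes it induces on the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability up to level t
    mu = np.cumsum(prob * np.arange(256))      # cumulative mean up to level t
    mu_total = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                 # ignore degenerate splits
    sigma_b2 = (mu_total * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

def translate(img, dx, dy, fill=0):
    """Shift a 2-D image by (dx, dy) pixels, padding the exposed
    border with `fill` -- one simple way to build translated copies
    for an RTS-style augmented dataset."""
    out = np.full_like(img, fill)
    h, w = img.shape[:2]
    dst_x = slice(max(dx, 0), min(w + dx, w))
    dst_y = slice(max(dy, 0), min(h + dy, h))
    src_x = slice(max(-dx, 0), min(w - dx, w))
    src_y = slice(max(-dy, 0), min(h - dy, h))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out

# Toy example: a bimodal image (dark left half, bright right half).
img = np.zeros((10, 10), dtype=np.uint8)
img[:, 5:] = 200
t = otsu_threshold(img)
mask = img > t          # binary hand/background mask, as in step i)
shifted = translate(img, 2, 0)
```

On this toy image any threshold below 200 separates the two modes, so `mask` recovers the bright region exactly; in the paper's pipeline the analogous mask would then be cleaned by opening/dilation and refined with the watershed algorithm before being fed to the CNN.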


Cite This Article

APA Style
Miah, A.S.M., Shin, J., Hasan, M.A.M., Rahim, M.A., Okuyama, Y. (2023). Rotation, translation and scale invariant sign word recognition using deep learning. Computer Systems Science and Engineering, 44(3), 2521-2536. https://doi.org/10.32604/csse.2023.029336
Vancouver Style
Miah ASM, Shin J, Hasan MAM, Rahim MA, Okuyama Y. Rotation, translation and scale invariant sign word recognition using deep learning. Comput Syst Sci Eng. 2023;44(3):2521-2536. https://doi.org/10.32604/csse.2023.029336
IEEE Style
A.S.M. Miah, J. Shin, M.A.M. Hasan, M.A. Rahim, and Y. Okuyama, “Rotation, Translation and Scale Invariant Sign Word Recognition Using Deep Learning,” Comput. Syst. Sci. Eng., vol. 44, no. 3, pp. 2521-2536, 2023. https://doi.org/10.32604/csse.2023.029336



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.