Search Results (20)
  • Open Access

    REVIEW

    A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence

    Xianwei Jiang1, Yanqiong Zhang1,*, Juan Lei1, Yudong Zhang2,3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.1, pp. 1-40, 2024, DOI:10.32604/cmes.2024.047649

    Abstract Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Network (CapsNet) and various deep neural networks have sprung up. Deep Neural…
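Of the traditional methods this survey names, Dynamic Time Warping is the simplest to sketch. Below is a minimal, illustrative DTW distance on 1-D feature sequences (the function name is ours, not from the survey); real CSLR pipelines apply the same recurrence to multi-dimensional hand-trajectory features.

```python
# Minimal dynamic time warping (DTW) distance between two 1-D gesture
# feature sequences, via the classic O(n*m) recurrence.

def dtw_distance(a, b):
    """Return the DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical sequences align at zero cost; a time-stretched copy stays cheap.
print(dtw_distance([1, 2, 3], [1, 2, 3]))     # 0.0
print(dtw_distance([1, 2, 3], [1, 1, 2, 3]))  # 0.0
```

This elasticity to timing differences is why DTW suited early sign recognition, where the same sign is performed at varying speeds.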

  • Open Access

    ARTICLE

    Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification

    Jungpil Shin1,*, Md. Al Mehedi Hasan2, Abu Saleh Musa Miah1, Kota Suzuki1, Koki Hirooka1

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2605-2625, 2024, DOI:10.32604/cmes.2023.046334

    Abstract Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body…
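The two-stream fusion idea can be illustrated with a toy sketch: a handcrafted skeleton stream (here, pairwise joint distances) concatenated with a stand-in deep-feature vector before classification. All names and numbers below are illustrative assumptions, not the authors' actual pipeline.

```python
# Toy sketch of skeleton-handcrafted + deep-feature fusion.
import math

def joint_distances(joints):
    """Handcrafted stream: pairwise Euclidean distances between 2-D joints."""
    feats = []
    for i in range(len(joints)):
        for j in range(i + 1, len(joints)):
            dx = joints[i][0] - joints[j][0]
            dy = joints[i][1] - joints[j][1]
            feats.append(math.hypot(dx, dy))
    return feats

def fuse(handcrafted, deep):
    """Late fusion by simple concatenation into one feature vector."""
    return list(handcrafted) + list(deep)

joints = [(0.0, 0.0), (3.0, 4.0), (0.0, 4.0)]  # toy hand skeleton
deep_feats = [0.12, 0.87, 0.05]                # stand-in CNN embedding
fused = fuse(joint_distances(joints), deep_feats)
print(len(fused))  # 3 pairwise distances + 3 deep features = 6
```

The fused vector would then be fed to a conventional machine-learning classifier such as an SVM, as the title describes.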

  • Open Access

    REVIEW

    Recent Advances on Deep Learning for Sign Language Recognition

    Yanqiong Zhang, Xianwei Jiang*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2399-2450, 2024, DOI:10.32604/cmes.2023.045731

    Abstract Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign…

  • Open Access

    ARTICLE

    Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare

    Khursheed Aurangzeb1, Khalid Javeed2, Musaed Alhussein1, Imad Rida3, Syed Irtaza Haider1, Anubha Parashar4,*

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 127-144, 2024, DOI:10.32604/cmc.2023.042886

    Abstract Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network…

  • Open Access

    ARTICLE

    Alphabet-Level Indian Sign Language Translation to Text Using Hybrid-AO Thresholding with CNN

    Seema Sabharwal1,2,*, Priti Singla1

    Intelligent Automation & Soft Computing, Vol.37, No.3, pp. 2567-2582, 2023, DOI:10.32604/iasc.2023.035497

    Abstract Sign language is used as a communication medium in the field of trade, defence, and in deaf-mute communities worldwide. Over the last few decades, research in the domain of translation of sign language has grown and become more challenging. This necessitates the development of a Sign Language Translation System (SLTS) to provide effective communication in different research domains. In this paper, a novel Hybrid Adaptive Gaussian Thresholding with Otsu Algorithm (Hybrid-AO) for image segmentation is proposed for the translation of alphabet-level Indian Sign Language (ISLTS) with a 5-layer Convolution Neural Network (CNN). The focus of this…
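The Otsu half of the Hybrid-AO segmentation step can be sketched in a few lines (the adaptive Gaussian half is omitted). This is a generic pure-Python Otsu threshold written for illustration, not the authors' code; in practice one would call OpenCV's `cv2.threshold(..., cv2.THRESH_OTSU)`.

```python
# Otsu's method: pick the grayscale threshold that maximizes the
# between-class variance of background vs. foreground pixels.

def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]        # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg    # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them.
pixels = [10] * 50 + [200] * 50
print(otsu_threshold(pixels))  # 10
```

Segmenting the hand from the background this way reduces what the downstream CNN has to learn, which is presumably why the paper pairs it with only a 5-layer network.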

  • Open Access

    ARTICLE

    A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning

    Khalid M. O. Nahar1, Ammar Almomani2,3,*, Nahlah Shatnawi1, Mohammad Alauthman4

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 2037-2057, 2023, DOI:10.32604/iasc.2023.038235

    Abstract This study presents a novel approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution utilizes a deep learning-based classification approach and the transfer learning technique to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as a machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the…
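The retraining recipe this abstract describes, keeping a pretrained backbone frozen and retraining only a small classifier head on the new sign classes, can be caricatured without any deep-learning framework. Everything below (the stand-in extractor, the nearest-centroid head, the toy "images" and labels) is hypothetical, chosen only to show the shape of the recipe.

```python
# Toy transfer learning: frozen feature extractor + retrained head.

def frozen_extractor(image):
    """Stand-in for a pretrained backbone: mean and max of pixel values."""
    return (sum(image) / len(image), max(image))

def train_head(examples):
    """Retrained head: a nearest-centroid classifier over extracted features."""
    by_label = {}
    for image, label in examples:
        by_label.setdefault(label, []).append(frozen_extractor(image))
    return {
        label: tuple(sum(dim) / len(dim) for dim in zip(*feats))
        for label, feats in by_label.items()
    }

def predict(centroids, image):
    f = frozen_extractor(image)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], f)))

train = [([0, 0, 1], "alif"), ([9, 8, 9], "ba")]  # toy labeled "images"
head = train_head(train)
print(predict(head, [8, 9, 9]))  # lands near the "ba" centroid
```

Real transfer learning replaces the stand-in extractor with a pretrained network (the paper retrains 12 such models) and the centroid head with a trained dense layer, but the division of labor is the same.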

  • Open Access

    ARTICLE

    An Efficient and Robust Hand Gesture Recognition System of Sign Language Employing Finetuned Inception-V3 and Efficientnet-B0 Network

    Adnan Hussain1, Sareer Ul Amin2, Muhammad Fayaz3, Sanghyun Seo4,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3509-3525, 2023, DOI:10.32604/csse.2023.037258

    Abstract Hand Gesture Recognition (HGR) is a promising research area with an extensive range of applications, such as surgery, video game techniques, and sign language translation, where sign language is a complicated structured form of hand gestures. The fundamental building blocks of structured expressions in sign language are the arrangement of the fingers, the orientation of the hand, and the hand’s position concerning the body. The importance of HGR has increased due to the increasing number of touchless applications and the rapid growth of the hearing-impaired population. Therefore, real-time HGR is one of the most effective…

  • Open Access

    ARTICLE

    Arabic Sign Language Gesture Classification Using Deer Hunting Optimization with Machine Learning Model

    Badriyya B. Al-onazi1, Mohamed K. Nour2, Hussain Alshahran3, Mohamed Ahmed Elfaki3, Mrim M. Alnfiai4, Radwa Marzouk5, Mahmoud Othman6, Mahir M. Sharif7, Abdelwahed Motwakel8,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3413-3429, 2023, DOI:10.32604/cmc.2023.035303

    Abstract Sign language includes the motion of the arms and hands to communicate with people with hearing disabilities. Several models have been available in the literature for sign language detection and classification for enhanced outcomes. But the latest advancements in computer vision enable us to perform signs/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognising and classifying sign language gestures. The presented ASLGC-DHOML model primarily pre-processes the input gesture images and generates feature…

  • Open Access

    ARTICLE

    Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People

    Mrim M. Alnfiai*

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1653-1669, 2023, DOI:10.32604/iasc.2023.033577

    Abstract Sign language is mainly utilized in communication with people who have hearing disabilities. Sign language is used to communicate with people having developmental impairments who have some or no interaction skills. Interaction via sign language becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system proves helpful for deaf and mute people by making use of a human-computer interface (HCI) and convolutional neural networks (CNN) to identify the static indications of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated…

  • Open Access

    ARTICLE

    A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition

    Sameena Javaid*, Safdar Rizvi

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 523-537, 2023, DOI:10.32604/cmc.2023.031924

    Abstract Sign language fills the communication gap for people with hearing and speaking ailments. It includes two visual modalities: manual gestures, consisting of hand movements, and non-manual gestures, incorporating movements of the head, facial expressions, eyes, and shoulder shrugging. Previously, the two gesture types have been detected separately, which may give better accuracy per type but loses much communicative information. A proper sign language mechanism is needed to detect manual and non-manual gestures together to convey the appropriate detailed message to others. Our novel proposed system contributes the Sign Language Action Transformer Network (SLATN), localizing hand, body, and facial…

Displaying results 1–10 of 20 (page 1 of 2).