Search Results (14)
  • Open Access

    ARTICLE

    Multiscale Feature Fusion for Gesture Recognition Using Commodity Millimeter-Wave Radar

    Lingsheng Li1, Weiqing Bai2, Chong Han2,*

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 1613-1640, 2024, DOI:10.32604/cmc.2024.056073 - 15 October 2024

    Abstract Gestures are one of the most natural and intuitive approaches for human-computer interaction. Compared with traditional camera-based or wearable sensor-based solutions, gesture recognition using millimeter-wave radar has attracted growing attention for being contact-free, privacy-preserving, and less environment-dependent. Although there have been many recent studies on hand gesture recognition, existing methods still have shortcomings in recognition accuracy and generalization ability in short-range applications. In this paper, we present a hand gesture recognition method named multiscale feature fusion (MSFF) to accurately identify micro hand gestures. In MSFF, not only the …

  • Open Access

    ARTICLE

    Virtual Keyboard: A Real-Time Hand Gesture Recognition-Based Character Input System Using LSTM and Mediapipe Holistic

    Bijon Mallik1, Md Abdur Rahim1, Abu Saleh Musa Miah2, Keun Soo Yun3,*, Jungpil Shin2

    Computer Systems Science and Engineering, Vol.48, No.2, pp. 555-570, 2024, DOI:10.32604/csse.2023.045981 - 19 March 2024

    Abstract In the digital age, non-touch communication technologies are reshaping human-device interactions and raising security concerns. A major challenge in current technology is the misinterpretation of gestures by sensors and cameras, often caused by environmental factors. This issue has spurred the need for advanced data processing methods to achieve more accurate gesture recognition and predictions. Our study presents a novel virtual keyboard allowing character input via distinct hand gestures, focusing on two key aspects: hand gesture recognition and character input mechanisms. We developed a novel model with LSTM and fully connected layers for enhanced sequential data…

  • Open Access

    ARTICLE

    Appearance Based Dynamic Hand Gesture Recognition Using 3D Separable Convolutional Neural Network

    Muhammad Rizwan1,*, Sana Ul Haq1,*, Noor Gul1,2, Muhammad Asif1, Syed Muslim Shah3, Tariqullah Jan4, Naveed Ahmad5

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 1213-1247, 2023, DOI:10.32604/cmc.2023.038211 - 08 June 2023

    Abstract Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI). Numerous environmental and computational constraints limit its real-time deployment. In addition, the performance of a model decreases as the subject’s distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN), considering the model’s computational complexity and recognition accuracy. The 20BN-Jester dataset was used to train the model for six gesture classes. After achieving the best offline recognition accuracy of 94.39%, the model was deployed in real-time while considering the subject’s attention, the instant of…

  • Open Access

    ARTICLE

    Mobile Communication Voice Enhancement Under Convolutional Neural Networks and the Internet of Things

    Jiajia Yu*

    Intelligent Automation & Soft Computing, Vol.37, No.1, pp. 777-797, 2023, DOI:10.32604/iasc.2023.037354 - 29 April 2023

    Abstract This study aims to reduce the interference of ambient noise in mobile communication, improve the accuracy and authenticity of information transmitted by sound, and guarantee the accuracy of voice information delivered by mobile communication. First, the principles and techniques of speech enhancement are analyzed, and a fast lateral recursive least square method (FLRLS method) is adopted to process sound data. Then, the convolutional neural networks (CNNs)-based noise recognition CNN (NR-CNN) algorithm and speech enhancement model are proposed. Finally, related experiments are designed to verify the performance of the proposed algorithm and model. The experimental results…

  • Open Access

    ARTICLE

    Automated Disabled People Fall Detection Using Cuckoo Search with Mobile Networks

    Mesfer Al Duhayyim*

    Intelligent Automation & Soft Computing, Vol.36, No.3, pp. 2473-2489, 2023, DOI:10.32604/iasc.2023.033585 - 15 March 2023

    Abstract Falls are the most common concern among older adults and disabled people who use scooters and wheelchairs. Early detection of falls among disabled persons is required to improve an individual’s chance of survival and to provide support whenever needed. In recent times, the arrival of the Internet of Things (IoT), smartphones, Artificial Intelligence (AI), wearables and so on has made it easy to design fall detection mechanisms for smart homecare. The current study develops an Automated Disabled People Fall Detection using Cuckoo Search Optimization with Mobile Networks (ADPFD-CSOMN) model. The proposed model’s major aim…

  • Open Access

    ARTICLE

    Impediments of Cognitive System Engineering in Machine-Human Modeling

    Fayaz Ahmad Fayaz1,2, Arun Malik2, Isha Batra2, Akber Abid Gardezi3, Syed Immamul Ansarullah4, Shafiq Ahmad5, Mejdal Alqahtani5, Muhammad Shafiq6,*

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 6689-6701, 2023, DOI:10.32604/cmc.2023.032998 - 28 December 2022

    Abstract A comprehensive understanding of human intelligence is still an ongoing process; that is, human and information security are not yet perfectly matched. By understanding cognitive processes, designers can design humanized cognitive information systems (CIS). The need for this research is justified because today’s business decision makers are faced with questions they cannot answer in a given amount of time without the use of cognitive information systems. The researchers aim to better strengthen cognitive information systems with more pronounced cognitive thresholds by demonstrating the resilience of cognitive resonant frequencies to reveal possible responses to improve the efficiency…

  • Open Access

    ARTICLE

    Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System

    Saiyed Umer1,*, Ranjeet Kumar Rout2, Shailendra Tiwari3, Ahmad Ali AlZubi4, Jazem Mutared Alanazi4, Kulakov Yurii5

    CMES-Computer Modeling in Engineering & Sciences, Vol.135, No.2, pp. 1165-1185, 2023, DOI:10.32604/cmes.2022.023312 - 27 October 2022

    Abstract A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., the extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision for the seven basic classes of facial expressions, i.e., fear, …

  • Open Access

    ARTICLE

    Empathic Responses of Behavioral-Synchronization in Human-Agent Interaction

    Sung Park1,*, Seongeon Park2, Mincheol Whang2

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3761-3784, 2022, DOI:10.32604/cmc.2022.023738 - 07 December 2021

    Abstract Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from the users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates the participant's behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive)…

  • Open Access

    ARTICLE

    Multi-View Multi-Modal Head-Gaze Estimation for Advanced Indoor User Interaction

    Jung-Hwa Kim1, Jin-Woo Jeong2,*

    CMC-Computers, Materials & Continua, Vol.70, No.3, pp. 5107-5132, 2022, DOI:10.32604/cmc.2022.021107 - 11 October 2021

    Abstract Gaze estimation is one of the most promising technologies for supporting indoor monitoring and interaction systems. However, previous gaze estimation techniques generally work only in a controlled laboratory environment because they require a number of high-resolution eye images. This makes them unsuitable for welfare and healthcare facilities with the following challenging characteristics: 1) users’ continuous movements, 2) various lighting conditions, and 3) a limited amount of available data. To address these issues, we introduce a multi-view multi-modal head-gaze estimation system that translates the user’s head orientation into the gaze direction. The proposed system captures the…

  • Open Access

    ARTICLE

    An Architecture Supporting Intelligent Mobile Healthcare Using Human-Computer Interaction HCI Principles

    Mesfer Alrizq1, Shauban Ali Solangi2, Abdullah Alghamdi1,*, Muhammad Ali Nizamani2, Muhammad Ali Memon2, Mohammed Hamdi1

    Computer Systems Science and Engineering, Vol.40, No.2, pp. 557-569, 2022, DOI:10.32604/csse.2022.018800 - 09 September 2021

    Abstract Recent advancements in the Internet of Things (IoT) and cloud computing have paved the way for mobile Healthcare (mHealthcare) services. A patient within the hospital is monitored by several devices. Moreover, upon leaving the hospital, the patient can be remotely monitored, either directly using body-wearable sensors or using a smartphone equipped with sensors, to track different user-health parameters. This raises potential challenges for intelligent monitoring of the patient's health. In this paper, an improved architecture for smart mHealthcare is proposed that is supported by HCI design principles. The HCI also provides the support for the…

Displaying results 1-10 of 14 (page 1).