Search Results (3)
  • Open Access

    ARTICLE

    Development of Voice Control Algorithm for Robotic Wheelchair Using NIN and LSTM Models

    Mohsen Bakouri1,2,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 2441-2456, 2022, DOI:10.32604/cmc.2022.025106 - 16 June 2022

    Abstract In this work, we developed and implemented a voice control algorithm to steer smart robotic wheelchairs (SRW) using a neural network technique. The technique uses a network-in-network (NIN) and long short-term memory (LSTM) structure integrated with a built-in voice recognition algorithm. An Android smartphone application was designed and configured with the proposed method. A Wi-Fi hotspot was used to connect the software and hardware components of the system in offline mode. To operate and guide the SRW, the proposed design employs five voice commands (yes, no, left, right, and stop) via…
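
    To make the architecture described above more concrete, the sketch below shows one way a NIN-style (1x1 convolution) front end could feed an LSTM classifier over five voice commands in Keras. The input shape, layer sizes, and command label set are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming log-mel spectrogram inputs of shape (time, mel bands, 1)
# and a five-command label set; hyperparameters are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COMMANDS = 5  # e.g. yes, no, left, right, stop (assumed label set)

def build_nin_lstm(input_shape=(98, 40, 1)):
    inputs = layers.Input(shape=input_shape)
    # NIN block: a spatial convolution followed by 1x1 "micro network" convolutions
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, (1, 1), activation="relu")(x)
    x = layers.Conv2D(32, (1, 1), activation="relu")(x)
    x = layers.MaxPooling2D((1, 2))(x)  # pool only along the frequency axis
    # Collapse frequency and channel axes so each time step becomes one feature vector
    x = layers.Reshape((input_shape[0], -1))(x)
    x = layers.LSTM(64)(x)  # temporal modelling of the command utterance
    outputs = layers.Dense(NUM_COMMANDS, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_nin_lstm()
model.summary()
```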

  • Open Access

    ARTICLE

    Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning

    Uğur Ayvaz1, Hüseyin Gürüler2, Faheem Khan3, Naveed Ahmed4, Taegkeun Whangbo3,*, Abdusalomov Akmalbek Bobomirzaevich3

    CMC-Computers, Materials & Continua, Vol.71, No.3, pp. 5511-5521, 2022, DOI:10.32604/cmc.2022.023278 - 14 January 2022

    Abstract Automatic speaker recognition (ASR) is a field of human-machine interaction in which feature extraction and feature matching methods are used to analyze and synthesize voice signals. One of the most commonly used feature extraction methods is Mel Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs process voice signals with high accuracy. MFCCs represent a sequence of voice-signal-specific features. This experimental analysis is proposed to distinguish Turkish speakers by extracting MFCCs from speech recordings. Since human perception of sound is not linear, after the…
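
    As a rough illustration of the MFCC-based pipeline this abstract describes, the sketch below extracts a mean MFCC vector per recording with librosa and fits a simple SVM speaker classifier. The file layout, label scheme, and choice of classifier are assumptions, not the authors' exact setup.

```python
# Minimal sketch, assuming WAV recordings labelled by speaker id; the SVM back end
# is an illustrative choice, not necessarily the classifier used in the paper.
import numpy as np
import librosa                 # audio loading and MFCC extraction
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=13):
    """Return the mean MFCC vector of one recording as a fixed-length feature."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)

def train_speaker_model(recordings):
    """recordings: list of (wav_path, speaker_id) pairs (hypothetical data layout)."""
    X = np.array([mfcc_features(path) for path, _ in recordings])
    y = np.array([speaker for _, speaker in recordings])
    return SVC(kernel="rbf", probability=True).fit(X, y)

# Usage (paths are placeholders; a real experiment needs many utterances per speaker):
# model = train_speaker_model([("spk01_utt01.wav", 0), ("spk02_utt01.wav", 1), ...])
# print(model.predict([mfcc_features("unknown_utt.wav")]))
```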

  • Open Access

    ARTICLE

    Dynamic Audio-Visual Biometric Fusion for Person Recognition

    Najlaa Hindi Alsaedi*, Emad Sami Jaha

    CMC-Computers, Materials & Continua, Vol.71, No.1, pp. 1283-1311, 2022, DOI:10.32604/cmc.2022.021608 - 03 November 2021

    Abstract Biometric recognition refers to the process of recognizing a person’s identity using physiological or behavioral modalities, such as face, voice, fingerprint, gait, etc. Such biometric modalities are mostly used in recognition tasks separately as in unimodal systems, or jointly with two or more as in multimodal systems. However, multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels. Despite this enhancement, in real-life applications some factors degrade multimodal systems’ performance, such as occlusion, face poses, and noise in voice data. In this…
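
    For readers unfamiliar with fusion levels, the sketch below shows one common option: weighted score-level fusion of normalized face and voice match scores. The scores, weight, and normalization are illustrative assumptions; the paper's actual fusion strategies and models may differ.

```python
# Minimal sketch of score-level audio-visual fusion, assuming each matcher outputs
# a similarity score per candidate identity; all numbers below are hypothetical.
import numpy as np

def min_max_normalise(scores):
    """Scale raw matcher scores to [0, 1] so the two modalities can be combined fairly."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-9)

def fuse_scores(face_scores, voice_scores, weight=0.6):
    """Weighted-sum fusion: `weight` on the face scores, (1 - weight) on the voice scores."""
    return (weight * min_max_normalise(face_scores)
            + (1 - weight) * min_max_normalise(voice_scores))

# Hypothetical similarity scores for three candidate identities
face_scores = [0.82, 0.40, 0.15]
voice_scores = [0.75, 0.55, 0.20]

fused = fuse_scores(face_scores, voice_scores)
print("fused scores:", fused, "-> predicted identity:", int(np.argmax(fused)))
```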
