Open Access

ARTICLE


A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors

by Palli Padmini1, C. Paramasivam1, G. Jyothish Lal2, Sadeen Alharbi3,*, Kaustav Bhowmick4

1 Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
2 Center for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
4 Department of Electronics and Communication Engineering, PES University, Bengaluru, India

* Corresponding Author: Sadeen Alharbi.

Computers, Materials & Continua 2022, 71(3), 4523-4554. https://doi.org/10.32604/cmc.2022.022857

Abstract

The present system experimentally demonstrates the synthesis of syllables and words from tongue maneuvers in multiple languages, captured by only four oral sensors. A prototype tooth model was used for the experimental demonstration of the system in the oral cavity. Based on the principle developed in a previous publication by the author(s), the proposed system has been implemented using the oral cavity features (tongue, teeth, and lips) alone, without the glottis and the larynx. The positions of the sensors in the proposed system were optimized based on articulatory (oral cavity) gestures estimated by simulating the mechanism of human speech. The system was tested on all letters of the English alphabet and on several words with sensor-based input, along with an experimental demonstration of the developed algorithm, in which limit switches, a potentiometer, and flex sensors emulated the tongue in an artificial oral cavity. The system produces the sounds of vowels, consonants, and words in English, along with the pronunciation of the meanings of their translations in four major Indian languages, all from oral cavity mapping. The experimental setup also caters to gender mapping of the voice. The sound produced by the hardware was validated by a perceptual test in which listeners verified the gender and the word of each speech sample, with ∼98% and ∼95% accuracy, respectively. Such a model may be useful for interpreting speech for people who are speech-disabled because of accidents, neurological disorders, spinal cord injury, or larynx disorders.
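The core idea described in the abstract, classifying discretized oral-sensor states (limit switches, a potentiometer, flex sensors) into sound labels via a lookup, can be sketched as below. This is a minimal illustrative sketch only: the sensor names, thresholds, and table entries are hypothetical assumptions, not the authors' actual gesture-to-phoneme mapping.

```python
# Hypothetical sketch of a sensor-state -> sound-label lookup.
# Thresholds and table entries are illustrative assumptions, not
# the mapping used in the paper's hardware.

def quantize_flex(value, low=300, high=700):
    """Bin a raw flex-sensor ADC reading into a coarse tongue shape."""
    if value < low:
        return "flat"
    if value < high:
        return "mid"
    return "curled"

# Hypothetical lookup:
# (tongue flex bin, tongue-tip limit switch, lip switch) -> sound label
GESTURE_TABLE = {
    ("flat",   0, 0): "a",
    ("mid",    1, 0): "t",
    ("curled", 0, 1): "o",
}

def classify_gesture(flex_raw, tip_switch, lip_switch):
    """Map one snapshot of sensor readings to a sound label."""
    key = (quantize_flex(flex_raw), tip_switch, lip_switch)
    return GESTURE_TABLE.get(key, "unknown")
```

In a real-time loop, each classified label would then be passed to a speech synthesizer to play the corresponding vowel or consonant sound.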

Keywords


Cite This Article

APA Style
Padmini, P., Paramasivam, C., Jyothish Lal, G., Alharbi, S., Bhowmick, K. (2022). A real-time oral cavity gesture based words synthesizer using sensors. Computers, Materials & Continua, 71(3), 4523-4554. https://doi.org/10.32604/cmc.2022.022857
Vancouver Style
Padmini P, Paramasivam C, Jyothish Lal G, Alharbi S, Bhowmick K. A real-time oral cavity gesture based words synthesizer using sensors. Comput Mater Contin. 2022;71(3):4523-4554. https://doi.org/10.32604/cmc.2022.022857
IEEE Style
P. Padmini, C. Paramasivam, G. Jyothish Lal, S. Alharbi, and K. Bhowmick, “A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors,” Comput. Mater. Contin., vol. 71, no. 3, pp. 4523-4554, 2022. https://doi.org/10.32604/cmc.2022.022857



Copyright © 2022 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.