Open Access
ARTICLE
Age-Based Automatic Voice Conversion Using Blood Relation for Voice Impaired
1 Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
2 Center for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
4 Department of Electronics and Communication Engineering, PES University, Bengaluru, India
* Corresponding Author: Sadeen Alharbi. Email:
Computers, Materials & Continua 2022, 70(2), 4027-4051. https://doi.org/10.32604/cmc.2022.020065
Received 07 May 2021; Accepted 16 July 2021; Issue published 27 September 2021
Abstract
This work presents a statistical method to translate human voices across age groups, based on commonalities in the voices of blood relations. The age-translated voices were naturalized by extracting blood-relation features such as pitch, duration, and energy using Mel Frequency Cepstral Coefficients (MFCC), for the social compatibility of the voice-impaired. The system was demonstrated using standard English and an Indian language. Voice samples for resynthesis were drawn from 12 families, with member ages ranging from 8 to 80 years. The voice-age translation, performed with the Pitch Synchronous Overlap and Add (PSOLA) approach by modulating the extracted voice features, was validated by a perception test. The translated and resynthesized voices were correlated using the Linde, Buzo, and Gray (LBG) and Kekre's Fast Codebook Generation (KFCG) algorithms. For translated voice targets, a strong correlation (θ > ~93% and θ > ~96%) was found with blood relatives, whereas a weak correlation (θ < ~78% and θ < ~80%) was found between different families and between different genders within the same family. The study further subcategorized the sampling and synthesis of the voices into similar- or dissimilar-gender groups, using a support vector machine (SVM) to choose between the available voice samples. Finally, accuracies of ~96%, ~93%, and ~94% were obtained in identifying the gender of the voice sample, the age-group samples, and the correlation between the original and converted voice samples, respectively. The results were close to natural voice-sample features and are envisaged to provide a near-natural voice for the speech-impaired.
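The codebook-based speaker correlation summarized above can be illustrated with a minimal sketch of the Linde-Buzo-Gray (LBG) splitting algorithm applied to MFCC-like feature frames. This is not the authors' implementation: the codebook size, perturbation factor, and the synthetic 13-dimensional "speaker" frames below are all illustrative assumptions, used only to show how a lower quantization distortion against a speaker's codebook indicates a stronger voice match.

```python
import numpy as np

def lbg_codebook(features, size=8, eps=0.01, iters=20):
    """Build a vector-quantization codebook with the LBG method:
    start from the global mean, repeatedly split each centroid into
    a perturbed pair, then refine with k-means-style updates."""
    codebook = features.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # Split every centroid into a +/- perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each feature vector to its nearest centroid
            d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Move each centroid to the mean of its assigned vectors
            for k in range(codebook.shape[0]):
                pts = features[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
    return codebook

def quantization_distortion(features, codebook):
    """Mean distance of each frame to its nearest codeword; a lower
    distortion against a speaker's codebook means a closer voice match."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Demo on synthetic 13-dimensional "MFCC-like" frames for two speakers
rng = np.random.default_rng(0)
frames_a = rng.normal(0.0, 1.0, size=(200, 13))  # stand-in for speaker A
frames_b = rng.normal(3.0, 1.0, size=(200, 13))  # stand-in for speaker B
cb_a = lbg_codebook(frames_a, size=8)

# Speaker A's own frames fit A's codebook far better than B's frames do
print(quantization_distortion(frames_a, cb_a) < quantization_distortion(frames_b, cb_a))
```

In the paper's setting, frames from a blood relative would yield a distortion (and hence correlation score) much closer to the target speaker's own codebook than frames from an unrelated family, which is the basis of the reported θ thresholds.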
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.