Open Access
ARTICLE
Microphone Array Speech Separation Algorithm Based on TC-ResNet
1 School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
2 Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, FI-90014, Finland
* Corresponding Author: Lin Zhou. Email:
Computers, Materials & Continua 2021, 69(2), 2705-2716. https://doi.org/10.32604/cmc.2021.017080
Received 20 January 2021; Accepted 19 April 2021; Issue published 21 July 2021
Abstract
Traditional separation methods have limited ability to handle speech separation in highly reverberant, low signal-to-noise ratio (SNR) environments and therefore achieve unsatisfactory results. In this study, a convolutional neural network combining temporal convolution and a residual network (TC-ResNet) is proposed to perform speech separation in complex acoustic environments. A simplified steered-response power phase transform, denoted GSRP-PHAT, is employed to reduce the computational cost. The extracted features are reshaped into a special tensor that serves as the system input, and temporal convolution is applied, which not only enlarges the receptive field of the convolutional layers but also significantly reduces the network's computational cost. Residual blocks are used to combine multiresolution features and to accelerate training. A modified ideal ratio mask is used as the training target. Simulation results demonstrate that the proposed microphone array speech separation algorithm based on TC-ResNet achieves better performance in terms of source-to-distortion ratio, source-to-interference ratio, and short-time objective intelligibility in low-SNR and highly reverberant environments, particularly in untrained situations, indicating that the proposed method generalizes well beyond its training conditions.
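The architecture outlined above lends itself to a compact implementation. The following is a minimal, hypothetical sketch of a TC-ResNet-style mask estimator in PyTorch, assuming the GSRP-PHAT features arrive as a frames-by-feature-dimension matrix; the class names, layer widths, kernel sizes, and mask dimension are illustrative choices, not the configuration reported in the paper.

```python
# Hypothetical sketch of a TC-ResNet-style separation network (PyTorch).
# Feature dimension, channel widths, and mask size are illustrative only.
import torch
import torch.nn as nn

class TemporalResBlock(nn.Module):
    """Residual block built from 1-D convolutions along the time axis only."""
    def __init__(self, in_ch, out_ch, kernel=9, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel, stride, padding=kernel // 2)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel, 1, padding=kernel // 2)
        self.bn2 = nn.BatchNorm1d(out_ch)
        # 1x1 shortcut when the channel count or stride changes
        self.shortcut = (nn.Identity() if in_ch == out_ch and stride == 1
                         else nn.Conv1d(in_ch, out_ch, 1, stride))
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.relu(y + self.shortcut(x))

class TCResNetSeparator(nn.Module):
    """Maps per-frame spatial features (e.g., GSRP-PHAT-style) to a T-F ratio mask."""
    def __init__(self, feat_dim=64, mask_bins=257):
        super().__init__()
        # Treat the feature dimension as channels so every convolution runs along time only.
        self.stem = nn.Conv1d(feat_dim, 64, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(
            TemporalResBlock(64, 64),
            TemporalResBlock(64, 128),
            TemporalResBlock(128, 128),
        )
        self.head = nn.Conv1d(128, mask_bins, kernel_size=1)

    def forward(self, feats):            # feats: (batch, frames, feat_dim)
        x = feats.transpose(1, 2)        # -> (batch, feat_dim, frames)
        x = self.blocks(self.stem(x))
        mask = torch.sigmoid(self.head(x))
        return mask.transpose(1, 2)      # (batch, frames, mask_bins), values in [0, 1]

# Usage: masks = TCResNetSeparator()(torch.randn(8, 100, 64))
```

Treating the feature dimension as channels is what makes every convolution purely temporal; this is the reshaping step the abstract credits with enlarging the receptive field while reducing computational cost.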
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.