Search Results (5)
  • Open Access

    ARTICLE

    Speech Separation Algorithm Using Gated Recurrent Network Based on Microphone Array

    Xiaoyan Zhao1,*, Lin Zhou2, Yue Xie1, Ying Tong1, Jingang Shi3

    Intelligent Automation & Soft Computing, Vol.36, No.3, pp. 3087-3100, 2023, DOI:10.32604/iasc.2023.030180 - 15 March 2023

    Abstract Speech separation is an active research topic that plays an important role in numerous applications, such as speaker recognition, hearing prostheses, and autonomous robots. Many algorithms have been put forward to improve separation performance. However, speech separation in reverberant, noisy environments is still a challenging task. To address this, a novel speech separation algorithm using a gated recurrent unit (GRU) network based on a microphone array is proposed in this paper. The main aims of the proposed algorithm are to improve separation performance and reduce computational cost. The proposed algorithm extracts the sub-band steered… More > (an illustrative GRU mask-estimation sketch follows the results list)

  • Open Access

    ARTICLE

    Speech Separation Methodology for Hearing Aid

    Joseph Sathiadhas Esra1,*, Y. Sukhi2

    Computer Systems Science and Engineering, Vol.44, No.2, pp. 1659-1678, 2023, DOI:10.32604/csse.2023.025969 - 15 June 2022

    Abstract Hearing aid (HA) design requires real-time speech enhancement: digital hearing aids should provide a high signal-to-noise ratio and gain improvement while eliminating feedback. In generic hearing aids, performance varies across frequencies and is non-uniform. Existing noise cancellation and speech separation methods reduce the voice magnitude in noisy environments, and existing noise suppression methods also attenuate the desired signal. Consequently, uniform sub-band analysis performs poorly where hearing aids are concerned. In this paper, a… More >

  • Open Access

    ARTICLE

    Binaural Speech Separation Algorithm Based on Deep Clustering

    Lin Zhou1,*, Kun Feng1, Tianyi Wang1, Yue Xu1, Jingang Shi2

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 527-537, 2021, DOI:10.32604/iasc.2021.018414 - 11 August 2021

    Abstract Neural networks (NN) and clustering are the two methods most commonly used for speech separation based on supervised learning. Recently, deep clustering methods have shown promising performance. In our study, considering that the spectrum of the sound source is correlated over time and that the spatial position of the sound source is stable over short periods, we combine spectral and spatial features for deep clustering. In this work, the logarithmic amplitude spectrum (LPS) and the interaural phase difference (IPD) of each time-frequency (TF) unit of the binaural speech signal are extracted as features. Then, these features of… More > (an illustrative deep-clustering sketch follows the results list)

  • Open Access

    ARTICLE

    Microphone Array Speech Separation Algorithm Based on TC-ResNet

    Lin Zhou1,*, Yue Xu1, Tianyi Wang1, Kun Feng1, Jingang Shi2

    CMC-Computers, Materials & Continua, Vol.69, No.2, pp. 2705-2716, 2021, DOI:10.32604/cmc.2021.017080 - 21 July 2021

    Abstract Traditional separation methods have limited ability to handle the speech separation problem in highly reverberant, low signal-to-noise ratio (SNR) environments, and thus achieve unsatisfactory results. In this study, a convolutional neural network with temporal convolution and a residual network (TC-ResNet) is proposed to realize speech separation in a complex acoustic environment. A simplified steered-response power phase transform, denoted GSRP-PHAT, is employed to reduce the computational cost. The extracted features are reshaped into a special tensor used as the system input and processed by temporal convolution, which not only enlarges the receptive field of the convolution layer More > (an illustrative temporal-convolution residual block follows the results list)

  • Open Access

    ARTICLE

    Binaural Speech Separation Algorithm Based on Long and Short Time Memory Networks

    Lin Zhou1,*, Siyuan Lu1, Qiuyue Zhong1, Ying Chen1,2, Yibin Tang3, Yan Zhou3

    CMC-Computers, Materials & Continua, Vol.63, No.3, pp. 1373-1386, 2020, DOI:10.32604/cmc.2020.010182 - 30 April 2020

    Abstract Speaker separation in complex acoustic environments is one of the challenging tasks in speech separation. In practice, speakers are very often stationary or moving slowly during normal communication. In this case, the spatial features of consecutive speech frames become highly correlated, which helps speaker separation by providing additional spatial information. To fully exploit this information, we design a separation system based on a Recurrent Neural Network (RNN) with long short-term memory (LSTM) that effectively learns the temporal dynamics of spatial features. In detail, an LSTM-based speaker separation algorithm is proposed to extract the… More > (an illustrative sketch of binaural spatial features feeding an LSTM follows the results list)

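The first result above (Zhao et al., Intelligent Automation & Soft Computing, 2023) describes a GRU network that maps microphone-array features to separation outputs. As a rough illustration only, here is a minimal PyTorch sketch of a GRU-based time-frequency mask estimator; the feature dimension, layer sizes, number of sources, and the sigmoid-mask target are assumptions for illustration, not the paper's configuration (whose sub-band steered-response front end is truncated in the abstract).

```python
import torch
import torch.nn as nn

class GRUMaskEstimator(nn.Module):
    """Illustrative sketch: a GRU that maps per-frame array features to
    per-source time-frequency masks. Sizes and the sigmoid-mask output
    are assumptions, not the published configuration."""
    def __init__(self, feat_dim=257, hidden=256, n_sources=2, n_freq=257):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
        self.mask = nn.Linear(hidden, n_sources * n_freq)
        self.n_sources, self.n_freq = n_sources, n_freq

    def forward(self, feats):                    # feats: (batch, frames, feat_dim)
        h, _ = self.gru(feats)                   # (batch, frames, hidden)
        m = torch.sigmoid(self.mask(h))          # (batch, frames, n_sources*n_freq)
        return m.view(feats.size(0), feats.size(1), self.n_sources, self.n_freq)

# toy usage: 4 utterances, 100 frames of 257-dimensional features
masks = GRUMaskEstimator()(torch.randn(4, 100, 257))
print(masks.shape)   # torch.Size([4, 100, 2, 257])
```

In a design of this kind, the recurrent layer carries temporal context across frames, and the estimated masks would typically be applied to a reference channel's spectrogram before inverse STFT.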
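For the deep-clustering entry (Zhou et al., Intelligent Automation & Soft Computing, 2021), the abstract combines spectral (LPS) and spatial (IPD) features per time-frequency unit. Below is a minimal sketch of the general deep-clustering recipe: a recurrent network produces a unit-norm embedding for every TF unit, and k-means on those embeddings assigns each unit to a source. The BLSTM sizes, embedding dimension, and the use of scikit-learn's KMeans at inference are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class DeepClusteringNet(nn.Module):
    """Sketch of deep clustering over stacked spectral (LPS) and spatial
    (IPD) features; all sizes are illustrative."""
    def __init__(self, feat_dim=2 * 257, n_freq=257, emb_dim=20):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, 300, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.embed = nn.Linear(2 * 300, n_freq * emb_dim)
        self.n_freq, self.emb_dim = n_freq, emb_dim

    def forward(self, x):                          # x: (batch, frames, feat_dim)
        h, _ = self.blstm(x)
        v = self.embed(h)                          # (batch, frames, n_freq*emb_dim)
        v = v.view(x.size(0), -1, self.emb_dim)    # (batch, frames*n_freq, emb_dim)
        return nn.functional.normalize(v, dim=-1)  # unit-norm TF-unit embeddings

# inference sketch: cluster the TF-unit embeddings into 2 sources with k-means
net = DeepClusteringNet()
emb = net(torch.randn(1, 50, 2 * 257))[0].detach().numpy()   # (50*257, 20)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)    # binary TF assignment
```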
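For the TC-ResNet entry (Zhou et al., CMC-Computers, Materials & Continua, 2021), the abstract describes temporal convolution with residual connections over reshaped feature tensors. The sketch below shows one residual block built from 1-D convolutions along the time axis; the channel count and kernel size are assumptions, and the paper's GSRP-PHAT front end is not reproduced here.

```python
import torch
import torch.nn as nn

class TemporalResBlock(nn.Module):
    """Sketch of a TC-ResNet-style block: 1-D convolution along the time
    axis with a residual connection. Channels and kernel size are
    illustrative, not the published configuration."""
    def __init__(self, channels=64, kernel=9):
        super().__init__()
        pad = kernel // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, frames)
        y = self.act(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.act(x + y)                 # residual connection

# toy usage: features reshaped so the convolution runs along the time axis
out = TemporalResBlock()(torch.randn(4, 64, 100))
print(out.shape)   # torch.Size([4, 64, 100])
```

Stacking such blocks widens the temporal receptive field while the residual paths keep the network easy to train, which is the property the abstract points to.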
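For the LSTM entry (Zhou et al., CMC-Computers, Materials & Continua, 2020), the key observation is that binaural spatial features are strongly correlated across consecutive frames, so an LSTM can model their temporal dynamics. The sketch below computes two common interaural features (IPD and ILD) from a binaural STFT pair and feeds them to an LSTM; the exact feature set and network configuration in the paper may differ, so treat this purely as an illustration.

```python
import torch
import torch.nn as nn

def binaural_spatial_features(left, right, n_fft=512, hop=256):
    """Per-frame interaural features from a binaural pair: cos/sin of the
    interaural phase difference (IPD) and the interaural level difference
    (ILD) per frequency bin. A common choice, assumed for illustration."""
    win = torch.hann_window(n_fft)
    L = torch.stft(left, n_fft, hop, window=win, return_complex=True)
    R = torch.stft(right, n_fft, hop, window=win, return_complex=True)
    ipd = torch.angle(L) - torch.angle(R)                        # (freq, frames)
    ild = 20 * torch.log10((L.abs() + 1e-8) / (R.abs() + 1e-8))  # (freq, frames)
    feats = torch.cat([torch.cos(ipd), torch.sin(ipd), ild], dim=0)
    return feats.T                                               # (frames, 3*freq)

# an LSTM then models the temporal dynamics of these spatial features
lstm = nn.LSTM(input_size=3 * 257, hidden_size=256, batch_first=True)
left, right = torch.randn(16000), torch.randn(16000)      # 1 s of toy audio at 16 kHz
x = binaural_spatial_features(left, right).unsqueeze(0)   # (1, frames, 3*257)
h, _ = lstm(x)                                             # temporal modeling of spatial cues
```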