Open Access

ARTICLE


Adversarial Examples Protect Your Privacy on Speech Enhancement System

by Mingyu Dong, Diqun Yan*, Rangding Wang

Department of Information Science and Engineering, Ningbo University, Zhejiang, 315000, China

* Corresponding Author: Diqun Yan.

Computer Systems Science and Engineering 2023, 46(1), 1-12. https://doi.org/10.32604/csse.2023.034568

Abstract

Speech is easily leaked imperceptibly: when people use their phones, the personal voice assistant is constantly listening and waiting to be activated, and private content in speech may be maliciously extracted through automatic speech recognition (ASR) by applications on the device. To ensure that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement has developed rapidly alongside deep neural networks (DNNs), but adversarial examples can cause DNNs to fail, and this vulnerability of DNNs can be exploited to protect the privacy of speech. In this work, we propose an adversarial method that degrades speech enhancement systems and thereby prevents the malicious extraction of private information from speech. Experimental results show that, after enhancement, the generated adversarial examples have most of the target speech content removed, or replaced with the content of a chosen target speech. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example reaches 89.0%, while for the targeted attack the WER between the enhanced adversarial example and the target example is as low as 33.75%. The adversarial perturbation induces a change in the enhanced output that is much larger than the perturbation itself: the ratio of the difference between the two enhanced examples to the adversarial perturbation exceeds 1.4430. We also investigate the transferability of the attack between different speech enhancement models. Its low transferability ensures that the content of the adversarial example is not damaged for friendly systems, so the useful information can still be extracted by an authorized ASR. This work can thus prevent the malicious extraction of speech content.
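The core idea summarized above can be illustrated with a minimal sketch. This is *not* the paper's exact algorithm: it uses a toy linear stand-in for a speech enhancement model, `f(x) = W @ x`, and a PGD-style gradient-ascent attack under an L-infinity budget `eps` (all names and values here are illustrative assumptions). The perturbation `delta` is optimized so that the enhanced output of the adversarial example diverges strongly from the enhanced output of the clean example, while `delta` itself stays small, mirroring the paper's observation that the change in the enhanced output can exceed the perturbation that caused it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
W = rng.standard_normal((n, n)) / np.sqrt(n)  # toy stand-in "enhancement" model
x = rng.standard_normal(n)                    # clean "speech" frame

eps = 0.05                       # L-infinity budget (imperceptibility bound)
alpha = 0.01                     # PGD step size
delta = rng.uniform(-eps, eps, n)  # random start inside the budget

for _ in range(50):
    # Loss to MAXIMIZE: ||f(x+delta) - f(x)||^2 = ||W @ delta||^2.
    # For this linear toy model the gradient is analytic: 2 W^T W delta.
    grad = 2.0 * W.T @ (W @ delta)
    delta = delta + alpha * np.sign(grad)  # signed gradient-ascent step
    delta = np.clip(delta, -eps, eps)      # project back into the L-inf ball

# Divergence between the two "enhanced" outputs vs. the perturbation size.
gap = np.linalg.norm(W @ (x + delta) - W @ x)
print(f"output gap {gap:.3f} vs perturbation norm {np.linalg.norm(delta):.3f}")
```

In a real setting `W` would be replaced by a trained DNN enhancement model and the analytic gradient by backpropagation, but the structure of the attack (maximize output divergence, project onto a small perturbation budget) is the same.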

Cite This Article

APA Style
Dong, M., Yan, D., & Wang, R. (2023). Adversarial examples protect your privacy on speech enhancement system. Computer Systems Science and Engineering, 46(1), 1-12. https://doi.org/10.32604/csse.2023.034568
Vancouver Style
Dong M, Yan D, Wang R. Adversarial examples protect your privacy on speech enhancement system. Comput Syst Sci Eng. 2023;46(1):1-12. https://doi.org/10.32604/csse.2023.034568
IEEE Style
M. Dong, D. Yan, and R. Wang, “Adversarial Examples Protect Your Privacy on Speech Enhancement System,” Comput. Syst. Sci. Eng., vol. 46, no. 1, pp. 1-12, 2023. https://doi.org/10.32604/csse.2023.034568



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.