Search Results (11)
  • Open Access

    ARTICLE

    Adversarial Examples Protect Your Privacy on Speech Enhancement System

    Mingyu Dong, Diqun Yan*, Rangding Wang

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 1-12, 2023, DOI:10.32604/csse.2023.034568

    Abstract Speech is easily leaked imperceptibly. When people use their phones, the personal voice assistant is constantly listening and waiting to be activated. Private content in speech may be maliciously extracted through automatic speech recognition (ASR) technology by some applications on phone devices. To guarantee that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement technology has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. Considering that the vulnerability of DNNs can be used to protect the privacy in speech, in this work, we…
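
A minimal sketch of the general idea behind this entry: add a small, bounded adversarial perturbation to a waveform so that a DNN-based speech enhancement model degrades the signal instead of denoising it, which in turn hinders downstream ASR of private content. The tiny enhancer and the PGD-style loop below are stand-in assumptions, not the paper's actual model or attack.

```python
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    """Stand-in 1-D convolutional denoiser (an assumption, not the paper's network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, 9, padding=4),
        )

    def forward(self, x):
        return self.net(x)

def protect(model, wave, eps=0.002, steps=10, alpha=5e-4):
    """PGD-style perturbation that pushes the enhanced output away from the
    clean waveform while staying within an imperceptible epsilon budget."""
    delta = torch.zeros_like(wave, requires_grad=True)
    for _ in range(steps):
        enhanced = model(wave + delta)
        loss = -nn.functional.mse_loss(enhanced, wave)  # minimize -MSE = maximize distortion
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # signed-gradient step
            delta.clamp_(-eps, eps)                     # keep the perturbation small
        delta.grad.zero_()
    return (wave + delta).detach()

model = TinyEnhancer().eval()
wave = torch.randn(1, 1, 16000)        # one second of dummy audio at 16 kHz
protected_wave = protect(model, wave)
```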

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are vulnerable to adversarial attacks. These attacks can quickly fool deep learning models, e.g., different convolutional neural networks (CNNs), used in various computer vision tasks from image classification to object detection. The adversarial examples are carefully designed by injecting a slight perturbation into the clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising…
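
A hedged sketch of the defense pattern this entry describes: restore a (possibly adversarial) image with a residual restoration network and clip the output to the valid pixel range before classification. The toy network below is only an assumption; the paper's CRU-Net is a full clipped residual U-Net.

```python
import torch
import torch.nn as nn

class TinyResidualDenoiser(nn.Module):
    """Toy residual restoration network; a stand-in for the paper's CRU-Net."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        restored = x - self.body(x)       # predict and subtract the perturbation
        return restored.clamp(0.0, 1.0)   # "clipped" output in the valid range

denoiser = TinyResidualDenoiser().eval()
x_adv = torch.rand(1, 3, 32, 32)          # stand-in adversarial image in [0, 1]
x_restored = denoiser(x_adv)              # passed to the classifier instead of x_adv
```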

  • Open Access

    ARTICLE

    Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

    Kazim Ali1,*, Adnan N. Qureshi1, Ahmad Alauddin Bin Arifin2, Muhammad Shahid Bhatti3, Abid Sohail3, Rohail Hassan4

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 2209-2224, 2022, DOI:10.32604/cmc.2022.020111

    Abstract Deep learning and computer vision are rapidly growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in different applications like image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite success in various applications, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally but only slightly changed or perturbed. These changes are imperceptible to humans but are misclassified by a model with high probability, severely affecting its performance or predictions.…

  • Open Access

    ARTICLE

    Restoration of Adversarial Examples Using Image Arithmetic Operations

    Kazim Ali*, Adnan N. Qureshi

    Intelligent Automation & Soft Computing, Vol.32, No.1, pp. 271-284, 2022, DOI:10.32604/iasc.2022.021296

    Abstract The current development of artificial intelligence is largely based on deep neural networks (DNNs). Especially in the computer vision field, DNNs now occur in everything from autonomous vehicles to safety control systems. The Convolutional Neural Network (CNN), based on DNNs, is mostly used in different computer vision applications, especially for image classification and object detection. A CNN model takes photos as input and, after training its parameters such as weights and biases, assigns each input a suitable class. CNNs are inspired by the visual cortex of the human brain and sometimes perform even better than the human visual system. However, recent research shows…
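
The abstract truncates before the method details, so the following is only a hedged illustration of restoring an adversarial image with simple arithmetic operations (a weighted average with a locally smoothed copy), not the paper's actual pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def restore_by_arithmetic(img, blend=0.5):
    """img: float array of shape (H, W, C) in [0, 1]."""
    blurred = uniform_filter(img, size=(3, 3, 1))        # 3x3 local mean per channel
    restored = blend * img + (1.0 - blend) * blurred     # weighted average
    return np.clip(restored, 0.0, 1.0)

img_adv = np.random.rand(32, 32, 3).astype(np.float32)  # stand-in adversarial image
img_restored = restore_by_arithmetic(img_adv)
```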

  • Open Access

    ARTICLE

    A Parametric Study of Arabic Text-Based CAPTCHA Difficulty for Humans

    Suliman A. Alsuhibany*, Hessah Abdulaziz Alhodathi

    Intelligent Automation & Soft Computing, Vol.31, No.1, pp. 523-537, 2022, DOI:10.32604/iasc.2022.019913

    Abstract The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been an interesting topic for several years. An Arabic CAPTCHA has recently been proposed to serve Arab users. Since there have been few scientific studies supporting a systematic design or tuning for users, this paper aims to analyze the Arabic text-based CAPTCHA at the parameter level by conducting an experimental study. Based on the results of this study, we propose an Arabic text-based CAPTCHA scheme with Fast Gradient Sign Method (FGSM) adversarial images. To evaluate the security of the proposed scheme, we ran four filter…
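
The abstract names the Fast Gradient Sign Method (FGSM), so here is a generic FGSM sketch against a toy recognizer. The model is a stand-in for whatever solver the CAPTCHA images are hardened against; it is not taken from the paper.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """One signed-gradient step that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # stay in the valid pixel range

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy character recognizer
x = torch.rand(1, 1, 28, 28)                                  # dummy glyph image
y = torch.tensor([3])                                         # its true label
x_adv = fgsm(model, x, y)
```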

  • Open Access

    ARTICLE

    Adversarial Examples Generation Algorithm through DCGAN

    Biying Deng1, Ziyong Ran1, Jixin Chen1, Desheng Zheng1,*, Qiao Yang2, Lulu Tian3

    Intelligent Automation & Soft Computing, Vol.30, No.3, pp. 889-898, 2021, DOI:10.32604/iasc.2021.019727

    Abstract In recent years, due to the popularization of deep learning technology, more and more attention has been paid to the security of deep neural networks. A wide variety of machine learning algorithms can attack neural networks and cause them to misclassify or misjudge target samples. However, previous attack algorithms rely on calculations against the corresponding model to generate unique adversarial examples, and cannot extract attack features or generate corresponding samples in batches. In this paper, Generative Adversarial Networks (GAN) are used to learn the distribution of adversarial examples generated by FGSM and establish a generation model,…
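
A hedged sketch of the batch-generation idea: train a generator so that a discriminator cannot tell its outputs apart from FGSM adversarial examples, after which new examples come from a single forward pass instead of per-sample gradient computation. The paper uses a DCGAN; the small fully connected networks, hyperparameters, and random stand-in data below are assumptions made only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),   # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# `fgsm_batch` stands for a batch of FGSM adversarial examples prepared
# beforehand; random data is used here only so the sketch runs.
fgsm_batch = torch.rand(16, 1, 28, 28) * 2 - 1
z = torch.randn(16, 64)

# Discriminator step: real = FGSM examples, fake = generator samples.
d_loss = bce(D(fgsm_batch), torch.ones(16, 1)) + bce(D(G(z).detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator.
g_loss = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```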

  • Open Access

    ARTICLE

    An Adversarial Network-based Multi-model Black-box Attack

    Bin Lin1, Jixin Chen2, Zhihong Zhang3, Yanlin Lai2, Xinlong Wu2, Lulu Tian4, Wangchi Cheng5,*

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 641-649, 2021, DOI:10.32604/iasc.2021.016818

    Abstract Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model to explore how to produce adversarial examples that can deceive multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GAN). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks in the training process. Experimental results on MNIST showed that our method can efficiently generate…
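
A hedged sketch of the multi-model ingredient described above: when training a generator of adversarial perturbations, sum the attack loss over several target classifiers so the resulting examples transfer across models. The toy classifiers, generator, and perturbation budget are assumptions, not the paper's networks.

```python
import torch
import torch.nn as nn

def make_classifier():
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

targets = [make_classifier().eval() for _ in range(3)]         # multiple target models
generator = nn.Sequential(nn.Linear(64, 28 * 28), nn.Tanh())   # noise -> perturbation
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

x = torch.rand(16, 1, 28, 28)        # clean batch (dummy data)
y = torch.randint(0, 10, (16,))      # true labels
z = torch.randn(16, 64)

delta = 0.05 * generator(z).view(-1, 1, 28, 28)   # bounded perturbation
x_adv = (x + delta).clamp(0.0, 1.0)

# Encourage misclassification by *every* target model (negative cross-entropy).
loss = sum(-nn.functional.cross_entropy(m(x_adv), y) for m in targets)
opt.zero_grad(); loss.backward(); opt.step()
```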

  • Open Access

    ARTICLE

    A Generation Method of Letter-Level Adversarial Samples

    Huixuan Xu1, Chunlai Du1, Yanhui Guo2,*, Zhijian Cui1, Haibo Bai1

    Journal on Artificial Intelligence, Vol.3, No.2, pp. 45-53, 2021, DOI:10.32604/jai.2021.016305

    Abstract In recent years, with the rapid development of natural language processing, the security issues related to it have attracted more and more attention. Character perturbation is a common security problem. By adding, deleting, or replacing several characters without attracting people's attention, it can completely change the classification judgment that the target program makes on an input and reduce the effectiveness of the classifier. Although current research has provided various methods of character perturbation attacks, the success rate of some methods is still not ideal. This paper mainly studies the sample generation of optimal perturbation characters and proposes a character-level…
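
A minimal illustration of the character-level perturbations the abstract describes (insert, delete, or replace a character). How the paper selects which positions and characters to perturb is behind the truncation, so positions here are chosen at random purely for illustration.

```python
import random
import string

def perturb(text: str, mode: str = "replace") -> str:
    """Apply one random character-level edit: insert, delete, or replace."""
    if not text:
        return text
    i = random.randrange(len(text))
    c = random.choice(string.ascii_lowercase)
    if mode == "insert":
        return text[:i] + c + text[i:]
    if mode == "delete":
        return text[:i] + text[i + 1:]
    return text[:i] + c + text[i + 1:]    # replace

print(perturb("this movie was wonderful", "replace"))
```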

  • Open Access

    ARTICLE

    Deep Learning Approach for COVID-19 Detection in Computed Tomography Images

    Mohamad Mahmoud Al Rahhal1, Yakoub Bazi2,*, Rami M. Jomaa3, Mansour Zuair2, Naif Al Ajlan2

    CMC-Computers, Materials & Continua, Vol.67, No.2, pp. 2093-2110, 2021, DOI:10.32604/cmc.2021.014956

    Abstract With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, the establishment of an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test that is currently used does not provide such high accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast COVID-19 screening test. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone structure of the proposed network,…
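
A hedged sketch of the backbone choice mentioned above: an EfficientNet-B0 with its classification head replaced for a two-class (COVID / non-COVID) CT task. The head replacement, optimizer, and dummy data are generic assumptions; the paper's full framework is behind the truncated abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=None)                          # EfficientNet-B0 backbone
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)   # two output classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

ct_batch = torch.rand(4, 3, 224, 224)       # dummy CT slices
labels = torch.randint(0, 2, (4,))          # 0 = non-COVID, 1 = COVID

logits = model(ct_batch)
loss = criterion(logits, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```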

  • Open Access

    ARTICLE

    A Survey on Adversarial Examples in Deep Learning

    Kai Chen1,*, Haoqi Zhu2, Leiming Yan1, Jinwei Wang1

    Journal on Big Data, Vol.2, No.2, pp. 71-84, 2020, DOI:10.32604/jbd.2020.012294

    Abstract Adversarial examples are a hot topic in the field of security in deep learning. The features, generation methods, and attack and defense methods of adversarial examples are the focus of current research. This article explains the key technologies and theories of adversarial examples, from the concept of adversarial examples and how they arise to the methods of attacking with them. It also lists the possible reasons for adversarial examples and analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Iterative Least-likely…
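
As one concrete instance of the generation methods the survey lists, here is a generic sketch of the Basic Iterative Method (BIM): repeated small FGSM steps projected back onto an epsilon ball around the clean input. The toy model is a stand-in and is not taken from the survey.

```python
import torch
import torch.nn as nn

def bim(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Iterative FGSM with projection onto the eps-ball and the valid pixel range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()                 # small FGSM step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([7])
x_adv = bim(model, x, y)
```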

Displaying 1-10 of 11 results.
