Search Results (2)
  • Open Access

    ARTICLE

    AMA: Adaptive Multimodal Adversarial Attack with Dynamic Perturbation Optimization

    Yufei Shi, Ziwen He*, Teng Jin, Haochen Tong, Zhangjie Fu

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.2, pp. 1831-1848, 2025, DOI:10.32604/cmes.2025.067658 - 31 August 2025

Abstract This article proposes an innovative adversarial attack method, AMA (Adaptive Multimodal Attack), which introduces an adaptive feedback mechanism that dynamically adjusts the perturbation strength. Specifically, AMA scales the perturbation amplitude according to task complexity and optimizes the perturbation direction in real time using gradient information, improving attack efficiency. Experimental results show that AMA raises attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA’s superior attack efficiency and reveal the vulnerability of current visual … (A hedged code sketch of the adaptive-step idea appears after the second result below.)

  • Open Access

    ARTICLE

    An Adversarial Network-based Multi-model Black-box Attack

    Bin Lin1, Jixin Chen2, Zhihong Zhang3, Yanlin Lai2, Xinlong Wu2, Lulu Tian4, Wangchi Cheng5,*

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 641-649, 2021, DOI:10.32604/iasc.2021.016818 - 11 August 2021

Abstract Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model to explore how to produce adversarial examples that can deceive multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the generated samples, we use multiple neural networks in the training process. Experimental results on MNIST showed that … (A hedged sketch of this multi-model objective also appears below.)
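
The first abstract describes adjusting perturbation strength from feedback and steering the perturbation along the gradient in real time. Below is a minimal sketch of that idea as a PGD-style loop in PyTorch. The specific adaptation rule (scaling the step size by the loss trend) and all names and hyperparameters here are assumptions for illustration; the paper's actual AMA mechanism is not specified in the abstract.

```python
import torch

def adaptive_pgd_attack(model, x, y, loss_fn, eps=8/255, steps=10,
                        alpha_min=0.5/255, alpha_max=4/255):
    """PGD-style loop whose step size adapts to loss feedback.

    Hypothetical stand-in for AMA's adaptive mechanism: the abstract only
    says the perturbation strength is adjusted dynamically, so the scaling
    rule below is an assumption, not the paper's method.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    alpha, prev_loss = alpha_max, None
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Feedback rule (assumed): grow/shrink the step with the loss trend.
            if prev_loss is not None:
                ratio = float((loss / (prev_loss + 1e-12)).clamp(0.5, 2.0))
                alpha = min(alpha_max, max(alpha_min, alpha * ratio))
            prev_loss = loss.detach()
            # Step along the gradient sign, then project back into the eps-ball.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```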
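The second abstract describes a GAN-style generator trained against several local models at once to produce transferable adversarial examples. The sketch below shows one plain way to realize that multi-model objective: sum the negated classification losses over an ensemble. The architecture, `eps` bound, and function names are illustrative assumptions, and the GAN discriminator term is omitted for brevity; none of this is taken from the paper itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Small convolutional generator producing a bounded perturbation.

    Architecture and eps are illustrative; the abstract does not give them.
    """
    def __init__(self, channels=1, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.eps * self.net(x)  # Tanh output scaled to [-eps, eps]

def generator_step(gen, victim_models, x, y, opt):
    """One generator update that tries to fool every local model at once.

    Summing the negated losses over an ensemble echoes the abstract's use
    of multiple networks during training to improve transferability.
    """
    opt.zero_grad()
    x_adv = (x + gen(x)).clamp(0, 1)
    # Untargeted objective: maximize each victim's loss on the true label.
    loss = -sum(F.cross_entropy(m(x_adv), y) for m in victim_models)
    loss.backward()
    opt.step()
    return x_adv.detach()
```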
