Search Results (4)
  • Open Access

    ARTICLE

    A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks

    Hong Huang, Yunfei Wang*, Guotao Yuan, Xin Li

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 361-387, 2024, DOI:10.32604/cmc.2024.051633 - 18 July 2024

    Abstract Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges due to the distinct and abnormal data patterns of malicious samples, and the meticulous data screening by developers, hindering practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach…
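The excerpt truncates before the GN-TUAP algorithm is described. As a minimal sketch of the general idea it builds on (a fixed, noise-like universal perturbation reused as a backdoor trigger and kept within a perturbation budget so poisoned samples evade data screening), the following is illustrative only; the function name, array shapes, and the 8/255 budget are assumptions, not the paper's parameters.

```python
import numpy as np

def apply_noise_trigger(image, trigger, epsilon=8 / 255):
    """Overlay a fixed, noise-like universal perturbation on one image.

    `image` is a float array in [0, 1]. The same `trigger` is reused for
    every poisoned sample; clipping to an L-infinity budget `epsilon`
    keeps the poisoned image visually close to the original.
    (Illustrative sketch only -- not the GN-TUAP algorithm itself.)
    """
    delta = np.clip(trigger, -epsilon, epsilon)  # enforce the budget
    return np.clip(image + delta, 0.0, 1.0)      # stay in valid pixel range

# Example: one Gaussian-noise trigger shared by all poisoned samples
rng = np.random.default_rng(0)
trigger = rng.normal(0.0, 4 / 255, size=(32, 32, 3))
image = rng.random((32, 32, 3))
poisoned = apply_noise_trigger(image, trigger)
```

Because such a trigger is statistically close to ordinary sensor noise, poisoned samples are harder to flag by visual inspection than patch-style triggers, which matches the abstract's stated motivation.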

  • Open Access

    ARTICLE

    Adaptive Backdoor Attack against Deep Neural Networks

    Honglu He, Zhiying Zhu, Xinpeng Zhang*

    CMES-Computer Modeling in Engineering & Sciences, Vol.136, No.3, pp. 2617-2633, 2023, DOI:10.32604/cmes.2023.025923 - 09 March 2023

    Abstract In recent years, the number of parameters of deep neural networks (DNNs) has been increasing rapidly. The training of DNNs is typically computation-intensive. As a result, many users leverage cloud computing and outsource their training procedures. Outsourcing computation results in a potential risk called backdoor attack, in which a well-trained DNN would perform abnormally on inputs with a certain trigger. Backdoor attacks can also be classified as attacks that exploit fake images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose…
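The uniform-trigger baseline this abstract argues against can be sketched in a few lines: the same patch, at the same location, is stamped on every poisoned image and the label is flipped to an attacker-chosen target, which is precisely what makes it easy to detect and remove. All names, the patch location, and the poisoning rate below are illustrative assumptions, not the paper's adaptive method.

```python
import numpy as np

def stamp_patch_trigger(image, patch_value=1.0, size=3):
    """Stamp a fixed patch in the bottom-right corner (sample-agnostic:
    the same patch and location are used for every poisoned image)."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Stamp the trigger on a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_patch_trigger(images[i])
        labels[i] = target_label
    return images, labels

imgs = np.zeros((100, 32, 32, 3))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls = poison_dataset(imgs, lbls, target_label=7, rate=0.1)
```

A model trained on `p_imgs`/`p_lbls` behaves normally on clean inputs but predicts the target label whenever the patch appears; a per-sample (adaptive) trigger, as the paper proposes, removes exactly the uniformity this sketch relies on.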

  • Open Access

    ARTICLE

    Byte-Level Function-Associated Method for Malware Detection

    Jingwei Hao*, Senlin Luo, Limin Pan

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 719-734, 2023, DOI:10.32604/csse.2023.033923 - 20 January 2023

    Abstract The byte stream is widely used in malware detection due to its independence of reverse engineering. However, existing methods based on the byte stream implement an indiscriminate feature extraction strategy, which ignores the byte function difference in different segments and fails to achieve targeted feature extraction for various byte semantic representation modes, resulting in byte semantic confusion. To address this issue, an enhanced adversarial byte function associated method for malware backdoor attack is proposed in this paper by categorizing various function bytes into three functions involving structure, code, and data. The Minhash algorithm, grayscale mapping, …
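The abstract truncates before the method is fully described, but the MinHash algorithm it names is standard: hash the byte stream's n-gram shingles under several hash families and keep each family's minimum, so similar byte sequences yield similar signatures. The shingle size, hash count, and use of salted BLAKE2 below are illustrative assumptions, not the paper's configuration.

```python
import hashlib

def minhash_signature(data: bytes, shingle=4, num_hashes=16):
    """MinHash signature over byte n-gram shingles.

    Each of `num_hashes` salted hash families keeps the minimum value
    seen across all shingles; the fraction of matching positions between
    two signatures estimates the Jaccard similarity of the byte streams.
    """
    shingles = {data[i:i + shingle] for i in range(len(data) - shingle + 1)}
    signature = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(16, "little")  # one hash family per salt
        signature.append(min(
            int.from_bytes(
                hashlib.blake2b(s, digest_size=8, salt=salt).digest(), "big")
            for s in shingles
        ))
    return signature

sig_a = minhash_signature(b"MZ\x90\x00" * 64)            # repetitive byte stream
sig_b = minhash_signature(b"MZ\x90\x00" * 64 + b"\x01")  # near-identical stream
overlap = sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```

`overlap` approximates the Jaccard similarity of the two shingle sets; near-duplicate byte streams score close to 1, which makes the signature useful for grouping functionally similar segments.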

  • Open Access

    ARTICLE

    An Improved Optimized Model for Invisible Backdoor Attack Creation Using Steganography

    Daniyal M. Alghazzawi, Osama Bassam J. Rabie, Surbhi Bhatia, Syed Hamid Hasan*

    CMC-Computers, Materials & Continua, Vol.72, No.1, pp. 1173-1193, 2022, DOI:10.32604/cmc.2022.022748 - 24 February 2022

    Abstract The Deep Neural Networks (DNN) training process is widely affected by backdoor attacks. The backdoor attack is excellent at concealing its identity in the DNN by performing well on regular samples and displaying malicious behavior with data poisoning triggers. The state-of-the-art backdoor attacks mainly follow a certain assumption that the trigger is sample-agnostic and different poisoned samples use the same trigger. To overcome this problem, in this work we create a backdoor attack and test its strength to withstand complex defense strategies, and in order to achieve this objective, we develop an improved…
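The steganographic channel this abstract refers to is commonly realized via least-significant-bit (LSB) embedding, which hides a trigger payload with at most a one-level change per pixel. The sketch below shows plain LSB embedding as an assumed mechanism for invisibility; it is not the paper's optimized model, and the function names and payload are hypothetical.

```python
import numpy as np

def embed_lsb(image_u8, bits):
    """Hide a bit sequence in the least significant bits of pixel values,
    a common steganographic channel for invisible backdoor triggers."""
    flat = image_u8.flatten()  # flatten() returns a copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image_u8.shape)

def extract_lsb(image_u8, n):
    """Recover the first n hidden bits."""
    return (image_u8.flatten()[:n] & 1).tolist()

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical trigger bits
stego = embed_lsb(img, payload)
```

Because each pixel changes by at most one intensity level, the stego image is visually indistinguishable from the original, yet a backdoored model (or the attacker) can recover the trigger exactly.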
