Search Results (4)
  • Open Access

    ARTICLE

    CMAES-WFD: Adversarial Website Fingerprinting Defense Based on Covariance Matrix Adaptation Evolution Strategy

    Di Wang, Yuefei Zhu, Jinlong Fei*, Maohua Guo

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2253-2276, 2024, DOI:10.32604/cmc.2024.049504

    Abstract Website fingerprinting (WF) is a traffic analysis attack that enables local eavesdroppers to infer a user’s browsing destination, even when the user is behind the Tor anonymity network. While advanced attacks based on deep neural networks (DNNs) can perform automatic feature engineering and attain accuracy rates of over 98%, research has demonstrated that DNNs are vulnerable to adversarial samples. As a result, many researchers have explored using adversarial samples as a defense mechanism against DNN-based WF attacks and have achieved considerable success. However, these methods suffer from high bandwidth overhead or require access to the target…
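    As a rough illustration of the search strategy named in the title, the sketch below uses the pycma library’s ask/tell loop to look for a low-overhead dummy-traffic perturbation that drives down a WF classifier’s confidence. The `classifier`, `trace`, and the fitness weighting are hypothetical stand-ins, not the authors’ implementation.

        import cma
        import numpy as np

        # Hypothetical stand-ins: `classifier` exposes predict_proba, and
        # `trace` is one traffic trace as a fixed-length numpy feature vector.
        def fitness(delta, trace, classifier, true_label):
            perturbed = trace + np.abs(delta)  # dummy packets only ever add traffic
            prob_true = classifier.predict_proba(perturbed[None, :])[0][true_label]
            overhead = np.abs(delta).sum() / (np.abs(trace).sum() + 1e-9)
            return prob_true + 0.1 * overhead  # trade off evasion vs. bandwidth cost

        def cmaes_defense(trace, classifier, true_label, sigma0=0.5, iters=50):
            es = cma.CMAEvolutionStrategy(np.zeros(trace.shape[0]), sigma0)
            for _ in range(iters):
                candidates = es.ask()          # sample from the adapted Gaussian
                losses = [fitness(np.asarray(d), trace, classifier, true_label)
                          for d in candidates]
                es.tell(candidates, losses)    # adapt the mean and covariance matrix
            return trace + np.abs(np.asarray(es.result.xbest))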

  • Open Access

    ARTICLE

    Optimized Generative Adversarial Networks for Adversarial Sample Generation

    Daniyal M. Alghazzawi, Syed Hamid Hasan*, Surbhi Bhatia

    CMC-Computers, Materials & Continua, Vol.72, No.2, pp. 3877-3897, 2022, DOI:10.32604/cmc.2022.024613

    Abstract Detecting anomalous entities in real-time network traffic has been a popular area of research in recent times. Very little research has focused on creating malware that fools the intrusion detection system, and this paper addresses that topic. We use Deep Convolutional Generative Adversarial Networks (DCGAN) to trick the malware classifier into believing the malware is a normal entity. In this work, a new dataset is created to fool Artificial Intelligence (AI) based malware detectors; it consists of different types of attacks such as Denial of Service (DoS), scan 11, scan 44, botnet,…
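    A condensed sketch of the adversarial-GAN training loop the abstract describes, written in PyTorch; small linear networks stand in for the DCGAN’s convolutional blocks, and the feature dimensions are assumptions for illustration, not the paper’s architecture.

        import torch
        import torch.nn as nn

        Z, F = 64, 100  # assumed latent and flow-feature dimensions

        generator = nn.Sequential(            # stand-in for the DCGAN generator
            nn.Linear(Z, 128), nn.ReLU(),
            nn.Linear(128, F), nn.Tanh())     # features scaled to [-1, 1]

        discriminator = nn.Sequential(        # stand-in for the DCGAN discriminator
            nn.Linear(F, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid())  # P(sample looks like benign traffic)

        bce = nn.BCELoss()
        g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

        def train_step(real_benign):          # one batch of benign-flow features
            n = real_benign.size(0)
            fake = generator(torch.randn(n, Z))
            # Discriminator: separate real benign flows from generated ones.
            d_loss = bce(discriminator(real_benign), torch.ones(n, 1)) + \
                     bce(discriminator(fake.detach()), torch.zeros(n, 1))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
            # Generator: produce malicious flows the discriminator (and, by
            # proxy, the detector) scores as benign.
            g_loss = bce(discriminator(fake), torch.ones(n, 1))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
            return d_loss.item(), g_loss.item()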

  • Open Access

    ARTICLE

    A Generation Method of Letter-Level Adversarial Samples

    Huixuan Xu, Chunlai Du, Yanhui Guo*, Zhijian Cui, Haibo Bai

    Journal on Artificial Intelligence, Vol.3, No.2, pp. 45-53, 2021, DOI:10.32604/jai.2021.016305

    Abstract In recent years, with the rapid development of natural language processing, its security issues have attracted more and more attention. Character perturbation is a common security problem: by adding, deleting, or replacing a few characters, an attacker can change the target program’s classification of an input without attracting human attention, reducing the effectiveness of the classifier. Although current research has provided various character-level perturbation attacks, the success rate of some methods is still not ideal. This paper mainly studies the generation of optimal perturbation…
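    A toy sketch of the three letter-level edit operations the abstract mentions, wired into a simple random search; `predict` is a hypothetical text classifier, and random search merely stands in for the paper’s optimal-perturbation strategy.

        import random
        import string

        def perturb_once(text):
            # Apply one random letter-level edit: insert, delete, or replace.
            i = random.randrange(len(text))
            op = random.choice(("insert", "delete", "replace"))
            c = random.choice(string.ascii_lowercase)
            if op == "insert":
                return text[:i] + c + text[i:]
            if op == "delete" and len(text) > 1:
                return text[:i] + text[i + 1:]
            return text[:i] + c + text[i + 1:]    # replace

        def letter_level_attack(text, predict, budget=200):
            original = predict(text)
            candidate = text
            for _ in range(budget):
                candidate = perturb_once(candidate)
                if predict(candidate) != original:  # classification flipped
                    return candidate
            return None                             # failed within the budget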

  • Open Access

    ARTICLE

    Defend Against Adversarial Samples by Using Perceptual Hash

    Changrui Liu, Dengpan Ye*, Yueyun Shang, Shunzhi Jiang, Shiyu Li, Yuan Mei, Liqiang Wang

    CMC-Computers, Materials & Continua, Vol.62, No.3, pp. 1365-1386, 2020, DOI:10.32604/cmc.2020.07421

    Abstract Image classifiers based on Deep Neural Networks (DNNs) have been proven to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to disrupt the generation of perturbations by comparing the similarity of images, thereby achieving the purpose of defense. To verify our idea, we defended against two main attack methods (a white-box attack and a black-box attack) in different DNN-based…
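    A minimal sketch of screening classifier queries with a perceptual hash, one plausible reading of the defense described above; the average-hash construction and the Hamming-distance threshold are assumptions, not the paper’s exact scheme.

        from PIL import Image
        import numpy as np

        def average_hash(img, size=8):
            # Downscale, grayscale, and threshold at the mean brightness.
            small = img.convert("L").resize((size, size))
            pixels = np.asarray(small, dtype=np.float32)
            return (pixels > pixels.mean()).flatten()   # 64-bit boolean hash

        def hamming(h1, h2):
            return int(np.count_nonzero(h1 != h2))

        def is_attack_query(query_img, recent_imgs, threshold=5):
            # Iterative attacks submit many near-identical images; flag a
            # query whose hash nearly matches a recently seen one.
            qh = average_hash(query_img)
            return any(hamming(qh, average_hash(r)) <= threshold
                       for r in recent_imgs)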
