Search Results (2)
  • Open Access

    ARTICLE

    An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection

    Younghoon Ban, Myeonghyun Kim, Haehyun Cho*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3535-3563, 2024, DOI:10.32604/cmes.2023.046658 - 11 March 2024

Abstract Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum,…
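    Of the perturbations named in the abstract, Overlay Append is the simplest to illustrate: bytes appended past a PE file's declared sections (the "overlay") are ignored by the Windows loader, so the binary's behavior is unchanged while the byte statistics that static ML/DL classifiers consume are perturbed. The sketch below is illustrative only and is not taken from the paper; the stand-in PE bytes and the random padding are assumptions.

    ```python
    import os

    def overlay_append(pe_bytes: bytes, payload: bytes) -> bytes:
        """Append `payload` to the end of a PE file's raw bytes (the overlay).

        Data past the last section is not mapped or executed by the loader,
        so execution is unaffected while the file's byte distribution --
        the input to many static classifiers -- changes.
        """
        return pe_bytes + payload

    # Illustrative use: a stand-in for a real PE file, padded with
    # placeholder bytes (an AE generator might instead draw bytes from
    # a benign file to mimic a benign byte histogram).
    original = b"MZ" + bytes(1022)
    padding = os.urandom(4096)
    perturbed = overlay_append(original, padding)

    assert perturbed[: len(original)] == original  # executable content intact
    assert len(perturbed) == len(original) + 4096
    ```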

  • Open Access

    ARTICLE

    Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms

    Muhammad Tayyab¹,*, Mohsen Marjani¹, N. Z. Jhanjhi¹, Ibrahim Abaker Targio Hashim², Abdulwahab Ali Almazroi³, Abdulaleem Ali Almazroi⁴

    CMC-Computers, Materials & Continua, Vol.69, No.1, pp. 1183-1200, 2021, DOI:10.32604/cmc.2021.017199 - 04 June 2021

    Abstract Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong prediction and misclassification of decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during model training…
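    The abstract above does not detail the paper's cryptographic construction, so it is not reproduced here. As one hedged illustration of protecting a training set against the poisoning attacks the abstract mentions, a cryptographic digest can be recorded when the dataset is collected and verified immediately before training, so any record modified by an attacker is detected. Every name below is an assumption for illustration, not the paper's method.

    ```python
    import hashlib
    import json

    def dataset_digest(records: list) -> str:
        """SHA-256 over a canonical serialization of the dataset.

        Any bit an attacker flips between storage and training changes the
        digest, so tampering is detected before the model sees the data.
        (This gives integrity only; confidentiality would need encryption.)
        """
        canonical = json.dumps(records, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    # Record the digest at collection time...
    clean = [{"x": [0.1, 0.2], "y": 0}, {"x": [0.9, 0.8], "y": 1}]
    expected = dataset_digest(clean)

    # ...and verify it right before training; a flipped label is caught.
    poisoned = [{"x": [0.1, 0.2], "y": 1}, {"x": [0.9, 0.8], "y": 1}]
    assert dataset_digest(clean) == expected
    assert dataset_digest(poisoned) != expected  # tampering detected
    ```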
