Open Access

ARTICLE


An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection

Younghoon Ban, Myeonghyun Kim, Haehyun Cho*

School of Software, Soongsil University, Seoul, 06978, Korea

* Corresponding Author: Haehyun Cho

(This article belongs to the Special Issue: Advanced Security for Future Mobile Internet: A Key Challenge for the Digital Transformation)

Computer Modeling in Engineering & Sciences 2024, 139(3), 3535-3563. https://doi.org/10.32604/cmes.2023.046658

Abstract

Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. However, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Moreover, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques.
In conclusion, our defense model alleviates a key limitation of adversarial training, the most promising existing defense, which is effective only against AEs included in the classifiers' training data.
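To illustrate why perturbations such as Overlay Append preserve executability, consider the following minimal sketch. The helper name and the toy input are assumptions for illustration, not the paper's implementation: bytes written past the end of a PE image are treated as overlay data and ignored by the Windows loader, so the binary remains executable while its byte-level features (and hashes) change.

```python
# Minimal sketch of an "Overlay Append"-style perturbation (assumed helper,
# not the authors' code): appending bytes after the PE image leaves the
# loaded program unchanged but alters the file's static features.
import os


def overlay_append(pe_bytes: bytes, payload: bytes) -> bytes:
    """Return the binary with `payload` appended as overlay data."""
    return pe_bytes + payload


# Toy demonstration on a placeholder byte string (NOT a real PE file).
original = b"MZ" + b"\x00" * 62   # DOS-header-sized stub, illustration only
perturbed = overlay_append(original, os.urandom(128))

assert perturbed.startswith(original)          # original image untouched
assert len(perturbed) == len(original) + 128   # payload sits past the end
```

Because the appended bytes never reach the loader, the transformation needs no functionality verification, which is exactly the property the attack method exploits.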
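The similarity-based defense can likewise be sketched. TLSH itself is provided by the open-source `py-tlsh` package; to keep this example dependency-free, a simple byte-bigram histogram distance stands in for the TLSH distance. All function names, the toy samples, and the threshold value are assumptions chosen to illustrate the thresholding idea, not the paper's model.

```python
# Sketch of a similarity-based defense in the spirit of the TLSH model:
# a sample whose distance to a known-malware profile stays below a
# threshold is flagged as a likely adversarial variant. A byte-bigram
# histogram distance stands in here for the real TLSH distance.
from collections import Counter


def bigram_profile(data: bytes) -> Counter:
    """Frequency profile of adjacent byte pairs."""
    return Counter(zip(data, data[1:]))


def profile_distance(a: Counter, b: Counter) -> float:
    """L1 distance between normalized profiles (0.0 = identical)."""
    total_a = sum(a.values()) or 1
    total_b = sum(b.values()) or 1
    keys = set(a) | set(b)
    return sum(abs(a[k] / total_a - b[k] / total_b) for k in keys)


def is_adversarial_variant(sample: bytes, known_malware: bytes,
                           threshold: float = 0.5) -> bool:
    """Flag samples that remain similar to known malware."""
    return profile_distance(bigram_profile(sample),
                            bigram_profile(known_malware)) < threshold


malware = bytes(range(256)) * 8
perturbed = malware + b"\x00" * 64                     # overlay-style AE
benign = b"hello world, quite different content" * 40

assert is_adversarial_variant(perturbed, malware)      # AE stays close
assert not is_adversarial_variant(benign, malware)     # benign stays far
```

The design point is that byte-level perturbations move a sample only slightly in similarity space, so a distance threshold can catch variants that a classifier trained without those AEs would miss.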

Keywords


Cite This Article

APA Style
Ban, Y., Kim, M., Cho, H. (2024). An empirical study on the effectiveness of adversarial examples in malware detection. Computer Modeling in Engineering & Sciences, 139(3), 3535-3563. https://doi.org/10.32604/cmes.2023.046658
Vancouver Style
Ban Y, Kim M, Cho H. An empirical study on the effectiveness of adversarial examples in malware detection. Comput Model Eng Sci. 2024;139(3):3535-3563. https://doi.org/10.32604/cmes.2023.046658
IEEE Style
Y. Ban, M. Kim, and H. Cho, “An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection,” Comput. Model. Eng. Sci., vol. 139, no. 3, pp. 3535-3563, 2024. https://doi.org/10.32604/cmes.2023.046658



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.