Open Access

ARTICLE


Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

by Bader Rasheed1, Adil Khan1, S. M. Ahsan Kazmi2, Rasheed Hussain2, Md. Jalil Piran3,*, Doug Young Suh4

1 Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, 420500, Russia
2 Institute of Information Security and Cyberphysical Systems, Innopolis University, Innopolis, 420500, Russia
3 Department of Computer Science and Engineering, Sejong University, Seoul, Korea
4 Department of Electronics Engineering, Kyung Hee University, Yongin, Korea

* Corresponding Author: Md. Jalil Piran

(This article belongs to the Special Issue: AI for Wearable Sensing – Smartphone / Smartwatch User Identification / Authentication)

Computers, Materials & Continua 2021, 68(1), 921-939. https://doi.org/10.32604/cmc.2021.015452

Abstract

Detecting malicious Uniform Resource Locators (URLs) is crucial to preventing attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In the ML approach, features are first extracted from the URLs and then used to train different ML models. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Deep learning (DL) models address these issues, since they are able to perform featureless detection. Furthermore, DL models offer better accuracy and generalization to newly designed URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of these models and demonstrate the importance of considering this susceptibility before applying such detection systems in real-world solutions. We propose and demonstrate a black-box attack based on scoring functions with a greedy search for the minimum number of perturbations that leads to a misclassification. The attack is evaluated against different convolutional neural network (CNN)-based URL classifiers and causes a tangible decrease in accuracy, with more than a 56% reduction for the best of the selected classifiers. Moreover, adversarial training shows promising results, reducing the influence of the attack on the robustness of the model to less than 7% on average.
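To make "featureless detection" concrete, the following is a minimal sketch, not the authors' exact architecture, of a character-level CNN URL classifier in PyTorch. The vocabulary mapping (`encode_url`), the constants `VOCAB_SIZE` and `MAX_URL_LEN`, and the `CharCNN` layer sizes are all illustrative assumptions: the model consumes raw URL characters directly, with no manual feature engineering.

```python
import torch
import torch.nn as nn

# Assumptions for illustration: 95 printable ASCII characters plus a padding id,
# and URLs padded/truncated to a fixed length.
VOCAB_SIZE = 96
MAX_URL_LEN = 200

def encode_url(url: str) -> torch.Tensor:
    """Map each character to an integer id; pad/truncate to MAX_URL_LEN."""
    ids = [min(max(ord(c) - 31, 1), VOCAB_SIZE - 1) for c in url[:MAX_URL_LEN]]
    ids += [0] * (MAX_URL_LEN - len(ids))
    return torch.tensor(ids, dtype=torch.long)

class CharCNN(nn.Module):
    """Character-level CNN: embedding -> 1D convolutions -> global max pool -> logit."""
    def __init__(self, embed_dim: int = 32, n_filters: int = 64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, kernel_size=k) for k in (3, 4, 5)
        )
        self.fc = nn.Linear(n_filters * 3, 1)   # single logit: malicious vs. benign

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.embed(x).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1)).squeeze(1)

model = CharCNN()
batch = torch.stack([encode_url("http://example.com/login"),
                     encode_url("http://paypa1-secure.evil.example/verify")])
print(torch.sigmoid(model(batch)))  # untrained scores, illustration only
```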
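The attack described in the abstract queries the classifier as a black box, scores character positions by their influence on the output, and greedily perturbs the most influential positions until the prediction flips. The sketch below illustrates that shape of algorithm under stated assumptions: it reuses the hypothetical `model` and `encode_url` from the classifier sketch, uses character-deletion impact as the scoring function, and restricts substitutions to a small alphabet. None of these specifics should be read as the paper's exact method.

```python
import string
import torch

@torch.no_grad()
def malicious_score(url: str) -> float:
    # Black-box query: only the model's output score is used, never its gradients.
    return torch.sigmoid(model(encode_url(url).unsqueeze(0))).item()

@torch.no_grad()
def greedy_attack(url: str, max_edits: int = 5, threshold: float = 0.5) -> str:
    base = malicious_score(url)
    # Score each position by the drop in maliciousness caused by deleting its character.
    impact = sorted(
        range(len(url)),
        key=lambda i: base - malicious_score(url[:i] + url[i + 1:]),
        reverse=True,
    )
    adv = url
    for i in impact[:max_edits]:                 # greedy: most influential positions first
        best_char, best = adv[i], malicious_score(adv)
        for c in string.ascii_lowercase + string.digits + "-":
            cand = adv[:i] + c + adv[i + 1:]
            s = malicious_score(cand)
            if s < best:
                best_char, best = c, s
        adv = adv[:i] + best_char + adv[i + 1:]
        if best < threshold:                     # prediction flipped to benign: stop early
            break
    return adv

adv_url = greedy_attack("http://paypa1-secure.evil.example/verify")
print(adv_url, malicious_score(adv_url))
```

The greedy search keeps the number of edited characters small, matching the abstract's goal of finding the minimum number of perturbations that leads to a misclassification.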
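Finally, the adversarial-training defense mentioned in the abstract can be illustrated as follows: each epoch, adversarial variants of malicious URLs are regenerated and trained on alongside the originals. This sketch assumes the hypothetical `model`, `encode_url`, and `greedy_attack` from the previous sketches and a toy `train_pairs` list; it shows only the general technique, not the paper's training setup.

```python
import torch
import torch.nn.functional as F

train_pairs = [("http://example.com/", 0.0),
               ("http://paypa1-secure.evil.example/verify", 1.0)]  # toy data, assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):
    for url, label in train_pairs:
        variants = [url]
        if label == 1.0:                          # attack only the malicious samples
            variants.append(greedy_attack(url))   # generated without gradient tracking
        x = torch.stack([encode_url(u) for u in variants])
        y = torch.full((len(variants),), label)
        loss = F.binary_cross_entropy_with_logits(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```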

Keywords


Cite This Article

APA Style
Rasheed, B., Khan, A., Kazmi, S.M.A., Hussain, R., Piran, M.J. et al. (2021). Adversarial attacks on featureless deep learning malicious URLs detection. Computers, Materials & Continua, 68(1), 921-939. https://doi.org/10.32604/cmc.2021.015452
Vancouver Style
Rasheed B, Khan A, Kazmi SMA, Hussain R, Piran MJ, Suh DY. Adversarial attacks on featureless deep learning malicious URLs detection. Comput Mater Contin. 2021;68(1):921-939. https://doi.org/10.32604/cmc.2021.015452
IEEE Style
B. Rasheed, A. Khan, S. M. A. Kazmi, R. Hussain, M. J. Piran, and D. Y. Suh, “Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection,” Comput. Mater. Contin., vol. 68, no. 1, pp. 921-939, 2021. https://doi.org/10.32604/cmc.2021.015452



Copyright © 2021 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.