Open Access
ARTICLE
An Efficient Character-Level Adversarial Attack Inspired by Textual Variations in Online Social Media Platforms
1 Department of Artificial Intelligence, Ajou University, Suwon, Korea
2 Department of Computer Science, Munster Technological University, Cork, Ireland
3 Department of Software and Computer Engineering, Ajou University, Suwon, Korea
* Corresponding Author: Kyung-Ah Sohn. Email:
(This article belongs to the Special Issue: Intelligent Uni-modal and Multi-modal Agents against Adversarial Cyber Attacks)
Computer Systems Science and Engineering 2023, 47(3), 2869-2894. https://doi.org/10.32604/csse.2023.040159
Received 07 March 2023; Accepted 17 May 2023; Issue published 09 November 2023
Abstract
In recent years, the growing popularity of social media platforms has led to several interesting natural language processing (NLP) applications. However, these social media-based NLP applications are subject to different types of adversarial attacks due to the vulnerabilities of machine learning (ML) and NLP techniques. This work presents a new character-level adversarial attack recipe inspired by textual variations in online social media communication. Such variations convey a message in the shortest possible form using out-of-vocabulary words built on the visual and phonetic similarities of characters and words. The intuition behind the proposed scheme is to generate adversarial examples that mimic how humans produce text on social media platforms, remaining robustly readable to humans while requiring the fewest possible perturbations. The intentional textual variations that users introduce in online communication motivate us to replicate these trends in attack text and to examine the effects of such widely used variations on deep learning classifiers. In this work, the four most commonly used textual variations are chosen to generate adversarial examples. Moreover, this article introduces a word-importance-ranking-based beam search algorithm as the search method for selecting the best possible perturbations. The effectiveness of the proposed adversarial attacks has been demonstrated on four benchmark datasets in an extensive experimental setup.
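To make the attack idea concrete, the following is a minimal sketch of a character-level perturbation search driven by word importance ranking and beam search, as described above. The substitution tables, the toy scoring function, and all names (`VISUAL`, `PHONETic`, `beam_search_attack`, `score_fn`, etc.) are hypothetical stand-ins for illustration only and do not reproduce the paper's exact recipe or its four chosen variations.

```python
# Illustrative sketch only: character-level perturbations based on visual and
# phonetic similarity, selected by a word-importance-ranked beam search.
# The tables and scoring model below are hypothetical, not the authors' code.

from itertools import islice

# Hypothetical substitution tables (illustrative entries only).
VISUAL = {"o": "0", "i": "1", "l": "1", "e": "3", "a": "@", "s": "$"}
PHONETIC = {"you": "u", "are": "r", "to": "2", "for": "4", "great": "gr8"}

def perturb_word(word):
    """Yield candidate out-of-vocabulary variants of a single word."""
    lw = word.lower()
    if lw in PHONETIC:                      # phonetic shortening, e.g. "great" -> "gr8"
        yield PHONETIC[lw]
    for i, ch in enumerate(lw):             # one visual substitution, e.g. "spam" -> "$pam"
        if ch in VISUAL:
            yield lw[:i] + VISUAL[ch] + lw[i + 1:]

def word_importance(words, score_fn):
    """Rank positions by how much deleting each word drops the model score."""
    base = score_fn(" ".join(words))
    drops = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        drops.append((base - score_fn(" ".join(reduced)), i))
    return [i for _, i in sorted(drops, reverse=True)]

def beam_search_attack(text, score_fn, beam_width=3, max_edits=3):
    """Perturb the most important words first, keeping the `beam_width`
    candidates with the lowest true-class score at each step."""
    words = text.split()
    order = word_importance(words, score_fn)
    beam = [words]
    for pos in islice(order, max_edits):
        candidates = []
        for state in beam:
            candidates.append(state)        # option: leave this word unchanged
            for variant in perturb_word(state[pos]):
                candidates.append(state[:pos] + [variant] + state[pos + 1:])
        # keep the perturbations that hurt the classifier most
        beam = sorted(candidates, key=lambda s: score_fn(" ".join(s)))[:beam_width]
    return " ".join(beam[0])

if __name__ == "__main__":
    # Toy stand-in for a classifier's confidence on the true class: it counts
    # in-vocabulary trigger words, so OOV variants lower the "score".
    TRIGGERS = {"great", "love", "excellent"}
    score = lambda t: sum(w in TRIGGERS for w in t.lower().split())
    print(beam_search_attack("you are a great person i love this", score))
```

Keeping the unperturbed state in the candidate pool lets the beam skip words for which no useful variant exists, so the search naturally spends its limited edit budget on the perturbations that damage the classifier most.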
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.