Open Access
ARTICLE
Adversarial Attacks on Content-Based Filtering Journal Recommender Systems
1 Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China.
2 Department of Computer and Information Sciences, Temple University, Philadelphia, USA.
* Corresponding Author: Mohan Li. Email: .
Computers, Materials & Continua 2020, 64(3), 1755-1770. https://doi.org/10.32604/cmc.2020.010739
Received 24 March 2020; Accepted 28 April 2020; Issue published 30 June 2020
Abstract
Recommender systems help people find what they really need. Academic papers are important achievements for researchers, who often face a wide choice of venues when submitting their work. To make selecting the most suitable journal more efficient, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We propose both a targeted attack method that makes some target journals appear more often in the recommendations and a non-targeted attack method that makes the system provide incorrect recommendations. We also conduct extensive experiments to validate the proposed methods. We hope this paper can help improve JRS by raising awareness of such adversarial attacks.
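The abstract describes a content-based filtering JRS that ranks journals by matching a submission's title and abstract against journal text. The paper's own model is not given here, so the following is only a minimal illustrative sketch, assuming a simple TF-IDF bag-of-words representation and cosine-similarity ranking; the function names and toy journal corpus are hypothetical.

```python
# Hypothetical sketch of a content-based filtering JRS:
# represent each journal by text of its published papers, rank by
# TF-IDF cosine similarity against the submission's title + abstract.
import math
from collections import Counter


def tokenize(text):
    return [w.lower() for w in text.split()]


def tfidf_vectors(docs):
    """docs: dict mapping name -> token list; returns sparse TF-IDF vectors."""
    df = Counter()
    for toks in docs.values():
        df.update(set(toks))
    n = len(docs)
    vecs = {}
    for name, toks in docs.items():
        tf = Counter(toks)
        # smoothed IDF so unseen terms never divide by zero
        vecs[name] = {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}
    return vecs


def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def recommend(submission_text, journal_texts, k=3):
    """Return the top-k journal names most similar to the submission."""
    docs = {name: tokenize(txt) for name, txt in journal_texts.items()}
    docs["_query_"] = tokenize(submission_text)
    vecs = tfidf_vectors(docs)
    query_vec = vecs.pop("_query_")
    ranked = sorted(journal_texts,
                    key=lambda j: cosine(query_vec, vecs[j]),
                    reverse=True)
    return ranked[:k]
```

Under this kind of model, a targeted attack would inject terms associated with a target journal into its profile (or into submissions), while a non-targeted attack would perturb the query text just enough to change the top-k ranking.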
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.