Fangfang Shan1,2,*, Huifang Sun1,2, Mengyi Wang1,2
CMC-Computers, Materials & Continua, Vol. 79, No. 1, pp. 581-605, 2024, DOI: 10.32604/cmc.2024.046202. Published 25 April 2024.
Abstract: As social networks become increasingly complex, contemporary fake news often pairs textual descriptions of events with corresponding images or videos. Such multimodal fake news is more likely to mislead users. While early research primarily focused on text-based features for fake news detection, the learning of shared representations across modalities (text and visual) has received relatively little attention. To address these limitations, this paper introduces a multimodal fake news detection model based on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representation from Transformers…
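The abstract names BERT as the text encoder but is truncated before the full architecture is described. Below is a minimal sketch of a generic two-branch text-image fake news classifier, assuming a BERT text encoder, a ResNet-50 image encoder, and simple concatenation fusion; the similarity-reasoning and adversarial components described in the paper are omitted, and all layer sizes and module names are illustrative assumptions rather than the authors' design.

```python
# Illustrative two-branch (text + image) fake news classifier.
# Only the use of BERT follows the abstract; the ResNet-50 branch,
# the concatenation fusion, and the layer sizes are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer
from torchvision import models


class MultimodalFakeNewsDetector(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        # Text branch: BERT's pooled [CLS] output is a 768-d sentence vector.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Visual branch (assumed): ResNet-50 with its classification head removed.
        resnet = models.resnet50(weights=None)
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Project both modalities into a shared space before fusion.
        self.text_proj = nn.Linear(768, hidden_dim)
        self.image_proj = nn.Linear(2048, hidden_dim)
        # Binary classifier (real vs. fake) over the fused representation.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, input_ids, attention_mask, images):
        text_out = self.text_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask)
        text_feat = self.text_proj(text_out.pooler_output)                   # (B, hidden)
        img_feat = self.image_proj(self.image_encoder(images).flatten(1))    # (B, hidden)
        fused = torch.cat([text_feat, img_feat], dim=-1)
        return self.classifier(fused)


# Usage sketch with a dummy image batch.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = MultimodalFakeNewsDetector()
enc = tokenizer(["Breaking: example headline"], return_tensors="pt",
                padding=True, truncation=True)
logits = model(enc["input_ids"], enc["attention_mask"],
               torch.randn(1, 3, 224, 224))
```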