Open Access

ARTICLE


Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection

by Weijian Fan1,*, Ziwei Shi2

1 State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, 100024, China
2 Software Research Institute, China United Network Communication Group Co., Ltd., Beijing, 100024, China

* Corresponding Author: Weijian Fan.

Computers, Materials & Continua 2024, 79(2), 2723-2741. https://doi.org/10.32604/cmc.2024.050344

Abstract

With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has contributed significantly to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features to generate multimodal news representations. However, these methods have yet to fully explore the hierarchical and complex semantic correlations between contents of different modalities, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage framework for multimodal false information detection, called ASMFD, which uses image aesthetic similarity as a segmentation criterion to explore the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer on an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the news image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the essential features for detecting multimodal false information. In extensive experiments on four datasets, ASMFD outperforms state-of-the-art baseline methods.
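To make the thresholding step concrete, the following is a minimal sketch of how an aesthetic similarity score could split a cross-modal correlation matrix into consistency and inconsistency matrices. It assumes CLIP-style token and patch embeddings and a scalar aesthetic similarity already computed by the attribute scorer; the function name, cosine-similarity correlation, and exact thresholding rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def split_correlation_matrix(text_feats, image_feats, aesthetic_sim):
    """Hypothetical split of a cross-modal correlation matrix.

    text_feats:    (L_t, d) token-level text features, e.g., from a CLIP text encoder
    image_feats:   (L_v, d) patch-level image features, e.g., from a CLIP image encoder
    aesthetic_sim: scalar similarity between the news image and related images,
                   as predicted by an aesthetic attribute scorer (assumed input)
    """
    # Cross-modal correlation matrix: cosine similarity between every
    # text token and every image patch.
    text_norm = F.normalize(text_feats, dim=-1)
    image_norm = F.normalize(image_feats, dim=-1)
    corr = text_norm @ image_norm.T                   # (L_t, L_v)

    # Use the aesthetic similarity as the split threshold (assumption:
    # the paper's precise thresholding rule may differ).
    consistency_mask = (corr >= aesthetic_sim).float()
    consistency = corr * consistency_mask             # entries where modalities agree
    inconsistency = corr * (1.0 - consistency_mask)   # entries where they conflict
    return consistency, inconsistency

if __name__ == "__main__":
    # Toy example with random features standing in for CLIP outputs.
    text_feats = torch.randn(16, 512)    # 16 text tokens
    image_feats = torch.randn(49, 512)   # 7x7 image patches
    cons, incons = split_correlation_matrix(text_feats, image_feats, aesthetic_sim=0.1)
    print(cons.shape, incons.shape)      # torch.Size([16, 49]) twice
```

Both matrices would then be passed to the fusion module, which the abstract describes as selecting the features most indicative of false information.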

Keywords


Cite This Article

APA Style
Fan, W., & Shi, Z. (2024). Cross-modal consistency with aesthetic similarity for multimodal false information detection. Computers, Materials & Continua, 79(2), 2723-2741. https://doi.org/10.32604/cmc.2024.050344
Vancouver Style
Fan W, Shi Z. Cross-modal consistency with aesthetic similarity for multimodal false information detection. Comput Mater Contin. 2024;79(2):2723-2741. https://doi.org/10.32604/cmc.2024.050344
IEEE Style
W. Fan and Z. Shi, “Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection,” Comput. Mater. Contin., vol. 79, no. 2, pp. 2723-2741, 2024. https://doi.org/10.32604/cmc.2024.050344



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.