Search Results (4)
  • Open Access

    ARTICLE

    Research on Fine-Grained Recognition Method for Sensitive Information in Social Networks Based on CLIP

    Menghan Zhang1,2, Fangfang Shan1,2,*, Mengyao Liu1,2, Zhenyu Wang1,2

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 1565-1580, 2024, DOI:10.32604/cmc.2024.056008 - 15 October 2024

    Abstract With the emergence and development of social networks, people can stay in touch with friends, family, and colleagues more quickly and conveniently, regardless of their location. This ubiquitous digital environment has also led to large-scale disclosure of personal privacy. Because sensitive information is complex and subtle, traditional sensitive information identification technologies cannot thoroughly address the characteristics of each piece of data, and thus weaken the deep connections between text and images. In this context, this paper adopts the CLIP model as a modality discriminator, using contrastive learning between sensitive image descriptions and…

  • Open Access

    REVIEW

    A Comprehensive Survey on Deep Learning Multi-Modal Fusion: Methods, Technologies and Applications

    Tianzhe Jiao, Chaopeng Guo, Xiaoyue Feng, Yuming Chen, Jie Song*

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 1-35, 2024, DOI:10.32604/cmc.2024.053204 - 18 July 2024

    Abstract Multi-modal fusion has gradually become a fundamental technique in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction, and is rapidly becoming a dominant research direction owing to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion exploits the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis.…

  • Open Access

    ARTICLE

    Fake News Detection Based on Text-Modal Dominance and Fusing Multiple Multi-Model Clues

    Lifang Fu1, Huanxin Peng2,*, Changjin Ma2, Yuhan Liu2

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 4399-4416, 2024, DOI:10.32604/cmc.2024.047053 - 26 March 2024

    Abstract In recent years, efficiently and accurately identifying multi-model fake news has become increasingly challenging. First, multi-model data provides more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and combining it while reducing noise is critical. Unfortunately, existing approaches fail to handle these problems. This paper proposes a multi-model fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-model Cues (TD-MMC), which utilizes three valuable multi-model clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is…

  • Open Access

    ARTICLE

    Cross-Modal Relation-Aware Networks for Fake News Detection

    Hui Yu, Jinguang Wang*

    Journal of New Media, Vol.4, No.1, pp. 13-26, 2022, DOI:10.32604/jnm.2022.027312 - 21 April 2022

    Abstract With the rapid development of the Internet and the widespread use of social multimedia, so many creators have published posts on social multimedia platforms that fake news detection has become a challenging task. Although some works use deep learning methods to capture the visual and textual information of posts, most existing methods cannot explicitly model the binary relations among image regions or text tokens to deeply mine the global relation information within a modality such as image or text. Moreover, they cannot fully exploit supplementary cross-modal information, including image and text relations, to supplement…
