Search Results (2)
  • Open Access

    ARTICLE

    Optimizing Sentiment Integration in Image Captioning Using Transformer-Based Fusion Strategies

    Komal Rani Narejo1, Hongying Zan1,*, Kheem Parkash Dharmani2, Orken Mamyrbayev3,*, Ainur Akhmediyarova4, Zhibek Alibiyeva4, Janna Alimkulova5

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 3407-3429, 2025, DOI:10.32604/cmc.2025.065872 - 03 July 2025

Abstract While automatic image captioning systems have made notable progress in the past few years, generating captions that fully convey sentiment remains a considerable challenge. Although existing models achieve strong performance in visual recognition and factual description, they often fail to account for the emotional context that is naturally present in human-generated captions. To address this gap, we propose the Sentiment-Driven Caption Generator (SDCG), which combines transformer-based visual and textual processing with multi-level fusion. RoBERTa is used for extracting sentiment from textual input, while visual features are handled by the Vision Transformer (ViT). These features are…
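The abstract describes fusing RoBERTa-derived sentiment features with ViT visual features. A minimal NumPy sketch of one plausible fusion step, single-head cross-attention in which sentiment tokens attend over visual patches, is shown below; the function name, random projections, and dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(visual, sentiment, d_k=64):
    # visual:    (num_patches, d)  e.g. ViT patch embeddings
    # sentiment: (num_tokens, d)   e.g. RoBERTa token embeddings
    # Sentiment queries attend over visual keys; values are the visual features.
    d = visual.shape[1]
    Wq = rng.normal(size=(d, d_k)) / np.sqrt(d)  # illustrative random projections
    Wk = rng.normal(size=(d, d_k)) / np.sqrt(d)
    q = sentiment @ Wq                  # (num_tokens, d_k)
    k = visual @ Wk                     # (num_patches, d_k)
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (num_tokens, num_patches)
    return attn @ visual                # sentiment-conditioned visual summary

visual_feats = rng.normal(size=(196, 768))    # 14x14 patch grid, ViT-Base width
sentiment_feats = rng.normal(size=(12, 768))  # 12 sentiment-bearing tokens
fused = cross_attention_fusion(visual_feats, sentiment_feats)
print(fused.shape)  # (12, 768)
```

In a real multi-level variant, such a fusion block would be applied at several layers rather than once; here a single layer suffices to show the shape flow.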

  • Open Access

    ARTICLE

    A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features

    Alaa Thobhani1,*, Beiji Zou1, Xiaoyan Kui1,*, Amr Abdussalam2, Muhammad Asim3, Mohammed ELAffendi3, Sajid Shah3

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 3943-3964, 2025, DOI:10.32604/cmc.2025.060788 - 06 March 2025

Abstract Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions where visual features continuously adapt to the evolving linguistic context. Our model strengthens the…
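The abstract contrasts static visual features with features that adapt to the evolving linguistic context. A minimal NumPy sketch of that general idea, re-weighting region features against the current decoder state at each step so the attended visual vector changes as the caption unfolds, follows; the helper name, decoder states, and dimensions are illustrative assumptions rather than the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def refine_visual(visual, hidden):
    # Score each visual region by similarity to the current decoder state,
    # then return the attention-weighted visual summary for this step.
    scores = softmax(visual @ hidden / np.sqrt(hidden.shape[0]))  # (regions,)
    return scores @ visual                                        # (d,)

visual = rng.normal(size=(49, 512))   # 7x7 grid of region features
contexts = []
for step in range(3):                 # stand-in for three decoding steps
    hidden = rng.normal(size=(512,))  # evolving decoder state (random here)
    ctx = refine_visual(visual, hidden)
    contexts.append(ctx)
print(len(contexts), contexts[0].shape)  # 3 (512,)
```

Because the weights depend on `hidden`, each step yields a different visual context vector, which is the core contrast with a single static feature reused at every step.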
