Search Results (2)
  • Open Access

    REVIEW

    A Survey on Enhancing Image Captioning with Advanced Strategies and Techniques

    Alaa Thobhani1,*, Beiji Zou1, Xiaoyan Kui1,*, Amr Abdussalam2, Muhammad Asim3, Sajid Shah3, Mohammed ELAffendi3

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.3, pp. 2247-2280, 2025, DOI:10.32604/cmes.2025.059192 - 03 March 2025

    Abstract Image captioning has seen significant research effort over the last decade. The goal is to generate syntactically accurate, semantically meaningful sentences that describe the visual content of photographs. Many real-world applications rely on image captioning, such as helping people with visual impairments perceive their surroundings. To formulate a coherent and relevant textual description, computer vision techniques are used to comprehend the visual content of an image, followed by natural language processing methods. Numerous approaches and models have been developed to address this multifaceted problem. Several models prove to be state-of-the-art solutions…

  • Open Access

    ARTICLE

    SSAG-Net: Syntactic and Semantic Attention-Guided Machine Reading Comprehension

    Chenxi Yu, Xin Li*

    Intelligent Automation & Soft Computing, Vol.34, No.3, pp. 2023-2034, 2022, DOI:10.32604/iasc.2022.029447 - 25 May 2022

    Abstract Machine reading comprehension (MRC) is a natural language understanding task in which a machine reads a text and answers questions about it. Traditional attention methods typically focus on either syntax or semantics, or integrate the two manually, leaving the model unable to fully exploit both for MRC tasks. To better capture syntactic and semantic information and improve machine reading comprehension, our study uses syntactic and semantic attention to model the text. Based on the BERT model of Transformer encoder, we separate a text…
