Search Results (16)
  • Open Access

    REVIEW

    Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks

    Ebtesam Ahmad Alomari*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 43-85, 2024, DOI:10.32604/cmes.2024.052256 - 20 August 2024

    Abstract As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and invites critical inquiries into ChatGPT’s applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part of Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition and text annotation. The novelty of this work lies in its…

  • Open Access

    REVIEW

    Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models

    Zheyi Chen1, Liuchang Xu1, Hongting Zheng1, Luyao Chen1, Amr Tolba2,3, Liang Zhao4, Keping Yu5,*, Hailin Feng1,*

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 1753-1808, 2024, DOI:10.32604/cmc.2024.052618 - 15 August 2024

    Abstract Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread…

  • Open Access

    ARTICLE

    Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English

    Ronghao Pan, José Antonio García-Díaz*, Rafael Valencia-García

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.3, pp. 2849-2868, 2024, DOI:10.32604/cmes.2024.049631 - 08 July 2024

    Abstract Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is in-context learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot…
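
The zero- and few-shot strategies this article compares both rely on prompting alone, without gradient updates. A minimal sketch of how such prompts can be assembled (the instruction wording and label names are illustrative assumptions, not the paper's exact templates):

```python
def build_prompt(text, examples=None):
    """Build a hate-speech classification prompt for an LLM.

    Zero-shot: only the instruction and the test instance.
    Few-shot: labeled demonstrations are prepended (in-context
    learning), so the model infers the task from examples alone.
    """
    instruction = "Classify the following message as 'hateful' or 'not hateful'.\n"
    demo_block = ""
    if examples:  # few-shot: inline task demonstrations
        demo_block = "".join(
            f"Message: {t}\nLabel: {y}\n\n" for t, y in examples
        )
    # The model is expected to complete the final "Label:" field.
    return f"{instruction}\n{demo_block}Message: {text}\nLabel:"
```

In the zero-shot case `examples` is omitted; in the few-shot case a handful of labeled demonstrations are inlined before the test instance, and the model's completion of the final `Label:` field is taken as its prediction.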

  • Open Access

    ARTICLE

    DeBERTa-GRU: Sentiment Analysis for Large Language Model

    Adel Assiri1, Abdu Gumaei2,*, Faisal Mehmood3,*, Touqeer Abbas4, Sami Ullah5

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4219-4236, 2024, DOI:10.32604/cmc.2024.050781 - 20 June 2024

    Abstract Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or another personal emotion, in order to understand its sentiment context. Sentiment analysis is essential in business and society because it impacts strategic decision-making. Sentiment analysis involves challenges due to lexical variation, unlabeled datasets, and text distance correlations. The execution time increases due to the sequential processing of the sequence models. However,…

  • Open Access

    ARTICLE

    LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework

    Hao Chen#, Runfeng Xie#, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan*, Kai Zhang*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4283-4296, 2024, DOI:10.32604/cmc.2024.049129 - 20 June 2024

    Abstract Accurately recommending candidate news to users is a basic challenge of personalized news recommendation systems. Traditional methods usually struggle to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods favor active users with rich historical behaviors and cannot effectively address the long-tail problem posed by inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) into traditional methods. To learn the contextual information of news text, we…

  • Open Access

    ARTICLE

    Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models

    Jiakai Li, Jianpeng Hu*, Geng Zhang

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2481-2503, 2024, DOI:10.32604/cmc.2024.050005 - 15 May 2024

    Abstract In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model’s performance in tackling domain-specific relational triple extraction tasks. We first…
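
Of the three components this abstract names, the voting strategy is the simplest to illustrate. A generic majority-vote sketch over the (head, relation, tail) triples predicted by several model runs (the aggregation rule here is an assumption; the paper's exact criterion may differ):

```python
from collections import Counter

def majority_vote(runs):
    """Aggregate relational-triple predictions from several runs.

    'runs' is a list of iterables of (head, relation, tail) triples,
    one per model run. A triple survives only if a strict majority
    of the runs emitted it, filtering out unstable predictions.
    """
    counts = Counter(t for run in runs for t in set(run))
    quorum = len(runs) // 2 + 1
    return {t for t, c in counts.items() if c >= quorum}
```

With three runs, a triple must appear in at least two of them to be kept, which suppresses one-off spurious extractions on hard samples.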

  • Open Access

    ARTICLE

    RoBGP: A Chinese Nested Biomedical Named Entity Recognition Model Based on RoBERTa and Global Pointer

    Xiaohui Cui1,2,#, Chao Song1,2,#, Dongmei Li1,2,*, Xiaolong Qu1,2, Jiao Long1,2, Yu Yang1,2, Hanchao Zhang3

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3603-3618, 2024, DOI:10.32604/cmc.2024.047321 - 26 March 2024

    Abstract Named Entity Recognition (NER) stands as a fundamental task within the field of biomedical text mining, aiming to extract specific types of entities such as genes, proteins, and diseases from complex biomedical texts and categorize them into predefined entity types. This process can provide basic support for the automatic construction of knowledge bases. In contrast to general texts, biomedical texts frequently contain numerous nested entities and local dependencies among these entities, presenting significant challenges to prevailing NER models. To address these issues, we propose a novel Chinese nested biomedical NER model based on RoBERTa and Global Pointer…

  • Open Access

    ARTICLE

    PAL-BERT: An Improved Question Answering Model

    Wenfeng Zheng1, Siyu Lu1, Zhuohang Cai1, Ruiyang Wang1, Lei Wang2, Lirong Yin2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2729-2745, 2024, DOI:10.32604/cmes.2023.046692 - 11 March 2024

    Abstract In the field of natural language processing (NLP), there have been various pre-training language models in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes a first-order pruning model, PAL-BERT, based on the ALBERT model, according to the characteristics of question-answering (QA) systems and language…
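
First-order pruning generally ranks parameters by a first-order Taylor estimate of their importance, |w · g|, and zeroes the least important ones. A toy sketch of that scoring idea in isolation (the score and thresholding here are generic assumptions; PAL-BERT's exact criterion on ALBERT may differ):

```python
def first_order_prune(weights, grads, keep_ratio):
    """Zero out parameters with the lowest first-order importance.

    Importance is the first-order Taylor estimate |w * g| of the loss
    change if the parameter were removed. The top 'keep_ratio'
    fraction is kept; ties at the threshold may keep a few extra.
    (Generic sketch, not PAL-BERT's exact pruning rule.)
    """
    scores = [abs(w * g) for w, g in zip(weights, grads)]
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [w if s >= threshold else 0.0
            for w, s in zip(weights, scores)]
```

In a real model this score would be computed per weight tensor from a backward pass, and pruning would be followed by fine-tuning to recover accuracy, as pruning-then-retraining pipelines typically do.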

  • Open Access

    ARTICLE

    Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter

    R. Sujatha, K. Nimala*

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1669-1686, 2024, DOI:10.32604/cmc.2023.046963 - 27 February 2024

    Abstract Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for…

  • Open Access

    ARTICLE

    Personality Trait Detection via Transfer Learning

    Bashar Alshouha1, Jesus Serrano-Guerrero1,*, Francisco Chiclana2, Francisco P. Romero1, Jose A. Olivas1

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1933-1956, 2024, DOI:10.32604/cmc.2023.046711 - 27 February 2024

    Abstract Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains, including education, e-commerce, or human resources. Traditional machine learning techniques have been broadly employed for personality trait identification; nevertheless, the development of new technologies based on deep learning has led to new opportunities to improve their performance. This study focuses on the capabilities of pre-trained language models such as BERT, RoBERTa, ALBERT, ELECTRA, ERNIE, or XLNet, to deal with the task of personality recognition. These models are able to capture structural features from textual…

Displaying results 1-10 of 16 (page 1 of 2).