Search Results (4)
  • Open Access

    ARTICLE

    Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English

    Ronghao Pan, José Antonio García-Díaz*, Rafael Valencia-García

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.3, pp. 2849-2868, 2024, DOI:10.32604/cmes.2024.049631

    Abstract Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot…

  • Open Access

    ARTICLE

    DeBERTa-GRU: Sentiment Analysis for Large Language Model

    Adel Assiri, Abdu Gumaei*, Faisal Mehmood*, Touqeer Abbas, Sami Ullah

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4219-4236, 2024, DOI:10.32604/cmc.2024.050781

    Abstract Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or any other personal emotion, in order to understand the sentiment context of the text. Sentiment analysis is essential in business and society because it informs strategic decision-making. It involves challenges such as lexical variation, unlabeled datasets, and long-distance correlations in text. Execution time increases due to the sequential processing of sequence models. However,…

  • Open Access

    ARTICLE

    LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework

    Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan*, Kai Zhang*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4283-4296, 2024, DOI:10.32604/cmc.2024.049129

    Abstract Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are better suited to active users with rich historical behaviors, and they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we…

  • Open Access

    ARTICLE

    Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models

    Jiakai Li, Jianpeng Hu*, Geng Zhang

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2481-2503, 2024, DOI:10.32604/cmc.2024.050005

    Abstract In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: Relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model’s performance in tackling domain-specific relational triple extraction tasks. We first…
