Open Access

ARTICLE

Dual-Task Contrastive Meta-Learning for Few-Shot Cross-Domain Emotion Recognition

Yujiao Tang1, Yadong Wu1,*, Yuanmei He2, Jilin Liu1, Weihan Zhang1
1 School of Computer Science and Engineering, Sichuan University of Science and Engineering, Yibin, 644002, China
2 School of Mechanical and Power Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
* Corresponding Author: Yadong Wu

Computers, Materials & Continua. https://doi.org/10.32604/cmc.2024.059115

Received 28 September 2024; Accepted 18 November 2024; Published online 09 December 2024

Abstract

Emotion recognition plays a crucial role in many fields and is a key task in natural language processing (NLP): identifying and interpreting emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios because of their limited capacity to generalize semantic features across domains, and they have difficulty accurately capturing complex emotional states, particularly subtle or implicit ones. To overcome these limitations, we introduce Dual-Task Contrastive Meta-Learning (DTCML), which combines meta-learning and contrastive learning to improve emotion recognition. Meta-learning enhances the model’s ability to generalize to new emotional tasks, while instance contrastive learning further refines the model by distinguishing unique features within each category, enabling it to better differentiate complex emotional expressions. Prototype contrastive learning, in turn, helps the model address the semantic complexity of emotions across different domains and learn fine-grained emotion expressions. By leveraging dual tasks, DTCML learns from two domains simultaneously, which encourages the model to learn more diverse and generalizable emotion features and improves its cross-domain adaptability, robustness, and generalization ability. We evaluated DTCML across four cross-domain settings; the results show that our method outperforms the best baseline by 5.88%, 12.04%, 8.49%, and 8.40% in accuracy.

Keywords

Contrastive learning; emotion recognition; cross-domain learning; dual-task; meta-learning