Open Access

ARTICLE


Dual-Task Contrastive Meta-Learning for Few-Shot Cross-Domain Emotion Recognition

Yujiao Tang1, Yadong Wu1,*, Yuanmei He2, Jilin Liu1, Weihan Zhang1

1 School of Computer Science and Engineering, Sichuan University of Science and Engineering, Yibin, 644002, China
2 School of Mechanical and Power Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China

* Corresponding Author: Yadong Wu.

Computers, Materials & Continua 2025, 82(2), 2331-2352. https://doi.org/10.32604/cmc.2024.059115

Abstract

Emotion recognition plays a crucial role in various fields and is a key task in natural language processing (NLP). The objective is to identify and interpret emotional expressions in text. However, traditional emotion recognition approaches often struggle in few-shot cross-domain scenarios because of their limited capacity to generalize semantic features across domains. They also have difficulty accurately capturing complex emotional states, particularly subtle or implicit ones. To overcome these limitations, we introduce Dual-Task Contrastive Meta-Learning (DTCML), a method that combines meta-learning and contrastive learning to improve emotion recognition. Meta-learning enhances the model's ability to generalize to new emotional tasks; instance contrastive learning refines the model by distinguishing unique features within each category, enabling it to better differentiate complex emotional expressions; and prototype contrastive learning helps the model address the semantic complexity of emotions across domains, allowing it to learn fine-grained emotion expressions. By training on two domains simultaneously through dual tasks, DTCML is encouraged to learn more diverse and generalizable emotion features, improving its cross-domain adaptability, robustness, and generalization ability. We evaluated DTCML across four cross-domain settings, and the results show that our method outperforms the best baseline by 5.88%, 12.04%, 8.49%, and 8.40% in accuracy.
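To make the training recipe the abstract describes concrete, below is a minimal sketch of one dual-task episodic step, assuming a supervised instance-level contrastive loss, a prototype-based contrastive loss over episode support/query splits, and a joint update over episodes drawn from two source domains. All names here (Encoder, dual_task_step, the temperature tau, the weighting lam, and the episode shapes) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def prototype_contrastive_loss(query_emb, support_emb, support_labels, query_labels, tau=0.1):
    # Class prototypes: mean of the support embeddings for each class.
    classes = support_labels.unique()
    protos = torch.stack([support_emb[support_labels == c].mean(0) for c in classes])
    protos = F.normalize(protos, dim=-1)
    q = F.normalize(query_emb, dim=-1)
    logits = q @ protos.t() / tau  # (num_queries, num_classes) scaled cosine similarities
    # Index of each query's class within `classes`.
    targets = torch.tensor([(classes == y).nonzero().item() for y in query_labels])
    return F.cross_entropy(logits, targets)

def instance_contrastive_loss(emb, labels, tau=0.1):
    # Supervised instance-level contrast: same-class instances are positives.
    z = F.normalize(emb, dim=-1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(z.size(0), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)  # keep only positive-pair terms
    has_pos = pos.any(dim=1)  # anchors with at least one positive
    return (-(log_prob[has_pos].sum(1) / pos[has_pos].sum(1))).mean()

class Encoder(torch.nn.Module):
    # Stand-in text encoder; a pretrained language model would be used in practice.
    def __init__(self, dim_in=300, dim_out=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_in, 256), torch.nn.ReLU(), torch.nn.Linear(256, dim_out))

    def forward(self, x):
        return self.net(x)

def dual_task_step(encoder, opt, episode_a, episode_b, lam=0.5):
    # One meta-training step over episodes from two domains (the "dual task").
    opt.zero_grad()
    total = 0.0
    for xs, ys, xq, yq in (episode_a, episode_b):
        es, eq = encoder(xs), encoder(xq)
        loss = prototype_contrastive_loss(eq, es, ys, yq)
        loss = loss + lam * instance_contrastive_loss(torch.cat([es, eq]), torch.cat([ys, yq]))
        total = total + loss
    total.backward()
    opt.step()
    return float(total)

# Toy usage: 5-way 5-shot episodes drawn from two synthetic "domains".
def make_episode(n_way=5, k_shot=5, n_query=5, dim=300):
    ys = torch.arange(n_way).repeat_interleave(k_shot)
    yq = torch.arange(n_way).repeat_interleave(n_query)
    return torch.randn(len(ys), dim), ys, torch.randn(len(yq), dim), yq

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
print(dual_task_step(encoder, opt, make_episode(), make_episode()))

Summing the two episode losses before a single optimizer step is what couples the domains: the encoder update must reduce both losses at once, which is one plausible reading of how the dual-task design encourages domain-general emotion features.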

Keywords


Cite This Article

APA Style
Tang, Y., Wu, Y., He, Y., Liu, J., & Zhang, W. (2025). Dual-task contrastive meta-learning for few-shot cross-domain emotion recognition. Computers, Materials & Continua, 82(2), 2331–2352. https://doi.org/10.32604/cmc.2024.059115
Vancouver Style
Tang Y, Wu Y, He Y, Liu J, Zhang W. Dual-task contrastive meta-learning for few-shot cross-domain emotion recognition. Comput Mater Contin. 2025;82(2):2331–2352. https://doi.org/10.32604/cmc.2024.059115
IEEE Style
Y. Tang, Y. Wu, Y. He, J. Liu, and W. Zhang, “Dual-Task Contrastive Meta-Learning for Few-Shot Cross-Domain Emotion Recognition,” Comput. Mater. Contin., vol. 82, no. 2, pp. 2331–2352, 2025. https://doi.org/10.32604/cmc.2024.059115



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
