Open Access

ARTICLE


Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization

Xinlai Xing, Junliang Chen*, Xiaochuan Zhang, Shuran Zhou, Runqing Zhang

School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China

* Corresponding Author: Junliang Chen. Email: email

(This article belongs to the Special Issue: The Next-generation Deep Learning Approaches to Emerging Real-world Applications)

Computers, Materials & Continua 2025, 82(2), 2275-2292. https://doi.org/10.32604/cmc.2024.056877

Abstract

In task-oriented dialogue systems, intent, emotion, and actions are crucial elements of user activity. Analyzing the relationships among these elements to control and manage task-oriented dialogue systems is a challenging task. However, previous work has primarily focused on recognizing user intent and emotion independently, making it difficult to track both aspects simultaneously in the dialogue state tracking module and to effectively utilize user emotions in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from emotion and intent encoders through a feature fusioner. To address the scarcity of datasets containing both emotion and intent labels, we designed a multi-dataset learning approach that enables the model to generate dialogue summaries encompassing both user intent and emotion. Experiments conducted on the MultiWoZ and MELD datasets demonstrate that our model effectively captures user intent and emotion, achieving highly competitive results in dialogue state tracking tasks.
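To make the abstract's architecture concrete, the following is a minimal, illustrative sketch of the general idea of two task-specific encoder heads over a shared backbone whose features are dynamically combined by a fusion module. The paper's actual MESM implementation is not reproduced here; the gating mechanism, layer sizes, pooling, and all class and parameter names (GatedFeatureFusion, MultiHeadSharedEncoderSketch, etc.) are assumptions made for illustration.

# Illustrative sketch only: the paper's MESM design is summarized, not reproduced.
# All module names, sizes, and the gated fusion are assumptions for demonstration.
import torch
import torch.nn as nn


class GatedFeatureFusion(nn.Module):
    """Dynamically weights intent and emotion features with a learned gate."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, intent_feat: torch.Tensor, emotion_feat: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per dimension, how much each stream contributes.
        g = torch.sigmoid(self.gate(torch.cat([intent_feat, emotion_feat], dim=-1)))
        return g * intent_feat + (1.0 - g) * emotion_feat


class MultiHeadSharedEncoderSketch(nn.Module):
    """Two task-specific heads over a shared encoder, fused for summarization."""

    def __init__(self, vocab_size: int = 30522, hidden_size: int = 256,
                 num_intents: int = 10, num_emotions: int = 7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # Shared backbone encoder (stands in for a pre-trained language model).
        self.backbone = nn.GRU(hidden_size, hidden_size, batch_first=True)
        # Task-specific encoder heads.
        self.intent_head = nn.Linear(hidden_size, hidden_size)
        self.emotion_head = nn.Linear(hidden_size, hidden_size)
        self.fusion = GatedFeatureFusion(hidden_size)
        # Auxiliary classifiers support multi-dataset training: a batch from an
        # intent-labelled corpus updates the intent branch, and an emotion-labelled
        # batch (e.g., MELD) updates the emotion branch.
        self.intent_clf = nn.Linear(hidden_size, num_intents)
        self.emotion_clf = nn.Linear(hidden_size, num_emotions)
        # Fused representation consumed by a downstream summary decoder (omitted).
        self.summary_proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, token_ids: torch.Tensor):
        hidden, _ = self.backbone(self.embed(token_ids))   # (B, T, H)
        pooled = hidden.mean(dim=1)                         # simple mean pooling
        intent_feat = torch.tanh(self.intent_head(pooled))
        emotion_feat = torch.tanh(self.emotion_head(pooled))
        fused = self.fusion(intent_feat, emotion_feat)
        return {
            "intent_logits": self.intent_clf(intent_feat),
            "emotion_logits": self.emotion_clf(emotion_feat),
            "summary_state": self.summary_proj(fused),
        }


if __name__ == "__main__":
    model = MultiHeadSharedEncoderSketch()
    dummy = torch.randint(0, 30522, (2, 16))  # batch of 2 dialogues, 16 tokens each
    out = model(dummy)
    print({k: tuple(v.shape) for k, v in out.items()})

In this sketch the gate lets the model decide, per dialogue, how strongly emotion cues should influence the fused summary state, which mirrors the abstract's claim that emotion and intent features are integrated dynamically rather than concatenated statically.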

Keywords

Dialogue summaries; dialogue state tracking; emotion recognition; task-oriented dialogue system; pre-trained language model

Cite This Article

APA Style
Xing, X., Chen, J., Zhang, X., Zhou, S., & Zhang, R. (2025). Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization. Computers, Materials & Continua, 82(2), 2275–2292. https://doi.org/10.32604/cmc.2024.056877
Vancouver Style
Xing X, Chen J, Zhang X, Zhou S, Zhang R. Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization. Comput Mater Contin. 2025;82(2):2275–2292. https://doi.org/10.32604/cmc.2024.056877
IEEE Style
X. Xing, J. Chen, X. Zhang, S. Zhou, and R. Zhang, “Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization,” Comput. Mater. Contin., vol. 82, no. 2, pp. 2275–2292, 2025. https://doi.org/10.32604/cmc.2024.056877



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.