
Open Access

ARTICLE

Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization

Xinlai Xing, Junliang Chen*, Xiaochuan Zhang, Shuran Zhou, Runqing Zhang
School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
* Corresponding Author: Junliang Chen. Email: email
(This article belongs to the Special Issue: The Next-generation Deep Learning Approaches to Emerging Real-world Applications)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2024.056877

Received 01 August 2024; Accepted 13 November 2024; Published online 06 December 2024

Abstract

In task-oriented dialogue systems, user intent, emotion, and actions are crucial elements of user activity, and analyzing the relationships among them to control and manage the dialogue is challenging. Previous work, however, has primarily focused on recognizing user intent and emotion independently, making it difficult to track both aspects simultaneously in the dialogue state tracking module and to exploit user emotion effectively in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from separate emotion and intent encoders through a feature fusioner. To address the scarcity of datasets containing both emotion and intent labels, we design a multi-dataset learning approach that enables the model to generate dialogue summaries covering both user intent and emotion. Experiments on the MultiWOZ and MELD datasets demonstrate that our model effectively captures user intent and emotion, achieving highly competitive results on the dialogue state tracking task.
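The abstract describes the architecture only at a high level: a shared encoder feeding intent and emotion heads whose features are combined by a feature fusioner. Below is a minimal PyTorch sketch of one plausible reading of that design. The module names (FeatureFusioner, MESMSketch), the gated-fusion mechanism, the pooling step, and all dimensions are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    class FeatureFusioner(nn.Module):
        """Hypothetical gated fusion of intent and emotion features.

        The paper's exact fusion mechanism is not given in the abstract;
        this sketch blends the two feature vectors with a learned gate.
        """
        def __init__(self, dim: int):
            super().__init__()
            self.gate = nn.Linear(2 * dim, dim)

        def forward(self, intent_feat: torch.Tensor,
                    emotion_feat: torch.Tensor) -> torch.Tensor:
            g = torch.sigmoid(self.gate(torch.cat([intent_feat, emotion_feat], dim=-1)))
            return g * intent_feat + (1 - g) * emotion_feat

    class MESMSketch(nn.Module):
        """Shared encoder with intent and emotion heads (names are assumptions)."""
        def __init__(self, hidden: int = 768, n_intents: int = 30, n_emotions: int = 7):
            super().__init__()
            # Stand-in for the shared pre-trained language model encoder.
            self.shared_encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True),
                num_layers=2,
            )
            self.intent_head = nn.Linear(hidden, hidden)
            self.emotion_head = nn.Linear(hidden, hidden)
            self.fusioner = FeatureFusioner(hidden)
            self.intent_clf = nn.Linear(hidden, n_intents)
            self.emotion_clf = nn.Linear(hidden, n_emotions)

        def forward(self, token_embeddings: torch.Tensor):
            h = self.shared_encoder(token_embeddings)   # (batch, seq, hidden)
            pooled = h.mean(dim=1)                      # simple mean pooling
            intent_feat = self.intent_head(pooled)
            emotion_feat = self.emotion_head(pooled)
            fused = self.fusioner(intent_feat, emotion_feat)
            return self.intent_clf(fused), self.emotion_clf(fused)

In this reading, the two heads share one encoder (hence "encoder shared"), and the fusioner lets the downstream classifiers weight intent against emotion per example; the actual MESM fusion and summary-generation components are detailed in the full paper.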

Keywords

Dialogue summaries; dialogue state tracking; emotion recognition; task-oriented dialogue system; pre-trained language model