Open Access

ARTICLE


A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation

by Kai Jiang, Bin Cao*, Jing Fan

School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China

* Corresponding Author: Bin Cao

(This article belongs to the Special Issue: Machine Learning Empowered Distributed Computing: Advance in Architecture, Theory and Practice)

Computer Modeling in Engineering & Sciences 2024, 139(3), 2965-2984. https://doi.org/10.32604/cmes.2023.046348

Abstract

Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) tend to overfit noisy labels, which degrades their performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. In addition, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
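The abstract describes two components: a two-layer fusion net that fuses the modalities into shared features, and multiple meta-learners that correct noisy labels. The page does not include code, so the snippet below is only a minimal, hypothetical sketch of how a two-layer fusion module over text, audio, and visual feature vectors could be structured in PyTorch; the class name, feature dimensions, and layer sizes are assumptions for illustration, not the authors' MRML implementation.

```python
# Hypothetical sketch: a two-layer fusion module over per-modality feature
# vectors. Names and dimensions are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


class TwoLayerFusionNet(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, visual_dim=47,
                 hidden_dim=128, num_classes=3):
        super().__init__()
        # Layer 1: project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Layer 2: fuse the concatenated modality embeddings and classify.
        self.fusion = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text, audio, visual):
        h = torch.cat([
            torch.relu(self.text_proj(text)),
            torch.relu(self.audio_proj(audio)),
            torch.relu(self.visual_proj(visual)),
        ], dim=-1)
        return self.fusion(h)


if __name__ == "__main__":
    model = TwoLayerFusionNet()
    logits = model(torch.randn(4, 768), torch.randn(4, 74), torch.randn(4, 47))
    print(logits.shape)  # torch.Size([4, 3])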

Keywords


Cite This Article

APA Style
Jiang, K., Cao, B., Fan, J. (2024). A robust framework for multimodal sentiment analysis with noisy labels generated from distributed data annotation. Computer Modeling in Engineering & Sciences, 139(3), 2965-2984. https://doi.org/10.32604/cmes.2023.046348
Vancouver Style
Jiang K, Cao B, Fan J. A robust framework for multimodal sentiment analysis with noisy labels generated from distributed data annotation. Comput Model Eng Sci. 2024;139(3):2965-2984 https://doi.org/10.32604/cmes.2023.046348
IEEE Style
K. Jiang, B. Cao, and J. Fan, “A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation,” Comput. Model. Eng. Sci., vol. 139, no. 3, pp. 2965-2984, 2024. https://doi.org/10.32604/cmes.2023.046348



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.