Kai Jiang, Bin Cao*, Jing Fan
CMES-Computer Modeling in Engineering & Sciences, Vol. 139, No. 3, pp. 2965-2984, 2024, DOI: 10.32604/cmes.2023.046348
Published: 11 March 2024
Abstract: Multimodal sentiment analysis uses multimodal data such as text, facial expressions, and voice to detect people's attitudes. With the advent of distributed data collection and annotation, such multimodal data are easy to obtain and share. However, professional discrepancies among annotators and lax quality control can introduce noisy labels. Recent research suggests that deep neural networks (DNNs) overfit noisy labels, degrading their performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis to resist noisy labels and correlate …