Open Access

ARTICLE

Re-Distributing Facial Features for Engagement Prediction with ModernTCN

Xi Li1,2, Weiwei Zhu2, Qian Li3,*, Changhui Hou1,*, Yaozong Zhang1

1 College of Information and Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang, 330108, China
2 School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan, 430205, China
3 School of Electronic Information Engineering, Wuhan Donghu University, Wuhan, 430212, China

* Corresponding Authors: Qian Li; Changhui Hou

(This article belongs to the Special Issue: The Latest Deep Learning Architectures for Artificial Intelligence Applications)

Computers, Materials & Continua 2024, 81(1), 369-391. https://doi.org/10.32604/cmc.2024.054982

Abstract

Automatically detecting learners’ engagement levels helps to develop more effective online teaching and assessment programs, allowing teachers to provide timely feedback and make personalized adjustments based on students’ needs to enhance teaching effectiveness. Traditional approaches rely mainly on single-frame multimodal facial spatial information, neglecting temporal emotional and behavioural cues, and their accuracy degrades under large pose variations. Additionally, convolutional padding can erode feature maps, weakening the representational capacity of the extracted features. To address these issues, we propose a hybrid neural network architecture, the redistributing facial features and temporal convolutional network (RefEIP). The network consists of three key components. First, the large kernel attention (LKA) spatial attention mechanism automatically captures local facial patches and mitigates the effect of pose variations. Second, the feature organization and weight distribution (FOWD) module redistributes feature weights, eliminating the impact of padding-induced white features and enhancing the representation of facial feature maps. Finally, the modern temporal convolutional network (ModernTCN) module analyses temporal changes across video frames to predict engagement levels. To better validate the efficiency of the RefEIP network, we constructed a near-infrared engagement video dataset (NEVD). Through extensive experiments and in-depth analyses on NEVD and the Database for Affect in Situations of Elicitation (DAiSEE), our method achieves an accuracy of 90.8% on NEVD and 61.2% on DAiSEE in the four-class classification task, indicating significant advantages for engagement video analysis.
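For readers who want a concrete picture of the two published building blocks the abstract names, the sketch below shows minimal PyTorch implementations of (a) the LKA module as described in Guo et al.'s Visual Attention Network (a 5×5 depthwise convolution, a 7×7 depthwise dilated convolution, and a 1×1 pointwise convolution whose output multiplicatively gates the input) and (b) a simplified ModernTCN-style temporal block (a large-kernel depthwise convolution over time plus a pointwise ConvFFN, each with a residual connection). These are illustrative sketches of the cited components only, not the authors' RefEIP code; the FOWD module, the exact hyperparameters, and the wiring between stages are specific to the paper and are not reproduced here, and the class names are hypothetical.

```python
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    """LKA as published in the Visual Attention Network: a large receptive
    field decomposed into depthwise, depthwise-dilated, and pointwise convs;
    the result re-weights the input feature map."""

    def __init__(self, channels: int):
        super().__init__()
        # 5x5 depthwise conv: local facial texture.
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # 7x7 depthwise dilated conv (dilation 3): enlarged receptive field.
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        # 1x1 pointwise conv: mixes channel information.
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x  # attention map gates the input features


class ModernTCNBlock(nn.Module):
    """Simplified ModernTCN-style block: large-kernel depthwise conv along
    time followed by a pointwise ConvFFN, each with a residual connection.
    The published model additionally separates cross-channel and
    cross-variable mixing; that detail is omitted in this sketch."""

    def __init__(self, channels: int, kernel_size: int = 51, ffn_ratio: int = 2):
        super().__init__()
        self.dw = nn.Conv1d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        self.norm = nn.BatchNorm1d(channels)
        self.ffn = nn.Sequential(
            nn.Conv1d(channels, channels * ffn_ratio, 1),
            nn.GELU(),
            nn.Conv1d(channels * ffn_ratio, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        x = x + self.norm(self.dw(x))
        return x + self.ffn(x)


if __name__ == "__main__":
    frames = torch.randn(2, 64, 56, 56)          # facial feature maps
    print(LargeKernelAttention(64)(frames).shape)  # (2, 64, 56, 56)
    clip = torch.randn(2, 64, 128)                # per-frame features over time
    print(ModernTCNBlock(64)(clip).shape)          # (2, 64, 128)
```

In both modules the depthwise/pointwise decomposition is the point: it approximates a very large kernel (spatial in LKA, temporal in ModernTCN) at a fraction of the parameter cost, which is what lets these attention-like maps be computed convolutionally.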

Keywords


Cite This Article

APA Style
Li, X., Zhu, W., Li, Q., Hou, C., & Zhang, Y. (2024). Re-distributing facial features for engagement prediction with ModernTCN. Computers, Materials & Continua, 81(1), 369-391. https://doi.org/10.32604/cmc.2024.054982
Vancouver Style
Li X, Zhu W, Li Q, Hou C, Zhang Y. Re-distributing facial features for engagement prediction with ModernTCN. Comput Mater Contin. 2024;81(1):369-391. https://doi.org/10.32604/cmc.2024.054982
IEEE Style
X. Li, W. Zhu, Q. Li, C. Hou, and Y. Zhang, "Re-Distributing Facial Features for Engagement Prediction with ModernTCN," Comput. Mater. Contin., vol. 81, no. 1, pp. 369-391, 2024. https://doi.org/10.32604/cmc.2024.054982



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.