Open Access

ARTICLE

Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System

Saiyed Umer1,*, Ranjeet Kumar Rout2, Shailendra Tiwari3, Ahmad Ali AlZubi4, Jazem Mutared Alanazi4, Kulakov Yurii5

1 Department of Computer Science & Engineering, Aliah University, Kolkata, 700156, India
2 Department of Computer Science and Engineering, National Institute of Technology, Srinagar, Jammu and Kashmir, 190006, India
3 Department of Computer Science & Engineering, Thapar University, Patiala, 147004, India
4 Computer Science Department, King Saud University, Riyadh, 11451, Saudi Arabia
5 Department of Computer Engineering, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Kyiv, 03056, Ukraine

* Corresponding Author: Saiyed Umer.

Computer Modeling in Engineering & Sciences 2023, 135(2), 1165-1185. https://doi.org/10.32604/cmes.2022.023312

Abstract

A deep fusion model is proposed for a facial expression-based human-computer interaction system. First, image preprocessing is performed to extract the facial region from the input image. The extracted facial regions are then used to derive more discriminative and distinctive deep-learning features. To prevent overfitting, in-depth features of the facial images are extracted and fed to the proposed convolutional neural network (CNN) models, and several CNN models are trained. Finally, the outputs of the individual CNN models are fused to obtain the final decision over the seven basic facial expression classes: fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, i.e., SFEW, CK+, and KDEF, are used. The performance of the proposed system is compared with several state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competing methods across various performance metrics. Finally, the proposed deep fusion model is used to control a music player from the users' recognized emotions.
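The abstract describes a pipeline of face-region extraction, several independently trained CNN classifiers, and a fusion of their outputs into a single decision over the seven expression classes. The snippet below is a minimal PyTorch sketch of such score-level fusion over a small CNN ensemble; the network layout, ensemble size, input resolution, and the simple softmax-averaging rule are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the paper's exact model): score-level fusion of several
# CNN classifiers over cropped face images for the seven basic expressions.
import torch
import torch.nn as nn

EXPRESSIONS = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

class SmallCNN(nn.Module):
    """One member of the ensemble: a compact CNN over a preprocessed face crop."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def fuse_predictions(models, face_batch: torch.Tensor) -> torch.Tensor:
    """Average the softmax scores of all trained CNNs (score-level fusion)."""
    with torch.no_grad():
        probs = [torch.softmax(m(face_batch), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0)

if __name__ == "__main__":
    models = [SmallCNN().eval() for _ in range(3)]   # stand-in for the trained ensemble
    faces = torch.randn(4, 3, 96, 96)                # 4 preprocessed face crops (dummy data)
    fused = fuse_predictions(models, faces)
    for row in fused.argmax(dim=1):
        print(EXPRESSIONS[int(row)])                 # fused class label per face
```

In an application such as the emotion-driven music player mentioned above, the fused class label would be mapped to a playback action; that mapping is application-specific and not shown here.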

Cite This Article

APA Style
Umer, S., Rout, R.K., Tiwari, S., AlZubi, A.A., Alanazi, J.M. et al. (2023). Human-computer interaction using deep fusion model-based facial expression recognition system. Computer Modeling in Engineering & Sciences, 135(2), 1165-1185. https://doi.org/10.32604/cmes.2022.023312
Vancouver Style
Umer S, Rout RK, Tiwari S, AlZubi AA, Alanazi JM, Yurii K. Human-computer interaction using deep fusion model-based facial expression recognition system. Comput Model Eng Sci. 2023;135(2):1165-1185. https://doi.org/10.32604/cmes.2022.023312
IEEE Style
S. Umer, R.K. Rout, S. Tiwari, A.A. AlZubi, J.M. Alanazi, and K. Yurii, “Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System,” Comput. Model. Eng. Sci., vol. 135, no. 2, pp. 1165-1185, 2023. https://doi.org/10.32604/cmes.2022.023312



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.