
Open Access

ARTICLE

A Deep Transfer Learning Approach for Addressing Yaw Pose Variation to Improve Face Recognition Performance

M. Jayasree1, K. A. Sunitha2,*, A. Brindha1, Punna Rajasekhar3, G. Aravamuthan3, G. Joselin Retnakumar1
1 Department of Electronics and Instrumentation Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, 603203, India
2 Department of Electronics and Communication Engineering, SRM University, Amaravati, Mangalagiri, Andhra Pradesh, 522502, India
3 Security Electronics and Cyber Technology, Bhabha Atomic Research Centre, Anushakti Nagar, Mumbai, Maharashtra, 400085, India
* Corresponding Author: K. A. Sunitha

Intelligent Automation & Soft Computing. https://doi.org/10.32604/iasc.2024.052983

Received 21 April 2024; Accepted 13 June 2024; Published online 19 July 2024

Abstract

Identifying faces in non-frontal poses presents a significant challenge for face recognition (FR) systems. In this study, we investigated the impact of yaw pose variations on these systems and developed a robust method for detecting faces across a wide range of angles, from 0° to ±90°. We first determined the most suitable feature vector size by integrating Dlib, FaceNet (Inception-v2), and the Support Vector Machine (SVM) and K-nearest neighbors (KNN) algorithms. To train and evaluate this feature vector, we used two datasets: the Labeled Faces in the Wild (LFW) benchmark dataset and the Robust Shape-Based FR System (RSBFRS) real-time dataset, both of which contain face images with varying yaw poses. After selecting the best feature vector, we developed a real-time FR system that handles yaw poses. The proposed FaceNet architecture achieved recognition accuracies of 99.7% and 99.8% on the LFW and RSBFRS datasets, respectively, with 128-dimensional feature vectors and minimum Euclidean distance thresholds of 0.06 and 0.12. The FaceNet + SVM and FaceNet + KNN classifiers achieved classification accuracies of 99.26% and 99.44%, respectively. Among all embedding dimensions evaluated, the 128-dimensional embedding vector yielded the highest recognition rate. These results demonstrate the effectiveness of the proposed approach in improving FR accuracy, particularly in real-world scenarios with varying yaw poses.
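The abstract describes a two-stage pipeline: 128-dimensional face embeddings are produced by a Dlib + FaceNet (Inception-v2) front end, and a probe face is then matched by SVM/KNN classification together with a minimum-Euclidean-distance threshold. The sketch below illustrates only that matching stage and is not the authors' code: the embeddings are random placeholders standing in for FaceNet outputs, the helper verify() and the gallery sizes are hypothetical, and the scikit-learn classifiers and the 0.06 threshold simply mirror the settings reported in the abstract.

# Minimal sketch (not the authors' implementation) of the matching stage:
# SVM/KNN classification of 128-D embeddings plus a Euclidean-distance check.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_ids, per_id, dim = 5, 20, 128          # 128-D embeddings, as in the paper
# Placeholder gallery: random vectors standing in for FaceNet (Inception-v2) outputs.
embeddings = rng.normal(size=(n_ids * per_id, dim))
labels = np.repeat(np.arange(n_ids), per_id)

# Classification stage: FaceNet + SVM and FaceNet + KNN, as compared in the paper.
svm = SVC(kernel="linear").fit(embeddings, labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(embeddings, labels)

def verify(probe, gallery, threshold=0.06):
    # Accept the probe only if its nearest gallery embedding lies within the
    # Euclidean-distance threshold (0.06 for LFW, 0.12 for RSBFRS in the paper).
    distances = np.linalg.norm(gallery - probe, axis=1)
    return bool(distances.min() <= threshold), int(distances.argmin())

# A probe simulated as a slightly perturbed copy of a gallery embedding.
probe = embeddings[0] + rng.normal(scale=0.003, size=dim)
is_match, nearest = verify(probe, embeddings, threshold=0.06)
print("SVM prediction:", svm.predict(probe[None, :])[0])
print("KNN prediction:", knn.predict(probe[None, :])[0])
print("Match accepted:", is_match, "| nearest gallery index:", nearest)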

Keywords

Face recognition; pose variations; transfer learning method; yaw poses; FaceNet; Inception-v2