Open Access
ARTICLE
Robust Facial Biometric Authentication System Using Pupillary Light Reflex for Liveness Detection of Facial Images
1 CSE Department, Geethanjali College of Engineering, Hyderabad, Telangana, 501301, India
2 Department of Computer Science, LBEF Campus, Kathmandu, 44600, Nepal
3 School of Computer Science Engineering and Technology (SCSET), Bennett University, Greater Noida, 201310, India
4 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Saudi Arabia
5 CeADAR Ireland’s Centre for AI, Technological University Dublin, Dublin, D07 EWV4, Ireland
6 Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza, 12613, Egypt
7 Applied Science Research Center, Applied Science Private University, Amman, 11937, Jordan
* Corresponding Author: Ali Wagdy Mohamed. Email:
(This article belongs to the Special Issue: Intelligent Biomedical Image Processing and Computer Vision)
Computer Modeling in Engineering & Sciences 2024, 139(1), 725-739. https://doi.org/10.32604/cmes.2023.030640
Received 16 April 2023; Accepted 18 July 2023; Issue published 30 December 2023
Abstract
Pupil dynamics are an important characteristic for face spoofing detection. The face recognition system is one of the most widely used biometrics for authenticating individual identity. The main threats to facial recognition systems are presentation attacks such as print attacks, 3D mask attacks, and replay attacks. The proposed model uses pupil characteristics for liveness detection during the authentication process. The pupillary light reflex is an involuntary reaction that controls the pupil's diameter under different light intensities. The proposed framework consists of a two-phase methodology. In the first phase, the pupil's diameter is calculated by applying a light stimulus to one eye of the subject and measuring the constriction of the pupil in both eyes across different video frames. These measurements are converted into a feature space using the parameters defined by the Kohn and Clynes model. A Support Vector Machine classifies a subject as legitimate when the diameter change is normal (i.e., the eye is alive), or as illegitimate when there is no change or an abnormal oscillation in pupil behavior, as occurs when a printed photograph, video, or 3D mask of the subject is placed in front of the camera. In the second phase, we perform the facial recognition process. The scale-invariant feature transform (SIFT) is used to extract features from the facial images, each feature being a 128-dimensional vector. These features are invariant to scale, rotation, and orientation and are used for recognizing facial images. A brute-force matching algorithm matches features between two images, with a threshold of 0.08 used to select good matches. To analyze the performance of the framework, we tested our model on two face anti-spoofing datasets, the Replay-Attack dataset and the CASIA-SURF dataset, chosen because each sample contains videos of the subjects in three modalities (RGB, IR, Depth).
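As a minimal sketch of the first phase, the snippet below extracts simple pupil-dynamics features (baseline diameter, constriction amplitude, and constriction latency) from a diameter-per-frame series. This is not the authors' implementation: the function name, the simulated series, and the feature choices are illustrative assumptions; the paper itself maps measurements into a feature space via the Kohn and Clynes model before SVM classification.

```python
import numpy as np

def pupil_features(diameters, stimulus_frame, fps=30.0):
    """Toy feature extraction from a pupil-diameter-per-frame series.

    Returns (baseline, constriction_amplitude, latency_seconds). A live
    eye shows a clear constriction after the light stimulus, while a
    printed photograph or replayed frame stays essentially flat.
    """
    d = np.asarray(diameters, dtype=float)
    baseline = d[:stimulus_frame].mean()      # mean pre-stimulus diameter
    post = d[stimulus_frame:]                 # frames after the stimulus
    amplitude = baseline - post.min()         # how much the pupil shrank
    latency = post.argmin() / fps             # seconds from stimulus to minimum
    return baseline, amplitude, latency

# Simulated live response: pupil constricts from 6 mm to 4 mm after frame 10.
live = [6.0] * 10 + [6.0 - 0.4 * k for k in range(1, 6)] + [4.0] * 10
# Simulated photo attack: no reaction to the stimulus.
photo = [6.0] * 25

_, live_amp, _ = pupil_features(live, stimulus_frame=10)
_, photo_amp, _ = pupil_features(photo, stimulus_frame=10)
```

Features of this kind (one vector per subject) would then be fed to an SVM, which separates the "normal constriction" class from the "flat or abnormal" class.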
The CASIA-SURF dataset showed an 89.9% Equal Error Rate, while the Replay-Attack dataset showed a 92.1% Equal Error Rate.
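The second phase's feature matching can be sketched as follows. This is an illustrative brute-force matcher over 128-dimensional SIFT-style descriptors, not the paper's code; it assumes L2-normalized descriptors and treats the abstract's 0.08 value as a nearest-neighbor distance threshold for "good matches".

```python
import numpy as np

def brute_force_match(desc_a, desc_b, threshold=0.08):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b.

    desc_a, desc_b: (n, 128) and (m, 128) arrays of SIFT-style descriptors,
    assumed L2-normalized. Returns (i, j) index pairs whose nearest-neighbor
    Euclidean distance falls below the threshold.
    """
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    # Pairwise Euclidean distances via broadcasting: shape (n, m).
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)            # best candidate in desc_b per row
    return [(i, int(j)) for i, j in enumerate(nearest) if dists[i, j] < threshold]

# Usage: identical descriptor sets match pairwise; unrelated random unit
# vectors sit near distance sqrt(2) and are rejected by the 0.08 threshold.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 128))
a /= np.linalg.norm(a, axis=1, keepdims=True)
matches = brute_force_match(a, a)
```

With such a small threshold only near-duplicate descriptors survive, so the number of good matches between a probe image and an enrolled image serves as the recognition score.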
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.