
Open Access

ARTICLE

A Novel 3D Gait Model for Subject Identification Robust against Carrying and Dressing Variations

Jian Luo1,*, Bo Xu1, Tardi Tjahjadi2, Jian Yi1
1 College of Information Science and Engineering, Hunan Normal University, Changsha, 410000, China
2 School of Engineering, University of Warwick, Coventry, CV4 7AL, UK
* Corresponding Author: Jian Luo. Email: email
(This article belongs to the Special Issue: Multimodal Learning in Image Processing)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2024.050018

Received 25 January 2024; Accepted 11 June 2024; Published online 08 July 2024

Abstract

Subject identification via the subject’s gait is challenging due to variations in the subject’s carrying and dressing conditions in real-life scenes. This paper proposes a novel targeted 3-dimensional (3D) gait model (3DGait), represented by a set of interpretable 3DGait descriptors based on a 3D parametric body model. The 3DGait descriptors are utilised as invariant gait features in the 3DGait recognition method to address variations in object carrying and dressing. The 3DGait recognition method involves 2-dimensional (2D) to 3DGait data learning based on 3D virtual samples, a semantic gait parameter estimation Long Short-Term Memory (LSTM) network (3D-SGPE-LSTM), a feature fusion deep model based on multi-set canonical correlation analysis, and a Softmax recognition network. First, a sensory experiment based on 3D body shape and pose deformation with 3D virtual dressing is used to fit 3DGait onto the given 2D gait images. Interpretable 3D semantic parameters control the 3D morphing and dressing involved, and similarity degree measurement determines the semantic descriptors of 2D gait images of subjects with various shapes, poses and styles. Second, using the 2D gait images as input and the subjects’ corresponding 3D semantic descriptors as output, an end-to-end 3D-SGPE-LSTM is constructed and trained. Third, body shape, pose and external gait factors (3D-eFactors) are estimated using the 3D-SGPE-LSTM model to create a set of interpretable gait descriptors representing the 3DGait model, i.e., the 3D intrinsic semantic shape descriptor (3D-Shape), the 3D skeleton-based gait pose descriptor (3D-Pose), and 3D dressing with other 3D-eFactors. Finally, the 3D-Shape and 3D-Pose descriptors are coupled into a unified pattern space by learning prior knowledge from the 3D-eFactors. Experiments on the CASIA B, CMU MoBo, TUM GAID and GPJATK databases show that 3DGait is robust against object carrying and dressing variations, especially under multi-cross variations.
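The fusion step couples the 3D-Shape and 3D-Pose descriptors into a unified pattern space via canonical correlation analysis. As an illustration only, the sketch below implements classical two-view CCA in NumPy as a simplified stand-in for the multi-set variant used in the paper; the function name `cca_fuse`, the descriptor dimensions, and the regulariser `eps` are all hypothetical, not taken from the article.

```python
import numpy as np

def cca_fuse(X, Y, k=2, eps=1e-6):
    """Fuse two descriptor sets (e.g., shape and pose features, one row per
    gait sample) by projecting them onto their top-k canonical directions.
    Two-view CCA sketch; the paper itself uses a multi-set variant."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularised covariance and cross-covariance matrices
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Whiten each view, then take the SVD of the whitened cross-covariance;
    # singular values are the canonical correlations
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = Wx @ U[:, :k]   # projection for the first descriptor set
    B = Wy @ Vt[:k].T   # projection for the second descriptor set
    return np.hstack([X @ A, Y @ B])  # fused (n, 2k) representation

# Usage with random stand-in descriptors
rng = np.random.default_rng(0)
shape_desc = rng.normal(size=(50, 6))  # e.g., 3D-Shape descriptors
pose_desc = rng.normal(size=(50, 4))   # e.g., 3D-Pose descriptors
fused = cca_fuse(shape_desc, pose_desc, k=2)
print(fused.shape)  # (50, 4)
```

The fused representation concatenates both projected views, so a downstream classifier (a Softmax network in the paper) sees a single pattern space in which the two descriptor sets are maximally correlated.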

Keywords

Gait recognition; human identification; three-dimensional gait; canonical correlation analysis.