Open Access

ARTICLE

A Novel 3D Gait Model for Subject Identification Robust against Carrying and Dressing Variations

by Jian Luo1,*, Bo Xu1, Tardi Tjahjadi2, Jian Yi1

1 College of Information Science and Engineering, Hunan Normal University, Changsha, 410000, China
2 School of Engineering, University of Warwick, Coventry, CV4 7AL, UK

* Corresponding Author: Jian Luo.

(This article belongs to the Special Issue: Multimodal Learning in Image Processing)

Computers, Materials & Continua 2024, 80(1), 235-261. https://doi.org/10.32604/cmc.2024.050018

Abstract

Subject identification via the subject’s gait is challenging due to variations in the subject’s carrying and dressing conditions in real-life scenes. This paper proposes a novel targeted 3-dimensional (3D) gait model (3DGait) represented by a set of interpretable 3DGait descriptors based on a 3D parametric body model. The 3DGait descriptors are utilised as invariant gait features in the 3DGait recognition method to address variations in object carrying and dressing. The 3DGait recognition method involves 2-dimensional (2D) to 3DGait data learning based on 3D virtual samples, a semantic gait parameter estimation Long Short-Term Memory (LSTM) network (3D-SGPE-LSTM), a deep feature-fusion model based on multi-set canonical correlation analysis, and a Softmax recognition network. First, a sensory experiment based on 3D body shape and pose deformation with 3D virtual dressing is used to fit 3DGait onto the given 2D gait images. Interpretable 3D semantic parameters control the 3D morphing and dressing involved. Similarity degree measurement determines the semantic descriptors of 2D gait images of subjects with various shapes, poses and styles. Second, using the 2D gait images as input and the subjects’ corresponding 3D semantic descriptors as output, an end-to-end 3D-SGPE-LSTM is constructed and trained. Third, body shape, pose and external gait factors (3D-eFactors) are estimated using the 3D-SGPE-LSTM model to create a set of interpretable gait descriptors that represent the 3DGait model: a 3D intrinsic semantic shape descriptor (3D-Shape), a 3D skeleton-based gait pose descriptor (3D-Pose), and 3D dressing with other 3D-eFactors. Finally, the 3D-Shape and 3D-Pose descriptors are coupled into a unified pattern space by learning prior knowledge from the 3D-eFactors. Experiments on the CASIA B, CMU MoBo, TUM GAID and GPJATK databases show that 3DGait is robust against object carrying and dressing variations, especially under multi-cross variations.
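To make the pipeline described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a per-frame CNN encoder feeding an LSTM stands in for the 3D-SGPE-LSTM that regresses 3D semantic descriptors from 2D gait images, and a simple concatenation-plus-linear fusion stands in for the paper's multi-set canonical correlation analysis before the Softmax recognition layer. All class names, descriptor dimensions, and tensor shapes here are illustrative assumptions.

import torch
import torch.nn as nn

class SGPELSTMSketch(nn.Module):
    """Sketch of a semantic gait parameter estimation network: a CNN
    encodes each 2D frame, an LSTM summarises the gait sequence, and
    two linear heads emit the 3D-Shape and 3D-Pose descriptors."""
    def __init__(self, shape_dim=20, pose_dim=30, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(              # per-frame 2D feature extractor
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.shape_head = nn.Linear(hidden, shape_dim)  # 3D-Shape descriptor
        self.pose_head = nn.Linear(hidden, pose_dim)    # 3D-Pose descriptor

    def forward(self, frames):                     # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        last = out[:, -1]                          # summary of the gait sequence
        return self.shape_head(last), self.pose_head(last)

class GaitClassifierSketch(nn.Module):
    """Fuses the descriptors (plain concatenation here, standing in for
    multi-set CCA fusion) and applies a Softmax recognition layer."""
    def __init__(self, shape_dim=20, pose_dim=30, n_subjects=124):
        super().__init__()
        self.fuse = nn.Linear(shape_dim + pose_dim, 128)
        self.classify = nn.Linear(128, n_subjects)

    def forward(self, shape_desc, pose_desc):
        z = torch.relu(self.fuse(torch.cat([shape_desc, pose_desc], dim=-1)))
        return torch.log_softmax(self.classify(z), dim=-1)

# Toy usage: 2 sequences of 30 frames of 64x64 silhouettes; 124 subjects
# matches the size of CASIA B but is otherwise arbitrary.
frames = torch.randn(2, 30, 1, 64, 64)
shape_desc, pose_desc = SGPELSTMSketch()(frames)
log_probs = GaitClassifierSketch()(shape_desc, pose_desc)
print(log_probs.shape)   # torch.Size([2, 124])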

Cite This Article

APA Style
Luo, J., Xu, B., Tjahjadi, T., Yi, J. (2024). A novel 3D gait model for subject identification robust against carrying and dressing variations. Computers, Materials & Continua, 80(1), 235-261. https://doi.org/10.32604/cmc.2024.050018
Vancouver Style
Luo J, Xu B, Tjahjadi T, Yi J. A novel 3D gait model for subject identification robust against carrying and dressing variations. Comput Mater Contin. 2024;80(1):235-261. https://doi.org/10.32604/cmc.2024.050018
IEEE Style
J. Luo, B. Xu, T. Tjahjadi, and J. Yi, “A Novel 3D Gait Model for Subject Identification Robust against Carrying and Dressing Variations,” Comput. Mater. Contin., vol. 80, no. 1, pp. 235-261, 2024. https://doi.org/10.32604/cmc.2024.050018



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.