Open Access

ARTICLE


Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition

Xiaorui Zhang1,2,3,*, Xianglong Zeng1, Wei Sun3,4, Yongjun Ren1,2,3, Tong Xu5

1 Engineering Research Center of Digital Forensics, Ministry of Education, Jiangsu Engineering Center of Network Monitoring, School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing, 210044, China
2 Wuxi Research Institute, Nanjing University of Information Science & Technology, Wuxi, 214100, China
3 Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science & Technology, Nanjing, 210044, China
4 School of Automation, Nanjing University of Information Science & Technology, Nanjing, 210044, China
5 University of Southern California, Los Angeles, California, USA

* Corresponding Author: Xiaorui Zhang.

Computer Systems Science and Engineering 2023, 46(1), 671-686. https://doi.org/10.32604/csse.2023.035119

Abstract

Gesture recognition technology enables machines to read human gestures and has significant application prospects in human-computer interaction and sign language translation. Existing studies usually use convolutional neural networks to extract features directly from raw gesture data, but the networks are affected by interference in the input data and thus overfit to unimportant features. In this paper, we propose a novel method for encoding spatio-temporal information that enhances the key features required for gesture recognition, such as the shape, structure, contour, position, and motion of the hand, thereby improving recognition accuracy. The encoding method can condense an arbitrary number of frames of gesture data into a single spatio-temporal feature map, which then serves as the input to the neural network. This guides the model to fit important features while avoiding the complex recurrent network structures otherwise needed to extract temporal features. In addition, we design two sub-networks and train the model with a sub-network pre-training strategy that trains the sub-networks first and then the entire network, so that each sub-network neither focuses too narrowly on a single category of features nor is overly influenced by the other's features. Experimental results on two public gesture datasets show that the proposed spatio-temporal encoding method achieves advanced accuracy.
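The core idea in the abstract, condensing many frames into one feature map that still preserves motion order, can be illustrated with a minimal sketch. The weighting scheme below (a temporal ramp with a max over time, so later hand positions appear brighter) is a hypothetical stand-in for the paper's actual encoding, which is not specified here; the function name and toy data are assumptions for illustration only.

```python
import numpy as np

def encode_spatiotemporal(frames):
    """Collapse a gesture clip of shape (T, H, W) into one 2-D feature map.

    Illustrative scheme: each frame is scaled by its normalized temporal
    index (1/T .. 1), then a pixel-wise max over time keeps the brightest
    trace. Later hand positions end up brighter than earlier ones, so the
    direction of motion survives in a single image.
    """
    frames = np.asarray(frames, dtype=np.float32)
    t = frames.shape[0]
    weights = (np.arange(1, t + 1) / t).reshape(-1, 1, 1)  # temporal ramp
    return (frames * weights).max(axis=0)

# Toy clip: a bright dot moving left-to-right across 4 frames.
clip = np.zeros((4, 5, 5), dtype=np.float32)
for i in range(4):
    clip[i, 2, i] = 1.0

fmap = encode_spatiotemporal(clip)  # single 5x5 map; rightmost dot brightest
```

In this toy example the dot's final position has intensity 1.0 and its starting position 0.25, so a plain 2-D convolutional network fed `fmap` can recover both the trajectory and its direction without any recurrent layers, which is the trade-off the abstract describes.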

Keywords


Cite This Article

APA Style
Zhang, X., Zeng, X., Sun, W., Ren, Y., & Xu, T. (2023). Multimodal spatiotemporal feature map for dynamic gesture recognition. Computer Systems Science and Engineering, 46(1), 671-686. https://doi.org/10.32604/csse.2023.035119
Vancouver Style
Zhang X, Zeng X, Sun W, Ren Y, Xu T. Multimodal spatiotemporal feature map for dynamic gesture recognition. Comput Syst Sci Eng. 2023;46(1):671-686. https://doi.org/10.32604/csse.2023.035119
IEEE Style
X. Zhang, X. Zeng, W. Sun, Y. Ren, and T. Xu, “Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition,” Comput. Syst. Sci. Eng., vol. 46, no. 1, pp. 671-686, 2023. https://doi.org/10.32604/csse.2023.035119



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.