Open Access
ARTICLE
A Novel Locomotion Rule Embedding Long Short-Term Memory Network with Attention for Human Locomotor Intent Classification Using Multi-Sensor Signals
1 Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, College of Computer Science and Technology, Jilin University, Changchun, 130012, China
2 College of Software and Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, Jilin University, Changchun, 130012, China
* Corresponding Author: Yan Wang. Email:
Computers, Materials & Continua 2024, 79(3), 4349-4370. https://doi.org/10.32604/cmc.2024.047903
Received 21 November 2023; Accepted 11 April 2024; Issue published 20 June 2024
Abstract
Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous work has achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, owing to the similarities between the information of the transitions and that of their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added to the network via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady states and twelve transitions. Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03% for steady states and 96.52% for transitions. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared to other state-of-the-art networks, while using data from fewer sensors.
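The two-level decision scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the hierarchy only: level one separates steady states from transitions, and level two assigns the specific locomotor intent among the seven steady and twelve transition classes. The real classifiers at each level are multi-layer LSTMs with attention over the IMU/goniometer windows; here they are replaced by trivial placeholder rules, and all function names and thresholds are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the LREAL two-level classification scheme.
# Level 1: steady state vs. transition; Level 2: specific intent.
# Placeholder rules stand in for the paper's LSTM + attention models.

STEADY_STATES = [
    "level_walk", "ramp_ascent", "ramp_descent",
    "stair_ascent", "stair_descent", "sit", "stand",
]  # the seven steady locomotion states in ENABL3S

def level1_is_transition(window):
    """Stub for the level-1 classifier (steady vs. transition).

    A real model would run the sensor window through stacked LSTM
    layers with attention; this placeholder flags large signal swings.
    """
    return max(window) - min(window) > 1.0  # assumed toy threshold

def level2_intent(window, is_transition):
    """Stub for the level-2 classifier (fine-grained intent)."""
    if is_transition:
        # One of the twelve transition classes, e.g. walk -> stair ascent.
        return "level_walk->stair_ascent"
    # One of the seven steady classes.
    return STEADY_STATES[0]

def classify(window):
    """Run the two-level pipeline on one sensor window."""
    return level2_intent(window, level1_is_transition(window))

print(classify([0.1, 0.2, 0.1]))  # small swing -> steady state
print(classify([0.0, 2.5, 0.3]))  # large swing -> transition
```

The hierarchical split lets each stage solve an easier sub-problem: the first stage only has to detect that a transition is occurring, while the second stage disambiguates which one, which is where the paper's rule-modulating block constrains the output to physically plausible state sequences.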
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.