Open Access
ARTICLE
Automatic Recognition of Construction Worker Activities Using Deep Learning Approaches and Wearable Inertial Sensors
1 Department of Computer Engineering, School of Information and Communication Technology, University of Phayao, Phayao, 56000, Thailand
2 Department of Mathematics, Intelligent and Nonlinear Dynamic Innovations Research Center, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok, 10800, Thailand
* Corresponding Author: Anuchit Jitpattanakul. Email:
Intelligent Automation & Soft Computing 2023, 36(2), 2111-2128. https://doi.org/10.32604/iasc.2023.033542
Received 20 June 2022; Accepted 22 September 2022; Issue published 05 January 2023
Abstract
The automated evaluation and analysis of employee behavior in an Industry 4.0-compliant manufacturing firm is vital for the rapid and accurate diagnosis of work performance, particularly during the training of a new worker. Many techniques for identifying and monitoring worker performance in industrial applications rely on computer vision. Despite the prevalence of such vision-based approaches, it remains challenging to automatically monitor worker actions at external working sites where camera deployment is impractical. Using wearable inertial sensors, we propose a deep learning method for automatically recognizing the activities of construction workers. The proposed method combines a convolutional neural network with residual connection blocks and multi-branch aggregate transformation modules for high-performance recognition of complex activities such as construction worker tasks. The approach was evaluated with standard performance measures, including precision, F1-score, and AUC, on a publicly available benchmark dataset, VTT-ConIoT, which contains genuine construction work activities. In addition, standard deep learning models (CNNs, RNNs, and hybrid models) were trained under the same empirical conditions for comparison with the proposed model. With an average accuracy of 99.71% and an average F1-score of 99.71%, the experimental findings show that the proposed model can accurately recognize the actions of construction workers. Furthermore, we examined the impact of window size and sensor position on the recognition performance of the proposed method.
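The window-size experiments mentioned above presuppose that the raw inertial signal is first segmented into fixed-length windows. As a minimal sketch of that preprocessing step (the window length, overlap, and sampling rate below are illustrative assumptions, not parameters reported in this abstract):

```python
import numpy as np

def sliding_windows(signal, window_size, overlap=0.5):
    """Segment a multi-channel inertial signal of shape (T, C) into
    fixed-length windows of shape (N, window_size, C).

    window_size and overlap are hypothetical values chosen for
    illustration; the paper's actual segmentation settings may differ.
    """
    step = max(1, int(window_size * (1 - overlap)))
    windows = [signal[start:start + window_size]
               for start in range(0, len(signal) - window_size + 1, step)]
    if not windows:
        return np.empty((0, window_size, signal.shape[1]))
    return np.stack(windows)

# Example: 10 s of tri-axial accelerometer data sampled at 50 Hz
x = np.random.randn(500, 3)
# 2 s windows (100 samples) with 50% overlap -> 9 windows of shape (100, 3)
w = sliding_windows(x, window_size=100, overlap=0.5)
```

Each resulting window would then be fed to the recognition network as one training or inference sample; varying `window_size` is what the abstract's window-size analysis refers to.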
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.