Open Access
ARTICLE
Automated Crack Detection via Semantic Segmentation Approaches Using Advanced U-Net Architecture
1 Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, 03063, Korea
2 AI Research Team, Scalawox, Seoul, 08589, Korea
3 R&D Team, Raon Data, Seoul, 03073, Korea
4 Department of Interaction Science, Sungkyunkwan University, Seoul, 03063, Korea
* Corresponding Author: Eunil Park. Email:
Intelligent Automation & Soft Computing 2022, 34(1), 593-607. https://doi.org/10.32604/iasc.2022.024405
Received 15 October 2021; Accepted 05 January 2022; Issue published 15 April 2022
Abstract
Cracks affect the robustness and adaptability of various infrastructures, including buildings, bridge piers, pavement, and pipelines. Therefore, robust and reliable automated crack detection is essential. In this study, we conducted image segmentation on various crack datasets by applying advanced U-Net architectures. First, we collected and integrated crack datasets from prior studies, covering cracks in both buildings and pavements. For effective localization and detection of cracks, we used three U-Net-based neural networks: ResU-Net, VGGU-Net, and EfficientU-Net. The models were evaluated with five-fold cross-validation using several evaluation metrics, including mean pixel accuracy (MPA), mean intersection over union (MIoU), and the confusion matrix. On the integrated dataset, ResU-Net (68.47%) achieved the highest MIoU with a relatively low number of parameters compared to VGGU-Net (67.71%) and EfficientU-Net (68.07%). In addition, ResU-Net showed the lowest test runtime (40 milliseconds per image) and the highest true positive rate of 45.00% in the pixel-wise recognition test. Because the models were trained and validated on diverse surfaces, the proposed approach can serve as a pre-trained model for tasks with relatively few data sources. Furthermore, both practical and managerial implications are discussed herein.
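The two headline metrics in the abstract, mean intersection over union (MIoU) and mean pixel accuracy (MPA), are standard per-class averages over the segmentation classes (here, crack vs. background). The sketch below is an illustrative NumPy implementation under that assumption; it is not the authors' evaluation code, and the function name and toy masks are hypothetical.

```python
import numpy as np

def miou_and_mpa(pred, target, num_classes=2):
    """Illustrative MIoU and MPA for integer class masks of equal shape.

    For each class c: IoU = |pred==c AND target==c| / |pred==c OR target==c|,
    pixel accuracy = correctly predicted pixels of c / ground-truth pixels of c.
    Classes absent from both masks (or from the ground truth) are skipped.
    """
    ious, accs = [], []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious.append(inter / union)
        if t.sum() > 0:
            accs.append(inter / t.sum())
    return float(np.mean(ious)), float(np.mean(accs))

# Toy 2x2 binary masks (1 = crack pixel, 0 = background)
pred = np.array([[1, 0], [1, 0]])
target = np.array([[1, 0], [0, 0]])
miou, mpa = miou_and_mpa(pred, target)  # MIoU ≈ 0.583, MPA ≈ 0.833
```

Averaging per class rather than per pixel keeps the rare crack class from being swamped by the dominant background class, which is why MIoU is preferred for crack segmentation benchmarks.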
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.