Computer Modeling in Engineering & Sciences
Self-Driving Algorithm and Location Estimation Method for Small Environmental Monitoring Robot in Underground Mines
Department of Energy Resources Engineering, Pukyong National University, Busan, 48513, Korea
*Corresponding Author: Yosoon Choi. Email: email@example.com
Received: 07 December 2020; Accepted: 10 March 2021
Abstract: In underground mine environments where various hazards exist, such as tunnel collapse, toxic gases, the application of autonomous robots can improve the stability of exploration and efficiently perform repetitive exploratory operations. In this study, we developed a small autonomous driving robot for unmanned environmental monitoring in underground mines. The developed autonomous driving robot controls the steering according to the distance to the tunnel wall measured using the light detection and ranging sensor mounted on the robot to estimate its location by simultaneously considering the measured values of the inertial measurement unit and encoder sensors. In addition, the robot autonomously drives through the underground mine and performs environmental monitoring using the temperature/humidity, gas, and particle sensors mounted on the robot. As a result of testing the performance of the developed robot at an amethyst mine in Korea, the robot was found to be able to autonomously drive through tunnel sections with 28 m length, 2.5 m height, and 3 m width successfully. The average error of location estimation was approximately 0.16 m. Using environmental monitoring sensors, temperature of 15–17C, humidity of 42%–43%, oxygen concentration of 15.6%–15.7%, and particle concentration of 0.008–0.38 mg/m3 were measured in the experimental area, and no harmful gases were detected. In addition, an environmental monitoring map could be created using the measured values of the robot’s location coordinates and environmental factors recorded during autonomous driving.
Keywords: Underground mine; environmental monitoring; autonomous driving; geographic information system
There are various risk factors at underground mine sites, such as rockfall, tunnel collapse, collisions between workers and equipment, and toxic gases, and many human accidents occur. According to statistics from the Centers for Disease Control and Prevention (CDC), between 2010 and 2015, approximately 12,230 safety accidents occurred at U.S. underground mine sites, of which 121 resulted in deaths [1,2]. In the mining industry, various efforts are being made to mitigate these risk factors. Goodman et al. suggested a tunnel design method that considers the effects of jointed rock masses in underground mine tunnels. Abdellah et al. performed a stability evaluation of the intersection point during the development process of an underground mine. Wang et al. analyzed toxic gas accidents that occur in underground coal mines.
Recently, various studies on information communication technology (ICT)-based underground mine safety management systems are being conducted. There have been studies to prevent collisions between workers and equipment using Bluetooth beacons [6–9], radio-frequency identification (RFID) [10,11], and wireless access points (APs) [12,13] in an underground mine environment. In addition, there have been studies to measure environmental factors in underground mines using open-source hardware such as Arduino [14–17] and Raspberry Pi [18,19]. Studies have also been conducted to vividly visualize the workplace of underground mines or perform safety training using augmented reality (AR) [20,21] or virtual reality (VR) [22–26] technologies. A safety management system using artificial intelligence (AI) technology such as machine learning has also been developed [27,28].
Recently, studies have been conducted using autonomous driving robots to explore workplaces, transport roads, and accident sites in underground mines. Autonomous driving robots are used to perform exploration tasks while recognizing their own location in underground mine tunnels [29–33], or to perform tunnel mapping tasks to evaluate the shape and geological stability of the tunnels [34,35]. Representatively, Bakambu et al. developed an autonomous driving robot that can perform path planning and obstacle avoidance in an underground mine environment, and created a 2D map of the underground mine shaft. Ghosh et al. created a three-dimensional tunnel map of underground mines using a rotating light detection and ranging (LiDAR) sensor and verified its performance through field experiments. Kim et al. developed a LiDAR sensor-based autonomous driving robot and quantitatively evaluated its driving accuracy through driving tests at an underground mine site. Neumann et al. developed an autonomous driving robot based on the robot operating system (ROS) equipped with inertial measurement unit (IMU), LiDAR, and camera sensors, and performed 3D mapping of underground tunnels.
Various studies have also been conducted to measure environmental factors in underground mines using autonomous robots and environmental sensors. Baker et al.  developed ‘Groundhog’, an autonomous robot equipped with LiDAR sensors, camera sensors, gyro sensors, and gas sensors, and carried out environmental monitoring work at abandoned underground mines. Zhao et al.  developed an autonomous driving robot “MSRBOTS” that can detect toxic gases such as methane gas, carbon monoxide, and hydrogen sulfide in an underground mine environment, and conducted field experiments on underground mines. Günther et al.  developed a system that can measure temperature, humidity, and gas concentration in underground mine shafts using autonomous robots and transmit them remotely.
However, previous environmental monitoring studies using autonomous robots in underground mines have limitations in that the autonomous driving functions could be used only in some areas, and the robots had to be controlled remotely in most areas. In addition, the environmental characteristics of the underground mines could not be identified because the acquired environmental data were not analyzed or visualized. In particular, it is difficult to determine where environmental data were acquired because the environmental data and the robots' location information were not used together. As such, in previous studies, autonomous driving, location estimation, and environmental monitoring have never been conducted simultaneously.
In this study, we developed an unmanned environmental monitoring system using an autonomous robot and environmental sensors for underground mines and created an environmental map for underground mines using the location information of the autonomous robot, the environmental monitoring data, and the geographic information system (GIS). Location information of the autonomous driving robot was obtained using IMU, LiDAR, and encoder sensors, and the temperature, humidity, and concentration of gas in the atmosphere were measured using environmental sensors. This paper details the development of an environmental monitoring system for an autonomous driving robot and the results of a field experiment conducted using the developed system.
2.1 Hardware Configuration for Environmental Monitoring System
2.1.1 Hardware Configuration
Fig. 1 shows the hardware configuration of the autonomous driving robot for environmental monitoring developed in this study. The autonomous driving robot measures environmental factors using three types (temperature/humidity, gas, and particle) of environmental sensors, and performs autonomous driving and position estimation using three types (LiDAR, IMU, Encoder) of distance and angle sensors. All the sensors are connected to the laptop PC that acts as the main controller. In addition, a remote controller, a motor controller, and a driving platform are also connected to the main controller.
2.1.2 Mobile Robot Platform and Sensors
Tab. 1 shows the sensors and the mobile robot platform used in the autonomous robot developed in this study. The ERP-42 robot was used as the mobile robot platform. The ERP-42 robot controls speed and steering using four wheels, connects to the remote controller via Wi-Fi communication, and communicates with the driving motor through RS232C. The autonomous driving robot utilizes three types of sensors (IMU, encoder, LiDAR) to perform autonomous driving and location estimation. To measure the robot's three-axis pose, an IMU sensor that fuses an acceleration sensor, geomagnetic sensor, and gyroscope sensor with a Kalman filter was used. The IMU sensor was used after calibration, and it outputs pose data as the Euler angles roll, pitch, and yaw. The robot's driving distance was measured using the encoder sensor: the pulse count output as the robot's motor rotates was converted to distance by applying the encoder and motor gear ratios.
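The encoder-to-distance conversion described above can be sketched as follows. This is an illustrative calculation only: the pulse count per motor revolution, the gear ratio, and the wheel diameter are placeholder values, not the ERP-42's actual specifications.

```python
import math

def encoder_distance(pulses, pulses_per_motor_rev, gear_ratio, wheel_diameter_m):
    """Convert an encoder pulse count to travelled distance.

    The motor's pulse count is divided by the pulses per motor revolution
    and the motor-to-wheel gear ratio to obtain wheel revolutions, which are
    then multiplied by the wheel circumference.
    """
    wheel_revs = pulses / (pulses_per_motor_rev * gear_ratio)
    return wheel_revs * math.pi * wheel_diameter_m

# e.g., 5000 pulses with 500 pulses/motor rev, a 10:1 gear, and a 0.15 m wheel
d = encoder_distance(5000, 500, 10, 0.15)
```

With these placeholder parameters, one full wheel revolution (5000 pulses) corresponds to one wheel circumference of travel.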
In addition, the autonomous driving robot's main controller was a notebook PC with an Intel Core i7-9750H CPU (4.50 GHz), 16 GB RAM, and Windows 10, and the remote-control device was a notebook PC with an Intel CPU N2600 (1.60 GHz), 2 GB RAM, and Windows 7. An ATMega128 was used as the lower controller, and the video from the webcam installed on the front of the robot was transmitted to and recorded on the notebook PC. Fig. 2 shows the exterior view of the autonomous driving robot used in this study. Three types of environmental detection sensors (temperature/humidity, gas, and particle) and a webcam were placed in front of the robot, and the LiDAR sensor was placed on top of the robot.
2.1.3 Sensor Configuration for Environmental Monitoring
Fig. 3 shows the temperature and humidity sensor (Fig. 3a), gas sensor (Fig. 3b), and particle sensor (Fig. 3c) used in this study. To measure temperature and humidity, the SEN11301P module, an open-source hardware module based on the Arduino DHT-11 sensor, was used, and an Arduino Uno board was used to connect the SEN11301P module to the PC. Honeywell's GasAlertMax XT II model was used to measure hydrogen sulfide (H2S), carbon monoxide (CO), oxygen (O2), and combustible gases [lower explosion limit (LEL)]. To measure the particle concentration in the air, the digital dust monitor 3443 model of KANOMAX was used. Tab. 2 shows the detailed specifications of the environmental sensors used in this study. Each environmental sensor uses its own dedicated software to calibrate the readings and to filter noise and interpolate missing data (gas sensor: BW Technologies Fleet Manager II software; dust sensor: digital dust monitor 3443 software; temperature and humidity sensor: Arduino IDE software designed for pre-calibration).
2.2 Software Configuration for Environmental Monitoring System
2.2.1 Software Configuration
The environmental sensors used in this study are connected via USB communication to the main controller, a notebook PC, which stores the robot's location information and environmental data at 1 s intervals using LabVIEW software. Fig. 4 shows the overall structure of the autonomous driving robot-based environment mapping system developed in this study. This system performs autonomous driving and location estimation using the LiDAR, IMU, and encoder sensors, and calculates the robot's pose and location information in real time. In addition, it measures environmental factors using the temperature/humidity, gas, and particle sensors, and stores the data. The location information calculated using the location estimation sensors and the environmental data measured using the environmental sensors are sorted by time and combined into point data, which are sequentially converted into line and surface data to create an environment map.
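The time-based matching step above can be sketched with pandas. The column names and values here are purely illustrative, not the system's actual log schema: each log is sorted by its acquisition timestamp and then joined on the shared timestamp to pair every position fix with the environmental readings taken at the same second.

```python
import pandas as pd

# Hypothetical 1 s logs; column names are illustrative stand-ins
loc = pd.DataFrame({"t": [0, 1, 2],
                    "x": [0.0, 0.4, 0.9],
                    "y": [0.0, 0.1, 0.1]})
env = pd.DataFrame({"t": [0, 1, 2],
                    "temp_c": [15.2, 15.3, 15.3],
                    "dust_mg_m3": [0.010, 0.012, 0.011]})

# Sort each log by acquisition time, then join on the shared timestamp so
# every environmental reading carries the position where it was measured
merged = (loc.sort_values("t")
             .merge(env.sort_values("t"), on="t", how="inner"))
```

Each row of `merged` is one point record (time, position, readings), ready to be converted into line and surface data.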
2.2.2 Autonomous Driving and Location Estimation Method
Fig. 5 shows the flowchart of the autonomous driving algorithm. The autonomous driving robot measures the distance ahead using the LiDAR sensor and detects obstacles in the driving direction. It also calculates the distance difference between the left- and right-side tunnel walls to recognize the centerline of the road, and recognizes its driving position using two driving thresholds (Min. Threshold, Max. Threshold). According to the distance-difference thresholds, the robot's driving states are classified as BR (Big Right), SR (Small Right), N (Normal), SL (Small Left), and BL (Big Left). Fig. 6 shows the classification of states according to the robot's driving position. The robot measures the distances to the left and right walls, calculates the difference, and determines the state class. For example, when the robot is driving close to the right wall, the distance to the left wall is measured to be considerably greater than that to the right wall, and the state is determined as BR. The robot automatically drives along the centerline of the road using the classified driving positions and distance differences. Tab. 3 shows the calculation of the steering angle for returning from the robot's current driving position to the centerline of the road, depending on the distances to the left- and right-side tunnel walls.
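The five-state classification can be sketched as below. The threshold values and the sign convention (a positive difference meaning the robot is closer to the right wall) are assumptions for illustration; the paper's actual thresholds and steering-angle table (Tab. 3) are not reproduced here.

```python
def classify_state(d_left, d_right, min_th, max_th):
    """Classify the robot's driving position from LiDAR wall distances.

    A sketch of the five-state scheme: the left/right distance difference
    is compared against the two driving thresholds.
    """
    diff = d_left - d_right          # > 0: robot is right of the centerline
    if diff > max_th:
        return "BR"                  # big right  -> steer hard toward left
    elif diff > min_th:
        return "SR"                  # small right -> steer slightly left
    elif diff < -max_th:
        return "BL"                  # big left   -> steer hard toward right
    elif diff < -min_th:
        return "SL"                  # small left -> steer slightly right
    return "N"                       # near the centerline -> drive straight
```

For instance, with thresholds of 0.2 m and 1.0 m, a robot 0.5 m from the right wall and 2.0 m from the left wall is classified as BR.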
Fig. 7 shows the dynamic model of the autonomous driving robot developed in this study. In the xy-plane, the robot has velocities v_x(t) and v_y(t) along the x- and y-axes. The distance from the location at the previous time (t) to the location at the current time (t + 1) is represented by d(t), and the heading angle of the robot is represented by θ(t) (Fig. 7a). In the xz-plane, the robot has velocities v_x(t) and v_z(t) along the x- and z-axes; the robot's pitch angle is represented by φ(t) (Fig. 7b). The pose and driving distance of the autonomous robot were converted into location coordinates using Eqs. (1)–(3), where x(t), y(t), and z(t) represent the robot's x-, y-, and z-coordinates at time (t), respectively:

x(t + 1) = x(t) + d(t) · cos φ(t) · cos θ(t)   (1)
y(t + 1) = y(t) + d(t) · cos φ(t) · sin θ(t)   (2)
z(t + 1) = z(t) + d(t) · sin φ(t)              (3)

At time (t), the heading angle θ(t) and pitch angle φ(t) are determined using the LiDAR and IMU sensors, and d(t) represents the moving distance input from the encoder sensor.
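This dead-reckoning update can be expressed directly in code. The function below is a minimal sketch of the position update described above; angle conventions (radians, heading measured from the x-axis) are assumptions.

```python
import math

def dead_reckon(x, y, z, d, heading_rad, pitch_rad):
    """One dead-reckoning step: project the encoder distance d through
    the pitch and heading angles to update the 3D position."""
    x_new = x + d * math.cos(pitch_rad) * math.cos(heading_rad)
    y_new = y + d * math.cos(pitch_rad) * math.sin(heading_rad)
    z_new = z + d * math.sin(pitch_rad)
    return x_new, y_new, z_new

# Driving 1 m straight ahead on level ground moves the robot along x only
pos = dead_reckon(0.0, 0.0, 0.0, 1.0, 0.0, 0.0)
```

Applying this step once per encoder sample and accumulating the result yields the estimated driving path.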
IMU, encoder, and LiDAR sensors were used to estimate the location of the autonomous robot in the underground mine. Fig. 8 shows the overall system architecture of the location estimation algorithm for the autonomous driving robot. Raw data received from the accelerometer, gyroscope, and magnetometer were fused using a Kalman filter to calculate the robot's three-dimensional pose. The robot's heading value was calculated by recognizing the tunnel wall from the distance data measured using the LiDAR sensor and computing the robot's rotation angle. The robot's 3D pose calculated using the IMU sensor and the heading angle calculated using the LiDAR sensor were fused according to the rotation angle of the robot, and then applied together with the travel distance input from the encoder sensor to estimate the location of the autonomous robot.
Fig. 9 shows the data processing diagram of the IMU sensor. The gyroscope, magnetometer, and accelerometer outputs are processed with a Kalman filter and converted into an orientation matrix. U, V, and W are the fixed frames of the IMU sensor along the x-, y-, and z-axes. g represents the gravitational acceleration, while aUgravity, aVgravity, and aWgravity represent the gravitational acceleration vectors along the U, V, and W axes. aUcentripetal, aVcentripetal, and aWcentripetal denote the centripetal accelerations in the U, V, and W directions, and wU, wV, wW and vU, vV, vW denote the angular and linear velocities along the U, V, and W axes.
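The gyro/accelerometer fusion idea can be illustrated with a complementary filter, which stands in here for the Kalman filter used in the paper; the blend factor `alpha`, the update rate, and the axis convention are all assumptions for this sketch.

```python
import math

def complementary_pitch(pitch_prev, gyro_rate, ax, az, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer data into one pitch estimate.

    The gyro term integrates the angular rate (smooth but drifting), while
    the accelerometer term references gravity (noisy but drift-free); the
    blend factor alpha weights the two.
    """
    gyro_pitch = pitch_prev + gyro_rate * dt   # integrate angular rate
    accel_pitch = math.atan2(ax, az)           # pitch implied by gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Stationary, level robot: both terms agree on a pitch of zero
pitch = complementary_pitch(0.0, 0.0, 0.0, 9.81, 0.005)
```

A Kalman filter additionally maintains an error covariance and adapts the weighting per sample, but the drift-correction principle is the same.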
To perform autonomous driving, data acquisition from the sensors, and location estimation, LabVIEW 2018 software (National Instruments) was used; it is a graphical programming language that enables intuitive programming and is therefore widely used in robot control and signal measurement. Fig. 10 shows part of the block diagram of the programming code for the autonomous driving algorithm and data acquisition from the IMU sensor. The autonomous driving robot's position is classified into five driving states, with Max. Threshold and Min. Threshold as the boundaries. In each driving state, the distances to the left and right walls are calculated, and the steering angle that enables driving along the centerline of the tunnel is output (Fig. 10a). When an obstacle is detected, the robot is set to stop, and the autonomous driving mode is switched to remote-control mode through the remote controller. In addition, for the IMU sensor, 3-axis raw data and pre-calibrated roll, pitch, and yaw data received from the gyroscope, accelerometer, and magnetometer are output at 0.005 s intervals (Fig. 10b).
2.3 Creating Environmental Map Using GIS
In this study, ArcGIS, a geographic information system (GIS) software package, was used to create the environmental map. Fig. 11 shows the flowchart used to create an environment map in ArcGIS using the robot's location information and environmental data. First, the robot's position and the acquired data were matched by sorting the location information and environmental data according to the acquisition time. The matched data were merged into one dataset using the [Join] function. Using the [XY to Line] function, the environmental data values, including location information, were converted from point data into line data, and using the [Buffer] function, the line data were converted into plane data covering an area. When converting line data into plane data, each side was extended by 1.5 m to the left and right to reflect the width of the underground mine shaft. Because the line data represented the robot's driving path, the robot's heading could also be reflected when applying the [Buffer] function.
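The point-to-line-to-plane conversion can be sketched outside ArcGIS with shapely; the coordinates below are hypothetical stand-ins for the logged robot positions, and the 1.5 m buffer mirrors the half-width used above.

```python
from shapely.geometry import LineString, Point

# Hypothetical 1 s robot positions in metres (stand-ins for the logged path)
path_points = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3), (1.5, 0.6)]

# [XY to Line] equivalent: connect successive point fixes into a path line
path_line = LineString(path_points)

# [Buffer] equivalent: extend 1.5 m to each side to cover the tunnel width
tunnel_polygon = path_line.buffer(1.5)
```

Symbolizing each record's environmental value on the buffered polygon then yields a map in which concentration changes can be read along the driving path.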
In this study, the buffer function was used instead of the spatial interpolation technique because the autonomous driving robot acquired data at close intervals. Additionally, environmental factors were expressed in different colors according to the range to visualize changes and distributions in environmental factors. A 2D map of the underground mine shaft surveyed with a 2D LiDAR sensor was visualized simultaneously.
2.4 Field Experiment
2.4.1 Experiment Area
In this study, field experiments were conducted at an abandoned amethyst mine located in Korea (35°32′43″ N, 129°5′37″ E). Of the total mine site, a section approximately 28 m long, 2.5 m high, and 3 m wide was selected as the test area. The experimental area was divided according to the robot's driving time in 10 s increments, yielding six sections (A–F). Fig. 12 shows the entire mine tunnel (Fig. 12a), the classified driving sections (Fig. 12b), and the driving path with its start and end points (Fig. 12c) in the field experiment area. In the experimental area, both the left and right walls are within the measurement range of the LiDAR sensor, and the path includes four curved points.
2.4.2 Experiment Method
When the autonomous driving robot starts by receiving a start signal from the remote controller, it performs autonomous driving and location estimation using the IMU, LiDAR, and encoder sensors. It also measures the temperature, humidity, and concentrations of hydrogen sulfide, carbon monoxide, oxygen, combustible gases, and particles using the environmental sensors. The exterior driving footage of the robot and the screen of the notebook PC were recorded. The estimated location, pose data, and environmental factor data were saved at 1 s intervals. After the experiment was completed, the stored location and environmental data were sorted over time to match the environmental factor values to the robot's locations. In addition, we measured the robot's actual coordinates and driving path by recording and analyzing its driving from the outside, and evaluated the accuracy of the location estimation method by comparing them with the estimated location coordinates. The error between the actual and estimated locations was calculated using the root mean square error (RMSE), as shown in Eq. (4):

RMSE = sqrt[ (1/n) · Σ_{i=1}^{n} (p̂_i − p_i)² ]   (4)

where p̂_i and p_i denote the i-th estimated and actual coordinates, respectively, and n is the number of measurement points.
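The per-axis RMSE evaluation can be written in a few lines; the coordinate values in the usage example are hypothetical, not the experiment's data.

```python
import math

def rmse(estimated, actual):
    """Root mean square error between estimated and surveyed coordinates:
    the square root of the mean squared per-point difference."""
    n = len(estimated)
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(estimated, actual)) / n)

# e.g., per-axis error between estimated and measured x-coordinates
err_x = rmse([0.0, 1.1, 2.2], [0.0, 1.0, 2.0])
```

Computing this separately for the x- and y-coordinate series gives the per-axis accuracy figures reported in the results.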
Fig. 13 shows the developed autonomous driving robot estimating its location and measuring environmental data, along with its driving directions, while driving autonomously through the underground mine shaft at 20, 40, and 60 s. The robot successfully performed autonomous driving in all sections, taking approximately 61 s to drive through the experimental area. By performing location estimation in real time using the sensors, the robot's driving path could be visualized.
Tab. 4 shows the robot's estimated and actual x- and y-coordinates and the environmental data measured while the robot drove through the field experiment area. While the autonomous robot drove approximately 30 m, 61 location and environmental data records were measured and stored.
In the underground mine where the field experiments were conducted, temperatures of approximately 15–16°C and humidity of approximately 42%–43% were measured. No gases (CO, H2S, LEL) other than oxygen were detected during the field test. Fig. 14 shows the graphs of changes in environmental data over time. The temperature graph (Fig. 14a) shows a maximum difference of 2°C, and the humidity graph (Fig. 14b) shows a maximum difference of 1%, indicating an almost constant trend. In contrast, the particle graph (Fig. 14c) shows a relatively large variation.
A particle concentration of 0.293 mg/m³ was measured approximately 38 to 51 s after the robot's departure; normalized to the lowest particle concentration measured in the experimental area, this corresponds to a relative concentration of 190. It is presumed that smoke or particles from the movement of people or equipment occurred at that time. The O2 concentration graph (Fig. 14d) shows a change of approximately 0.1%, indicating an almost constant trend similar to the temperature and humidity graphs.
In this study, the location of the autonomous driving robot was measured in real time using the LiDAR, IMU, and encoder sensors, and the driving path was estimated by storing these data over time. In addition, the actual driving path of the robot was assessed by recording the robot's driving from the outside and analyzing the footage. Fig. 15 compares the estimated and actual driving paths of the robot while driving through the field experiment area for unmanned environmental monitoring. The autonomous driving robot showed a relatively large error in some sections where the rotation angle was large, but the overall accuracy of the location estimation was high. Quantitatively, RMSEs of approximately 0.11 m along the x-axis and 0.22 m along the y-axis were noted. Tab. 5 shows the RMSEs of the location estimation and the average and standard deviation of the environmental data in each section.
Fig. 16 shows the environment map created using the environmental data and GIS while the autonomous driving robot is driving. Because the location information and particle concentration of the autonomous driving robot were visualized simultaneously, it was possible to intuitively check the change in the particle concentration according to the driving path of the robot.
It was confirmed that the particle concentration partially increased around 5 m along the y-axis of the experimental area; it then increased gradually from approximately 18 m, reached its maximum around 19 m, and remained at a high concentration until approximately 24 m.
In this study, a small autonomous driving robot that can perform unmanned environmental monitoring in underground mines was developed using location estimation sensors and environmental detection sensors. Three types of sensors (IMU, LiDAR, and encoder) were used to estimate the location of the robot, and three types of environmental sensors (temperature/humidity, gas, and particle) were used to measure environmental factors. In field experiments conducted at an underground mine using the developed system, the location estimation method showed errors of approximately 0.22 m along the x-axis and 0.11 m along the y-axis. Temperature, humidity, O2, and particle concentration were measured to be almost constant, and no harmful gases were detected. The particle concentration reached a maximum of 0.293 mg/m³, and it was confirmed from the created environmental map that a large number of particles were generated in the 18–24 m section of the experimental area.
Because the global positioning system (GPS) cannot be used in underground mine environments, it is difficult to recognize the location, and the communication environment for remotely operating devices is also limited. However, the autonomous driving robot developed in this study could efficiently collect location information from the measurement points of environmental data by using location estimation sensors and also conduct exploration autonomously without intervention by workers. In addition, because the location information and environmental data were used together to create an environmental map, the environmental information of the underground mine could be effectively visualized.
The developed small autonomous driving robot can be used in areas where road conditions are relatively stable. However, in the case of an actual underground mine environment, as there exist areas where the road conditions are not stable, its utilization is limited. Therefore, to expand the utilization of the autonomous driving system developed in this study, it would need to be applied to large-scale equipment such as mining transport trucks and loaders [51–54].
Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2C1011216).
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.