Computers, Materials & Continua
DOI: 10.32604/cmc.2022.026783
Article
Autonomous Unmanned Aerial Vehicles Based Decision Support System for Weed Management
1Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh, 13713, Kingdom of Saudi Arabia
2Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif, 21944, Kingdom of Saudi Arabia
3Department of Archives and Communication, King Faisal University, Al Ahsa, Hofuf, 31982, Kingdom of Saudi Arabia
*Corresponding Author: Ashit Kumar Dutta. Email: drashitkumar@yahoo.com
Received: 04 January 2022; Accepted: 23 February 2022
Abstract: Recently, autonomous systems have become a hot research topic among industrialists and academicians owing to their applicability in different domains such as healthcare, agriculture, and industrial automation. Among the interesting applications of autonomous systems, their use in the agricultural sector is significant. Autonomous unmanned aerial vehicles (UAVs) can be used for site-specific weed management (SSWM) to improve crop productivity. In spite of substantial advancements in UAV-based data collection systems, automated weed detection still remains a tedious task owing to the high resemblance of weeds to crops. The recently developed deep learning (DL) models have exhibited effective performance in several data classification problems. In this aspect, this paper focuses on the design of an autonomous UAV with decision support system for weed management (AUAV-DSSWM) technique. The proposed AUAV-DSSWM technique intends to identify weeds using UAV images acquired from the target area. The AUAV-DSSWM technique primarily performs image acquisition and image pre-processing stages. Then, the Adam optimizer with the You Only Look Once Object Detector (YOLOv3) model is applied for the detection of weeds. For the effective classification of weeds and crops, the poor and rich optimization (PRO) algorithm with a softmax layer is applied. The design of the Adam optimizer and PRO algorithm for the parameter tuning process results in enhanced weed detection performance. A wide range of simulations were performed on UAV images, and the experimental results exhibit the promising performance of the AUAV-DSSWM technique over other recent techniques.
Keywords: Autonomous systems; object detection; precision agriculture; unmanned aerial vehicles; deep learning; parameter tuning
1 Introduction

In recent times, remote sensing with UAVs has shown great potential in precision agriculture, since UAVs can be equipped with several imaging sensors to gather images of high temporal, spatial, and spectral resolution [1]. Their high flexibility and low-cost in-flight scheduling make them prevalent in research. In UAV-based remote sensing, Object-Based Image Analysis (OBIA) is one of the traditional methods for object classification [2]. OBIA first identifies spatially and spectrally homogeneous objects through segmentation and later integrates geometric, spectral, and textural data from those objects to improve classification results [3]. Earlier research on OBIA in precision agriculture has examined, for instance, weed detection and crop classification using UAV images. Precision agriculture is described as the application of technology with the aim of enhancing environmental quality and crop performance [4]. Its primary objective is to choose the correct management practice for allocating the right doses of inputs, such as herbicides, fertilizers, fuel, and seed, at the right time and in the right place.
Weed characterization and detection represent major problems in precision agriculture because, in present farming practice, herbicides are widely applied across whole fields, even though weeds exhibit uneven spatial distribution [5]. The traditional approach for controlling weeds in crops is manual weeding, but it is labour-intensive and time-consuming, making it ineffective for large-scale crops [6]. To solve this problem, UAV networks are utilized. In addition, these UAVs are equipped with multi-spectral cameras, which provide more detail than RGB digital images because they capture spectral bands that the human eye cannot perceive, such as near infrared (NIR), providing data on factors like the reflectance of vegetation indices and visible light [7]. This capability allows significant correlations to be detected, which assists in making distinct estimations.
In spite of considerable developments in UAV acquisition systems, the automated detection of weeds remains a challenge. Recently, deep learning (DL) approaches have demonstrated significant advances on several computer vision (CV) tasks, and current developments show the significance of these techniques for weed detection [8]. Still, they are not typically employed in agriculture, as the large amount of data needed in the learning process has emphasized the problem of manually annotating such datasets [9]. The same issue emerges for agricultural data, where labelling plants in field images is time-consuming. Until now, very little consideration has been given to the unsupervised annotation of data for training DL methods, especially in agriculture [10].
In spite of the progress and efforts that have been made, further work is still needed to enhance the robustness and accuracy of weed maps under difficult agricultural conditions. Considering real-time weed detection in row-crop fields, crop rows are highly effective cues for inter-row weed recognition when analyzing the images [11]. This detection method is effective, but it fails to identify intra-row weeds. In contrast, OBIA has the capacity to identify weeds regardless of their distribution, although it relies largely on extracted features and has the possibility of categorizing inter-row weeds inaccurately.
This paper presents an autonomous UAV with decision support system for weed management (AUAV-DSSWM) technique. The proposed AUAV-DSSWM technique initially undergoes image acquisition and image pre-processing stages. Then, the Adam optimizer with the You Only Look Once Object Detector (YOLOv3) model is utilized for automated weed detection. Moreover, the poor and rich optimization (PRO) algorithm with a softmax layer is used for the effective classification of weeds and crops. The design of the Adam optimizer and PRO algorithm for the parameter tuning process results in enhanced weed detection performance. A detailed simulation analysis is carried out on test UAV images and the results are inspected under varying aspects.
The rest of the paper is organized as follows. Section 2 briefs the related works, Section 3 presents the proposed model, Section 4 offers the experimental validation, and Section 5 draws the conclusion.
2 Related Works

This section provides a detailed survey of existing weed detection techniques using UAV images. In Islam et al. [12], the performance of various ML methods such as RF, SVM, and KNN is analyzed for detecting weeds through UAV images gathered from chilli crop fields. Osorio et al. [13] introduced three methodologies for weed estimation based on DL image processing in lettuce crops and compared them with visual estimation by specialists. One approach depends on an SVM using HOG as feature descriptors, another depends on YOLOv3 and its effective object detection framework, and the last depends on Mask R-CNN to obtain an instance segmentation of every individual plant.
In Islam et al. [14], RGB images captured by drones were utilized for detecting weeds in chilli fields. This problem was tackled by feature extraction, orthomosaicking of images, labelling of images for training the ML approaches, and the use of unsupervised learning with RF classification. In Huang et al. [15], UAV images were captured in rice fields. A semantic labelling model was adapted for generating the weed distribution map. An ImageNet-pretrained CNN with residual architecture was adopted in a fully convolutional form and transferred to this dataset by fine-tuning. Atrous convolution was employed for extending the receptive fields of the convolution filters, the performance of multi-scale processing was estimated, and a fully connected conditional random field (FC-CRF) was employed after the CNN to further refine the spatial details.
Bah et al. [16] integrated DL with line detection to reinforce the classification method. The presented approach was employed on high-resolution UAV images of vegetables taken around 20 m above the soil, and a wide-ranging assessment of the algorithm with real data was implemented. Gao et al. [17] designed an approach for detecting inter- and intra-row weeds in early-season maize fields from aerial visual images. In particular, the Hough transform (HT) algorithm was applied to the orthomosaicked image for detecting inter-row weeds, and a semi-automated Object-Based Image Analysis (OBIA) process was presented using RF integrated with feature selection (FS) methods for classifying maize, soil, and weeds.
Bah et al. [18] presented a fully automated learning approach with a CNN using an unsupervised training dataset for detecting weeds from UAV images. The presented approach includes three primary stages. Firstly, the crop rows are detected and utilized for identifying the inter-row weeds. Next, the inter-row weeds are employed to constitute the training dataset. Lastly, a CNN is executed on this dataset for building an algorithm that is capable of detecting the crops and the weeds in the images. Gašparović et al. [19] experimented with four classification methods for creating weed maps, combining manual and automatic models, as well as pixel-based and object-based classification models, separately applied on two subsets. The input UAV data were gathered by a low-cost RGB camera because of its cost-competitiveness compared with multi-spectral cameras. The classification algorithm depends on an RF-based ML method for weed and bare soil extraction, followed by unsupervised classification with the K-means method for additional evaluation of weed and bare soil presence in non-soil and non-weed regions.
3 The Proposed Model

In this study, a new AUAV-DSSWM technique has been developed for the detection and classification of weeds in UAV images. The AUAV-DSSWM technique encompasses several subprocesses, namely UAV image collection, image pre-processing, YOLOv3 based object detection, Adam optimizer based hyperparameter tuning, SM layer based classification, and PRO based parameter optimization. Fig. 1 illustrates the overall process of the AUAV-DSSWM technique. The detailed working of each module is elaborated in the succeeding sections.
3.1 Data Collection and Pre-processing
For data collection, UAVs mounted with sensors and cameras are utilized for capturing images of agricultural field crops. In this study, RGB cameras are placed on the UAVs, and the images were acquired by a camera mounted on a Phantom 3 Advanced drone with a 1/2.3″ CMOS sensor. Generally, the basic processes involved in UAV image pre-processing are photo alignment, dense point cloud building, and orthomosaic generation.
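As an illustration, the minimal sketch below shows a typical pre-processing step for a single UAV frame before it is passed to the detector: the image is resized to a fixed input resolution and its intensities are rescaled to [0, 1]. The file name is a placeholder, and the 416 × 416 input size is an assumption based on the standard YOLOv3 configuration, not a value reported in the paper.

```python
import cv2
import numpy as np

# Placeholder file name; any RGB frame captured by the drone would do.
image = cv2.imread("uav_field_image.jpg")

# Resize to the fixed network input resolution (416 x 416 is the
# standard YOLOv3 setting) and rescale pixel intensities to [0, 1].
resized = cv2.resize(image, (416, 416))
normalized = resized.astype(np.float32) / 255.0
```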
3.2 YOLOv3 with Adam Optimizer Based Object Detection
During the object detection process, the UAV images are passed into the YOLOv3 model, and the outcome is the set of objects identified in them. Unlike other methods such as R-CNN based algorithms or sliding-window based methods, the YOLO family of algorithms looks at the whole image while recognizing and detecting objects, and extracts deep information regarding appearance and classes. The algorithm treats object recognition as a single regression problem, which gives fast responses with a decrease in the complexity of the detector. Despite the considerable gain in speed, the algorithm lags in accuracy, particularly with smaller objects. The newest release of YOLO, i.e., YOLOv3, has demonstrated its performance over other advanced detectors. The YOLOv3 framework has 107 layers in total, allocated as {convolutional: 75, shortcut: 23, route: 4, upsample: 2, detection (YOLO): 3}.
For the detection process, the network is altered by eliminating its final layers and stacking on additional layers, which yields the ultimate network architecture. The initial 75 layers of the network correspond to the 52 convolution layers of the Darknet-53 model pre-trained on ImageNet. The remaining 32 layers are included to qualify YOLOv3 for object recognition on distinct datasets through additional training. As well, YOLOv3 employs residual layers, i.e., skip connections, which integrate feature maps from two layers through element-wise addition, resulting in finer-grained information.
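A hedged sketch of how such a trained detector could be run on a UAV frame with OpenCV's DNN module is given below; the cfg/weights file names are assumptions, since the paper does not publish its trained model.

```python
import cv2

# Hypothetical file names for a YOLOv3 model trained on crop/weed data.
net = cv2.dnn.readNetFromDarknet("yolov3-weed.cfg", "yolov3-weed.weights")

image = cv2.imread("uav_field_image.jpg")
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0,
                             size=(416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through the three YOLO detection heads; each output row
# holds (x, y, w, h, objectness, per-class scores) for one candidate box.
outputs = net.forward(net.getUnconnectedOutLayersNames())
```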
YOLOv3 substitutes the softmax-based activation utilized in older versions with independent logistic classifiers. Features are extracted following the same idea as a feature pyramid network. Likewise, binary cross-entropy loss is now employed for class prediction, which is helpful when confronted with images having overlapping labels. YOLOv3 processes an image by dividing it into an $S \times S$ grid of cells; each cell predicts $B$ bounding boxes together with confidence scores and class probabilities.
YOLOv3 has a loss function, given in Eq. (3), which instructs the network to appropriately forecast the bounding boxes, precisely categorize the identified objects, and penalize false positives:

$$
\begin{aligned}
\mathcal{L} ={}& \lambda_{coord} \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
&+ \lambda_{coord} \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
&+ \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left(C_i - \hat{C}_i\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i - \hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^{2}} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned} \tag{3}
$$
The symbols under a hat represent the respective predicted values. The loss function has three error components: classification, localization, and confidence, as noted in Eq. (3). The distinct loss components are integrated with the sum-squared method, as it is easy to optimize. The localization loss is accountable for reducing the error between the ground-truth object and the "responsible" bounding box when objects are identified in a grid cell.
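The following NumPy sketch illustrates the composition of the sum-squared loss in Eq. (3). It assumes predictions and ground truth have already been matched per box, and it is only a didactic rendering; as noted above, YOLOv3 itself replaces the class term with binary cross-entropy.

```python
import numpy as np

def yolo_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-squared loss of Eq. (3) over a flattened set of grid boxes.

    pred, truth: (n_boxes, 5 + n_classes) arrays holding
    (x, y, w, h, confidence, class probabilities); w, h assumed >= 0.
    obj_mask: boolean (n_boxes,), True for the box "responsible"
    for a ground-truth object in its grid cell.
    """
    noobj = ~obj_mask
    # Localization: centre coordinates plus square-rooted width/height.
    loc = np.sum((pred[obj_mask, 0:2] - truth[obj_mask, 0:2]) ** 2)
    loc += np.sum((np.sqrt(pred[obj_mask, 2:4])
                   - np.sqrt(truth[obj_mask, 2:4])) ** 2)
    # Confidence: the no-object term is what penalizes false positives.
    conf = np.sum((pred[obj_mask, 4] - truth[obj_mask, 4]) ** 2)
    conf += lambda_noobj * np.sum((pred[noobj, 4] - truth[noobj, 4]) ** 2)
    # Classification: class-probability error for object boxes only.
    cls = np.sum((pred[obj_mask, 5:] - truth[obj_mask, 5:]) ** 2)
    return lambda_coord * loc + conf + cls
```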
For optimally adjusting the hyperparameters of the YOLOv3 model, the Adam optimizer is used. Adam is a first-order optimization method used to replace the conventional stochastic gradient descent procedure. It combines a second-moment computation with the first-order moment approximation, adding momentum to Adadelta. The learning rate of every variable is adaptively modified using the first- and second-order moment approximations of the gradients. In addition, bias correction is applied, which makes the variables more stable. The iterated update equations can be defined as in Eq. (5):

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^{2}, \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^{t}}, \qquad
\hat{v}_t = \frac{v_t}{1 - \beta_2^{t}}, \qquad
\theta_t = \theta_{t-1} - \frac{\eta\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned} \tag{5}
$$

where $g_t$ denotes the gradient at iteration $t$, $m_t$ and $v_t$ are the biased first- and second-moment estimates, $\hat{m}_t$ and $\hat{v}_t$ their bias-corrected counterparts, $\beta_1$ and $\beta_2$ the exponential decay rates, $\eta$ the learning rate, and $\epsilon$ a small constant that prevents division by zero.
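A minimal NumPy sketch of one Adam step following Eq. (5) is shown below; the default constants are the commonly used ones and are not values taken from the paper.

```python
import numpy as np

def adam_update(theta, grad, m, v, t, lr=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step following Eq. (5); t counts iterations from 1."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```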
3.3 SM Layer with PRO Algorithm Based Weed Classification

Once the objects are detected in the UAV images, the classification process is carried out by the use of PRO with the SM classifier, thereby effectually distinguishing the weeds from the crops. The SM layer forecasts the label probability of the input data $x$ for each of the $K$ classes as

$$ P(y = j \mid x) = \frac{\exp\left(x^{\mathrm{T}} w_j\right)}{\sum_{k=1}^{K} \exp\left(x^{\mathrm{T}} w_k\right)} $$

where $w_j$ denotes the weight vector associated with class $j$ (here, $K = 2$ for crop and weed). Let $\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$ denote the training samples; the SM layer is trained by minimizing the cross-entropy between its predicted probabilities and the true labels.
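For concreteness, a minimal sketch of the SM layer's forward pass and cross-entropy objective (the standard formulation, not code from the paper) is given below.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels (0 = crop, 1 = weed)."""
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
```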
To optimize the weight values in the SM layer, the PRO algorithm is applied in such a way that the weed detection outcomes are improved to the maximum extent.
The PRO technique was presented in [22]. PRO is based on the wealth-related behaviour of people in a society. Usually, people are clustered into two economic classes: the first group consists of wealthier people (wealth above the average), and the second group consists of poorer people (wealth below the average). Every person in these groups seeks to improve their economic position in society. The people of the lower economic class try to enhance their economic position and decrease the class gap by learning from the wealthier people, while the people of the rich economic class attempt to extend the class gap observed with the individuals of the poorer class. In the optimization setting, every individual solution in the poor population moves towards the globally optimal solutions in the search space by learning from the rich solutions in the rich population. Assume that $N$ represents the population size; $N$ solutions are initialized with arbitrary real values between zero and one. Then, a digitization procedure is executed for every position of each individual solution to convert the real values to binary values, based on Eq. (9):

$$ b_{ij} = \begin{cases} 1, & x_{ij} \ge 0.5 \\ 0, & \text{otherwise} \end{cases} \tag{9} $$

At this point, $x_{ij}$ denotes the real value at the $j$-th position of the $i$-th solution and $b_{ij}$ its binary counterpart.
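A one-line rendering of this digitization step, under the thresholding assumption used in the reconstruction of Eq. (9) above, might look as follows.

```python
import numpy as np

def digitize(x, threshold=0.5):
    """Map real-valued positions in [0, 1] to binary values (cf. Eq. (9))."""
    return (np.asarray(x) >= threshold).astype(int)
```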
The fitness function (FF) plays an essential role in optimization problems: it computes a value indicating how good a candidate solution is. Here, the classifier error rate is assumed as the FF to be minimized, as expressed in Eq. (13); rich solutions have a minimal fitness score (error rate) and poor solutions have a maximal fitness score (error rate):

$$ FF = \text{error rate} = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100 \tag{13} $$
The rich people move so as to enlarge their economic class gap with the individuals observed in the poorer economic class [23]. The poorer economic class people move so as to decrease their economic class gap, learning from the individuals of the rich economic class to enhance their financial status. This general behaviour of rich and poor people is utilized for generating the new solutions.
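A simplified sketch of this rich/poor movement rule is given below; it follows the verbal description above rather than the exact update equations of [22], and the population size, iteration count, and bounds are illustrative assumptions.

```python
import numpy as np

def pro_optimize(fitness, dim, n_pop=20, iters=100, lo=0.0, hi=1.0):
    """Simplified poor-and-rich optimization of a weight vector.

    fitness: callable mapping a candidate vector to the value to be
    minimized (here, the SM-layer classifier error rate of Eq. (13)).
    """
    pop = np.random.uniform(lo, hi, (n_pop, dim))
    for _ in range(iters):
        fit = np.array([fitness(x) for x in pop])
        order = np.argsort(fit)                     # best (rich) first
        rich, poor = pop[order[: n_pop // 2]], pop[order[n_pop // 2:]]
        mean_poor = poor.mean(axis=0)
        # Rich solutions widen the class gap: move away from the poor mean.
        rich = rich + np.random.rand(*rich.shape) * (rich - mean_poor)
        # Poor solutions narrow the gap: learn from (move toward) the rich.
        poor = poor + np.random.rand(*poor.shape) * (rich.mean(axis=0) - poor)
        pop = np.clip(np.vstack([rich, poor]), lo, hi)
    fit = np.array([fitness(x) for x in pop])
    return pop[fit.argmin()], fit.min()
```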
4 Experimental Validation

The experimental result analysis of the AUAV-DSSWM technique is carried out in this section. The classification results of the AUAV-DSSWM technique are examined using a benchmark dataset [24]. It comprises 287 images containing crops and 2713 images containing weeds. A few sample images are exhibited in Fig. 3.
Fig. 4 demonstrates a sample visualization result of the AUAV-DSSWM technique. Fig. 4a illustrates the original image containing both crops and weeds. Fig. 4b indicates the presence of weeds, identified by the red bounding boxes. These figures reveal that the AUAV-DSSWM technique has effectually identified the weeds among the crops.
Fig. 5 showcases a sample set of original images along with the ground truth of the crops. Fig. 5a demonstrates an original image with a few crops. Fig. 5b depicts the crops bounded by boxes representing the ground truth, which is helpful for the training process.
The confusion matrices generated by the AUAV-DSSWM technique under dissimilar epochs are portrayed in Fig. 6. The results show that the AUAV-DSSWM technique has effectually classified the images into crop and weed. For instance, under 10 epochs, the AUAV-DSSWM technique identified 275 images as crop and 2700 images as weed. Likewise, under 50 epochs, it categorized 279 images as crop and 2700 images as weed.
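From the reported class totals (287 crop and 2713 weed images), the off-diagonal counts of the 10-epoch confusion matrix can be inferred, which allows the usual metrics to be recomputed. The snippet below does so, treating crop as the positive class; the inferred off-diagonal values are assumptions derived from those totals.

```python
import numpy as np

# 10-epoch confusion matrix; rows = actual, cols = predicted (crop, weed).
# Off-diagonal counts are inferred from the class totals (287 / 2713).
cm = np.array([[275, 12],
               [13, 2700]])
tp, fn, fp, tn = cm[0, 0], cm[0, 1], cm[1, 0], cm[1, 1]
accuracy = (tp + tn) / cm.sum()        # ~0.9917
precision = tp / (tp + fp)             # ~0.9549
recall = tp / (tp + fn)                # ~0.9582
print(accuracy, precision, recall)
```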
Tab. 1 and Fig. 7 portray the overall weed detection outcomes of the AUAV-DSSWM technique under distinct epochs. The results indicate that the AUAV-DSSWM technique has accomplished effective outcomes under all epochs.
The ROC analysis of the AUAV-DSSWM technique on the test weed dataset is shown in Fig. 8. The figure reveals that the AUAV-DSSWM technique has resulted in an increased ROC of 99.9732, implying that it has the ability to attain improved weed classification performance.
To showcase the improvement offered by the AUAV-DSSWM technique, a detailed comparative study is provided in Tab. 2.
Figs. 9 and 10 present further comparative analyses of the AUAV-DSSWM technique against the recent approaches listed in Tab. 2.
Tab. 3 and Fig. 11 show the computation time (CT) analysis of the AUAV-DSSWM technique against recent approaches [25]. The results depict that the FE-RF, FE-KNN, and FSVM models obtain higher CTs of 204, 185, and 172 s respectively. Likewise, the SVM, RF, ResNet-101, and VGG-16Net models obtain somewhat reduced CTs of 157, 141, 125, and 97 s respectively. However, the AUAV-DSSWM technique outperforms the existing methods with the lowest CT of 64 s. These results and the preceding discussion show that the AUAV-DSSWM technique has the ability to attain maximum weed detection performance.
5 Conclusion

In this study, a new AUAV-DSSWM technique has been developed for the detection and classification of weeds in UAV images. The AUAV-DSSWM technique encompasses several subprocesses, namely UAV image collection, image pre-processing, YOLOv3 based object detection, Adam optimizer based hyperparameter tuning, SM layer based classification, and PRO based parameter optimization. The utilization of the Adam optimizer and PRO algorithm for the parameter tuning process results in enhanced weed detection performance. A detailed simulation analysis was carried out on the test UAV images and the results were inspected under varying aspects. The comprehensive comparative results demonstrate the significant outcomes of the AUAV-DSSWM technique over the other recent techniques. In future, the AUAV-DSSWM technique can be extended with the design of automated image annotation techniques to reduce the manual labelling effort.
Acknowledgement: The authors would like to acknowledge the support provided by AlMaarefa University while conducting this research work.
Funding Statement: This research was supported by the Researchers Supporting Program (TUMA-Project-2021-27), AlMaarefa University, Riyadh, Saudi Arabia, and by Taif University Researchers Supporting Project number (TURSP-2020/161), Taif University, Taif, Saudi Arabia.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References

1. J. T. Sánchez, F. J. M. Carrascosa, F. M. J. Brenes, A. I. de Castro and F. L. Granados, "Early detection of broad-leaved and grass weeds in wide row crops using artificial neural networks and UAV imagery," Agronomy, vol. 11, no. 4, pp. 749, 2021.
2. M. P. Ortiz, J. M. Peña, P. A. Gutiérrez, J. T. Sánchez, C. H. Martínez et al., "Selecting patterns and features for between- and within-crop-row weed mapping using UAV-imagery," Expert Systems with Applications, vol. 47, pp. 85–94, 2016.
3. M. Du and N. Noguchi, "Monitoring of wheat growth status and mapping of wheat yield's within-field spatial variations using color images acquired from UAV-camera system," Remote Sensing, vol. 9, no. 3, pp. 289, 2017.
4. J. Rasmussen, J. Nielsen, F. G. Ruiz, S. Christensen and J. C. Streibig, "Potential uses of small unmanned aircraft systems (UAS) in weed research," Weed Research, vol. 53, no. 4, pp. 242–248, 2013.
5. D. J. Mulla, "Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps," Biosystems Engineering, vol. 114, no. 4, pp. 358–371, 2013.
6. V. Alchanatis, L. Ridel, A. Hetzroni and L. Yaroslavsky, "Weed detection in multi-spectral images of cotton fields," Computers and Electronics in Agriculture, vol. 47, no. 3, pp. 243–260, 2005.
7. A. D. S. Ferreira, D. M. Freitas, G. G. D. Silva, H. Pistori and M. T. Folhes, "Weed detection in soybean crops using ConvNets," Computers and Electronics in Agriculture, vol. 143, no. 11, pp. 314–324, 2017.
8. M. J. Expósito, F. L. Granados, J. L. G. Andújar and L. G. Torres, "Characterizing population growth rate of Convolvulus arvensis in wheat-sunflower no-tillage systems," Crop Science, vol. 45, no. 5, pp. 2106–2112, 2005.
9. C. Gée, J. Bossu, G. Jones and F. Truchetet, "Crop/weed discrimination in perspective agronomic images," Computers and Electronics in Agriculture, vol. 60, no. 1, pp. 49–59, 2008.
10. G. Jones, C. Gée and F. Truchetet, "Assessment of an inter-row weed infestation rate on simulated agronomic images," Computers and Electronics in Agriculture, vol. 67, no. 1–2, pp. 43–50, 2009.
11. F. L. Granados, "Weed detection for site-specific weed management: Mapping and real-time approaches," Weed Research, vol. 51, no. 1, pp. 1–11, 2011.
12. N. Islam, M. M. Rashid, S. Wibowo, C. Y. Xu, A. Morshed et al., "Early weed detection using image processing and machine learning techniques in an Australian chilli farm," Agriculture, vol. 11, no. 5, pp. 387, 2021.
13. K. Osorio, A. Puerto, C. Pedraza, D. Jamaica and L. Rodríguez, "A deep learning approach for weed detection in lettuce crops using multispectral images," AgriEngineering, vol. 2, no. 3, pp. 471–488, 2020.
14. N. Islam, M. M. Rashid, S. Wibowo, S. Wasimi, A. Morshed et al., "Machine learning based approach for weed detection in chilli field using RGB images," in Int. Conf. on Natural Computation, Fuzzy Systems and Knowledge Discovery, Cham, Springer, pp. 1097–1105, 2020.
15. H. Huang, Y. Lan, J. Deng, A. Yang, X. Deng et al., "A semantic labeling approach for accurate weed mapping of high resolution UAV imagery," Sensors, vol. 18, no. 7, pp. 2113, 2018.
16. M. D. Bah, E. Dericquebourg, A. Hafiane and R. Canals, "Deep learning based classification system for identifying weeds using high-resolution UAV imagery," in Science and Information Conf., Cham, Springer, pp. 176–187, 2018.
17. J. Gao, W. Liao, D. Nuyttens, P. Lootens, J. Vangeyte et al., "Fusion of pixel and object-based features for weed mapping using unmanned aerial vehicle imagery," International Journal of Applied Earth Observation and Geoinformation, vol. 67, pp. 43–53, 2018.
18. M. Bah, A. Hafiane and R. Canals, "Deep learning with unsupervised data labeling for weed detection in line crops in UAV images," Remote Sensing, vol. 10, no. 11, pp. 1690, 2018.
19. M. Gašparović, M. Zrinjski, Đ. Barković and D. Radočaj, "An automatic method for weed mapping in oat fields based on UAV imagery," Computers and Electronics in Agriculture, vol. 173, pp. 105385, 2020.
20. L. Zhou, G. Deng, W. Li, J. Mi and B. Lei, "A lightweight SE-YOLOv3 network for multi-scale object detection in remote sensing imagery," International Journal of Pattern Recognition and Artificial Intelligence, vol. 35, no. 13, pp. 2150037, 2021.
21. T. B. Do, H. H. Nguyen, T. T. N. Nguyen, H. Vu, T. T. H. Tran et al., "Plant identification using score-based fusion of multi-organ images," in 2017 9th Int. Conf. on Knowledge and Systems Engineering (KSE), Hue, pp. 191–196, 2017.
22. S. H. S. Moosavi and V. K. Bardsiri, "Poor and rich optimization algorithm: A new human-based and multi populations algorithm," Engineering Applications of Artificial Intelligence, vol. 86, pp. 165–181, 2019.
23. K. Thirumoorthy and K. Muneeswaran, "Feature selection using hybrid poor and rich optimization algorithm for text classification," Pattern Recognition Letters, vol. 147, pp. 63–70, 2021.
24. K. Sudars, J. Jasko, I. Namatevs, L. Ozola and N. Badaukis, "Dataset of annotated food crops and weed images for robotic computer vision control," Data in Brief, vol. 31, pp. 105833, 2020.
25. R. Kamath, M. Balachandra and S. Prabhu, "Paddy crop and weed discrimination: A multiple classifier system approach," International Journal of Agronomy, vol. 2020, pp. 1–14, 2020.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.