Open Access
ARTICLE
A Deep Learning-Based Novel Approach for Weed Growth Estimation
1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
2 Department of Management Information Systems King Khalid University, Guraiger, Abha, 62529, Saudi Arabia
3 Department of Civil Engineering, College of Engineering, Taif University, Taif, 21944, Saudi Arabia
4 Computer Science and Engineering, Lovely Professional University, Punjab, 144411, India
* Corresponding Author: Aman Singh. Email:
Intelligent Automation & Soft Computing 2022, 31(2), 1157-1173. https://doi.org/10.32604/iasc.2022.020174
Received 12 May 2021; Accepted 07 July 2021; Issue published 22 September 2021
Abstract
Automation of agricultural food production is growing in popularity in scientific communities and industry. A central goal of such automation is to identify and detect weeds in the crop. Weed interference during crop establishment is a serious problem for wheat in North India. Soil nutrients are essential for crop production, and weeds compete with the target crop for light, water, nutrients, and space. This paper assesses the growth rate of weeds driven by the macronutrients (nitrogen, phosphorus and potassium) absorbed from different soils (fertile, clay and loamy) in rabi crop fields. The weed image data were collected from three different places in Madhya Pradesh, India, across 10 rabi crops (maize, lucerne, cumin, coriander, wheat, fenugreek, gram, onion, mustard and tomato) and 10 weed species (Corchorus capsularis, Cynodon dactylon, Chloris barbata, Amaranthaceae, Argemone mexicana, Carthamus oxyacantha, Capsella bursa-pastoris, Chenopodium album, Dactyloctenium aegyptium and Convolvulus arvensis). An Intel RealSense LiDAR camera L515 and a Canon EOS 850D DSLR with an 18-55 IS STM lens were mounted over the wheat crop in a 10 × 10 square-foot plot, and 3670 weed images were collected. Of these, 2936 images were used for training and 734 for testing and validation. The EfficientNet-B7 and Inception V4 architectures were used to train the model, yielding accuracies of 97% and 94% respectively. Image classification with Inception V4 produced less accurate results than EfficientNet-B7.
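The classification pipeline described above can be sketched as follows. This is a minimal, hedged illustration of fine-tuning an EfficientNet-B7 backbone for the paper's 10 weed classes using TensorFlow/Keras; the input resolution, optimizer, and the absence of pretrained weights are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): a 10-class weed classifier
# built on an EfficientNet-B7 backbone with a fresh softmax head.
import tensorflow as tf

NUM_CLASSES = 10        # 10 weed species in the dataset
IMG_SIZE = (224, 224)   # assumed input resolution (illustrative)

def build_model(num_classes: int = NUM_CLASSES) -> tf.keras.Model:
    """EfficientNet-B7 feature extractor + global pooling + softmax head."""
    base = tf.keras.applications.EfficientNetB7(
        include_top=False,        # drop the ImageNet classification head
        weights=None,             # no pretrained weights in this sketch
        input_shape=IMG_SIZE + (3,),
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()
```

Training would then call `model.fit` on the 2936 training images, holding out the remaining 734 for testing and validation as described in the abstract; swapping the backbone for `tf.keras.applications.InceptionV3` (Keras ships V3, not V4) would give a comparable baseline.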
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.