Establishing a system for measuring plant health and bacterial infection is critical in agriculture. Previously, farmers inspected plants visually and relied on their experience for analysis, which could be incorrect. With a drone, plant inspection can determine how much green and near-infrared light each plant reflects, using both visible-light and infrared imaging. The goal of this study was to create algorithms for assessing bacterial infections in rice using images from unmanned aerial vehicles (UAVs) with an ensemble classification technique. Convolutional neural networks were applied to UAV images to detect the health and bacterial infection of the rice in each photograph. The project entailed using pictures to identify bacterial illnesses in rice; the shape and distinct characteristics of each infection were observed, and rice symptoms were defined using machine learning and image-processing techniques. A two-step convolutional neural network based on UAV images was used in this study to determine whether an area is affected by bacteria. The proposed algorithms can classify the types of rice diseases with an accuracy rate of 89.84 percent.
Thailand's target industries are ten industries that have the potential to be the country's economic growth engine and boost competitiveness. These ten industries can be classified into two groups. The first includes automobiles, smart electronics, tourism, agriculture, biotechnology, and future food. The second group, an expansion of the first, includes industrial robots, aviation and logistics, the biochemical industry, the digital industry, and the comprehensive medical industry (medical hub), all of which are new areas in which Thailand has competitive potential and in which investors are interested.
Agriculture and biotechnology are two other target industries that affect the national GDP. Drones are another intriguing concept for agricultural use: a highly accurate agricultural tool that is gaining popularity in the era of Thailand 4.0 and Agriculture 4.0, which focus on technology and innovation to increase agricultural product efficiency, reduce production costs, and save time and human labor in the face of a declining agricultural workforce. Agricultural drones can also precisely regulate the production quality of agricultural products, so farmers who adopt such technology can minimize production costs. Furthermore, if the government supports the use of technology such as drones in agriculture, and investors continue to transfer technological knowledge, including trends toward more prevalent and less expensive technology products, the Thai agricultural sector will be able to advance to the next level of development. The shift into a new era, such as the knowledge-based and digital economy, has become increasingly significant as research and development employing technology and innovation has spread across numerous business sectors. Unmanned aerial vehicles (UAVs), also known as drones, are becoming more prevalent in numerous industries. In agriculture, drones boost production efficiency, reduce expenses, and save time and labor by spraying pesticides, fertilizers, and other chemicals; furthermore, the output quality can be properly regulated. Agricultural drones are becoming popular in the Agriculture 4.0 period as an exciting alternative in modern agriculture that farmers and entrepreneurs can adopt to evolve into professional (smart) farmers.
Drones are unmanned aircraft that may be used for a variety of tasks, including gas pipeline exploration, meteorological data gathering, traffic monitoring, transportation, taking photographs of scenes or events from a high angle, and assessing agricultural and irrigation areas. Multirotor UAVs, fixed-wing drones, and hybrid drones are the three categories of drone operation. The most prevalent type of UAV is the multirotor. It is agile and easy to maneuver because it has four, six, or eight propellers and does not require a runway to fly; however, its flying speed is lower than that of the other drone types. Fixed-wing drones behave like airplanes: they require a runway but fly longer and faster, making them appropriate for surveying wide areas, and they can carry large payloads over long distances while using minimal energy. The hybrid model can fly faster, farther, and more efficiently than the multirotor without using a runway; however, this type of drone is uncommon in the global market.
Drones can be used to investigate broad agricultural areas, including soil analysis, seed planting, and harvest-time prediction, with the ability to develop three-dimensional maps so that entrepreneurs can better examine and plan their planting. Agricultural drones are becoming more important in precision farming, such as irrigation, hormone application, and foliar fertilization, overcoming the limitation that tall plants prevent farmers from watering and fertilizing thoroughly at the same time; as a result, drones save more time than manual work. Furthermore, for crops that are not too tall, such as rice, farmers no longer have to step into the field and damage the plants, which is an advantage. Imaging, analysis, and diagnosis of plant diseases allow farmers to treat those diseases directly. Using drones for agriculture also decreases farmers' exposure to pesticides, which they may otherwise inhale during spraying. Thailand is capable of producing agricultural drones: the private sector manufactures them, and educational institutions collaborate with private enterprises on research and development. Drones are used to sow fertilizers, seeds, and pesticides in rice, cassava, corn, sugarcane, and pineapple, among other crops, and large private corporations use drones in agriculture to sow seeds, apply fertilizers, and spray chemicals.
Establishing a system for measuring plant health and bacterial infection is critical in agriculture. Previously, farmers themselves inspected plants visually and relied on their experience for analysis, which could be incorrect. A drone equipped with visible-light and near-infrared (NIR) cameras can determine how much green and NIR light each plant reflects. These data can be used to create multispectral photographs, so changes in the crops, including their health, can be recorded across the entire plantation in a timely manner. Farmers can also use this information to inspect plants and deliver correct and timely treatment when diseases are identified. In the event of crop loss, these capabilities improve the ability to overcome plant illnesses, and by limiting losses, farmers can handle insurance claims more efficiently. As a result, the researchers are aware of this issue and are interested in solving it by combining visible-light images with near-infrared (NIR) images captured by cameras mounted on a drone; image-processing tools can then be used to analyze plant diseases. This study and the resulting knowledge are based on specific datasets from Thailand as well as a worldwide dataset.
The remainder of this paper is organized as follows. The theory of CNNs is introduced in
Bacterial leaf streak disease is found mainly in rain-fed and irrigated fields in the central, northeastern, and southern regions. The causative agent is
Brown spot disease, caused by Bipolaris oryzae (Helminthosporium oryzae Breda de Haan), is found mainly in rain-fed and irrigated fields in the central, northern, western, northeastern, and southern regions [
Bacterial leaf blight disease, found in rain-fed and irrigated fields in the northern, northeastern, and southern regions, is caused by
Developing machines that can learn, anticipate, or create knowledge is an important issue in artificial intelligence. Techniques such as deep learning, artificial neural networks (ANNs), and support vector machines can be used to teach machines how to learn and build knowledge. The ANN concept is modeled on the neural network of the human nervous system. An ANN architecture is made up of three kinds of layers: an input layer, hidden layers, and an output layer. ANNs can make decisions on difficult problems that humans are unable to solve. Currently, deep learning is one of the most widely used techniques [
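To make the three-layer architecture concrete, the following is a minimal sketch (not the paper's model) of a forward pass through an input layer, one hidden layer with a ReLU activation, and an output layer; all weights and sizes here are illustrative assumptions.

```python
import numpy as np

def relu(x):
    # ReLU activation, as used by the network in this study
    return np.maximum(0.0, x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input layer -> hidden layer -> output layer."""
    h = relu(x @ w_hidden + b_hidden)   # hidden-layer activations
    return h @ w_out + b_out            # output-layer scores

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))             # one sample with 4 input features
w_h = rng.normal(size=(4, 8)); b_h = np.zeros(8)   # 8 hidden units
w_o = rng.normal(size=(8, 3)); b_o = np.zeros(3)   # 3 output classes
scores = forward(x, w_h, b_h, w_o, b_o)
print(scores.shape)  # (1, 3): one score per output class
```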
Image indexing is the process of entering image properties, such as the color histogram, into a database in order to create and store a unique vector for each image. The features of each image are expressed as n numeric values (depending on the method's requirements), i.e., an n-dimensional vector that serves as the image's specific descriptor. These unique vectors act as an image index, with points in n-dimensional space serving as their representation.
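As an illustration of such an index, the sketch below (an assumption, not the paper's implementation) builds a color histogram per channel and concatenates the results into a single n-dimensional feature vector:

```python
import numpy as np

def color_histogram_index(image, bins=8):
    """Compute a per-channel color histogram and concatenate the
    channels into one n-dimensional index vector for the image."""
    features = []
    for channel in range(image.shape[2]):          # e.g. R, G, B
        hist, _ = np.histogram(image[:, :, channel],
                               bins=bins, range=(0, 256))
        features.append(hist / hist.sum())         # normalize per channel
    return np.concatenate(features)                # 3 * bins dimensions

img = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3))
vec = color_histogram_index(img)
print(vec.shape)  # (24,): a 24-dimensional index vector
```

Similar images then map to nearby points in this 24-dimensional space, which is what makes the vector usable as an index.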
Feature extraction is the foundation of content-based picture retrieval [
Nagano et al. [
Zhang et al. [
Zhang et al. [
Abdulridha et al. [
Gao et al. [
Pineda et al. [
Wang et al. [
Image retrieval techniques formerly depended on retrieving image file names or using words that describe images (image captions), which are not a real aspect of the image itself. As a result, this approach is incompatible with the large databases that already exist: each user's description of a photo differs and may not always correspond, so defining each image by a few words becomes more difficult as the database grows. Furthermore, certain image databases, such as geographic image databases, may contain a significant number of images from the same group, such as many ground shots or varied tree images. Working with vast databases in this way must also rely on humans to classify the images, which takes a long time. Image features, by contrast, are the characteristics extracted by an image-processing algorithm; the image's three basic elements are color, shape, and texture. [
Object detection using the You Only Look Once (YOLO) method can swiftly forecast the type of rice disease. However, because the output may be faulty, it cannot determine which rice disease is present in a single procedure. The model has therefore been divided into two parts: the first recognizes the rice in the image, and the second predicts the type of rice disease from the model's highest probability. YOLOv3 small was used in this project because it has the fastest processing time. The YOLOv3 small structure from the Keras perspective is depicted in
Structure: the CNN is composed of 13 convolutional layers (stride 1, padding 1), 6 max-pooling layers (stride 2), 2 route layers and 1 upsample layer, and 2 output layers. The learning rate is 0.001, the batch size is 64, and the activation function is ReLU.
When users upload an image to the model, it is routed to Model 1 for processing. The outputs are images with multiple bounding boxes of predicted rice and a probability score that varies according to the image. Model 1 outputs are not returned to the user; instead, they are transferred to Model 2 for processing in order to determine the type of rice disease in each bounding box.
Users can upload photographs of any size or resolution, and the application resizes them to 416 × 416 pixels (the YOLO input format). Our approach used two models, object detection and image classification, specified as follows. Model 1 detects the objects: in this stage, the rice is found, and the cropped photographs, with their bounding-box sizes, are then submitted to Model 2. Model 2 classifies the images: using the highest likelihood score, one of the rice disease types is identified. The user is then given all of the outputs (the predicted photos and the rice disease type).
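The two-stage flow above can be sketched as follows. This is a hypothetical skeleton: `detect_rice` and `classify_disease` are stub placeholders standing in for the trained Model 1 and Model 2, and the boxes and scores are invented for illustration.

```python
# Disease classes used in this study
DISEASES = ["bacterial leaf streak", "brown spot", "bacterial leaf blight"]

def detect_rice(image):
    """Model 1 stub: return bounding boxes (x, y, w, h) of detected rice."""
    return [(10, 20, 100, 80), (150, 40, 90, 90)]   # illustrative boxes

def classify_disease(crop):
    """Model 2 stub: score each disease and keep the highest likelihood."""
    scores = {"bacterial leaf streak": 0.7,          # illustrative scores
              "brown spot": 0.2,
              "bacterial leaf blight": 0.1}
    return max(scores, key=scores.get)

def predict(image):
    """Full pipeline: detect rice, crop each box, classify its disease."""
    results = []
    for box in detect_rice(image):                   # Model 1: locate rice
        crop = ("crop", box)                         # crop the box (stub)
        results.append((box, classify_disease(crop)))  # Model 2: classify
    return results

print(predict("field.jpg"))
```

The design point is that Model 1's output never reaches the user directly; every bounding box is passed through Model 2 before the combined result is returned.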
We also used image augmentation methods to reduce the model's overfitting, using the following steps. First, we collect the sample image, as shown in
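A minimal augmentation sketch in the spirit of the steps above (the specific transforms here, flips and 90-degree rotations, are assumptions, not necessarily the exact set used in this study):

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of a sample image to enlarge
    the training set and reduce overfitting."""
    return [
        image,                   # original
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # rotate 90 degrees
        np.rot90(image, k=2),    # rotate 180 degrees
    ]

img = np.arange(12).reshape(3, 4)   # toy 3x4 "image"
aug = augment(img)
print(len(aug))  # 5 variants per input image
```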
The datasets in this study were obtained from a paddy in Amol, Mazandaran Province, Iran, at longitude 52.453171, latitude 36.601498, and an altitude of 12 m [
Thermal imaging was performed using a T8 thermal imaging camera, which produced images with a resolution of 385 × 594 pixels. For visible imaging, a Sony IMX234 Exmor RS sensor with a 1/2.6-inch detector was used. The developed imaging system takes visible and thermal images of the farm surface virtually simultaneously. All photographs were captured from a height of 120 cm relative to the water level on the ground and at a 135-degree angle, with the centers of the two cameras in the imaging system positioned 5 cm apart on a mount. An example of the thermal-image (IRT) dataset is shown in
To train the model, we divided the datasets into two sections, one to train and one to validate the model. The ratio for splitting the dataset was 80:20 (train:test), and the resulting lists were stored in a text file. For each model, we used the following data:
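The 80:20 split can be sketched as below; the file names and the fixed seed are illustrative assumptions (the total of 1920 images matches the 792 + 198 + 930 images listed in the dataset table).

```python
import random

def split_dataset(paths, train_ratio=0.8, seed=42):
    """Shuffle image paths and split them 80:20 into train/test lists;
    each list can then be written to a text file for the training tool."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)   # deterministic shuffle
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

images = [f"img_{i:04d}.jpg" for i in range(1920)]  # 792 + 198 + 930
train, test = split_dataset(images)
print(len(train), len(test))  # 1536 384
```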
Disease type | Number of images | Source | Example image
---|---|---|---
Bacterial leaf streak disease | 792 | Rice Department, Ministry of Agriculture and Cooperatives | 
Brown spot disease | 198 | Rice Department, Ministry of Agriculture and Cooperatives | 
Bacterial leaf blight disease | 930 | Rice Department, Ministry of Agriculture and Cooperatives | 
The outputs after running this command are as follows: the true positives (TP), false positives (FP), and false negatives (FN) for each class. In this research, we used the precision value, which measures the fraction of predictions that are correct, TP/(TP + FP). Next, recall is calculated: recall measures the number of correct predictions for each class divided by the total number of ground-truth instances of that class, TP/(TP + FN). To measure the efficiency of the model, a single value representing its performance is needed; normally, the average precision is used. Average precision is the area under the precision-recall curve and lies between 0 and 1. A value of 1 represents perfect performance, with the resulting curve resembling a square in which precision and recall are both 1.
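These metrics can be computed as follows; the cutoff values below are taken directly from the precision/delta-recall table reported in this section, and summing precision × Δrecall reproduces the stated average precision of 0.896.

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Recall = TP / (TP + FN)."""
    return tp / (tp + fn)

def average_precision(precisions, delta_recalls):
    """Approximate the area under the precision-recall curve as the
    sum of precision * (change in recall) at each cutoff."""
    return sum(p * dr for p, dr in zip(precisions, delta_recalls))

# Cutoff values from the table in this section
p  = [1, 1, 0.66, 0.75, 0.6, 0.66, 0.57, 0.5, 0.44, 0.5]
dr = [0.2, 0.2, 0, 0.2, 0, 0.2, 0.2, 0, 0, 0.2]
print(round(average_precision(p, dr), 3))  # 0.896
```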
The classification results obtained by applying the proposed algorithms to categorize the disease type in the rice images were used to demonstrate the experimental findings, as shown in
# Cutoff | Precision | Delta recall | Precision × Delta recall
---|---|---|---
1 | 1 | 0.2 | 0.2
2 | 1 | 0.2 | 0.2
3 | 0.66 | 0 | 0
4 | 0.75 | 0.2 | 0.15
5 | 0.6 | 0 | 0
6 | 0.66 | 0.2 | 0.132
7 | 0.57 | 0.2 | 0.114
8 | 0.5 | 0 | 0
9 | 0.44 | 0 | 0
10 | 0.5 | 0.2 | 0.1
Average precision (sum) | | | 0.896
Class name | TP | FP | Precision
---|---|---|---
Rice | 1540 | 180 | 89.535%
Other | 653 | 70 | 90.318%
Multiclass precision | | | 89.828%
Class name | TP | FP | Precision
---|---|---|---
Bacterial leaf streak disease | 1540 | 180 | 89.535%
Brown spot disease | 653 | 70 | 90.318%
Bacterial leaf blight disease | 1079 | 112 | 90.596%
Multiclass precision | | | 89.828%
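As a sanity check on the per-class results above, the precision column follows directly from the TP/FP counts via TP / (TP + FP); the snippet below recomputes it for each disease class.

```python
def class_precision(tp, fp):
    """Per-class precision as a percentage: TP / (TP + FP) * 100."""
    return 100.0 * tp / (tp + fp)

# TP/FP counts from the per-class results table in this section
classes = {
    "Bacterial leaf streak disease": (1540, 180),
    "Brown spot disease": (653, 70),
    "Bacterial leaf blight disease": (1079, 112),
}
for name, (tp, fp) in classes.items():
    print(f"{name}: {class_precision(tp, fp):.3f}%")
# Bacterial leaf streak disease: 89.535%
# Brown spot disease: 90.318%
# Bacterial leaf blight disease: 90.596%
```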
Classification technique | Precision |
---|---|
AlexNet | 86.58% |
VGGNet | 84.52% |
Simple CNN | 87.52% |
Ensemble classification (proposed) | 89.89%
In
As shown in
The goal of this study was to create algorithms for assessing bacterial infections in rice using images from unmanned aerial vehicles (UAVs) with an ensemble classification technique. Object detection using the You Only Look Once (YOLO) method can forecast the type of rice disease; however, because the output may be faulty, it cannot determine which rice disease is present in a single procedure. The model was therefore divided into two parts: the first recognizes the rice in the image, and the second predicts the type of rice disease from the model's highest probability. Convolutional neural networks were applied to UAV images to detect the health and bacterial infection of the rice in each photograph. The project entailed using pictures to identify bacterial illnesses in rice; the shape and distinct characteristics of each infection were observed, and rice symptoms were defined using machine learning and image-processing techniques. A two-step convolutional neural network based on UAV images was used in this study to determine whether an area is affected by bacteria. The proposed algorithms can classify the types of rice diseases with an accuracy rate of 89.84 percent. In the future, the proposed ensemble classification technique will be applied to multiclass classification to improve the detection accuracy rate with optimized feature-selection strategies.