Open Access

ARTICLE

Ghost-RetinaNet: Fast Shadow Detection Method for Photovoltaic Panels Based on Improved RetinaNet

by Jun Wu, Penghui Fan, Yingxin Sun, Weifeng Gui*

School of Physics and Electronic Information Engineering, Henan Polytechnic University, Jiaozuo, 454000, China

* Corresponding Author: Weifeng Gui.

(This article belongs to the Special Issue: AI-Driven Intelligent Sensor Networks: Key Enabling Theories, Architectures, Modeling, and Techniques)

Computer Modeling in Engineering & Sciences 2023, 134(2), 1305-1321. https://doi.org/10.32604/cmes.2022.020919

Abstract

Based on the RetinaNet artificial intelligence algorithm, this paper proposes Ghost-RetinaNet, a fast shadow detection method for photovoltaic panels, to address the extreme target density, large overlap, high cost and poor real-time performance of existing photovoltaic panel shadow detection. Firstly, a Ghost CSP module based on Cross Stage Partial (CSP) is adopted in the feature extraction network to improve accuracy and detection speed. On the extracted features, a recursive feature fusion structure is proposed to enhance the feature information of all objects. We introduce the SiLU activation function and CIoU Loss to increase the learning and generalization ability of the network and to improve the positioning accuracy of bounding box regression, respectively. Finally, to achieve fast detection, the Ghost strategy is used to reduce the size of the model. Experimental results show that the mean average precision (mAP) of the algorithm reaches 97.17%, the model size is only 8.75 MB and the detection speed reaches 50.8 frames per second (FPS), which meets the speed and accuracy requirements of real-time photovoltaic panel detection in practical environments. The algorithm also provides new research methods and ideas for fault detection in photovoltaic power generation systems.

Keywords


1  Introduction

Nowadays, electricity is extremely important for technological and economic development, while the disadvantages of fossil energy are becoming more and more prominent [1]. New energy has therefore become a hot spot of worldwide concern, among which solar energy has received wide attention and application [2]. Due to the complex and changeable installation environment of photovoltaic power generation systems, shading by trees, clouds, poles and buildings is almost inevitable. Photovoltaic panel shadow causes uneven light intensity and the hot spot effect, which eventually reduce the power generation efficiency and may even damage photovoltaic elements [3]. Taking a Shanghai Metro new energy base as an example, the power loss due to shadow occlusion is 31,000 kWh in one year, the proportion of shadow clusters is 19.08%, and the average daily loss is 1069.38 kWh. Manual observation and empirical prediction have poor real-time performance and accuracy, and cannot meet the actual needs of real-time shadow monitoring and fault detection for photovoltaic panels.

The physical characteristics of photovoltaic modules obtained by infrared imaging, unmanned aerial vehicle electroluminescence imaging, ultrasound and other physical means can be used to detect faults in photovoltaic modules. Cubukcu et al. [4] identified significant bright spots in infrared thermography images to realize rapid fault detection. In [5], the authors analyzed the temperature gradient of defective photovoltaic cells to detect and classify solar panel overheating faults. To detect faults in photovoltaic arrays, the authors of [6] used an unmanned aerial system to perform visual inspection and infrared thermography, covering thousands of photovoltaic panels in a short time. However, these detection methods are not universal because of high cost, insufficient accuracy, high dependence on equipment and limited support for real-time monitoring.

Alternatively, fault detection can be achieved by estimating theoretical voltage, current and power values and comparing them with the actual measured values, using either these quantities or the differences between them as data. Hariharan et al. [7] calculated the mutation of power and irradiance at the direct-current side of the photovoltaic array to detect shadow and mismatch of photovoltaic modules. The authors of [8] obtained the fault boundary function by fitting a cubic polynomial with the power and voltage losses of the photovoltaic array as input, and then adopted fuzzy reasoning to improve the fault detection rate. A fault detection method based on the K-nearest neighbor algorithm was proposed in [9], which took the difference between the theoretical and actual values as input and determined the threshold adaptively with the exponentially weighted moving average (EWMA) algorithm. Dhimish et al. [10] analyzed the relationship between the theoretical and actual values of the photovoltaic array in different states by numerical statistics to obtain fault thresholds for further fault detection and classification. In [11], the authors proposed a global maximum power point tracking method based on the slope trend of the power-voltage characteristic curve, including a shadow detection and tracking algorithm. Harrou et al. [12] used the differences between the measured and predicted values of photovoltaic array current, voltage and power at the same temperature and irradiance, and then applied these residuals and multivariate EWMA monitoring charts to detect and identify fault types. To sum up, the methods mentioned above depend heavily on the accuracy of the simulation model; as the photovoltaic array ages, the simulation model will produce misjudgments.

Using the ample information of the photovoltaic array current-voltage (I-V) curve and taking these data as input can accurately reflect the characteristics of faults in various situations. Li et al. [13] applied the inflection points on the I-V curve to detect partial shadows, and differentiated the current in the fault string by setting different voltages, so as to achieve effective fault location. The authors of [14] connected each battery pack to the photovoltaic equalizer inductor, briefly measured the peak current of each pack and conducted cross comparison to find the shadow. Bressan et al. [15] normalized the difference between the I-V curves of photovoltaic modules under normal and shadow conditions, and then obtained the functional relationship between the shadow area and the photovoltaic voltage by calculating its first derivative, so as to obtain the shaded area of the photovoltaic panel. In [16], the authors determined the stride and inflection points by detecting the concavity and convexity of the I-V curve, and used an exponential function to amplify the fault characteristic values; finally, the fault was decoupled according to the characteristics and slope of the stride to detect photovoltaic faults such as local shadows, hot spots and cracks. Fadhel et al. [17] carried out experiments on different faults to obtain a large amount of I-V data, and then used principal component analysis (PCA) to detect and classify photovoltaic shadow faults. The authors of [18] input the parameters of the I-V curve into a fuzzy diagnosis algorithm for fault detection, exploiting the fact that the I-V curve of a photovoltaic module deviates under actual fault conditions. From the above literature, we can see that these methods can cause power loss when the inverter is out of operation, and the I-V curve updates slowly, so they cannot realize real-time detection.

Under different fault conditions, the real-time output voltage and current of the photovoltaic array differ, so fault diagnosis can be carried out by analyzing the variation of the time-series waveforms. The authors of [19] deployed voltage sensors according to the parity of the number of photovoltaic array strings and formulated positioning rules for each fault to achieve accurate fault location. Abd El-Ghany et al. [20] installed a diode in each photovoltaic string and used the rate of change of instantaneous voltage and current to detect faults in the photovoltaic system. In [21], the authors normalized the sequence voltage and current of the photovoltaic array, obtained the sequence power waveform by numerical calculation, and finally used a semi-supervised ladder network for fault diagnosis. Using wavelet packet decomposition, Kumar et al. [22] decomposed the voltage of the photovoltaic array into specific frequency ranges, extracted the characteristics of the corresponding faults, and finally adopted a threshold method for photovoltaic fault diagnosis. The authors of [23,24] first extracted the fault characteristic values of the photovoltaic system by multi-resolution decomposition, then detected and identified the faults with a support vector machine and a fuzzy inference system. The authors of [25] adopted a time-series sliding window (TSSW) and calculated the local outlier factor (LOF) of the current point in the TSSW; when multiple successive LOFs exceeded the threshold, the photovoltaic string was judged as faulty. These methods cannot detect aging faults of the system, and the changes of sequence voltage and current are complex, so accurate detection is difficult.

Artificial intelligence algorithms can be combined with the voltage and current data measured by sensors in the photovoltaic array, or with images, to establish the mapping between fault characteristics and fault locations and types. Karakose et al. [26] identified the target through background difference, and then detected the shaded area by edge detection and a fuzzy logic decision system. The authors of [27] took voltage, current, irradiance and temperature as the input of a residual network, extracted features through multiple convolutional and pooling layers, and used the softmax function to identify common faults in the photovoltaic system. In [28], the authors processed the photovoltaic array dataset with PCA to generate a transformation matrix, and then fed the data into a support vector machine for fault classification. With sufficient data, the authors of [29] adopted the discriminant common vector (DCV) method to detect and classify photovoltaic panel faults. The authors of [30] introduced image processing based on shadow analysis to obtain the shadows and the sizes of the shaded areas on the photovoltaic array, and then used the clonal selection method for optimization. Espinosa et al. [31] proposed an automatic classification method for physical faults in photovoltaic power stations based on a convolutional neural network for semantic segmentation and classification of RGB images. The authors of [32] took the light intensity and temperature of photovoltaic panels as input and power as output, then used machine learning methods to detect and classify faults. Consequently, intelligent detection methods depend mainly on the accuracy of the algorithm, and existing ones still have certain limitations.

Given the complex and changeable installation environment of photovoltaic systems, when looking for a solution to judge the shadow state of a photovoltaic system in real time, we note that deep learning has developed rapidly in object detection in recent years, and its detection accuracy and speed have been proven in practice; many traditional detection methods have been replaced by it. We therefore choose deep learning to detect photovoltaic shadow. At present, deep learning object detection algorithms are mainly divided into two types. One is the one-stage object detection algorithm, an end-to-end detection algorithm based on regression with fast detection speed; representative algorithms include the YOLO series [33–36], SSD [37], RetinaNet [38], EfficientDet [39] and M2Det [40]. The other is the two-stage object detection algorithm, which first generates region proposals that may contain targets, and then uses a convolutional neural network to predict the position and category of the target in each region proposal. The detection accuracy of this type of algorithm is excellent, but the network structure is complex and the speed is slow; representative algorithms include the R-CNN series [41–43] and SPP-Net [44].

In order to complete the shadow detection of photovoltaic panels, we improve the RetinaNet algorithm. As a one-stage detection algorithm, RetinaNet is more accurate than many two-stage algorithms. However, RetinaNet cannot meet the real-time requirement of the photovoltaic panel shadow detection task: the model is too large to be applied in the actual scene, and its detection of photovoltaic panel shadow targets with large target density and overlap is poor. Therefore, we propose several improvements.

•   Compared with the feature extraction network of the original algorithm, we propose a Ghost CSP DenseNet feature extraction network based on the Cross Stage Partial (CSP) structure and the Ghost module, which greatly reduces the model size and improves the detection speed.

•   In feature fusion, we adopt the Ghost module and a recursive feature fusion mechanism, and adjust the number of original feature layers to achieve a three-scale network output, which improves the detection speed and the feature expression ability of the network.

•   The activation function and regression loss function adopt SiLU and CIoU Loss, respectively. SiLU inherits the speed of ReLU while improving the learning capability of the network. Compared with smooth L1 Loss, CIoU Loss improves the prediction accuracy and convergence speed of the network.

2  Related Work

In order to be suitable for photovoltaic panel shadow detection in the real environment, the following improvements and optimizations are made to the RetinaNet algorithm in this paper. Firstly, to improve the detection speed and accuracy, the feature extraction network is redesigned with the CSP structure and the Ghost module, and also includes Focus and SPP structures. Secondly, the feature fusion structure uses the Ghost module and a recursive feature fusion mechanism to replace the top-down feature fusion network of the original algorithm, and the network parameters are adjusted to enhance the expression ability of all object features. Then, the ReLU activation function is replaced by the SiLU activation function to improve the learning ability and robustness of the network. Finally, CIoU Loss replaces smooth L1 loss, which improves the prediction accuracy and convergence speed. The result is a lightweight shadow detection model for photovoltaic panels. The structure of the proposed algorithm is shown in Fig. 1.


Figure 1: Ghost-RetinaNet network structure. (a) Ghost CSP DenseNet; (b) Recursive-FPN; (c) Class+box subnet

2.1 Network Lightweighting

The Ghost module [45] reduces a large amount of redundant calculation in the feature extraction process: it obtains Ghost feature maps through simple linear operations and maps them to the output feature maps, which greatly improves the inference speed of the model.

A feature map of size h×w×c is convolved to obtain a feature map of size h′×w′×n. The standard convolution operation is shown in formula (1); the bias term is omitted in the following formulas:

Y = X ∗ f   (1)

where X ∈ R^(h×w×c) is the input feature map with c channels, Y ∈ R^(h′×w′×n) is the output feature map with n channels, and f ∈ R^(c×k×k×n) is the convolution kernel; h and w are the height and width of the input feature map, and h′ and w′ are the height and width of the output feature map. The FLOPs of this process is n×h′×w′×c×k×k.

Firstly, the Ghost module uses standard convolution to generate m intrinsic feature maps, and then produces Ghost feature maps from them by simple linear transformations. The final n feature maps are obtained by concatenating the m intrinsic feature maps and their Ghost feature maps. The Ghost operation is shown in formulas (2) and (3).

Y′ = X ∗ f′   (2)

y_ij = Φ_(i,j)(y′_i),  i = 1, …, m,  j = 1, …, s   (3)

where f′ ∈ R^(c×k×k×m) is the convolution kernel, Y′ ∈ R^(h′×w′×m) is the intrinsic feature map with m channels, y′_i is the ith intrinsic feature map of Y′, Φ_(i,j) is the jth linear operation and y_ij is the jth Ghost feature map generated from y′_i. As the last operation, Φ_(i,s) is the identity mapping that retains the original feature map. Compared with standard convolution, the Ghost operation greatly accelerates the inference speed of the network. In this paper, the kernel size of the depthwise convolution in the Ghost module is 3 × 3.
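For illustration, the following is a minimal PyTorch sketch of a Ghost module in the spirit of formulas (2) and (3) and of [45]: a primary convolution produces the intrinsic feature maps and a cheap 3 × 3 depthwise convolution generates the Ghost maps, which are then concatenated. The class name, the ratio parameter and the BatchNorm/SiLU wrappers are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: a primary convolution produces m intrinsic
    feature maps, a cheap 3x3 depthwise convolution generates the 'ghost'
    maps, and the two sets are concatenated to reach the desired n channels."""
    def __init__(self, in_ch, out_ch, kernel_size=1, ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_ch = out_ch
        init_ch = out_ch // ratio              # m intrinsic feature maps
        ghost_ch = out_ch - init_ch            # remaining ghost feature maps

        self.primary_conv = nn.Sequential(     # standard convolution, formula (2)
            nn.Conv2d(in_ch, init_ch, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.SiLU(inplace=True),
        )
        self.cheap_op = nn.Sequential(         # cheap linear operation Phi, formula (3)
            nn.Conv2d(init_ch, ghost_ch, dw_size, 1,
                      dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary_conv(x)               # intrinsic maps Y'
        ghost = self.cheap_op(y)               # ghost maps
        return torch.cat([y, ghost], dim=1)[:, :self.out_ch]
```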

2.2 GhostCSP DenseNet

Although the residual network can solve the gradient vanishing and gradient explosion problems brought about by deepening the network, its heavy computation, extremely large number of parameters and relatively small gradients lead to slow detection speed and a large model size, which makes practical application difficult. Therefore, we propose a new feature extraction network based on the CSP structure [46] and the Ghost module, which strengthens the learning ability of the convolutional neural network, sharply reduces the amount of calculation and cuts down memory consumption.

As shown in Fig. 2, CSP DenseNet consists of a Partial Dense Block and a Partial Transition Layer. Each Partial Dense Block is composed of k Dense Layers. The ith Dense Layer is obtained by convolving the input and concatenating the result with the input. The advantages of this structure are increasing the gradient paths, balancing the computation of each layer and reducing memory consumption.


Figure 2: Cross Stage Partial DenseNet structure

The feedforward and weight update formulas of CSP DenseNet are shown in formulas (4) and (5).

x_k = w_k ∗ [x_0, x_1, …, x_(k−1)]
x_T = w_T ∗ [x_0, x_1, …, x_k]
x_U = w_U ∗ [x_0, x_T]   (4)

where ∗ denotes the convolution operation, x_i is the output of the ith Dense Layer, [x_0, x_1, …, x_k] is the concatenation of x_0, x_1, …, x_k, and w_i is the corresponding convolution weight.

w_k′ = f(w_k, g_0, g_1, …, g_(k−1))
w_T′ = f(w_T, g_0, g_1, …, g_k)
w_U′ = f(w_U, g_0, g_T)   (5)

where f is the weight update function and g_i is the gradient propagated back to the ith Dense Layer.

The specific structure of the backbone network is as follows. The Focus module at the bottom performs downsampling without information loss. Next is the Layer1 structure, which starts with a GhostBottleneck_2 operation and then adopts the GhostCSP-3 module. The feature maps required for P3 and P4 fusion are then obtained through the Layer2 and Layer3 operations, which differ from Layer1 only in the number of GhostBottleneck_1 blocks used in GhostCSP. Finally, the Layer4 structure performs a GhostBottleneck_2 operation first, followed by SPP, which enriches the P5 feature information and increases the receptive field to obtain the required P5 feature map. In addition, the number of convolution channels in Focus is 16, and in Layer1–4 it is 24, 40, 80 and 160, with exp values of 36, 90, 240 and 480, respectively.
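As a rough reference, the sketch below shows how the lossless downsampling of the Focus block is commonly implemented (for example in YOLOv5-style backbones), together with the stage layout read from the paragraph above. The padding, normalization and activation choices are assumptions, and GhostCSP/GhostBottleneck are only named here, not implemented.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Lossless downsampling sketch: slice the input into four pixel-interleaved
    sub-images, concatenate them along the channel axis (halving height and
    width without dropping pixels), then mix channels with a convolution that
    outputs 16 channels, as stated in the text."""
    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4 * in_ch, out_ch, k, 1, k // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        # take every second pixel in both directions -> 4 sub-images
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# Stage layout as read from the text (output channels / exp per stage);
# GhostCSP and GhostBottleneck stand for the modules described above and in Fig. 1a.
BACKBONE_STAGES = [
    ("Focus",  16,  None),
    ("Layer1", 24,  36),    # GhostBottleneck_2 + GhostCSP-3
    ("Layer2", 40,  90),    # -> P3
    ("Layer3", 80,  240),   # -> P4
    ("Layer4", 160, 480),   # GhostBottleneck_2 + SPP -> P5
]
```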

The backbone network of the Ghost-RetinaNet algorithm is shown in Fig. 1a. The improved network can extract more features for dense and small photovoltaic panel shadow objects, while its number of parameters is only 1.59% of that of ResNet50. Therefore, the detection speed and accuracy of the algorithm are greatly improved.

2.3 Recursive Feature Fusion

The FPN feature fusion mechanism of RetinaNet is top-down. This fusion method mainly enriches the semantic information of shallow feature maps; although the semantic information of high-level feature maps is rich, their location information is relatively poor, which the original algorithm does not address. Therefore, recursive feature fusion is adopted in this paper, in which bottom-up feature fusion is performed on the basis of FPN. The advantages of this method are as follows. Firstly, it shortens the transmission path of feature information between low-level and high-level features, improving the utilization of positioning information. Secondly, every anchor can utilize the feature information of the whole feature pyramid. Thirdly, it increases the sources of feature information and thus enriches the feature information of all feature layers. For the dense and overlapping objects in this paper, the enhancement of semantic and location information greatly improves the feature expression ability of the algorithm.

The recursive feature fusion network of Ghost-RetinaNet is shown in Fig. 1b. The feature fusion network is simplified to increase the detection speed without decreasing the accuracy: P6 and P7 of the original algorithm are removed. P3 is convolved to obtain N3; N3 is downsampled and fused with the convolved P4 to obtain N4; N4 is downsampled and fused with the convolved P5 to obtain N5. Compared with the feature maps of the original network, the feature expression ability is increased and the shadow detection ability at all scales is improved. The 3 × 3 convolution of the original network is replaced by the GhostBottleneck (GhostBottleneck_1 for stride 1 and GhostBottleneck_2 for stride 2), the number of channels for all operations is set to 128 with exp 480, and the number of channels in the classification and regression subnets is changed to 128.
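A minimal sketch of the bottom-up fusion wiring described above is given below, assuming the FPN outputs P3–P5 already have 128 channels and that fusion is performed by element-wise addition (the text says "fused with" without specifying the operation); plain convolutions stand in for the GhostBottleneck blocks.

```python
import torch.nn as nn

def conv_bn_act(c_in, c_out, stride=1):
    """Stand-in for GhostBottleneck_1 (stride 1) / GhostBottleneck_2 (stride 2);
    a plain 3x3 conv keeps the fusion wiring readable."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
                         nn.BatchNorm2d(c_out), nn.SiLU(inplace=True))

class RecursiveFusion(nn.Module):
    """Bottom-up fusion on top of the FPN outputs P3-P5:
    N3 = conv(P3); N4 = down(N3) + conv(P4); N5 = down(N4) + conv(P5).
    All operations use 128 channels, as stated in the text."""
    def __init__(self, ch=128):
        super().__init__()
        self.lat3 = conv_bn_act(ch, ch)
        self.lat4 = conv_bn_act(ch, ch)
        self.lat5 = conv_bn_act(ch, ch)
        self.down34 = conv_bn_act(ch, ch, stride=2)   # GhostBottleneck_2 role
        self.down45 = conv_bn_act(ch, ch, stride=2)

    def forward(self, p3, p4, p5):
        n3 = self.lat3(p3)
        n4 = self.lat4(p4) + self.down34(n3)
        n5 = self.lat5(p5) + self.down45(n4)
        return n3, n4, n5
```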

2.4 SiLU Activation Function

The ReLU activation function sets all negative inputs to zero and keeps positive inputs unchanged. Its advantages are fast convergence and calculation speed. However, for negative inputs, the output and derivative of the function are always zero, so the corresponding network parameters are no longer updated. Thus, the ReLU activation function limits the learning ability of the network.

The SiLU activation function, which is bounded below, unbounded above, smooth and non-monotonic, is adopted in this paper. The lower bound enhances the regularization effect of the network. The absence of an upper bound ensures that the network does not suffer from gradient vanishing. Smoothness not only improves the generalization ability of the network and makes it easier to optimize, but also avoids the uncontrollable behavior caused by the discontinuity of the ReLU derivative at the origin. Non-monotonicity ensures that small negative values can be retained, which enhances the interpretability of the network and improves its gradients.

The SiLU activation function attenuates negative values rather than setting them all to zero, which improves the learning ability of the network and avoids silent neurons. Its mathematical expression is shown in formula (6).

f(x) = x / (1 + e^(−x))   (6)
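For reference, formula (6) is equivalent to x·sigmoid(x); a one-line sketch:

```python
import torch

def silu(x: torch.Tensor) -> torch.Tensor:
    """SiLU/swish activation from formula (6): x * sigmoid(x).
    Negative inputs are damped rather than zeroed, unlike ReLU."""
    return x * torch.sigmoid(x)

# PyTorch also ships this activation as torch.nn.SiLU / torch.nn.functional.silu.
```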

2.5 CIoU Loss

The Ghost-RetinaNet algorithm uses CIoU Loss [47] as the box regression loss in training, which considers the overlapping area, center distance and aspect ratio, and has the characteristics of fast convergence and high accuracy. It works better on the dense and overlapping objects in the photovoltaic panel shadow detection task.

Although the smooth L1 loss function improves on the L1 loss function, it computes the loss of the four coordinates of the bounding box separately and then sums them, which implicitly assumes that the four coordinates are independent of each other. However, the evaluation index of bounding box detection is the intersection over union (IoU), and boxes with the same smooth L1 loss can have very different IoU values, so the smooth L1 loss has adverse effects on the algorithm. Therefore, CIoU Loss is adopted as the bounding box regression loss function; its mathematical expression is shown in formula (7).

L_CIoU = 1 − IoU + ρ²(b, b^gt)/c² + αυ   (7)

where b and b^gt are the center points of the bounding box and the ground truth, respectively, ρ(·) denotes the Euclidean distance, and c is the diagonal length of the minimum enclosing rectangle of the ground truth and the bounding box. The meanings of α and υ are given in formulas (8) and (9).

υ = (4/π²)·(arctan(w^gt/h^gt) − arctan(w/h))²   (8)

where w^gt and h^gt are the width and height of the ground truth, and w and h are the width and height of the bounding box, respectively.

α = υ / ((1 − IoU) + υ)   (9)

The loss of the algorithm in this paper consists of the regression CIoU Loss and the classification Focal Loss; the total loss is the sum of the Focal Loss and one quarter of the CIoU Loss.
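The following is a minimal sketch of the CIoU loss of formulas (7)–(9) and of the total-loss combination described above, assuming boxes are given in (x1, y1, x2, y2) format; it is not the exact training code of the paper.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss for (N, 4) boxes in (x1, y1, x2, y2) format, formulas (7)-(9)."""
    # intersection and IoU
    ix1 = torch.max(pred[:, 0], target[:, 0]); iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2]); iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # squared center distance rho^2(b, b_gt)
    cxp = (pred[:, 0] + pred[:, 2]) / 2; cyp = (pred[:, 1] + pred[:, 3]) / 2
    cxt = (target[:, 0] + target[:, 2]) / 2; cyt = (target[:, 1] + target[:, 3]) / 2
    rho2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2

    # squared diagonal c^2 of the smallest enclosing box
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    # aspect-ratio consistency term, formulas (8) and (9)
    wp = pred[:, 2] - pred[:, 0]; hp = pred[:, 3] - pred[:, 1]
    wt = target[:, 2] - target[:, 0]; ht = target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    with torch.no_grad():
        alpha = v / ((1 - iou) + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()

# Total loss as described in the text (focal_loss computed elsewhere):
# total_loss = focal_loss + 0.25 * ciou_loss(pred_boxes, gt_boxes)
```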

3  Experimental Results and Analysis

In order to verify the effectiveness of the proposed algorithm, this section describes the experimental environment, evaluation indexes, dataset construction and processing, model training and result analysis, and a comparison with other algorithms.

3.1 Experimental Environment

The running and testing environment of all the algorithms in this paper is shown in Table 1.


3.2 Evaluation Index

Standard evaluation indexes such as Precision, Recall, average precision (AP), mean average precision (mAP), frames per second (FPS) and model size are used to objectively evaluate the algorithm's performance, and the results are compared with representative deep learning algorithms to verify the practicality of the proposed algorithm.

Referring to the binary classification problem, the confusion matrix of the classification results is shown in Table 2. The results are divided into four cases: True Positive (TP), False Negative (FN), False Positive (FP) and True Negative (TN).


Precision [48] is the proportion of TP among all samples whose detection results are positive, as shown in formula (10), where P represents the Precision.

P = TP / (TP + FP)   (10)

Recall [49] refers to the proportion of TP among all ground truth positive cases, as shown in formula (11), where R is the Recall.

R = TP / (TP + FN)   (11)

The AP is the integral of the precision-recall curve of a certain class under all thresholds, which balances the precision and recall and reflects the comprehensive ability of the algorithm in a certain category, as shown in formula (12), where cls is a certain class.

AP(cls) = ∫₀¹ P(cls) dR(cls)   (12)

The mAP is the average of the AP of all classes, which reflects the overall effect of the algorithm, as shown in formula (13), where C represents the class set.

mAP = (1/|C|) Σ_(cls∈C) AP(cls)   (13)

FPS refers to the number of images detected per second, which reflects the detection speed of the algorithm. Model size is the amount of memory occupied by the model, which embodies the requirement of the algorithm for storage space.
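As a small worked example of formulas (10)–(13), the helpers below compute precision, recall, a step-wise approximation of AP from sampled P-R points, and mAP as the per-class average; the function names and the integration scheme are illustrative, not the exact evaluation code of the paper.

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Formulas (10) and (11): precision and recall from the confusion counts."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    return p, r

def average_precision(recall, precision):
    """Formula (12): AP as the area under a class's P-R curve, approximated here
    by summing precision over the recall increments of the sorted curve."""
    r, p = np.asarray(recall, float), np.asarray(precision, float)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum(np.diff(np.concatenate(([0.0], r))) * p))

def mean_average_precision(ap_per_class):
    """Formula (13): mAP is the mean of the per-class APs.
    E.g. the paper's per-class APs (0.9837, 0.9598) average to the reported
    mAP of about 97.17%."""
    return sum(ap_per_class.values()) / len(ap_per_class)
```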

3.3 Dataset Making and Processing

Photographs of photovoltaic panels with and without shadow in actual application are obtained in three ways: surveillance video from a photovoltaic power station, drone shooting and manual shooting. Adobe Premiere is used to extract each frame from the video. A large number of similar photos and unqualified manually taken pictures are filtered out, and finally the image pixels are normalized. The resulting images are labeled with the LabelImg software, where PVP_shielding and PVP respectively denote photovoltaic panels with and without shadow; the labeling format is Pascal Visual Object Classes (VOC). There are 8402 images in the dataset, containing 52,220 unshaded photovoltaic panels and 51,269 shaded photovoltaic panels. Random scaling, random flipping and random gamut distortion are used for online data augmentation to expand the number of samples and improve the accuracy and generalization ability of the algorithm.

The original anchors are no longer applicable to the photovoltaic panel shadow dataset. Considering the inference speed of the network, we choose an input image size of 416 × 416, and the anchors of the model are redesigned on this basis. For the dataset in this study, the K-means++ clustering results are shown in Fig. 3.


Figure 3: Clustering results of photovoltaic panel shadow dataset

As can be seen from Fig. 3, we obtain 9 clusters, and the anchor sizes of the model are determined through experimental verification: the anchors in feature layer N3 are [16, 13], [31, 25] and [45, 33]; in N4 they are [67, 44], [92, 56] and [132, 67]; and in N5 they are [130, 86], [156, 96] and [205, 102].
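A sketch of how such anchors can be obtained with k-means++ clustering on the labelled box sizes is shown below; the use of scikit-learn and the assumption that box sizes are already scaled to the 416 × 416 input are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(box_wh, k=9):
    """Cluster labelled box (width, height) pairs with k-means++ and return the
    k centroids sorted by area, so they can be assigned three per feature level
    (N3, N4, N5). `box_wh` is an (N, 2) array of box sizes assumed to be scaled
    to the 416 x 416 network input."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(box_wh)
    anchors = np.round(km.cluster_centers_).astype(int)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
```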

3.4 Model Training and Results Analysis

The amount of data in the test set is 10% of the total data set, and the remaining data are automatically divided into a training set and a validation set at a ratio of 9:1 during training. The specific distribution of the dataset is shown in Table 3.


Limited by the experimental platform, the batch size is set to 8 and each epoch contains 850 iterations. The Adam optimizer with a momentum of 0.9 is used, and the initial learning rate is set to 0.0001. The learning rate is dynamically adjusted according to the change of the loss during training: it is reduced by 50% when the loss has not improved for 3 consecutive checks, and training stops when the loss remains unchanged for 10 consecutive checks. During training, the learning rate is adjusted after 56, 68, 73, 79, 82, 87 and 90 epochs. The variation of the loss is shown in Fig. 4: the loss becomes stable after about 67,000 iterations and finally settles at about 0.106.
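A sketch of this training schedule, assuming the plateau checks are made once per epoch and using PyTorch's Adam and ReduceLROnPlateau, is given below; `model` and `train_one_epoch` are caller-supplied placeholders, not functions from the paper.

```python
import torch

def fit(model, train_one_epoch, max_epochs=200):
    """Training-schedule sketch matching the text: Adam with beta1 = 0.9 and an
    initial learning rate of 1e-4; the learning rate is halved when the loss has
    not improved for 3 checks, and training stops after 10 checks without
    improvement. `train_one_epoch(model, optimizer)` is assumed to run one pass
    over the training data and return the epoch loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.5, patience=3)

    best, stale = float("inf"), 0
    for epoch in range(max_epochs):
        loss = train_one_epoch(model, optimizer)
        scheduler.step(loss)                 # halves the lr after 3 stagnant checks
        if loss < best - 1e-4:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= 10:                  # early stop after 10 stagnant checks
                break
    return model
```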


Figure 4: Training loss decline curve

Testing is conducted on the randomly divided test set. The P-R curves of the detection results of the proposed Ghost-RetinaNet are shown in Fig. 5.


Figure 5: P-R curves of Ghost-RetinaNet algorithm: (a) Photovoltaic panel; (b) Photovoltaic panel shielding

The P-R curve is composed of the recall and precision values under all confidence thresholds, and the integral of the curve is the AP. The AP values of PVP and PVP_shielding are 98.37% and 95.98%, respectively.

3.5 Comparison of Different Algorithms

To verify the effectiveness of the proposed algorithm, it is compared with representative one-stage and two-stage algorithms. The one-stage algorithms include SSD, YOLOv3, YOLOv4, EfficientDet and M2Det; the two-stage algorithms are Faster R-CNN and R-FCN. It is worth mentioning that the anchors in SSD and Faster R-CNN are no longer reasonable for the dataset, so new anchors are obtained by clustering. These algorithms are implemented in the same experimental environment and on the same photovoltaic panel shadow dataset, and the results are shown in Table 4.


As can be seen from Table 4, apart from the algorithm in this paper, the highest AP of PVP detection is 97.89% (R-FCN), the highest AP of PVP_shielding is 98.33% (YOLOv3), and the highest mAP over all classes is 97.10% (YOLOv4). The proposed algorithm is optimal on all metrics except that its AP for PVP_shielding is lower than that of YOLOv3 and YOLOv4; in particular, its model size and speed realize genuinely lightweight, real-time detection. Therefore, the algorithm has the advantages of high accuracy, small model size, fast detection speed and real-time capability in solving the photovoltaic panel shadow detection problem.

Ghost-RetinaNet is applied to the photovoltaic panel shadow dataset. The AP values of PVP and PVP_shielding are 98.37% and 95.98%, respectively, the mAP reaches 97.17% and the detection speed reaches 50.8 FPS. Compared with RetinaNet, the mAP is improved by 1.95% and the detection speed is increased by more than four times. The model is lightweight enough to meet the requirements of actual detection.

The detection results on the dense photovoltaic panel shadow dataset are shown in Fig. 6, with the original image on the left, the detection result of RetinaNet in the middle and the detection result of Ghost-RetinaNet on the right. In the figure, the box labeled PVP is a photovoltaic panel, PVP_shielding is a shaded photovoltaic panel, and the elliptical boxes mark areas with missed or false detections. In the dense and overlapping target scenes (a)–(e), RetinaNet misses detections in (a), (b), (d) and (e), while Ghost-RetinaNet shows no missed or false detections; meanwhile, the confidence of the prediction boxes of Ghost-RetinaNet is also generally higher. In the small target scenes (b), (d) and (e), compared with Ghost-RetinaNet, RetinaNet misses many objects and its confidence is lower. According to the experimental results, the Ghost-RetinaNet algorithm is suitable for shadow detection of photovoltaic panels.


Figure 6: Detection results: (a)–(e) dense and overlapping targets; (b), (d), (e) small targets

4  Conclusion

In this paper, we introduce a convolutional neural network into photovoltaic panel state detection and propose the Ghost-RetinaNet algorithm. The proposed algorithm solves the problems of low detection accuracy and slow speed caused by target density and bounding box overlap in photovoltaic panel shadow detection, and provides a new method for photovoltaic panel fault detection.

The Ghost-RetinaNet algorithm uses the CSP feature extraction network, recursive feature fusion network, SiLU activation function, CIoU Loss function and Ghost module. The detection speed, accuracy and robustness of the photovoltaic panel shadow detection model are improved, and in particular the model size is greatly reduced. The experimental results show that the object position prediction is more accurate and the detection accuracy is higher: the detection accuracies of photovoltaic panels and photovoltaic panel shadows are increased by 1.45% and 2.46%, respectively, and the overall mAP is improved by 1.95%. Meanwhile, the model size is decreased from 139.3 MB to 8.75 MB, and the detection time of a single image is reduced from 64.1 to 19.7 ms. In general, the proposed algorithm is more effective than the other algorithms.

The algorithm presented in this paper performs well in the shadow detection of photovoltaic panels. In principle, it is also applicable to other states such as ash deposition and damage; since these have not been studied here, relevant data can be added to the dataset built in this study for further experiments.

Acknowledgement: The authors would like to thank Editor-in-Chief, Editors, and anonymous Reviewers for their valuable reviews.

Funding Statement: This work was supported by the National Natural Science Foundation of China (No. 52074305), Henan Scientific and Technological Research Project (No. 212102210005), Open Fund of Henan Engineering Laboratory for Photoelectric Sensing and Intelligent Measurement and Control (No. HELPSIMC-2020-00X).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

 1.  Guo, Z. F., Zhou, K. L., Zhang, C., Lu, X. H., Chen, W. et al. (2018). Residential electricity consumption behavior: Influencing factors, related theories and intervention strategies. Renewable & Sustainable Energy Reviews, 81, 399–412. DOI 10.1016/j.rser.2017.07.046. [Google Scholar] [CrossRef]

 2.  Hu, E., Yang, Y. P., Nishimura, A., Yilmaz, F., Kouzani, A. (2010). Solar thermal aided power generation. Applied Energy, 87(9), 2881–2885. DOI 10.1016/j.apenergy.2009.10.025. [Google Scholar] [CrossRef]

 3.  Zhu, H. G., Yu, C., Lu, L. X., Lian, W. W., Yao, J. X. et al. (2018). Research on parameter distribution features of photovoltaic array under the cover and shadow shading conditions. International Journal of Photoenergy, 2018(23), 1–14. DOI 10.1155/2018/9207917. [Google Scholar] [CrossRef]

 4.  Cubukcu, M., Akanalci, A. (2020). Real-time inspection and determination methods of faults on photovoltaic power systems by thermal imaging in Turkey. Renewable Energy, 147, 1231–1238. DOI 10.1016/j.renene.2019.09.075. [Google Scholar] [CrossRef]

 5.  Guerriero, P., Cuozzo, G., Daliento, S. (2016). Health diagnostics of PV panels by means of single cell analysis of thermographic images. 2016 IEEE 16th International Conference on Environment and Electrical Engineering (EEEIC), pp. 1–6. Florence, Italy. DOI 10.1109/EEEIC.2016.7555516. [Google Scholar] [CrossRef]

 6.  Cardinale-Villalobos, L., Meza, C., Murillo-Soto, L. D. (2021). Experimental comparison of visual inspection and infrared thermography for the detection of soiling and partial shading in photovoltaic arrays. In: Nesmachnow, S., Hernández Callejo, L. (Eds.), Smart cities, vol. 1359. Springer, Cham. DOI 10.1007/978-3-030-69136-3_21. [Google Scholar] [CrossRef]

 7.  Hariharan, R., Chakkarapani, M., Ilango, G. S., Nagamani, C. (2016). A method to detect photovoltaic array faults and partial shading in PV systems. IEEE Journal of Photovoltaics, 6(5), 1278–1285. DOI 10.1109/JPHOTOV.2016.2581478. [Google Scholar] [CrossRef]

 8.  Dhimish, M., Holmes, V., Mehrdadi, B., Dales, M., Mather, P. (2017). Photovoltaic fault detection algorithm based on theoretical curves modelling and fuzzy classification system. Energy, 140, 276–290. DOI 10.1016/j.energy.2017.08.102. [Google Scholar] [CrossRef]

 9.  Harrou, F., Taghezouit, B., Sun, Y. (2019). Improved kNN-based monitoring schemes for detecting faults in PV systems. IEEE Journal of Photovoltaics, 9(3), 811–821. DOI 10.1109/JPHOTOV.2019.2896652. [Google Scholar] [CrossRef]

10. Dhimish, M., Holmes, V., Mehrdadi, B., Dales, M. (2017). Simultaneous fault detection algorithm for grid-connected photovoltaic plants. IET Renewable Power Generation, 11(12), 1565–1575. DOI 10.1049/iet-rpg.2017.0129. [Google Scholar] [CrossRef]

11. Gosumbonggot, J., Fujita, G. (2019). Global maximum power point tracking under shading condition and hotspot detection algorithms for photovoltaic systems. Energies, 12(5), 1–23. DOI 10.3390/en12050882. [Google Scholar] [CrossRef]

12. Harrou, F., Sun, Y., Taghezouit, B., Saidi, A., Hamlati, M. E. (2018). Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches. Renewable Energy, 116, 22–37. DOI 10.1016/j.renene.2017.09.048. [Google Scholar] [CrossRef]

13. Li, C., Yang, Y., Zhang, K., Zhu, C., Wei, H. (2021). A fast MPPT-based anomaly detection and accurate fault diagnosis technique for PV arrays. Energy Conversion and Management, 234(4), 113950. DOI 10.1016/j.enconman.2021.113950. [Google Scholar] [CrossRef]

14. Villa, L. F. L., Raison, B., Crebier, J. C. (2014). Toward the design of control algorithms for a photovoltaic equalizer: Detecting shadows through direct current sampling. IEEE Journal of Emerging and Selected Topics in Power Electronics, 2(4), 893–906. DOI 10.1109/JESTPE.2014.2352621. [Google Scholar] [CrossRef]

15. Bressan, M., Basri, Y. E., Galeano, A. G., Alonso, C. (2016). A shadow fault detection method based on the standard error analysis of I-V curves. Renewable Energy, 99, 1181–1190. DOI 10.1016/j.renene.2016.08.028. [Google Scholar] [CrossRef]

16. Ma, M. Y., Zhang, Z. X., Yun, P., Xie, Z., Wang, H. S. et al. (2021). Photovoltaic module current mismatch fault diagnosis based on I-V data. IEEE Journal of Photovoltaics, 11(3), 779–788. DOI 10.1109/JPHOTOV.2021.3059425. [Google Scholar] [CrossRef]

17. Fadhel, S., Delpha, C., Diallo, D., Bahri, I., Migan, A. et al. (2019). PV shading fault detection and classification based on I-V curve using principal component analysis: Application to isolated PV system. Solar Energy, 179, 1–10. DOI 10.1016/j.solener.2018.12.048. [Google Scholar] [CrossRef]

18. Sarikh, S., Raoufi, M., Bennouna, A., Ikken, B. (2021). Characteristic curve diagnosis based on fuzzy classification for a reliable photovoltaic fault monitoring. Sustainable Energy Technologies and Assessments, 43, 100958. DOI 10.1016/j.seta.2020.100958. [Google Scholar] [CrossRef]

19. Pei, T. T., Zhang, J. F., Li, L., Hao, X. H. (2020). A fault locating method for PV arrays based on improved voltage sensor placement. Solar Energy, 201, 279–297. DOI 10.1016/j.solener.2020.03.019. [Google Scholar] [CrossRef]

20. Abd El-Ghany, H. A., Elgebaly, A. E., Taha, I. B. M. (2021). A new monitoring technique for fault detection and classification in PV systems based on rate of change of voltage-current trajectory. International Journal of Electrical Power & Energy Systems, 133(1), 107248. DOI 10.1016/j.ijepes.2021.107248. [Google Scholar] [CrossRef]

21. Chen, S. Q., Yang, G. J., Gao, W., Guo, M. F. (2021). Photovoltaic fault diagnosis via semisupervised ladder network with string voltage and current measures. IEEE Journal of Photovoltaics, 11(1), 219–231. DOI 10.1109/JPHOTOV.2020.3038335. [Google Scholar] [CrossRef]

22. Kumar, B. P., Ilango, G. S., Reddy, M. J. B., Chilakapati, N. (2018). Online fault detection and diagnosis in photovoltaic systems using wavelet packets. IEEE Journal of Photovoltaics, 8(1), 257–265. DOI 10.1109/JPHOTOV.2017.2770159. [Google Scholar] [CrossRef]

23. Yi, Z., Etemadi, A. H. (2017). Line-to-line fault detection for photovoltaic arrays based on multiresolution signal decomposition and Two-stage support vector machine. IEEE Transactions on Industrial Electronics, 64(11), 8546–8556. DOI 10.1109/TIE.2017.2703681. [Google Scholar] [CrossRef]

24. Yi, Z., Etemadi, A. H. (2017). Fault detection for photovoltaic systems based on multi-resolution signal decomposition and fuzzy inference systems. IEEE Transactions on Smart Grid, 8(3), 1274–1283. DOI 10.1109/TSG.2016.2587244. [Google Scholar] [CrossRef]

25. Chen, G., Lin, P., Lai, Y., Chen, Z., Wu, L. et al. (2018). Location for fault string of photovoltaic array based on current time series change detection. Energy Procedia, 145, 406–412. DOI 10.1016/j.egypro.2018.04.067. [Google Scholar] [CrossRef]

26. Karaköse, M., Firildak, K. (2015). A shadow detection approach based on fuzzy logic using images obtained from PV array. 2015 6th International Conference on Modeling, Simulation, and Applied Optimization (ICMSAO), pp. 1–5. Istanbul, Turkey. DOI 10.1109/ICMSAO.2015.7152216. [Google Scholar] [CrossRef]

27. Chen, Z. C., Chen, Y. X., Wu, L. J., Cheng, S. Y., Lin, P. J. (2019). Deep residual network based fault detection and diagnosis of photovoltaic arrays using current-voltage curves and ambient conditions. Energy Conversion and Management, 198, 111793. DOI 10.1016/j.enconman.2019.111793. [Google Scholar] [CrossRef]

28. Chen, L. C., Lin, P. J., Zhang, J., Chen, Z. C., Lin, Y. H. et al. (2018). Fault diagnosis and classification for photovoltaic arrays based on principal component analysis and support vector machine. IOP Conference Series Earth and Environmental Science, 188(1), 012089. DOI 10.1088/1755-1315/188/1/012089. [Google Scholar] [CrossRef]

29. Onal, Y., Turhal, U. C. (2021). Discriminative common vector in sufficient data case: A fault detection and classification application on photovoltaic arrays. Engineering Science and Technology, an International Journal, 24(5), 1168–1179. DOI 10.1016/j.jestch.2021.02.017. [Google Scholar] [CrossRef]

30. Karakose, M., Baygin, M., Parlak, K. S., Baygin, N., Akin, E. (2018). A novel reconfiguration method using image processing based moving shadow detection, optimization, and analysis for PV arrays. Journal of Information Science and Engineering, 34(5), 1307–1328. DOI 10.6688/JISE.201809_34(5).0012. [Google Scholar] [CrossRef]

31. Espinosa, A. R., Bressan, M., Giraldo, L. F. (2020). Failure signature classification in solar photovoltaic plants using RGB images and convolutional neural networks. Renewable Energy, 162, 249–256. DOI 10.1016/j.renene.2020.07.154. [Google Scholar] [CrossRef]

32. Lazzaretti, A. E., Costa, C. H. D., Rodrigues, M. P., Yamada, G. D., Lexinoski, G. et al. (2020). A monitoring system for online fault detection and classification in photovoltaic plants. Sensors, 20(17), 1–30. DOI 10.3390/s20174688. [Google Scholar] [CrossRef]

33. Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You only look once: Unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788. Las Vegas, NV, USA. DOI 10.1109/CVPR.2016.91. [Google Scholar] [CrossRef]

34. Redmon, J., Farhadi, A. (2017). YOLO9000: Better, faster, stronger. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525. Honolulu, HI, USA. DOI 10.1109/cvpr.2017.690. [Google Scholar] [CrossRef]

35. Redmon, J., Farhadi, A. (2018). YOLOv3: An incremental improvement. https://arxiv.org/abs/1804.02767v1. [Google Scholar]

36. Bochkovskiy, A., Wang, C. Y., Liao, H. Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. https://arxiv.org/abs/2004.10934. [Google Scholar]

37. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S. et al. (2016). SSD: Single shot MultiBox detector. Proceedings of the European Conference on Computer Vision, pp. 21–37. Cham. DOI 10.1007/978-3-319-46448-0_2. [Google Scholar] [CrossRef]

38. Lin, T. Y., Goyal, P., Girshick, R., He, K., Dollar, P. (2020). Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), 318–327. DOI 10.1109/TPAMI.2018.2858826. [Google Scholar] [CrossRef]

39. Tan, M., Pang, R., Le, Q. V. (2020). EfficientDet: Scalable and efficient object detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10778–10787. Seattle, WA, USA. DOI 10.1109/CVPR42600.2020.01079. [Google Scholar] [CrossRef]

40. Zhao, Q., Sheng, T., Wang, Y., Tang, Z., Chen, Y. et al. (2019). M2Det: A single-shot object detector based on multi-level feature pyramid network. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9259–9266. DOI 10.1609/aaai.v33i01.33019259. [Google Scholar] [CrossRef]

41. Girshick, R., Donahue, J., Darrell, T., Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587. Columbus, OH, USA. DOI 10.1109/cvpr.2014.81. [Google Scholar] [CrossRef]

42. Girshick, R. (2015). Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448. Santiago, Chile. DOI 10.1109/ICCV.2015.169. [Google Scholar] [CrossRef]

43. Ren, S., He, K., Girshick, R., Sun, J. (2017). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137–1149. DOI 10.1109/TPAMI.2016.2577031. [Google Scholar] [CrossRef]

44. He, K., Zhang, X., Ren, S., Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1904–1916. DOI 10.1109/TPAMI.2015.2389824. [Google Scholar] [CrossRef]

45. Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C. J. (2020). GhostNet: More features from cheap operations. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1577–1586. Seattle, WA, USA. DOI 10.1109/CVPR42600.2020.00165. [Google Scholar] [CrossRef]

46. Wang, C., Liao, H. M., Wu, Y., Chen, P., Hsieh, J. et al. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1571–1580. Seattle, WA, USA. DOI 10.1109/CVPRW50498.2020.00203. [Google Scholar] [CrossRef]

47. Zheng, Z., Wang, P., Liu, W., Li, J., Ye, R. et al. (2019). Distance-IoU loss: Faster and better learning for bounding box regression. https://arxiv.org/abs/1911.08287. [Google Scholar]

48. Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P. (2015). Deep learning with limited numerical precision. https://arxiv.org/abs/1502.02551. [Google Scholar]

49. Lee, H., Grosse, R., Ranganath, R., Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609–616. Montreal, Quebec, Canada. DOI 10.1145/1553374.1553453. [Google Scholar] [CrossRef]




Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.