Maximum power point tracking (MPPT) is among the topics that have recently been studied extensively. Either traditional or soft computing methods are used for MPPT. Since soft computing approaches are more effective than traditional approaches, MPPT research has shifted in this direction. This study compares the performance of seven meta-heuristic training algorithms in neuro-fuzzy training for MPPT: particle swarm optimization (PSO), harmony search (HS), cuckoo search (CS), the artificial bee colony (ABC) algorithm, the bee algorithm (BA), differential evolution (DE) and the flower pollination algorithm (FPA). The antecedent and conclusion parameters of the neuro-fuzzy model are determined by these algorithms. The data of a 250 W photovoltaic (PV) panel is used in the applications. For effective MPPT, different neuro-fuzzy structures, membership functions and control parameter values are evaluated in detail. The training algorithms are compared in terms of solution quality and convergence speed, and their strengths and weaknesses are revealed. The type and number of membership functions, the colony size and the number of generations all affect the solution quality and convergence speed of the training algorithms. As a result, CS and the ABC algorithm are observed to be more effective than the other algorithms in terms of solution quality and convergence for this problem.
Interest in MPPT techniques has been ongoing for many years, and numerous studies have been conducted on them. Studies on MPPT techniques aim to increase the efficiency of these algorithms, to reach the maximum power point rapidly, and to oscillate as little as possible around this point. Many traditional and artificial intelligence-based approaches have been recommended for MPPT in the literature [
Mao et al. [
Fuzzy logic, neural networks, neuro-fuzzy systems and meta-heuristic optimization algorithms are among the artificial intelligence techniques used for MPPT. Neuro-fuzzy-based studies are particularly effective due to their robust structure. Rezvani et al. [
In light of the above information, ANFIS is used extensively in MPPT studies. As is known, an effective training algorithm is required to obtain good results with ANFIS, yet only a limited number of algorithms have been used to train ANFIS for MPPT. Within the scope of this study, ANFIS is trained using PSO, HS, BA, FPA, DE, ABC and CS for MPPT. This study offers three main contributions. First, most of these algorithms are used for the first time in ANFIS training for MPPT. Second, the ANFIS training performance of these algorithms is evaluated in detail for the first time, and the training performances of the seven meta-heuristic algorithms are compared. Third, the study presents the performance of different algorithms for MPPT. In general, PSO and GA are used more frequently in the literature; however, the results here provide concrete evidence that other algorithms can give better results.
Neuro-fuzzy models combine the advantages of neural networks and fuzzy logic. There are many neuro-fuzzy models proposed in the literature. ANFIS [
Layer 1 is the fuzzification layer. In this layer, fuzzy sets are obtained from the input values by using membership functions (MFs). The parameters that shape each membership function are called antecedent parameters, and they are adjusted during neuro-fuzzy training. Each membership function produces a membership degree in the range [0, 1]. In the case of the generalized bell function (Gbellmf), the membership degree is calculated using
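The three MF families compared in this study can be sketched as follows. The parameter names follow the common MATLAB-style conventions (`a`, `b`, `c` for Gbellmf; `sigma`, `c` for Gaussmf; feet `a`, `c` and peak `b` for Trimf) and are conventions assumed for this sketch, not reproductions of the paper's equations.

```python
import numpy as np

def gbellmf(x, a, b, c):
    """Generalized bell MF: 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def gaussmf(x, sigma, c):
    """Gaussian MF: exp(-(x - c)^2 / (2*sigma^2))."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def trimf(x, a, b, c):
    """Triangular MF with feet a, c and peak b (requires a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)
```

All three return a membership degree in [0, 1] and reach 1 at their center, which is why the antecedent parameters above are exactly what the training algorithms tune.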
Layer 2 is the rule layer. A firing strength is obtained for each rule from the membership degrees computed in Layer 1.
Layer 3 is the normalization layer. Normalized firing strengths are calculated for each rule from the firing strengths obtained in the previous layer.
Layer 4 is the defuzzification layer. The weighted output of each rule is calculated by using the normalized firing strengths and a first-order polynomial.
Layer 5 is the summation layer. The overall output of the model is found by summing the rule outputs obtained in Layer 4.
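The five layers above can be sketched as a single forward pass for a 2-input, first-order Sugeno model. This is a minimal illustration assuming Gaussian MFs, a product T-norm and grid-partitioned rules; it is not the authors' implementation.

```python
import numpy as np
from itertools import product

def gaussmf(x, sigma, c):
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x1, x2, premise, conclusion):
    """
    premise:    per input, a list of (sigma, c) pairs (the antecedent parameters).
    conclusion: one (p, q, r) triple per rule (the conclusion parameters).
    """
    # Layer 1: fuzzification — membership degree of each input in each MF
    mu1 = [gaussmf(x1, s, c) for s, c in premise[0]]
    mu2 = [gaussmf(x2, s, c) for s, c in premise[1]]
    # Layer 2: rule firing strengths (product T-norm over the rule grid)
    w = np.array([a * b for a, b in product(mu1, mu2)])
    # Layer 3: normalization
    w_bar = w / w.sum()
    # Layer 4: first-order polynomial output of each rule, weighted
    f = np.array([p * x1 + q * x2 + r for p, q, r in conclusion])
    # Layer 5: summation — overall model output
    return float(np.sum(w_bar * f))
```

With 2 MFs per input the grid partition yields 4 rules, so `conclusion` holds 4 triples; with 3 MFs per input it would hold 9, which is how the number of MFs drives the number of trainable parameters.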
Neuro-fuzzy training aims to adjust the antecedent and conclusion parameters. As seen in
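In such training, all antecedent and conclusion parameters can be packed into one flat candidate vector whose fitness is the model's training error. A minimal, generic population-based sketch (deliberately not any specific one of the seven algorithms studied; the greedy one-to-one replacement is only reminiscent of ABC/DE-style selection) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_metaheuristic(fitness, dim, colony_size=10, generations=100, step=0.1):
    """Generic population-based search over a flat parameter vector.

    fitness(vector) should return the training error (e.g. MSE) of the
    neuro-fuzzy model decoded from that vector; lower is better.
    """
    pop = rng.uniform(-1.0, 1.0, size=(colony_size, dim))
    cost = np.array([fitness(p) for p in pop])
    for _ in range(generations):
        # perturb every candidate and keep a move only if it improves
        trial = pop + step * rng.normal(size=pop.shape)
        trial_cost = np.array([fitness(p) for p in trial])
        better = trial_cost < cost
        pop[better], cost[better] = trial[better], trial_cost[better]
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

Colony size and number of generations appear here exactly as in the study's experiments: they multiply to give the number of fitness evaluations, which is why both directly affect solution quality and run time.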
MPPT techniques are the most popular way to extract maximum power from alternative energy sources such as solar, wind and fuel cells. MPPT techniques aim to obtain the maximum available power from these sources at any given moment. They can be built with many methods, including traditional methods, intelligent algorithms and nature-inspired algorithms.
MPPT techniques are most commonly applied to energy sources such as wind turbines and solar panels. The characteristics of the energy produced by these sources depend on environmental conditions. Partial or total shading, dust accumulation and variations in panel temperature directly affect the output power of solar panels. The simplified electrical equivalent circuit of a PV cell is shown in
where;
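The standard single-diode model behind such an equivalent circuit gives the output current only implicitly, I = Iph − I0·(exp((V + I·Rs)/(n·Ns·Vt)) − 1) − (V + I·Rs)/Rsh, so it is typically solved iteratively. A fixed-point sketch is shown below; all parameter values in the example are illustrative assumptions, not the Kyocera panel's datasheet values.

```python
import math

def pv_current(V, Iph, I0, Rs, Rsh, n=1.3, T=298.15, Ns=60):
    """Single-diode PV model solved by fixed-point iteration.

    Iph: photocurrent, I0: diode saturation current, Rs/Rsh: series and
    shunt resistance, n: ideality factor, Ns: cells in series.
    """
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                      # thermal voltage per cell
    I = Iph                             # initial guess: the photocurrent
    for _ in range(200):
        I = (Iph
             - I0 * (math.exp((V + I * Rs) / (n * Ns * Vt)) - 1.0)
             - (V + I * Rs) / Rsh)
    return I
```

Sweeping V and taking the maximum of P = V·I locates the maximum power point, which shifts with irradiance (through Iph) and temperature (through Vt and I0), hence the need for tracking.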
In this study, neuro-fuzzy training is performed by using PSO, HS, BA, ABC, FPA, DE and CS for MPPT. The data obtained from a solar array simulator is used for MPPT. Solar array simulators are power electronics devices that behave like solar panels. There are a large number of commercial products designed for this purpose, with a variety of power ratings and techniques. Solar array simulators can behave as a single solar panel or as an array, based on the data they contain or data the user enters externally. These data are the solar radiation in W/m², the ambient temperature in °C and the type/model of the panel. The solar panel data used in this study are produced by the solar array simulator: air temperature and solar radiation data for one day are entered into the simulator, and the input data is applied to the 250 W solar panel model of the Kyocera Company. Namely, the inputs of the system are temperature and solar radiation, and the output is the power value. The dataset consists of 509 samples; 408 of them are used in the training process and the remaining 101 in the testing process. Due to the large values of the inputs and output, scaling is realized by using
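The scaling equation itself is not reproduced above; a common choice, assumed here purely for illustration, is min-max normalization fitted on the training split and reused on the test split:

```python
import numpy as np

def minmax_scale(x, lo=None, hi=None):
    """Min-max scaling to [0, 1]; pass lo/hi from the training split so
    the test split is scaled with the same constants."""
    lo = float(np.min(x)) if lo is None else lo
    hi = float(np.max(x)) if hi is None else hi
    return (x - lo) / (hi - lo), lo, hi

# Example with the study's split sizes (409 values here are placeholders,
# not the actual simulator data): 509 samples -> 408 train / 101 test
data = np.linspace(0.0, 250.0, 509)
train, test = data[:408], data[408:]
train_s, lo, hi = minmax_scale(train)
test_s, _, _ = minmax_scale(test, lo, hi)   # reuse training min/max
```

Reusing the training constants on the test data avoids leaking test-set statistics into the model, which matters when comparing training algorithms on held-out error.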
The neuro-fuzzy model consisting of 2 inputs and 1 output given in
A block diagram of MPPT modeling based on neuro-fuzzy is given in
Neuro-fuzzy training is realized by using PSO, HS, BA, ABC, FPA, DE and CS, and the training results are presented in
Training results of PSO:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00088 | 0.00055 |
| 10 | Gbellmf | 3 | 0.00098 | 0.00054 |
| 10 | Trimf | 2 | 0.00412 | 0.00219 |
| 10 | Trimf | 3 | 0.00412 | 0.00263 |
| 10 | Gaussmf | 2 | 0.00058 | 0.00055 |
| 10 | Gaussmf | 3 | 0.00071 | 0.00046 |
| 20 | Gbellmf | 2 | 0.00130 | 0.00089 |
| 20 | Gbellmf | 3 | 0.00105 | 0.00056 |
| 20 | Trimf | 2 | 0.00426 | 0.00242 |
| 20 | Trimf | 3 | 0.00553 | 0.00268 |
| 20 | Gaussmf | 2 | 0.00081 | 0.00079 |
| 20 | Gaussmf | 3 | 0.00101 | 0.00078 |
| 50 | Gbellmf | 2 | 0.00295 | 0.00127 |
| 50 | Gbellmf | 3 | 0.00320 | 0.00107 |
| 50 | Trimf | 2 | 0.00741 | 0.00246 |
| 50 | Trimf | 3 | 0.00845 | 0.00293 |
| 50 | Gaussmf | 2 | 0.00235 | 0.00141 |
| 50 | Gaussmf | 3 | 0.00241 | 0.00111 |
Training results of HS:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00486 | 0.00314 |
| 10 | Gbellmf | 3 | 0.00474 | 0.00283 |
| 10 | Trimf | 2 | 0.03143 | 0.03655 |
| 10 | Trimf | 3 | 0.02309 | 0.01788 |
| 10 | Gaussmf | 2 | 0.00425 | 0.00285 |
| 10 | Gaussmf | 3 | 0.00379 | 0.00291 |
| 20 | Gbellmf | 2 | 0.02438 | 0.01084 |
| 20 | Gbellmf | 3 | 0.02564 | 0.01093 |
| 20 | Trimf | 2 | 0.16022 | 0.07415 |
| 20 | Trimf | 3 | 0.15397 | 0.04949 |
| 20 | Gaussmf | 2 | 0.03089 | 0.01595 |
| 20 | Gaussmf | 3 | 0.02836 | 0.01392 |
| 50 | Gbellmf | 2 | 0.11626 | 0.04404 |
| 50 | Gbellmf | 3 | 0.12123 | 0.04459 |
| 50 | Trimf | 2 | 0.35076 | 0.02279 |
| 50 | Trimf | 3 | 0.34126 | 0.02557 |
| 50 | Gaussmf | 2 | 0.11683 | 0.06146 |
| 50 | Gaussmf | 3 | 0.13082 | 0.05454 |
Training results of BA:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.01092 | 0.00371 |
| 10 | Gbellmf | 3 | 0.01493 | 0.00597 |
| 10 | Trimf | 2 | 0.09689 | 0.02941 |
| 10 | Trimf | 3 | 0.08606 | 0.02416 |
| 10 | Gaussmf | 2 | 0.00873 | 0.00385 |
| 10 | Gaussmf | 3 | 0.00724 | 0.00334 |
| 20 | Gbellmf | 2 | 0.01211 | 0.00414 |
| 20 | Gbellmf | 3 | 0.01397 | 0.00518 |
| 20 | Trimf | 2 | 0.10396 | 0.03105 |
| 20 | Trimf | 3 | 0.08015 | 0.02418 |
| 20 | Gaussmf | 2 | 0.00879 | 0.00371 |
| 20 | Gaussmf | 3 | 0.00624 | 0.00252 |
| 50 | Gbellmf | 2 | 0.01207 | 0.00484 |
| 50 | Gbellmf | 3 | 0.01491 | 0.00551 |
| 50 | Trimf | 2 | 0.09317 | 0.02601 |
| 50 | Trimf | 3 | 0.07776 | 0.02472 |
| 50 | Gaussmf | 2 | 0.00865 | 0.00368 |
| 50 | Gaussmf | 3 | 0.00834 | 0.00385 |
Training results of the ABC algorithm:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00054 | 0.00034 |
| 10 | Gbellmf | 3 | 0.00034 | 0.00023 |
| 10 | Trimf | 2 | 0.00231 | 0.00221 |
| 10 | Trimf | 3 | 0.00158 | 0.00174 |
| 10 | Gaussmf | 2 | 0.00066 | 0.00043 |
| 10 | Gaussmf | 3 | 0.00034 | 0.00020 |
| 20 | Gbellmf | 2 | 0.00073 | 0.00044 |
| 20 | Gbellmf | 3 | 0.00066 | 0.00029 |
| 20 | Trimf | 2 | 0.00243 | 0.00194 |
| 20 | Trimf | 3 | 0.00168 | 0.00157 |
| 20 | Gaussmf | 2 | 0.00083 | 0.00055 |
| 20 | Gaussmf | 3 | 0.00057 | 0.00032 |
| 50 | Gbellmf | 2 | 0.00147 | 0.00049 |
| 50 | Gbellmf | 3 | 0.00159 | 0.00228 |
| 50 | Trimf | 2 | 0.00423 | 0.00222 |
| 50 | Trimf | 3 | 0.00519 | 0.00258 |
| 50 | Gaussmf | 2 | 0.00156 | 0.00121 |
| 50 | Gaussmf | 3 | 0.00129 | 0.00084 |
Training results of FPA:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00117 | 0.00071 |
| 10 | Gbellmf | 3 | 0.00058 | 0.00027 |
| 10 | Trimf | 2 | 0.00558 | 0.00328 |
| 10 | Trimf | 3 | 0.00456 | 0.00278 |
| 10 | Gaussmf | 2 | 0.00122 | 0.00071 |
| 10 | Gaussmf | 3 | 0.00056 | 0.00037 |
| 20 | Gbellmf | 2 | 0.00179 | 0.00083 |
| 20 | Gbellmf | 3 | 0.00107 | 0.00039 |
| 20 | Trimf | 2 | 0.00607 | 0.00269 |
| 20 | Trimf | 3 | 0.00575 | 0.00252 |
| 20 | Gaussmf | 2 | 0.00171 | 0.00063 |
| 20 | Gaussmf | 3 | 0.00102 | 0.00046 |
| 50 | Gbellmf | 2 | 0.00390 | 0.00143 |
| 50 | Gbellmf | 3 | 0.00362 | 0.00161 |
| 50 | Trimf | 2 | 0.01213 | 0.00359 |
| 50 | Trimf | 3 | 0.01078 | 0.00254 |
| 50 | Gaussmf | 2 | 0.00354 | 0.00132 |
| 50 | Gaussmf | 3 | 0.00220 | 0.00081 |
Training results of DE:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00087 | 0.00130 |
| 10 | Gbellmf | 3 | 0.00151 | 0.00176 |
| 10 | Trimf | 2 | 0.00093 | 0.00146 |
| 10 | Trimf | 3 | 0.00128 | 0.00220 |
| 10 | Gaussmf | 2 | 0.00032 | 0.00086 |
| 10 | Gaussmf | 3 | 0.00131 | 0.00163 |
| 20 | Gbellmf | 2 | 0.00456 | 0.00138 |
| 20 | Gbellmf | 3 | 0.00510 | 0.00174 |
| 20 | Trimf | 2 | 0.00320 | 0.00176 |
| 20 | Trimf | 3 | 0.00558 | 0.00191 |
| 20 | Gaussmf | 2 | 0.00368 | 0.00160 |
| 20 | Gaussmf | 3 | 0.00459 | 0.00141 |
| 50 | Gbellmf | 2 | 0.00548 | 0.00215 |
| 50 | Gbellmf | 3 | 0.00538 | 0.00223 |
| 50 | Trimf | 2 | 0.00737 | 0.00269 |
| 50 | Trimf | 3 | 0.00631 | 0.00194 |
| 50 | Gaussmf | 2 | 0.00544 | 0.00143 |
| 50 | Gaussmf | 3 | 0.00451 | 0.00170 |
Training results of CS:

| Colony size | Type of MFs | Number of MFs | Mean error | Std. error |
|---|---|---|---|---|
| 10 | Gbellmf | 2 | 0.00040 | 0.00018 |
| 10 | Gbellmf | 3 | 0.00028 | 0.00013 |
| 10 | Trimf | 2 | 0.00218 | 0.00129 |
| 10 | Trimf | 3 | 0.00184 | 0.00131 |
| 10 | Gaussmf | 2 | 0.00051 | 0.00030 |
| 10 | Gaussmf | 3 | 0.00025 | 0.00013 |
| 20 | Gbellmf | 2 | 0.00051 | 0.00017 |
| 20 | Gbellmf | 3 | 0.00034 | 0.00011 |
| 20 | Trimf | 2 | 0.00215 | 0.00122 |
| 20 | Trimf | 3 | 0.00186 | 0.00102 |
| 20 | Gaussmf | 2 | 0.00056 | 0.00029 |
| 20 | Gaussmf | 3 | 0.00034 | 0.00013 |
| 50 | Gbellmf | 2 | 0.00093 | 0.00030 |
| 50 | Gbellmf | 3 | 0.00068 | 0.00022 |
| 50 | Trimf | 2 | 0.00357 | 0.00127 |
| 50 | Trimf | 3 | 0.00336 | 0.00117 |
| 50 | Gaussmf | 2 | 0.00100 | 0.00041 |
| 50 | Gaussmf | 3 | 0.00071 | 0.00022 |
The effect of meta-heuristic algorithms on the results differs according to the MFs used.
| Algorithm | Type of MFs | Number of MFs | Population size | Train error | Test error |
|---|---|---|---|---|---|
| PSO | Gaussmf | 2 | 10 | 0.00058 | 0.00057 |
| HS | Gaussmf | 3 | 10 | 0.00379 | 0.00376 |
| BA | Gaussmf | 3 | 20 | 0.00624 | 0.00620 |
| ABC | Gaussmf | 3 | 10 | 0.00034 | 0.00033 |
| FPA | Gaussmf | 3 | 10 | 0.00056 | 0.00056 |
| DE | Gaussmf | 2 | 10 | 0.00032 | 0.00032 |
| CS | Gaussmf | 3 | 10 | 0.00025 | 0.00024 |
The success ranking of the algorithms varies according to colony size and the neuro-fuzzy structure chosen.
| Population size | Type of MFs | Process | PSO | HS | BA | ABC | FPA | DE | CS |
|---|---|---|---|---|---|---|---|---|---|
| N = 10 | Gbellmf | Training | 4 | 6 | 7 | 2 | 3 | 5 | 1 |
| N = 10 | Gbellmf | Test | 4 | 6 | 7 | 2 | 3 | 5 | 1 |
| N = 10 | Trimf | Training | 4 | 6 | 7 | 2 | 5 | 1 | 3 |
| N = 10 | Trimf | Test | 5 | 6 | 7 | 3 | 4 | 1 | 2 |
| N = 10 | Gaussmf | Training | 4 | 6 | 7 | 2 | 3 | 5 | 1 |
| N = 10 | Gaussmf | Test | 4 | 6 | 7 | 2 | 3 | 5 | 1 |
| N = 20 | Gbellmf | Training | 3 | 7 | 6 | 2 | 4 | 5 | 1 |
| N = 20 | Gbellmf | Test | 3 | 7 | 6 | 2 | 4 | 5 | 1 |
| N = 20 | Trimf | Training | 3 | 7 | 6 | 1 | 5 | 4 | 2 |
| N = 20 | Trimf | Test | 4 | 7 | 6 | 1 | 5 | 3 | 2 |
| N = 20 | Gaussmf | Training | 3 | 7 | 6 | 2 | 4 | 5 | 1 |
| N = 20 | Gaussmf | Test | 4 | 7 | 6 | 2 | 3 | 5 | 1 |
| N = 50 | Gbellmf | Training | 3 | 7 | 6 | 2 | 4 | 5 | 1 |
| N = 50 | Gbellmf | Test | 3 | 7 | 6 | 2 | 4 | 5 | 1 |
| N = 50 | Trimf | Training | 4 | 7 | 6 | 2 | 5 | 3 | 1 |
| N = 50 | Trimf | Test | 4 | 7 | 6 | 3 | 5 | 2 | 1 |
| N = 50 | Gaussmf | Training | 4 | 7 | 6 | 2 | 3 | 5 | 1 |
| N = 50 | Gaussmf | Test | 4 | 7 | 6 | 2 | 3 | 5 | 1 |
| Total rank | | | 67 | 120 | 114 | 36 | 70 | 74 | 23 |
One of the important performance criteria for algorithms is convergence speed. The convergence of the related meta-heuristic algorithms on MPPT is compared in
In neuro-fuzzy training, the type of MFs, the number of MFs, the number of rules, the colony size and the number of generations directly affect performance. Different MFs are used in the literature; within the scope of this study, the analyses are carried out on three popular MFs: Gbellmf, Trimf and Gaussmf. The effect of the MFs on MPPT is clearly observed, and the best results are found with Gaussmf. Increasing the number of MFs directly increases the number of parameters to be tuned in training. It also affects the solution quality, positively in some cases and negatively in others; the training algorithm itself is a decisive factor here. This is clearly observed in the neuro-fuzzy-based MPPT model.
The training algorithm has a major effect in neuro-fuzzy training, and its control parameters affect its performance. Colony size and the maximum number of generations are control parameters common to all of the algorithms. In neuro-fuzzy training for MPPT, increasing the colony size generally degrades performance; the best results are generally obtained with a colony size of 10. In other words, increasing the colony size often worsens solution quality. This can be analyzed with the CS and ABC algorithms, which obtain the best results. With 2 Gbellmf, the errors obtained by CS for n = 10, 20 and 50 are 0.00040, 0.00051 and 0.00093, respectively; for the ABC algorithm they are 0.00054, 0.00073 and 0.00147. As the colony size increases, the solution quality deteriorates, and a similar trend is mostly observed for the other membership functions and meta-heuristic algorithms. Within the limitations of the study, n = 10 is more effective for the problem considered.
In addition to solution quality, one of the important indicators in neuro-fuzzy training is convergence speed. Fast convergence makes it possible to reach effective solutions in a short time; an algorithm that reaches good solutions only after many generations is at a disadvantage. Achieving the best results quickly is important in energy and electronics applications, especially when integrating artificial intelligence techniques such as neuro-fuzzy models and artificial neural networks into embedded systems, where this can affect design choices. As with solution quality, convergence is affected by the membership functions, colony size and number of generations. Convergence is discussed within the limitations of the study; here, the convergence obtained with n = 10 and 3 Gaussmf is evaluated. Within this limitation, the algorithms with the best convergence speed are observed to be CS and ABC.
As is known, the training process takes place on known data, and predicted outputs are obtained as a result. An effective training process is expected to yield a low error; in other words, the real output used in training and the predicted output should be similar, and the more similar they are, the more acceptable the training process. That is why comparing real and predicted outputs is important. When the comparison chart is examined, the predicted outputs for roughly the first 100 and the last 50 samples deviate most from the real output, meaning the meta-heuristic algorithms have difficulty predicting these data. For the remaining data, the algorithms mostly achieve good predictions. The predicted outputs obtained with CS and ABC almost exactly overlap with the real output, which means these two algorithms produce acceptable results for MPPT estimation.
Temperature and solar radiation are given as inputs to the neuro-fuzzy model for MPPT, and the power value is obtained as the output. When the tables and figures are examined, results very close to the real output value are obtained, which indicates that the relevant meta-heuristic algorithms can be used for MPPT. The CS and ABC algorithms are more effective for precise calculations; however, it is possible to use the other training algorithms within a certain fault tolerance. The maximum number of generations can be increased to achieve better results. The CS and ABC algorithms are recommended for studies requiring rapid results.
This study proposes a neuro-fuzzy-based MPPT model trained with PSO, HS, BA, ABC, FPA, DE and CS. The proposed MPPT model takes temperature and solar radiation as inputs and produces a power value as the output. For effective MPPT, the neuro-fuzzy model is trained separately with each meta-heuristic algorithm, and the effects of the type of MF, number of MFs, colony size and maximum number of generations on performance are investigated. All results are compared with each other. The CS and ABC algorithms are observed to be more effective than the others in terms of both solution quality and convergence. For the proposed MPPT model, Gaussmf is generally more successful, and more effective solutions are usually found with a colony size of 10. In particular, the difference between the real output and the predicted output is low, which shows that the suggested approaches can be used in MPPT applications.
MPPT is one of the important topics in the literature. In future studies, variants of the related algorithms can be developed to obtain more effective results for MPPT, and the performance of other artificial intelligence techniques on MPPT can be analyzed. In addition, the results suggest that the proposed method can be applied to engineering problems other than MPPT.