Computer Modeling in Engineering & Sciences
DOI: 10.32604/cmes.2022.019198
ARTICLE
An Improved Gorilla Troops Optimizer Based on Lens Opposition-Based Learning and Adaptive β-Hill Climbing for Global Optimization
College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin, 150040, China
*Corresponding Author: Xue Sun. Email: xuesun@hit.edu.cn
Received: 08 September 2021; Accepted: 28 October 2021
Abstract: Gorilla troops optimizer (GTO) is a newly developed meta-heuristic algorithm, which is inspired by the collective lifestyle and social intelligence of gorillas. Similar to other metaheuristics, the convergence accuracy and stability of GTO will deteriorate when the optimization problems to be solved become more complex and flexible. To overcome these defects and achieve better performance, this paper proposes an improved gorilla troops optimizer (IGTO). First, Circle chaotic mapping is introduced to initialize the positions of gorillas, which facilitates the population diversity and establishes a good foundation for global search. Then, in order to avoid getting trapped in the local optimum, the lens opposition-based learning mechanism is adopted to expand the search range. Besides, a novel local search-based algorithm, namely adaptive β-hill climbing, is embedded into GTO to strengthen its local exploitation capability.
Keywords: Gorilla troops optimizer; circle chaotic mapping; lens opposition-based learning; adaptive β-hill climbing
Optimization refers to the process of searching for the optimal solution to a particular problem under certain constraints, so as to maximize benefits, performance and productivity [1–4]. With the help of optimization techniques, a large number of problems encountered in different applied disciplines can be solved in a more efficient, accurate, and real-time way [5, 6]. However, as global optimization problems become increasingly complex, conventional mathematical methods based on gradient information struggle with high dimensionality, deceptive suboptimal regions, and large-scale search ranges, and therefore cannot meet practical requirements [7, 8]. The development of more effective tools to settle these complex NP-hard problems thus remains a prominent research hotspot. Compared with traditional approaches, meta-heuristic algorithms (MAs) are often able to obtain the global best results on such problems, which is attributed to their simple structure, ease of implementation, and strong capability to bypass the local optimum [9, 10]. As a result, during the past few decades, MAs have developed rapidly and received considerable attention from scholars worldwide [11–13].
MAs find the optimal solution through the simulation of stochastic phenomena in nature. Based on their different design concepts, nature-inspired MAs may be generally classified into four categories [14–16]: evolution-based, physics-based, swarm-based, and human-based algorithms. Specifically, evolutionary algorithms emulate the laws of Darwinian natural selection, well-regarded examples of which include Genetic Algorithm (GA) [17], Differential Evolution (DE) [18], and Biogeography-Based Optimization (BBO) [19]. Physics-based algorithms simulate physical phenomena of the universe, such as Simulated Annealing (SA) [20], Multi-Verse Optimizer (MVO) [21], Thermal Exchange Optimization (TEO) [22], Atom Search Optimization (ASO) [23], and Equilibrium Optimizer (EO) [24]. Swarm-based algorithms primarily originate from the collective behaviours of social creatures. A remarkable embodiment of this category is Particle Swarm Optimization (PSO) [25], which was first proposed in 1995 based on the foraging behaviour of birds. Ant Colony Optimization (ACO) [26], Chicken Swarm Optimization (CSO) [27], Dragonfly Algorithm (DA) [28], Whale Optimization Algorithm (WOA) [29], Spotted Hyena Optimizer (SHO) [30], Emperor Penguin Optimizer (EPO) [31], Seagull Optimization Algorithm (SOA) [32], Harris Hawks Optimization (HHO) [33], Tunicate Swarm Algorithm (TSA) [34], Sooty Tern Optimization Algorithm (STOA) [35], Slime Mould Algorithm (SMA) [36], Rat Swarm Optimizer (RSO) [37], and Aquila Optimizer (AO) [38] are also essential members of this branch. The final category is inspired by human behaviours and learning habits, including Search Group Algorithm (SGA) [39], Soccer League Competition Algorithm (SLC) [40], and Teaching-Learning-Based Optimization (TLBO) [41].
With their own distinctive characteristics, these metaheuristics are commonly used in a variety of computing science fields, such as fault diagnosis [42], feature selection [43], engineering optimization [44], path planning [45], and parameter identification [46]. Nevertheless, it has been shown that most basic algorithms still suffer from slow convergence, poor accuracy, and a tendency to get trapped in the local optimum in several applications [7, 15]. The no free lunch (NFL) theorem indicates that there is no general algorithm that is appropriate for all optimization tasks [47]. Hence, encouraged by this theorem, many scholars have begun improving existing algorithms from different aspects to generate higher-quality solutions. Fan et al. [7] proposed an enhanced Equilibrium Optimizer (m-EO) based on opposition-based learning and novel updating mechanisms, which considerably improve its convergence speed and precision. Jia et al. [48] introduced a dynamic control parameter and mutation strategies into Harris Hawks Optimization, and then proposed a novel method called DHHO/M to segment satellite images. Ding et al. [49] constructed an improved Whale Optimization Algorithm (LNAWOA) for continuous optimization, in which a nonlinear convergence factor is utilized to speed up convergence. Besides, the authors in [50] employed Lévy flight and a crossover operation to further promote the robustness and global exploration capability of the native Salp Swarm Algorithm. Recently, there has also been an emerging trend of combining two promising MAs to overcome the performance drawbacks of a single algorithm. For instance, Abdel-Basset et al. [51] incorporated the Slime Mould Algorithm and Whale Optimization Algorithm into an efficient hybrid algorithm (HSMA_WOA) for image segmentation of chest X-rays to determine whether a person is infected with the COVID-19 virus. Fan et al. [9] proposed a new hybrid algorithm named ESSAWOA, which has been successfully applied to solve structural design problems. Moreover, Liu et al. [52] developed a hybrid imperialist competitive evolutionary algorithm and used it to find the best portfolio solutions. Dhiman [53] constructed a hybrid bio-inspired Emperor Penguin and Salp Swarm Algorithm (ESA) for numerical optimization that effectively deals with different constrained problems in engineering optimization.
In this study, we focus on a novel swarm intelligence algorithm, namely the Gorilla Troops Optimizer (GTO), which was proposed by Abdollahzadeh et al. in 2021 [54]. The inspiration of GTO originates from the collective lifestyle and social intelligence of gorillas. Preliminary research indicates that GTO has excellent performance on benchmark function optimization. Nevertheless, similar to other meta-heuristic algorithms, it still suffers from low optimization accuracy, premature convergence, and a propensity to fall into the local optimum when solving complex optimization problems [55]. These defects are mainly associated with the poor quality of the initial population, the lack of a proper balance between exploration and exploitation, and the low likelihood of large spatial leaps in the iteration process. Therefore, the NFL theorem motivates us to improve this latest swarm-inspired algorithm.
In view of the above discussion, to enhance GTO for global optimization, an improved gorilla troops optimizer known as IGTO is developed in this paper by incorporating three improvements. Firstly, Circle chaotic mapping is utilized to replace the random initialization mode of GTO for enriching population diversity. Secondly, a novel lens opposition-based learning mechanism is adopted to boost the exploration capability of the algorithm, while avoiding falling into the local optimum. Additionally, the adaptive β-hill climbing strategy is embedded into the algorithm to strengthen its local exploitation capability.
The remainder of this paper is arranged as follows: the basic GTO algorithm is briefly described in Section 2. In Section 3, a detailed description of three improved mechanisms and the proposed IGTO is presented. In Section 4, the experimental results of benchmark function optimization are reported and discussed. Besides, the applicability of the IGTO for resolving practical engineering problems and training multilayer perceptron is highlighted and analyzed in Sections 5 and 6. Finally, the conclusion of this work and potential future work directions are given in Section 7.
Gorilla troops optimizer is a recently proposed nature-inspired and gradient-free optimization algorithm, which emulates the gorillas’ lifestyle in the group [54]. Gorillas live in a group called a troop, composed of an adult male gorilla known as the silverback, multiple adult female gorillas, and their offspring. A silverback gorilla (shown in Fig. 1) is typically more than 12 years old and is named for the distinctive silver hair that develops on his back at puberty. The silverback is the head of the whole troop, making all decisions, mediating disputes, directing others to food resources, determining group movements, and being responsible for safety. Younger male gorillas aged 8 to 12 years are called blackbacks since they still lack silver-coloured back hair. They are subordinate to the silverback and act as backup defenders for the group. In general, both female and male gorillas tend to migrate from the group where they were born to a second, new group. Alternatively, mature male gorillas may separate from their original group and form troops of their own by attracting migrating females. However, some male gorillas choose to stay in the initial troop and continue to follow the silverback. If the silverback dies, these males might engage in a brutal battle for dominance of the group and mating with the adult females. Based on the above concept of gorilla group behaviour in nature, the specific mathematical model of the GTO algorithm is developed. As with other intelligent algorithms, GTO contains three main parts: initialization, global exploration, and local exploitation, which are explained thoroughly below.
Suppose there are N gorillas in the D-dimensional space. The position of the i-th gorilla in the space can be defined as Xi = (xi, 1, xi, 2, ⋯, xi, D), i = 1, 2, ⋯, N. Thus, the initialization process of gorilla populations can be described as:
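In a form consistent with the definitions given below, the initialization can be written as the following sketch:

```latex
X = lb + \mathrm{rand}(N, D) \times (ub - lb)
```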
where ub and lb are the upper and lower boundaries of the search range, respectively, and rand(N, D) denotes the matrix with N rows and D columns, where each element is a random number between 0 and 1.
Once gorillas depart from their original troop, they will move to diverse environments in nature that they might or might not have ever seen before. In the GTO algorithm, all gorillas are considered as candidate solutions, and the optimal solution in each optimization process is deemed to be the silverback. In order to accurately simulate such natural behaviour of migration, the position update equation of the gorilla for the exploration stage was designed by employing three different approaches including migrating towards unknown positions, migrating around familiar locations, and moving to other groups, as shown in Eq. (2):
where t indicates the current iteration number, X(t) denotes the current position vector of the individual gorilla, and GX(t + 1) refers to the candidate position of the search agent in the next iteration. Besides, r1, r2, r3 and r4 are all random values between 0 and 1. XA(t) and XB(t) are two randomly selected gorilla positions in the current population. p is a constant. Z denotes a row vector of the problem dimension whose elements are randomly generated in [−C, C]. The parameter C is calculated according to Eq. (3).
where cos(·) represents the cosine function, r5 is a random number in the range of 0 to 1, and Maxiter indicates the maximum iterations.
As for the parameter L in Eq. (2), it can be computed as follows:
where l is a random number in [−1, 1].
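For concreteness, a minimal NumPy sketch of the exploration update described above is given below. It follows the commonly cited form of the GTO exploration equations in [54]; the exact expressions for C, L, H and the branch conditions are not spelled out in the text here, so they should be treated as assumptions of this sketch.

```python
import numpy as np

def gto_exploration(X, i, t, max_iter, lb, ub, p=0.03, rng=None):
    """Candidate position GX(t+1) for gorilla i in the exploration phase (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    # Assumed form of Eq. (3): C decreases from roughly 2 to 0 over the run.
    C = (np.cos(2.0 * rng.random()) + 1.0) * (1.0 - t / max_iter)
    # Assumed form of Eq. (4): L = C * l with l drawn from [-1, 1].
    L = C * rng.uniform(-1.0, 1.0)
    Z = rng.uniform(-C, C, D)            # row vector with elements in [-C, C]
    r1, r2, r3 = rng.random(3)
    Xa = X[rng.integers(N)]              # randomly selected gorilla X_A(t)
    Xb = X[rng.integers(N)]              # randomly selected gorilla X_B(t)
    if rng.random() < p:                 # migrate towards an unknown position
        GX = lb + r1 * (ub - lb)
    elif rng.random() >= 0.5:            # move around a familiar (random) location
        GX = (r2 - C) * Xa + L * Z * X[i]
    else:                                # migrate towards another group
        GX = X[i] - L * (L * (X[i] - Xb) + r3 * (X[i] - Xb))
    return np.clip(GX, lb, ub)           # keep the candidate inside the bounds
```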
Upon the completion of the exploration phase, the fitness values of all newly generated candidate solutions GX(t + 1) are evaluated. Provided that GX is better than X, i.e., F(GX) < F(X), where F(·) denotes the fitness function for a certain problem, it will be retained and replace the original solution X(t). In addition, the optimal solution at this stage is selected as the silverback Xsilverback.
When the troop is newly established, the silverback is powerful and healthy, while the other male gorillas are still young. They obey all the decisions of the silverback in the search for diverse food resources and serve the silverback gorilla faithfully. Inevitably, however, the silverback grows old and eventually dies, and the younger blackbacks in the troop may then engage in a violent conflict with the other males over mating with the adult females and leadership of the group. As mentioned previously, two behaviours, following the silverback and competing for adult female gorillas, are modelled in the exploitation phase of GTO. The parameter W is introduced to control the switch between them. If the value C in Eq. (3) is greater than W, the first mechanism of following the silverback is selected. The mathematical expression is as follows:
where L is also evaluated using Eq. (4), Xsilverback represents the best solution obtained so far, and X(t) denotes the current position vector. In addition, the parameter M could be calculated according to Eq. (6):
where N refers to the population size, and Xi(t) denotes each position vector of the gorilla in the current iteration.
If C < W, the latter mechanism is chosen; in this case, the locations of the gorillas are updated as follows:
In Eq. (7), X(t) denotes the current position and Q stands for the impact force, which is computed using Eq. (8). In Eq. (8), r6 is a random value in the range of 0 to 1. Moreover, the coefficient A, used to mimic the violence intensity in the competition, is evaluated by Eq. (9), where ϕ denotes a constant and the values of E are assigned with Eq. (10). In Eq. (10), r7 is also a random number in [0, 1]. If r7 ≥ 0.5, E is defined as a 1-by-D array of random numbers drawn from the normal distribution, where D is the spatial dimension; otherwise, if r7 < 0.5, E equals a single random number that obeys the normal distribution.
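The two exploitation behaviours can likewise be sketched as follows. The expressions for M, Q, A and E follow the usual statement of GTO in [54]; parameter names mirror the text (W, ϕ, r6, r7), and the default values of W and ϕ as well as the exact forms should be regarded as assumptions.

```python
import numpy as np

def gto_exploitation(X, i, x_silverback, C, W=0.8, phi=3.0, rng=None):
    """Candidate position for gorilla i in the exploitation phase (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    L = C * rng.uniform(-1.0, 1.0)                      # same L as in exploration, Eq. (4)
    if C >= W:                                          # follow the silverback (text: C greater than W)
        g = 2.0 ** L
        mean_abs = np.abs(X.mean(axis=0)) + 1e-30       # small offset to avoid 0 ** negative powers
        M = (mean_abs ** g) ** (1.0 / g)                # assumed form of Eq. (6)
        GX = L * M * (X[i] - x_silverback) + X[i]       # Eq. (5)
    else:                                               # competition for adult females
        Q = 2.0 * rng.random() - 1.0                    # impact force, Eq. (8) with r6
        r7 = rng.random()
        E = rng.standard_normal(D) if r7 >= 0.5 else rng.standard_normal()  # Eq. (10)
        A = phi * E                                     # violence intensity, Eq. (9), phi constant
        GX = x_silverback - (x_silverback * Q - X[i] * Q) * A               # Eq. (7)
    return GX
```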
Similarly, at the end of the exploitation process, the fitness values of the newly generated candidate GX(t + 1) solution are also calculated. If F(GX) < F(X), the solution GX will be preserved and participate in the subsequent optimization, while the optimal solution within all individuals is defined as the silverback Xsilverback. The pseudo-code of GTO is shown in Algorithm 1.
In order to further improve the performance of the basic GTO algorithm for global optimization, a novel variant named IGTO is presented in this section. First, Circle chaotic mapping is adopted to initialize the gorilla population, with the aim of increasing the population diversity. Second, an effective lens opposition-based learning strategy is implemented to expand the search range and prevent the algorithm from falling into the local optimum. Finally, the modified algorithm is hybridized with the adaptive β-hill climbing strategy to strengthen its local exploitation capability.
It has been indicated that the quality of the initial population has a significant impact on the efficiency of most current metaheuristic algorithms [49, 56]. When applying the GTO algorithm to tackle an optimization problem, the population is usually initialized by means of a stochastic search. Although this method is easy to implement, it suffers from a lack of ergodicity and depends excessively on the probability distribution, which cannot guarantee that the initial population is uniformly distributed in the search space, thereby deteriorating the solution precision and convergence speed of the algorithm.
Chaotic mapping is a complex dynamic method found in nonlinear systems with the properties of unpredictability, randomness, and ergodicity. Compared with a random distribution, chaotic mapping allows the initial population to explore the solution space thoroughly with higher convergence speed and sensitivity, so it is widely adopted to improve the optimization performance of algorithms. Research results have proven that Circle chaotic mapping has better exploration performance than the commonly used Logistic and Tent chaotic mappings [57]. Consequently, in order to boost the population diversity and take full advantage of the information in the solution space, Circle chaotic mapping is introduced in this study to improve the initialization mode of the basic GTO. The mathematical expression of Circle chaotic mapping is as follows:
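The Circle map is commonly written in the following form (stated here as a sketch; the symbols follow the description below):

```latex
x_{k+1} = \operatorname{mod}\!\left(x_k + b - \frac{a}{2\pi}\sin\left(2\pi x_k\right),\; 1\right)
```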
where a = 0.5 and b = 0.2. With the same parameter settings, the random initialization mechanism and Circle mapping are each executed independently 300 times, and the obtained results are shown in Fig. 2. It can be seen from the figure that the sequence generated by Circle chaotic mapping traverses the feasible domain [0, 1] more widely and is more homogeneously distributed than that of the random search. Hence, the proposed algorithm has a more robust global exploration ability after incorporating Circle chaotic mapping.
The pseudo-code for initializing the population using Circle chaotic mapping is outlined in Algorithm 2.
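A minimal NumPy sketch of this chaotic initialization (in the spirit of Algorithm 2) is given below; mapping the chaotic values into the search range is assumed to follow the same linear scaling as the random initialization.

```python
import numpy as np

def circle_chaotic_init(N, D, lb, ub, a=0.5, b=0.2, rng=None):
    """Initialize N gorillas in D dimensions using the Circle chaotic map (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    chaos = np.empty((N, D))
    x = rng.random(D)                        # random starting values of the chaotic sequence
    for k in range(N):
        # Circle map: x_{k+1} = mod(x_k + b - (a / (2*pi)) * sin(2*pi*x_k), 1)
        x = np.mod(x + b - (a / (2.0 * np.pi)) * np.sin(2.0 * np.pi * x), 1.0)
        chaos[k] = x
    # Scale the chaotic values from [0, 1] into the search range [lb, ub].
    return lb + chaos * (ub - lb)

# Example: population = circle_chaotic_init(30, 10, lb=-100.0, ub=100.0)
```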
3.2 Lens Opposition-Based Learning
As a novel technique in the area of intelligent computing, lens opposition-based learning (LOBL), which combines traditional opposition-based learning (OBL) [58] with the principle of convex lens imaging, has been successfully employed in different intelligent algorithm optimizations [59, 60]. Its basic idea is to simultaneously calculate and compare the candidate solution and the corresponding reverse solution, and then choose the superior one to proceed to the next iteration. As theoretically demonstrated by Fan et al. [9], LOBL can produce a solution close to the global optimum with higher probability. Therefore, in this study, LOBL is utilized to update the candidate solutions during the exploration phase, in order to enlarge the search range and help the algorithm escape from the local optimum. Several concepts of LOBL are represented mathematically as follows.
Lens imaging is a physical optics phenomenon in which an object located more than two focal lengths away from a convex lens produces a smaller, inverted image on the opposite side of the lens. Taking the one-dimensional search space in Fig. 3 for illustration, there is a convex lens with focal length f placed at the base point O (the midpoint of the search range [lb, ub]). Besides, an object p with height h is placed on the coordinate axis, and its projection is GX (the candidate solution). The distance u from the object to the lens is greater than 2f. Through the lens imaging operation, an inverted image p′ of height h* is obtained, which is projected as GX* (the reverse solution) on the x-axis. In accordance with the rules of lens imaging and similar triangles, the geometrical relationship obtained from Fig. 3 can be expressed as:
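Using the base point O = (lb + ub)/2, this relation can be written as the following sketch, reconstructed from the similar-triangle rule described above:

```latex
\frac{\dfrac{lb+ub}{2} - GX}{GX^{*} - \dfrac{lb+ub}{2}} = \frac{h}{h^{*}} \tag{12}
```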
Here, letting the scale factor n = h/h*, the reverse solution GX* is calculated by transforming Eq. (12):
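Under these definitions, the reverse solution takes the following form (a sketch consistent with Eq. (12)):

```latex
GX^{*} = \frac{lb+ub}{2} + \frac{lb+ub}{2n} - \frac{GX}{n} \tag{13}
```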
It is obvious that when n = 1, Eq. (13) can be simplified as the general formulation of OBL strategy:
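That is, with n = 1 the reverse solution reduces to the familiar OBL form:

```latex
GX^{*} = lb + ub - GX \tag{14}
```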
Thus, the opposition-based learning strategy can be regarded as a special case of LOBL. In comparison with OBL, LOBL can acquire dynamic reverse solutions and a wider search range by tuning the scale factor n.
Generally, Eq. (13) could be extended into D-dimensional space:
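A per-dimension form consistent with Eq. (13) is the following sketch:

```latex
GX_{j}^{*} = \frac{lb_j + ub_j}{2} + \frac{lb_j + ub_j}{2n} - \frac{GX_j}{n}, \qquad j = 1, 2, \ldots, D
```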
where lbj and ubj are the lower and upper limits of the j-th dimension, respectively, j = 1, 2, ⋯, D, and GXj and GX*j denote the j-th components of the candidate solution and its reverse solution, respectively.
When a new reverse solution is generated at a gorilla position, there is no guarantee that it is always better than the current candidate solution. Therefore, it is necessary to evaluate the fitness values of the reverse solution and the candidate solution, and the fitter one is selected to continue participating in the subsequent exploitation phase, which is described as follows:
where GX* indicates the reverse solution generated by LOBL, GX is the current candidate solution, GXnext is the selected gorilla to continue the subsequent position updating, and F(·) denotes the fitness function of the problem. The pseudo-code of lens opposition-based learning mechanism is shown in Algorithm 3.
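The complete LOBL step can be sketched as follows; the fitness function and the bounds are placeholders, and the scale factor defaults to the value n = 12000 used later in the parameter settings.

```python
import numpy as np

def lens_obl(gx, fitness, lb, ub, n=12000.0):
    """Lens opposition-based learning: keep the better of GX and its reverse solution (sketch)."""
    # Reverse solution per dimension: GX*_j = (lb_j + ub_j)/2 + (lb_j + ub_j)/(2n) - GX_j/n.
    gx_rev = (lb + ub) / 2.0 + (lb + ub) / (2.0 * n) - gx / n
    gx_rev = np.clip(gx_rev, lb, ub)          # keep the reverse solution feasible
    # Greedy selection: the fitter of the two continues to the exploitation phase.
    return gx_rev if fitness(gx_rev) < fitness(gx) else gx

# Example on the sphere function:
# sphere = lambda v: float(np.sum(v ** 2))
# gx_next = lens_obl(np.array([3.0, -4.0]), sphere,
#                    lb=np.array([-10.0, -10.0]), ub=np.array([10.0, 10.0]))
```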
Adaptive β-hill climbing (AβHC) [61] is a recently proposed local search algorithm that improves the basic β-hill climbing method by adaptively tuning its control parameters during the search. In this work, AβHC is embedded into the improved algorithm to further strengthen the local exploitation capability around promising solutions.
For the given current solution Xi = (xi,1, xi,2, …, xi,D), AβHC first generates a neighbouring solution by perturbing its decision variables with the N-operator as follows:
where U(0, 1) denotes a random number in the interval [0, 1], xi,j denotes the value of the decision variable in the j-th dimension, t denotes the current iteration, Maxiter refers to the maximum number of iterations, and K is a constant used to control the adaptive bandwidth of the operator.
Immediately afterwards, the decision variables of the new solution are further adjusted by the β-operator as follows:
where r8 is a random number in the interval [0, 1], xi,r denotes a random value chosen from the feasible range of that particular dimension of the problem, and βmax and βmin denote the maximum and minimum values of the probability β ∈ [0, 1], respectively. If the generated solution is better than the current solution Xi, it replaces Xi; otherwise, the current solution is retained. The pseudo-code of AβHC is outlined in Algorithm 4.
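A minimal Python sketch of one AβHC step is given below. The adaptive bandwidth and β schedules are written as simple iteration-dependent forms consistent with the parameters K, βmax and βmin mentioned in the text; the exact schedules used in [61] may differ, so they should be read as assumptions.

```python
import numpy as np

def abhc_step(x, fitness, lb, ub, t, max_iter, K=30, beta_max=1.0, beta_min=0.1, rng=None):
    """One adaptive beta-hill climbing step applied to solution x (assumed schedules)."""
    rng = np.random.default_rng() if rng is None else rng
    D = x.size
    # Assumed adaptive bandwidth: shrinks with the iteration count, controlled by K.
    bw = 1.0 - t ** (1.0 / K) / max_iter ** (1.0 / K)
    # N-operator: perturb each decision variable within the adaptive bandwidth.
    x_new = x + rng.uniform(-1.0, 1.0, D) * bw * (ub - lb)
    # Assumed adaptive beta: moves from beta_min towards beta_max over the run.
    beta = beta_min + (beta_max - beta_min) * t / max_iter
    # Beta-operator: with probability beta, reset a variable to a random feasible value.
    mask = rng.random(D) < beta
    x_new = np.where(mask, rng.uniform(lb, ub, D), x_new)
    x_new = np.clip(x_new, lb, ub)            # keep the new solution inside the bounds
    # Greedy selection: keep the new solution only if it improves the fitness.
    return x_new if fitness(x_new) < fitness(x) else x

# Example usage on the sphere function (hypothetical setup):
# sphere = lambda v: float(np.sum(v ** 2))
# x = abhc_step(np.ones(5), sphere, lb=-10 * np.ones(5), ub=10 * np.ones(5), t=10, max_iter=500)
```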
Based on the improved mechanisms mentioned in Subsections 3.1∼3.3 above, the flowchart of the proposed IGTO algorithm for global optimization problems is illustrated in Fig. 4. Moreover, Algorithm 5 outlines the pseudo-code of IGTO.
4 Experimental Results and Discussion
In this section, a total of 19 benchmark functions from the literature [64] are selected for comparison experiments to comprehensively evaluate the feasibility and effectiveness of the proposed IGTO algorithm. First, the definitions of these benchmark functions, the parameter settings, and the measurements of algorithm performance are presented. Afterwards, the basic GTO and five other advanced meta-heuristic algorithms, namely GWO [65], WOA [29], SSA [66], HHO [33], and SMA [36], are employed as competitors to validate the improvements and superiority of the proposed algorithm in terms of solution accuracy, boxplots, convergence behavior, average computation time, and statistical results. Finally, the scalability of IGTO is investigated by solving high-dimensional problems. All the simulation experiments are implemented in MATLAB R2014b on the Microsoft Windows 7 system, and the hardware platform of the computer is configured with an Intel(R) Core(TM) i5-7400 CPU @ 3.00 GHz and 8 GB of RAM.
The benchmark functions used in this paper can be divided into three categories: unimodal (UM), multimodal (MM), and fixed-dimension multimodal (FM). The unimodal functions (F1∼F7) contain only one global minimum and are frequently used to assess the exploitation capability and convergence rate of algorithms. The multimodal functions (F8∼F13), consisting of several local minima and a single global optimum in the search space, are well suited for assessing the algorithm’s capability to explore and escape from local optima. The fixed-dimension multimodal functions (F14∼F19) are combinations of the previous two forms of functions, but with lower dimensions, and they are designed to evaluate the stability of the algorithm. Table 1 shows the formulations, spatial dimensions, search ranges, and theoretical minima of these functions. In addition, 3D images of the search space of several typical functions are displayed in Fig. 5.
In order to estimate the performance of the improved IGTO algorithm in solving global optimization problems, we select the basic GTO [54] and five state-of-the-art algorithms, namely GWO [65], WOA [29], SSA [66], HHO [33], and SMA [36]. For fair comparisons, the maximum iteration and population size of seven algorithms are set as 500 and 30, respectively. As per the references [9, 61] and extensive trials, in the proposed IGTO algorithm, we set the scale factor n = 12000, K = 30, β max = 1 and β min = 0.1. Besides, all parameter values of the remaining six algorithms are set the same as those recommended in the original papers, as shown in Table 2. These parameter settings assure the fairness of the comparison experiments by allowing each algorithm to make the most of its optimization property. All algorithms are executed independently 30 times within each benchmark function to decrease accidental error.
4.3 Evaluation Criteria of Performance
In this study, two metrics are used to measure the performance of the proposed algorithm, namely the average fitness value (Avg) and the standard deviation (Std) of the optimization results. The average fitness value intuitively characterizes the convergence accuracy and the search capability of the algorithm, and it is calculated as follows:
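One form consistent with the definitions below is:

```latex
Avg = \frac{1}{n}\sum_{i=1}^{n} S_i
```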
where n denotes the number of independent runs of the algorithm, and Si indicates the result obtained in the i-th run.
The standard deviation reflects the degree of deviation of the experimental results from the average value, and it is given as follows:
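A common form (the text does not specify whether n or n − 1 appears in the denominator) is:

```latex
Std = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i - Avg\right)^{2}}
```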
4.4 Comparison of IGTO with Other Algorithms
In this subsection, to examine the performance of the proposed algorithm, IGTO is compared with the basic GTO and five other advanced algorithms according to the benchmark function optimization results. For fair comparisons, the maximum iteration and population size of the seven algorithms are set as 500 and 30, respectively, and the remaining parameter settings have been given in Subsection 4.2 above. Meanwhile, each algorithm runs 30 times independently on the test functions F1∼F19 in Table 1 to decrease random error. The average fitness value (Avg) and standard deviation (Std) obtained by each algorithm are reported in Table 3. In general, the closer the average fitness (Avg) is to the theoretical minimum, the higher the convergence accuracy of the algorithm; the smaller the standard deviation (Std), the better the stability and robustness of the algorithm.
As seen from Table 3, when solving the unimodal benchmark functions (F1∼F7), IGTO obtains the global optimal minima with regard to the average fitness on functions F1∼F4. For function F5, the convergence accuracy of IGTO is greatly improved over its predecessor GTO, and it is the winner among all algorithms. For test function F6, the results of IGTO are similar to those of SSA and GTO, yet still marginally better than them. Besides, IGTO shows superior results on function F7 in contrast to the other optimizers. In terms of standard deviation, the proposed IGTO has excellent performance on all test problems. Given the properties of the unimodal functions, these results show that IGTO has outstanding search precision and local exploitation potential.
The multimodal benchmark functions (F8∼F13) have many local minima in the search space, so these functions are usually employed to analyze the algorithm’s potential to avoid local optima. For functions F8, F12 and F13, the average fitness and standard deviation of IGTO are obviously better than those of the rest of the algorithms. For function F9, IGTO obtains the same global optimal minimum as WOA, HHO, SMA, and GTO. Moreover, HHO, SMA, GTO and IGTO obtain the same performance on functions F10 and F11. This validates that the proposed IGTO can effectively bypass the local optimum and find high-quality solutions.
The fixed-dimension multimodal functions (F14∼F19) contain a few local optima and are designed to evaluate the stability of the algorithm in switching between the exploration and exploitation processes. As far as the average fitness values are concerned, IGTO performs the same as SMA and GTO on function F14, albeit better than the others. For functions F15, F18 and F19, IGTO generates results superior to all competitors. For function F16, the performance of the seven optimizers is identical. Although the result of the proposed IGTO is worse than that of HHO on function F17, it still ranks second and shows a clear improvement over the basic GTO. On the other hand, IGTO achieves the optimal standard deviation on all test cases. This proves that our proposed IGTO is able to keep a better balance between exploration and exploitation.
In view of the above, it can be concluded that the proposed multi-strategy IGTO algorithm exhibits strong global search capability and is superior to the other six intelligent algorithms in the comparison. Benefiting from the hybrid AβHC strategy, the local exploitation capability and solution precision of the algorithm are also notably enhanced.
In order to better illustrate the stability of the proposed algorithm, the corresponding boxplots of functions 1, 2, 3, 5 and 6 from the UM benchmark functions, functions 9, 10 and 12 from the MM benchmark functions, and function 15 from the FM benchmark functions are shown in Fig. 6. From Fig. 6, it can be seen that the IGTO algorithm shows remarkable consistency on most problems with respect to the median, maximum and minimum values compared with the others. In addition, IGTO generates no outliers during the iterations and has a more concentrated distribution of convergence values, thereby verifying the strong robustness and superiority of the improved IGTO.
Fig. 7 visualizes the convergence curves of the different algorithms on nine representative benchmark functions. As before, functions 1, 2, 3, 5 and 6 are unimodal, functions 9, 10 and 12 are multimodal, and function 15 belongs to the fixed-dimension multimodal category. From Fig. 7, it is clear that the convergence speed of IGTO is the fastest among all algorithms on functions F1∼F3, and the proposed algorithm rapidly approaches the global optimal solution at the beginning of the search process. For functions F5 and F6, IGTO has a similar trend to HHO and GTO in the initial stage, but its efficiency is fully demonstrated in the late iterations, and eventually the proposed IGTO obtains the best result. For function F9, IGTO maintains a superior convergence rate and obtains the global optimum within 20 iterations. Although the convergence accuracy of IGTO is the same as that of HHO, SMA and the basic GTO on function F10, it converges more quickly. For function F12, the proposed algorithm is still the champion compared with the remaining six optimizers in terms of final accuracy and speed. Besides, the convergence curves of the seven algorithms are quite close on the fixed-dimension multimodal function F15; however, the performance of IGTO is slightly better than the others.
On the basis of the boxplot analysis and convergence curves, IGTO achieves a considerable enhancement in convergence speed and stability compared with the basic GTO, which is attributed to the good foundation for global search laid by the Circle chaotic mapping and the LOBL strategy.
The average computation time spent by each algorithm on test functions F1∼F19 is reported in Table 4. For a more intuitive conclusion, the total runtime of each method is calculated and ranked as follows: SMA (14.118 s) > IGTO (8.073 s) > GTO (6.690 s) > HHO (6.568 s) > GWO (4.912 s) > SSA (4.897 s) > WOA (4.065 s). It can be found that IGTO consumes more computation time than GTO and ranks second highest in total runtime, behind only SMA. Compared with the basic GTO algorithm, the introduction of the three improved strategies increases the number of steps of the algorithm and thus requires extra time; the relatively high computation cost of the GTO algorithm itself is also a primary cause. However, IGTO takes less time than SMA on most test functions. In other words, a little more runtime is sacrificed to improve the solution accuracy. On the whole, the proposed algorithm is acceptable in view of its optimal search performance, and its main limitation remains the need to reduce the computational time.
Moreover, since the average fitness (Avg) and standard deviation (Std) over 30 runs do not compare the results of the individual runs, it is often not sufficiently accurate to evaluate the performance of an algorithm based only on the mean and standard deviation. To verify the robustness of the improved algorithm and the fairness of the comparison, the Wilcoxon rank-sum test [67], a nonparametric statistical test approach, is used to estimate the significant differences between IGTO and the other algorithms. For the Wilcoxon rank-sum test, the significance level is set to 0.05, and the acquired p-values are listed in Table 5. In this table, the sign “+” denotes that IGTO performs significantly better than the corresponding algorithm, “=” denotes that the performance of IGTO is analogous to that of the compared algorithm, “-” denotes that IGTO is poorer than the compared one, and the last line counts the total number of each sign. It can be seen from the table that, for the 19 benchmark test functions, the proposed IGTO algorithm outperforms GWO on 19 functions, WOA and SSA on 18 functions, HHO on 16 functions, SMA on 14 functions, and the basic GTO on 13 functions, respectively. Therefore, according to the statistical analysis, our proposed IGTO provides a significant enhancement over the other algorithms and is the best optimizer among them.
Lastly, the mean absolute error (MAE) of all algorithms on 19 benchmark problems is evaluated and ranked. MAE is also a useful statistical tool to reveal the gap between the experimental results and the theoretical values [1], and its mathematical expression is as follows:
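A standard form consistent with the definitions in the text is:

```latex
MAE = \frac{1}{N}\sum_{i=1}^{N}\left| o_i - a_i \right| \tag{23}
```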
In Eq. (23), N is the number of benchmark functions used, oi represents the desired value of each test function, and ai is the actual value obtained. The MAE and relative rankings of each algorithm are reported in Table 6. From this table, it is obvious that IGTO outperforms all competitors and the MAE of IGTO is reduced by 2 orders of magnitude compared to GTO, which once again demonstrates the superiority of the proposed algorithm statistically.
Scalability reflects the execution efficiency of an algorithm in spaces of different dimensions. As the dimension of the optimization problem increases, most current intelligent algorithms are highly prone to become ineffective and suffer from the curse of dimensionality. To investigate the scalability of IGTO, the proposed algorithm is utilized to optimize the 13 benchmark functions F1∼F13 in Table 1 with higher dimensions (i.e., 50, 100 and 200 dimensions). The average fitness values (Avg) obtained by the basic GTO and IGTO on each function are reported in Table 7. From the data in the table, it is clear that the convergence accuracy of both algorithms gradually decreases as the dimension increases, which is due to the fact that the larger the dimension, the more variables an algorithm needs to optimize. However, the experimental results of IGTO are consistently superior to those of GTO on functions F1∼F8, F12 and F13, and the disparity in optimization performance between them becomes increasingly obvious as the dimension increases. Besides, it is notable that the proposed IGTO can always obtain the theoretical optimal solution on functions F1∼F4. For functions F9∼F11, the two algorithms obtain the same performance.
The overall results fully prove that IGTO is not only able to solve low-dimensional functions with ease, but also maintains good scalability on high-dimensional functions. That is to say, the performance of IGTO does not deteriorate significantly when tackling high-dimensional problems, and it can still provide high-quality solutions effectively with strong exploitation and exploration capabilities.
5 IGTO for Solving Engineering Design Problems
In this section, the applicability of the proposed IGTO is tested by solving four practical engineering design problems, including the pressure vessel design problem, the gear train design problem, the welded beam design problem and the rolling element bearing design problem. For the sake of convenience, the death penalty function [68] is used here to handle infeasible solutions that violate the constraints. IGTO runs independently 30 times for each problem, with the maximum number of iterations and the population size set to 500 and 30, respectively. At last, the obtained results are compared against those of different advanced meta-heuristic algorithms in the literature, and the corresponding analysis is presented.
The pressure vessel design problem was first proposed by Kannan et al. [69], the purpose of which is to minimize the overall fabrication cost of a pressure vessel. There are four decision variables involved in this optimum design: Ts (z1, thickness of the shell), Th (z2, thickness of the head), R (z3, inner radius), and L (z4, length of the cylindrical portion). Fig. 8 illustrates the structure of the pressure vessel used in this study, and its related mathematical model can be defined as follows:
consider
minimize
subject to
variable range: 0 ≤ z1≤ 99, 0 ≤ z2≤ 99, 10 ≤ z3≤ 200, 10 ≤ z4≤ 200
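For completeness, the classical formulation of this problem, as commonly stated in the literature following [69], is sketched below; the constants are the standard ones and may be typeset slightly differently in the original model.

```latex
\begin{aligned}
\text{minimize} \quad & f(\mathbf{z}) = 0.6224\, z_1 z_3 z_4 + 1.7781\, z_2 z_3^{2} + 3.1661\, z_1^{2} z_4 + 19.84\, z_1^{2} z_3,\\
\text{subject to} \quad & g_1(\mathbf{z}) = -z_1 + 0.0193\, z_3 \le 0,\\
& g_2(\mathbf{z}) = -z_2 + 0.00954\, z_3 \le 0,\\
& g_3(\mathbf{z}) = -\pi z_3^{2} z_4 - \tfrac{4}{3}\pi z_3^{3} + 1{,}296{,}000 \le 0,\\
& g_4(\mathbf{z}) = z_4 - 240 \le 0.
\end{aligned}
```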
The experimental results of IGTO for this problem are compared against those obtained by GTO, SMA [36], HHO [33], AOA [4], SSA, WOA [29], and GWO [65], as shown in Table 8. It is shown that IGTO provides the best design among all algorithms, and the minimum fabrication cost it obtains is the lowest of all the compared methods.
This is a classical mechanical engineering problem developed by Sandgren [70]. Fig. 9 shows the schematic view of the gear train. As its name suggests, the ultimate aim of this problem is to find four optimal parameters (the numbers of teeth of the gears) that minimize the gear ratio, defined as the ratio of the angular velocity of the output shaft to that of the input shaft. The corresponding mathematical model is as follows:
consider
minimize
variable range: 12 ≤ z1, z2, z3, z4≤ 60
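A commonly used formulation of this problem, following [70], is sketched below; z1–z4 denote the (integer) numbers of teeth of the four gears, and the particular pairing of the teeth numbers in the ratio is an assumption of this sketch.

```latex
\begin{aligned}
\text{minimize} \quad & f(\mathbf{z}) = \left(\frac{1}{6.931} - \frac{z_2\, z_3}{z_1\, z_4}\right)^{2},\\
\text{subject to} \quad & 12 \le z_1, z_2, z_3, z_4 \le 60, \qquad z_i \in \mathbb{Z}.
\end{aligned}
```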
Table 9 reports the detailed results of comparative experiments for the gear train design problem. From the data in Table 9, it is apparent that the proposed IGTO is better than other optimizers in handling this case and effectively finds a brilliant solution.
As its name implies, the purpose of this welded beam design problem is to reduce the total manufacturing cost as much as possible. This optimum design contains four decision parameters: the width of weld (h), the length of the clamped bar (l), the height of the bar (t), and the bar thickness (b). Besides, in the optimization process, several constraints should not be contravened such as bending stress in the beam, buckling load, shear stress and end deflection. The schematic view of this issue is shown in Fig. 10, and the related mathematical formulation is illustrated as follows: consider
minimize
subject to
variable range: 0.1 ≤ z1, z4 ≤ 2, 0.1 ≤ z2, z3 ≤ 10
The optimal results of IGTO vs. those achieved by GTO, MVO [21], SSA [66], HHO [33], WOA [29], MTDE [71], and ESSAWOA [9] are reported in Table 10. As can be seen from Table 10, the proposed IGTO provides a better design than the majority of the other algorithms, and the minimum manufacturing cost it obtains further confirms its competitiveness on this problem.
5.4 Rolling Element Bearing Design
Unlike the previous problems, the final objective of this problem is to maximize the dynamic load carrying capacity of a rolling element bearing. The structure of a rolling element bearing is illustrated in Fig. 11. A total of ten structural variables are involved in this optimization problem, namely: pitch diameter (Dm), ball diameter (Db), the number of balls (Z), the inner and outer raceway curvature radius coefficients (fi and fo), Kdmin, Kdmax, δ, e, and ζ. Mathematically, the description of this problem is given as follows:
maximize
subject to
The optimal variables and fitness values obtained by applying different intelligent algorithms are listed in Table 11. Compared with the other well-known optimizers, the proposed IGTO yields the solution of superior quality.
In summary, from the observed results it is reasonable to believe that the proposed IGTO is equally feasible and competitive on practical engineering design problems. In addition, its excellent performance in resolving engineering design problems indicates that IGTO can be widely applied to real-world optimization problems as well.
6 IGTO for Training Multilayer Perceptron
Multilayer perceptron (MLP), as one of the most extensively used artificial neural network models [74], has been successfully implemented for solving various real-world issues such as pattern classification [75] and regression analysis [76]. The MLP is characterized by multiple layers of perceptrons, in which there is at least one hidden layer in addition to one input layer and one output layer. The information is received as input on one side of the MLP, and the output is supplied from the other side via one-way transmission between nodes in different layers. For the MLP, the sample data space is mostly high-dimensional and multimodal, and the data may also be disturbed by noise, redundancy and loss. Thus, the main purpose of training the MLP is to update the two crucial sets of parameters that dominate the final output, the weights W and biases θ, which is a very challenging optimization problem [15, 77]. In this section, the Balloon and Breast cancer datasets from the University of California at Irvine (UCI) repository [78] are utilized for examining the applicability of the proposed IGTO algorithm for training the MLP. Table 12 presents the specifications of these datasets.
In order to measure the performance of the algorithms in training the MLP, the average mean square error (MSE) criterion is adopted, which is calculated as follows:
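A standard form of this criterion, consistent with the symbol definitions that follow, is:

```latex
\overline{MSE} = \frac{1}{q}\sum_{k=1}^{q}\sum_{i=1}^{m}\left(o_i^{k} - d_i^{k}\right)^{2} \tag{24}
```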
In Eq. (24), q represents the number of training samples, m is the number of outputs, and o_i^k and d_i^k denote the actual output and the desired output of the i-th output node when the k-th training sample is used, respectively.
Besides the optimization algorithms shown in Table 2, the Tunicate Swarm Algorithm (TSA) [34], Sooty Tern Optimization Algorithm (STOA) [35], and Seagull Optimization Algorithm (SOA) [32] are also taken into account in this experiment. The variables are assumed to be in the range of [−10, 10]. Each optimizer is executed independently 10 times, with the maximum number of iterations and the population size set to 250 and 30, respectively. Meanwhile, the parameters of all algorithms are consistent with the original literature. With regard to the structure of the MLP, the number of nodes in the hidden layer is equal to 2n + 1 as recommended in [74], where n denotes the number of attributes in the dataset. Fig. 12 illustrates an example of the process of training the MLP by IGTO.
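To make the training setup concrete, the sketch below shows how a candidate solution produced by the optimizer can be decoded into the weights and biases of a single-hidden-layer MLP and scored by the average MSE; the sigmoid activation and the flat encoding order are assumptions of this sketch, not taken from the paper.

```python
import numpy as np

def mlp_mse_fitness(vec, X, Y, n_hidden):
    """Decode a flat candidate vector into MLP weights/biases and return the average MSE."""
    n_in, n_out = X.shape[1], Y.shape[1]
    # Assumed encoding order: input->hidden weights, hidden biases, hidden->output weights, output biases.
    idx = 0
    W1 = vec[idx:idx + n_in * n_hidden].reshape(n_in, n_hidden); idx += n_in * n_hidden
    b1 = vec[idx:idx + n_hidden]; idx += n_hidden
    W2 = vec[idx:idx + n_hidden * n_out].reshape(n_hidden, n_out); idx += n_hidden * n_out
    b2 = vec[idx:idx + n_out]
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sigmoid(X @ W1 + b1)                               # hidden-layer outputs
    O = sigmoid(H @ W2 + b2)                               # network outputs
    return float(np.mean(np.sum((O - Y) ** 2, axis=1)))    # average MSE over all training samples

# Search-space dimension handed to the optimizer for a dataset with n attributes,
# n_out outputs and 2n + 1 hidden nodes (as recommended in [74]):
# dim = n * (2 * n + 1) + (2 * n + 1) + (2 * n + 1) * n_out + n_out
```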
The average MSE results obtained by each algorithm on the Balloon and Breast cancer datasets are then compared.
All these results demonstrate that the proposed algorithm has a stable and consistent ability to escape from the local optimum and eventually find the global minima in the complex search space. Besides, this case also highlights the applicability of the IGTO algorithm: IGTO is capable of finding more suitable values of the crucial parameters of the MLP, thus making it perform better.
In this paper, a novel improved version of the basic gorilla troops optimizer named IGTO was put forward to solve complex global optimization problems. First, Circle chaotic mapping was introduced to enhance the diversity of the initial gorilla population. Second, the lens opposition-based learning strategy was adopted to expand the search domain, thus preventing the algorithm from falling into local optima. Moreover, the adaptive β-hill climbing strategy was hybridized with the algorithm to strengthen its local exploitation capability. The experimental results on 19 benchmark functions, four engineering design problems, and multilayer perceptron training demonstrate the effectiveness and applicability of the proposed IGTO.
Nevertheless, as mentioned in the experiment section above, IGTO still has the main limitation of high computation time, which needs to be improved. It is believed that this situation could be mitigated via the introduction of several parallel mechanisms, e.g., master-slave model, cell model and coordination strategy.
In future work, we will aim to further enhance the solution accuracy of IGTO while reducing its overall computational consumption. We also plan to further investigate the effects of the lens opposition-based learning and adaptive β-hill climbing strategies when they are incorporated into other meta-heuristic algorithms.
Acknowledgement: The authors are grateful to the editor and reviewers for their constructive comments and suggestions, which have improved the presentation.
Funding Statement: This work is financially supported by the Fundamental Research Funds for the Central Universities under Grant 2572014BB06.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1. Zhang, X., Zhao, K., Niu, Y. (2020). Improved harris hawks optimization based on adaptive cooperative foraging and dispersed foraging strategies. IEEE Access, 8, 160297–160314. DOI 10.1109/access.2020.3013332. [Google Scholar] [CrossRef]
2. Birogul, S. (2019). Hybrid harris hawk optimization based on differential evolution (HHODE) algorithm for optimal power flow problem. IEEE Access, 7, 184468–184488. DOI 10.1109/access.2019.2958279. [Google Scholar] [CrossRef]
3. Hussain, K., Salleh, M. N. M., Cheng, S., Shi, Y. H. (2019). Metaheuristic research: A comprehensive survey. Artificial Intelligence Review, 52(4), 2191–2233. DOI 10.1007/s10462-017-9605-z. [Google Scholar] [CrossRef]
4. Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M., Gandomi, A. H. (2021). The arithmetic optimization algorithm. Computer Methods in Applied Mechanics and Engineering, 376, 113609. DOI 10.1016/j.cma.2020.113609. [Google Scholar] [CrossRef]
5. Liang, J., Xu, W., Yue, C., Yu, K., Song, H. et al. (2019). Multimodal multiobjective optimization with differential evolution. Swarm and Evolutionary Computation, 44, 1028–1059. DOI 10.1016/j.swevo.2018.10.016. [Google Scholar] [CrossRef]
6. Nadimi-Shahraki, M. H., Taghian, S., Mirjalili, S. (2021). An improved grey wolf optimizer for solving engineering problems. Expert Systems with Applications, 166, 113917. DOI 10.1016/j.eswa.2020.113917. [Google Scholar] [CrossRef]
7. Fan, Q., Huang, H., Yang, K., Zhang, S., Yao, L. et al. (2021). A modified equilibrium optimizer using opposition-based learning and novel update rules. Expert Systems with Applications, 170, 114575. DOI 10.1016/j.eswa.2021.114575. [Google Scholar] [CrossRef]
8. Boussaid, I., Lepagnot, J., Siarry, P. (2013). A survey on optimization metaheuristics. Information Sciences, 237, 82–117. DOI 10.1016/j.ins.2013.02.041. [Google Scholar] [CrossRef]
9. Fan, Q., Chen, Z., Zhang, W., Fang, X. (2020). ESSAWOA: Enhanced whale optimization algorithm integrated with salp swarm algorithm for global optimization. Engineering with Computers, DOI 10.1007/s00366-020-01189-3. [Google Scholar] [CrossRef]
10. Dokeroglu, T., Sevinc, E., Kucukyilmaz, T., Cosar, A. (2019). A survey on new generation metaheuristic algorithms. Computers & Industrial Engineering, 137, 106040. DOI 10.1016/j.cie.2019.106040. [Google Scholar] [CrossRef]
11. Slowik, A., Kwasnicka, H. (2018). Nature inspired methods and their industry applications–swarm intelligence algorithms. IEEE Transactions on Industrial Informatics, 14(3), 1004–1015. DOI 10.1109/tii.2017.2786782. [Google Scholar] [CrossRef]
12. Abualigah, L., Alsalibi, B., Shehab, M., Alshinwan, M., Khasawneh, A. M. et al. (2020). A parallel hybrid krill herd algorithm for feature selection. International Journal of Machine Learning and Cybernetics, 12(3), 783–806. DOI 10.1007/s13042-020-01202-7. [Google Scholar] [CrossRef]
13. Debnath, S., Baishya, S., Sen, D., Arif, W. (2020). A hybrid memory-based dragonfly algorithm with differential evolution for engineering application. Engineering with Computers, 37(4), 2775–2802. DOI 10.1007/s00366-020-00958-4. [Google Scholar] [CrossRef]
14. Nguyen, T. T., Wang, H. J., Dao, T. K., Pan, J. S., Liu, J. H. et al. (2020). An improved slime mold algorithm and its application for optimal operation of cascade hydropower stations. IEEE Access, 8, 226754–226772. DOI 10.1109/access.2020.3045975. [Google Scholar] [CrossRef]
15. Jia, H., Sun, K., Zhang, W., Leng, X. (2021). An enhanced chimp optimization algorithm for continuous optimization domains. Complex & Intelligent Systems, DOI 10.1007/s40747-021-00346-5. [Google Scholar] [CrossRef]
16. Dehghani, M., Montazeri, Z., Givi, H., Guerrero, J., Dhiman, G. (2020). Darts game optimizer: A new optimization technique based on darts game. International Journal of Intelligent Engineering and Systems, 13(5), 286–294. DOI 10.22266/ijies2020.1031.26. [Google Scholar] [CrossRef]
17. Hamed, A. Y., Alkinani, M. H., Hassan, M. R. (2020). A genetic algorithm optimization for multi-objective multicast routing. Intelligent Automation & Soft Computing, 26(6), 1201–1216. DOI 10.32604/iasc.2020.012663. [Google Scholar] [CrossRef]
18. Jiang, A., Guo, X., Zheng, S., Xu, M. (2021). Parameters identification of tunnel jointed surrounding rock based on Gaussian process regression optimized by difference evolution algorithm. Computer Modeling in Engineering & Sciences, 127(3), 1177–1199. DOI 10.32604/cmes.2021.014199. [Google Scholar] [CrossRef]
19. Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12(6), 702–713. DOI 10.1109/TEVC.2008.919004. [Google Scholar] [CrossRef]
20. Kirkpatrick, S., Gelatt, C. D., Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680. DOI 10.1126/science.220.4598.671. [Google Scholar] [CrossRef]
21. Mirjalili, S., Mirjalili, S. M., Hatamlou, A. (2015). Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Computing and Applications, 27(2), 495–513. DOI 10.1007/s00521-015-1870-7. [Google Scholar] [CrossRef]
22. Kaveh, A., Dadras, A. (2017). A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Advances in Engineering Software, 110, 69–84. DOI 10.1016/j.advengsoft.2017.03.014. [Google Scholar] [CrossRef]
23. Zhao, W., Wang, L., Zhang, Z. (2019). Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Systems, 163, 283–304. DOI 10.1016/j.knosys.2018.08.030. [Google Scholar] [CrossRef]
24. Faramarzi, A., Heidarinejad, M., Stephens, B., Mirjalili, S. (2020). Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, 105190. DOI 10.1016/j.knosys.2019.105190. [Google Scholar] [CrossRef]
25. Shi, Y., Eberhart, R. C. (1999). Empirical study of particle swarm optimization. Proceedings of the 1999 Congress on Evolutionary Computation, pp. 1945–1950. Washington, DC, USA. [Google Scholar]
26. Dorigo, M., Birattari, M., Stutzle, T. (2006). Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4), 28–39. DOI 10.1109/MCI.2006.329691. [Google Scholar] [CrossRef]
27. Meng, X., Liu, Y., Gao, X., Zhang, H. (2014). A new bio-inspired algorithm: Chicken swarm optimization. International Conference in Swarm Intelligence, pp. 86–94. Cham, Switzerland: Springer. DOI 10.1007/978-3-319-11857-4_10. [Google Scholar] [CrossRef]
28. Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Computing and Applications, 27(4), 1053–1073. DOI 10.1007/s00521-015-1920-1. [Google Scholar] [CrossRef]
29. Mirjalili, S., Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67. DOI 10.1016/j.advengsoft.2016.01.008. [Google Scholar] [CrossRef]
30. Dhiman, G., Kumar, V. (2017). Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Advances in Engineering Software, 114, 48–70. DOI 10.1016/j.advengsoft.2017.05.014. [Google Scholar] [CrossRef]
31. Dhiman, G., Kumar, V. (2018). Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowledge-Based Systems, 159, 20–50. DOI 10.1016/j.knosys.2018.06.001. [Google Scholar] [CrossRef]
32. Dhiman, G., Kumar, V. (2019). Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowledge-Based Systems, 165, 169–196. DOI 10.1016/j.knosys.2018.11.024. [Google Scholar] [CrossRef]
33. Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M. et al. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849–872. DOI 10.1016/j.future.2019.02.028. [Google Scholar] [CrossRef]
34. Kaur, S., Awasthi, L. K., Sangal, A. L., Dhiman, G. (2020). Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Engineering Applications of Artificial Intelligence, 90, 103541. DOI 10.1016/j.engappai.2020.103541. [Google Scholar] [CrossRef]
35. Dhiman, G., Kaur, A. (2019). STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Engineering Applications of Artificial Intelligence, 82, 148–174. DOI 10.1016/j.engappai.2019.03.021. [Google Scholar] [CrossRef]
36. Li, S., Chen, H., Wang, M., Heidari, A. A., Mirjalili, S. (2020). Slime mould algorithm: A new method for stochastic optimization. Future Generation Computer Systems, 111, 300–323. DOI 10.1016/j.future.2020.03.055. [Google Scholar] [CrossRef]
37. Dhiman, G., Garg, M., Nagar, A., Kumar, V., Dehghani, M. (2020). A novel algorithm for global optimization: Rat swarm optimizer. Journal of Ambient Intelligence and Humanized Computing, 12(8), 8457–8482. DOI 10.1007/s12652-020-02580-0. [Google Scholar] [CrossRef]
38. Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A. A., Al-qaness, M. A. A. et al. (2021). Aquila optimizer: A novel meta-heuristic optimization algorithm. Computers & Industrial Engineering, 157, 107250. DOI 10.1016/j.cie.2021.107250. [Google Scholar] [CrossRef]
39. Gonçalves, M. S., Lopez, R. H., Miguel, L. F. F. (2015). Search group algorithm: A new metaheuristic method for the optimization of truss structures. Computers & Structures, 153, 165–184. DOI 10.1016/j.compstruc.2015.03.003. [Google Scholar] [CrossRef]
40. Moosavian, N., Roodsari, B. K. (2014). Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm and Evolutionary Computation, 17, 14–24. DOI 10.1016/j.swevo.2014.02.002. [Google Scholar] [CrossRef]
41. Rao, R. V., Savsani, V. J., Vakharia, D. (2011). Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-Aided Design, 43(3), 303–315. DOI 10.1016/j.cad.2010.12.015. [Google Scholar] [CrossRef]
42. Wu, T., Liu, C. C., He, C. (2019). Fault diagnosis of bearings based on KJADE and VNWOA-lSSVM algorithm. Mathematical Problems in Engineering, 2019, 1–19. DOI 10.1155/2019/8784154. [Google Scholar] [CrossRef]
43. Ghosh, K. K., Ahmed, S., Singh, P. K., Geem, Z. W., Sarkar, R. (2020). Improved binary sailfish optimizer based on adaptive β-hill climbing for feature selection. IEEE Access, 8. [Google Scholar]
44. Tang, A., Zhou, H., Han, T., Xie, L. (2021). A chaos sparrow search algorithm with logarithmic spiral and adaptive step for engineering problems. Computer Modeling in Engineering & Sciences, 129(1), 1–34. DOI 10.32604/cmes.2021.017310. [Google Scholar] [CrossRef]
45. Yan, Z., Zhang, J., Tang, J. (2021). Path planning for autonomous underwater vehicle based on an enhanced water wave optimization algorithm. Mathematics and Computers in Simulation, 181, 192–241. DOI 10.1016/j.matcom.2020.09.019. [Google Scholar] [CrossRef]
46. El-Fergany, A. A. (2021). Parameters identification of PV model using improved slime mould optimizer and Lambert W-function. Energy Reports, 7, 875–887. DOI 10.1016/j.egyr.2021.01.093. [Google Scholar] [CrossRef]
47. Wolpert, D. H., Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82. DOI 10.1109/4235.585893. [Google Scholar] [CrossRef]
48. Jia, H., Lang, C., Oliva, D., Song, W., Peng, X. (2019). Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sensing, 11(12), 1421. DOI 10.3390/rs11121421. [Google Scholar] [CrossRef]
49. Ding, H., Wu, Z., Zhao, L. (2020). Whale optimization algorithm based on nonlinear convergence factor and chaotic inertial weight. Concurrency and Computation: Practice and Experience, 32(24), e5949. DOI 10.1002/cpe.5949. [Google Scholar] [CrossRef]
50. Jia, H., Lang, C. (2021). Salp swarm algorithm with crossover scheme and Lévy flight for global optimization. Journal of Intelligent & Fuzzy Systems, 40(5), 9277–9288. DOI 10.3233/jifs-201737. [Google Scholar] [CrossRef]
51. Abdel-Basset, M., Chang, V., Mohamed, R. (2020). HSMA_WOA: A hybrid novel slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Applied Soft Computing, 95, 106642. DOI 10.1016/j.asoc.2020.106642. [Google Scholar] [CrossRef]
52. Liu, C. A., Lei, Q., Jia, H. (2020). Hybrid imperialist competitive evolutionary algorithm for solving biobjective portfolio problem. Intelligent Automation & Soft Computing, 26(6), 1477–1492. DOI 10.32604/iasc.2020.011853. [Google Scholar] [CrossRef]
53. Dhiman, G. (2019). ESA: A hybrid bio-inspired metaheuristic optimization approach for engineering problems. Engineering with Computers, 37(1), 323–353. DOI 10.1007/s00366-019-00826-w. [Google Scholar] [CrossRef]
54. Abdollahzadeh, B., Soleimanian Gharehchopogh, F., Mirjalili, S. (2021). Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems, 36(10), 5887–5958. DOI 10.1002/int.22535. [Google Scholar] [CrossRef]
55. Ginidi, A., Ghoneim, S. M., Elsayed, A., El-Sehiemy, R., Shaheen, A. et al. (2021). Gorilla troops optimizer for electrically based single and double-diode models of solar photovoltaic systems. Sustainability, 13(16), 9459. DOI 10.3390/su13169459. [Google Scholar] [CrossRef]
56. Duan, Y., Liu, C., Li, S. (2021). Battlefield target grouping by a hybridization of an improved whale optimization algorithm and affinity propagation. IEEE Access, 9, 46448–46461. DOI 10.1109/access.2021.3067729. [Google Scholar] [CrossRef]
57. Kaur, G., Arora, S. (2018). Chaotic whale optimization algorithm. Journal of Computational Design and Engineering, 5(3), 275–284. DOI 10.1016/j.jcde.2017.12.006. [Google Scholar] [CrossRef]
58. Tizhoosh, H. R. (2005). Opposition-based learning: A new scheme for machine intelligence. International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, web Technologies and Internet Commerce, pp. 695–701. Vienna, Austria. [Google Scholar]
59. Ouyang, C., Zhu, D., Qiu, Y., Zhang, H. (2021). Lens learning sparrow search algorithm. Mathematical Problems in Engineering, 2021, 1–17. DOI 10.1155/2021/9935090. [Google Scholar] [CrossRef]
60. Long, W., Wu, T., Tang, M., Xu, M., Cai, S. (2020). Grey wolf optimizer algorithm based on lens imaging learning strategy. Acta Automatica Sinica, 46(10), 2148–2164. DOI 10.16383/j.aas.c180695. [Google Scholar] [CrossRef]
61. Al-Betar, M. A., Aljarah, I., Awadallah, M. A., Faris, H., Mirjalili, S. (2019). Adaptive β-hill climbing for optimization. Soft Computing. [Google Scholar]
62. Glover, F. (1986). Future paths for integer programming and links to artificial intelligence. Computers & Operations Research, 13(5), 533–549. DOI 10.1016/0305-0548(86)90048-1. [Google Scholar] [CrossRef]
63. Mladenović, N., Hansen, P. (1997). Variable neighborhood search. Computers & Operations Research, 24(11), 1097–1100. DOI 10.1016/S0305-0548(97)00031-2. [Google Scholar] [CrossRef]
64. Long, W., Jiao, J., Liang, X., Wu, T., Xu, M. et al. (2021). Pinhole-imaging-based learning butterfly optimization algorithm for global optimization and feature selection. Applied Soft Computing, 103, 107146. DOI 10.1016/j.asoc.2021.107146. [Google Scholar] [CrossRef]
65. Mirjalili, S., Mirjalili, S. M., Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. DOI 10.1016/j.advengsoft.2013.12.007. [Google Scholar] [CrossRef]
66. Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H. et al. (2017). Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163–191. DOI 10.1016/j.advengsoft.2017.07.002. [Google Scholar] [CrossRef]
67. García, S., Fernández, A., Luengo, J., Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180(10), 2044–2064. DOI 10.1016/j.ins.2009.12.010. [Google Scholar] [CrossRef]
68. Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems, 89, 228–249. DOI 10.1016/j.knosys.2015.07.006. [Google Scholar] [CrossRef]
69. Kannan, B., Kramer, S. N. (1994). An augmented lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116(2), 405–411. DOI 10.1115/1.2919393. [Google Scholar] [CrossRef]
70. Sandgren, E. (1990). Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical Design, 112(2), 223–229. DOI 10.1115/1.2912596. [Google Scholar] [CrossRef]
71. Nadimi-Shahraki, M. H., Taghian, S., Mirjalili, S., Faris, H. (2020). MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Applied Soft Computing, 97, 106761. DOI 10.1016/j.asoc.2020.106761. [Google Scholar] [CrossRef]
72. Savsani, P., Savsani, V. (2016). Passing vehicle search (PVS): A novel metaheuristic algorithm. Applied Mathematical Modelling, 40(5), 3951–3978. DOI 10.1016/j.apm.2015.10.040. [Google Scholar] [CrossRef]
73. Naruei, I., Keynia, F. (2021). A new optimization method based on COOT bird natural life model. Expert Systems with Applications, 183, 115352. DOI 10.1016/j.eswa.2021.115352. [Google Scholar] [CrossRef]
74. Mirjalili, S., Mirjalili, S. M., Lewis, A. (2014). Let a biogeography-based optimizer train your multi-layer perceptron. Information Sciences, 269, 188–209. DOI 10.1016/j.ins.2014.01.038. [Google Scholar] [CrossRef]
75. Melin, P., Sánchez, D., Castillo, O. (2012). Genetic optimization of modular neural networks with fuzzy response integration for human recognition. Information Sciences, 197, 1–19. DOI 10.1016/j.ins.2012.02.027. [Google Scholar] [CrossRef]
76. Guo, Z. X., Wong, W. K., Li, M. (2012). Sparsely connected neural network-based time series forecasting. Information Sciences, 193, 54–71. DOI 10.1016/j.ins.2012.01.011. [Google Scholar] [CrossRef]
77. Wang, L., Zhang, D., Fan, Y., Xu, H., Wang, Y. (2021). Multilayer perceptron training based on a Cauchy variant grey wolf optimizer algorithm. Computer Engineering and Science, 43(6), 1131–1140. DOI 10.3969/j.issn.1007-130X.2021.06.024. [Google Scholar] [CrossRef]
78. Dua, D., Graff, C. (2019). UCI machine learning repository. Irvine, CA: University of California, School of Information and Computer Science http://archive.ics.uci.edu/ml. [Google Scholar]
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.