Computers, Materials & Continua
DOI: 10.32604/cmc.2022.019867
Article
AMBO: All Members-Based Optimizer for Solving Optimization Problems
1Department of Mathematics and Computer Sciences, Sirjan University of Technology, Sirjan, Iran
2Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran
3Department of Mathematics, Faculty of Science, University of Hradec Králové, Hradec Králové, 50003, Czech Republic
4Department of Computer Science, Government Bikram College of Commerce, Patiala, Punjab, India
*Corresponding Author: Pavel Trojovský. Email: pavel.trojovsky@uhk.cz
Received: 29 April 2021; Accepted: 18 June 2021
Abstract: There are many optimization problems in different branches of science that should be solved using an appropriate methodology. Population-based optimization algorithms are one of the most efficient approaches to solving this type of problem. In this paper, a new optimization algorithm called All Members-Based Optimizer (AMBO) is introduced to solve various optimization problems. The main idea in designing the proposed AMBO algorithm is to use more information from the population members of the algorithm, instead of just a few specific members (such as the best and worst members), to update the population matrix. Therefore, in AMBO, any member of the population can play a role in updating the population matrix. The theory of AMBO is described and then mathematically modeled for implementation on optimization problems. The performance of the proposed algorithm is evaluated on a set of twenty-three standard objective functions, which belong to three different categories: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions. In order to analyze and compare the optimization results for the mentioned objective functions obtained by AMBO, eight other well-known algorithms have also been implemented. The optimization results demonstrate the ability of AMBO to solve various optimization problems. Also, comparison and analysis of the results show that AMBO is superior to and more competitive than the other mentioned algorithms in providing suitable solutions.
Keywords: Algorithm; all members; optimization; optimization algorithm; optimization problem; population-based algorithm
1 Introduction

Optimization is defined as finding the best solution out of all feasible solutions to a problem while respecting its constraints and limitations. Each optimization problem therefore consists of three main parts: decision variables, constraints, and objective functions. The decision variables are the variables of the problem, the constraints restrict the values these variables may take, and the objective functions quantify the quality of a candidate solution.
Population-based optimization algorithms (PBOAs) are among the most effective methods for solving optimization problems. PBOAs are able to provide appropriate solutions to optimization problems through a random scan of the search space and an iterative process [1]. Each optimization problem has a best possible solution, called the global optimum. However, the solution obtained by a PBOA is not necessarily this global optimum; it is therefore referred to as a quasi-optimal solution [2,3]. The closer the quasi-optimal solution provided by an algorithm is to the global optimum, the better that algorithm performs on the given optimization problem. For this reason, various PBOAs have been introduced by researchers to provide quasi-optimal solutions to optimization problems. In this regard, optimization algorithms have been applied in various fields in the literature, such as energy [4–7], protection [8], electrical engineering [9–14], and energy carriers [15,16], to achieve optimal solutions.
PBOAs are designed based on the simulation of various natural phenomena, the behavior of living organisms, physical laws, genetics, the rules of games, and so on. In a general classification based on the design idea, PBOAs are categorized into four groups: swarm-based, physics-based, evolutionary-based, and game-based optimization algorithms.
Swarm-based optimization algorithms are designed by simulating the behavior of living organisms such as animals and plants, as well as other natural swarm phenomena. Particle Swarm Optimization (PSO) is one of the most widely used algorithms in this category and is based on simulating the flocking behavior of birds [17]. Ant Colony Optimization (ACO) is another swarm-based optimization technique, introduced based on the behavior of ants in finding the shortest route between their nest and a food source [18]. Simulation of the patient treatment process by a doctor has been used in designing the Doctor and Patient Optimizer (DPO) [19]. Some of the other swarm-based optimization algorithms are: Seagull Optimization Algorithm (SOA) [20], Whale Optimization Algorithm (WOA) [21], Firefly Algorithm (FA) [22], Artificial Bee Colony (ABC) [23], Cuckoo Search (CS) [24], Bat-inspired Algorithm (BA) [25], Spotted Hyena Optimizer (SHO) [26], Monkey Search (MS) [27], Artificial Fish-Swarm Algorithm (AFSA) [28], Group Optimization (GO) [29], Dolphin Partner Optimization (DPO) [30], Hunting Search (HS) [31], Coupled Spring Forced Bat Algorithm (SFBA) [32], Teaching-Learning-Based Optimization (TLBO) [33], Grey Wolf Optimizer (GWO) [34], Following Optimization Algorithm (FOA) [35], Moth-Flame Optimization Algorithm (MFO) [36], Grasshopper Optimization Algorithm (GOA) [37], Donkey Theorem Optimization (DTO) [38], Emperor Penguin Optimizer (EPO) [39], Multi Leader Optimizer (MLO) [40], Rat Swarm Optimizer (RSO) [41], and “The Good, the Bad, and the Ugly” Optimizer (GBUO) [42].
Physics-based optimization algorithms are introduced based on the simulation of various laws of physics. The Spring Search Algorithm (SSA) is one of the algorithms in this group; it was designed based on the simulation of Hooke’s law in a system consisting of weights and springs [43]. The Momentum Search Algorithm (MSA) is another such algorithm, based on the simulation of Newton’s laws of motion and the momentum conservation principle [44]. The Gravitational Search Algorithm (GSA) was introduced based on the law of universal gravitation between objects [45]. Some other popular physics-based optimization algorithms are: Big-Bang Big-Crunch (BBBC) [46], Galaxy-based Search Algorithm (GbSA) [47], Charged System Search (CSS) [48], Particle Collision Algorithm (PCA) [49], Simulated Annealing (SA) [50], Binary Spring Search Algorithm (BSSA) [51], Central Force Optimization (CFO) [52], Ray Optimization (RO) algorithm [53], Curved Space Optimization (CSO) [54], Henry Gas Solubility Optimization (HGSO) [55], Small World Optimization Algorithm (SWOA) [56], and Artificial Chemical Reaction Optimization Algorithm (ACROA) [57].
Evolutionary-based optimization algorithms are inspired by genetics and the laws of inheritance. The Genetic Algorithm (GA) is the most famous algorithm in this group; it is designed based on the simulation of the reproduction process and Darwin’s theory of evolution by natural selection [58]. The Artificial Immune System (AIS), inspired by the defense mechanism of the human immune system against viruses and microbes, is another evolutionary-based optimization algorithm [59]. Some other evolutionary-based optimization techniques are: Evolutionary Programming (EP) [60], Cultural Algorithm [61], Evolution Strategy (ES) [62], Differential Evolution (DE) [63], Biogeography-Based Optimizer (BBO) [64], Artificial Infectious Disease (AID) [65], Genetic Programming (GP) [66], and the Improved Quantum-inspired Differential Evolution (IQDE) algorithm [67].
Game-based optimization algorithms are another group of PBOAs, designed based on simulating the rules of various games. Football Game-Based Optimization (FGBO) is one of the algorithms in this group; it was introduced based on the simulation of football league rules and clubs’ behavior [68]. The Darts Game Optimizer (DGO), inspired by the rules of the darts game and the behavior of players throwing darts to collect more points, is another game-based optimization technique [69]. Some of the other game-based optimization algorithms are: Orientation Search Algorithm (OSA) [70], Hide Objects Game Optimization (HOGO) [71], Shell Game Optimization (SGO) [72], Binary Orientation Search Algorithm (BOSA) [73], and Dice Game Optimizer (DGO) [74].
In this paper, a new optimization algorithm entitled All Members-Based Optimizer (AMBO) is designed to provide suitable quasi-optimal solutions for various optimization problems. In the proposed AMBO, all members of the population, regardless of their position in the search space, participate in updating the population matrix. Various steps of implementing AMBO are explained and then its mathematical formulation is presented. The performance of AMBO in providing quasi-optimal solution is evaluated for twenty-three standard objective functions of different types.
The rest of the article is organized as follows. In Section 2, the proposed AMBO algorithm is described and modeled. In Section 3, the proposed algorithm is simulated for optimizing different objective functions and the results are presented. Statistical analysis of the results is carried out in Section 4. Finally, the conclusions of this investigation and suggestions for future studies are presented in Section 5.
2 All Members-Based Optimizer (AMBO)

In this section, the steps and the mathematical model of the proposed optimization algorithm are presented. The All Members-Based Optimizer (AMBO) is a PBOA proposed for solving optimization problems. The main idea in designing AMBO is to make greater use of the information in the population matrix and to have all members of the population participate simultaneously in updating the algorithm population. The search space of each optimization problem consists of a number of coordinate axes equal to the number of problem variables. In most optimization algorithms, the best population member directs the population along these axes. Also, in some algorithms, the worst member or several members with specific characteristics are used to update the population. However, an ordinary member of the population may be better qualified to lead the population along some axes than the best member. AMBO is therefore designed around this concept and uses the information of all population members.
Each PBOA has a number of members called the algorithm population. The algorithm population can be displayed using a matrix called the population matrix. Each row of this matrix represents a population member and each column of this matrix represents a variable of the optimization problem. Therefore, the number of rows in the population matrix is equal to the number of population members and the number of columns in this matrix is equal to the number of the optimization problem variables.
In AMBO, the population matrix is represented using Eq. (1):

$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m} \quad (1)$$

where $X$ is the population matrix, $x_{i,d}$ is the value of the $d$'th problem variable proposed by the $i$'th population member, $N$ is the number of population members, and $m$ is the number of problem variables.
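As a concrete illustration of this matrix, the sketch below (Python with NumPy) creates an N x m population by sampling each variable uniformly between its lower and upper bounds; the function and parameter names are illustrative assumptions rather than notation from the paper.

```python
import numpy as np

def initialize_population(pop_size, n_vars, lower, upper, rng=None):
    """Build the N x m population matrix X: one row per member,
    one column per problem variable, sampled within the bounds."""
    rng = np.random.default_rng() if rng is None else rng
    lower = np.broadcast_to(np.asarray(lower, dtype=float), (n_vars,))
    upper = np.broadcast_to(np.asarray(upper, dtype=float), (n_vars,))
    return lower + rng.random((pop_size, n_vars)) * (upper - lower)
```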
In each iteration of the algorithm, the objective function of the problem is evaluated based on the values of the variables proposed by each population member. The resulting objective function values are collected in a vector, as in Eq. (2):

$$F = \begin{bmatrix} F_1 & \cdots & F_i & \cdots & F_N \end{bmatrix}^{T} \quad (2)$$

where $F$ is the vector of objective function values and $F_i$ is the objective function value obtained for the $i$'th population member.
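Following this definition, the vector F can be obtained by applying the objective function to every row of the population matrix. A minimal sketch, with illustrative names:

```python
import numpy as np

def evaluate_population(objective, X):
    """Return the vector F of objective values, one entry per population member."""
    # Row i of X is the candidate solution proposed by member i.
    return np.array([objective(x) for x in X])

# Example: the sphere function, commonly used as F1 in this benchmark suite.
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
```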
In the proposed AMBO algorithm, population members are updated in two stages. In the first stage, each member of the population is updated based on the positions of the other members of the population in the search space. The important point in this process is that a new position is accepted for a population member only if it improves the value of the objective function; otherwise, the update is rejected and the member remains in its previous position. The first stage is simulated using Eqs. (3) to (5).
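A minimal sketch of this first stage is given below, under the assumption of a uniform random step guided by a randomly chosen member and greedy acceptance; the step rule and names are illustrative and do not reproduce Eqs. (3) to (5) exactly.

```python
import numpy as np

def stage_one_update(objective, X, F, rng):
    """First stage (illustrative): any member may guide the update of
    another member; moves are accepted only if they improve fitness."""
    n, m = X.shape
    for i in range(n):
        k = int(rng.integers(n))               # randomly chosen guiding member
        r = rng.random(m)
        sign = 1.0 if F[k] < F[i] else -1.0    # move toward a better member, away from a worse one
        x_new = X[i] + sign * r * (X[k] - X[i])
        f_new = objective(x_new)
        if f_new < F[i]:                       # greedy acceptance, as described
            X[i], F[i] = x_new, f_new
    return X, F
```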
In the second stage, the population matrix is updated based on the best member. In this stage, similar to the first stage, a new position is accepted for a population member only if it improves the objective function value. The second stage of AMBO is simulated using Eqs. (6) and (7).
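A minimal sketch of the second stage is given below, under the assumption of a uniform random step toward the current best member with greedy acceptance; the step rule and names are illustrative and do not reproduce Eqs. (6) and (7) exactly.

```python
import numpy as np

def stage_two_update(objective, X, F, rng):
    """Second stage (illustrative): the best member guides every update;
    a move is kept only if it improves the objective value."""
    best = X[int(np.argmin(F))].copy()
    for i in range(X.shape[0]):
        r = rng.random(X.shape[1])
        x_new = X[i] + r * (best - X[i])       # step toward the best member
        f_new = objective(x_new)
        if f_new < F[i]:                       # greedy acceptance
            X[i], F[i] = x_new, f_new
    return X, F
```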
In each iteration, the population matrix is updated through these two stages, and this process is repeated until the stopping criterion is reached. At the end of the iterations, AMBO reports the best quasi-optimal solution obtained for the optimization problem. The implementation of the proposed algorithm on an optimization problem is shown as a flowchart in Fig. 1. The pseudo-code of the proposed AMBO algorithm is also presented in Algorithm 1.
Figure 1: Flowchart of AMBO
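A hedged Python rendering of the overall loop in Fig. 1 and Algorithm 1 is sketched below. It relies on the hypothetical helper functions from the sketches above (initialization, evaluation, and the two update stages), passed in as arguments; it is not the authors' reference implementation.

```python
import numpy as np

def run_ambo(objective, n_vars, lower, upper, init, evaluate, stages,
             pop_size=30, max_iters=1000, rng=None):
    """Illustrative AMBO driver: initialize the population, apply the two
    update stages in every iteration, and return the best solution found."""
    rng = np.random.default_rng() if rng is None else rng
    X = init(pop_size, n_vars, lower, upper, rng)
    F = evaluate(objective, X)
    for _ in range(max_iters):
        for stage in stages:                   # first stage, then second stage
            X, F = stage(objective, X, F, rng)
    i_best = int(np.argmin(F))
    return X[i_best], float(F[i_best])
```

For example, `run_ambo(sphere, 30, -100, 100, initialize_population, evaluate_population, (stage_one_update, stage_two_update))` would run the sketch on the sphere function with illustrative bounds.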
3 Simulation Studies and Results

In this section, the ability of AMBO to solve optimization problems and provide quasi-optimal solutions is evaluated. For this purpose, AMBO is implemented on a set of twenty-three standard objective functions of three different types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions. Complete information on these objective functions is provided in the Appendix (Tabs. A1 to A3).
In order to analyze the performance of the proposed AMBO in providing quasi-optimal solutions, AMBO is compared with eight other optimization algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), the Marine Predators Algorithm (MPA), and the Tunicate Swarm Algorithm (TSA).
To solve each objective function, 20 independent runs of the proposed AMBO have been performed, where each run includes 1000 iterations. The average (Ave) and standard deviation (std) of the best solutions obtained have been used to report the optimization results for the objective functions.
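The reporting procedure described above can be reproduced as follows: run the optimizer 20 times with independent random seeds, record the best objective value of each run, and compute the mean and standard deviation. In the sketch below, the run_ambo argument stands in for an AMBO implementation such as the sketch in Section 2, and the names are assumptions.

```python
import numpy as np

def benchmark(run_ambo, objective, n_runs=20, max_iters=1000, seed0=0, **kwargs):
    """Average (Ave) and standard deviation (std) of the best values found
    over n_runs independent runs, as reported in the result tables."""
    best_values = []
    for r in range(n_runs):
        rng = np.random.default_rng(seed0 + r)    # independent run
        _, best_f = run_ambo(objective, max_iters=max_iters, rng=rng, **kwargs)
        best_values.append(best_f)
    return float(np.mean(best_values)), float(np.std(best_values))
```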
3.2 Evaluation Results for Objective Functions
The objective functions F1 to F7 belong to the unimodal category. These functions have been selected to evaluate the exploitation capability of the optimization algorithms. The optimization results for these objective functions using the proposed AMBO and the eight other optimization algorithms are presented in Tab. 1. According to this table, AMBO is the best optimizer for the F1, F2, F3, F4, and F6 objective functions.
Six high-dimensional multimodal objective functions F8 to F13 have been selected to evaluate the performance of the optimization algorithms in providing a suitable quasi-optimal solution. Tab. 2 shows the optimization results for these objective functions using AMBO and eight other optimization algorithms. AMBO is the best optimizer for F8, F9, and F10 objective functions.
The third group of objective functions, F14 to F23, is selected from the fixed-dimensional multimodal type. The results of implementing the proposed algorithm and the eight other optimization algorithms on these objective functions are presented in Tab. 3. The results in this table show that AMBO is the best optimizer for the F15, F16, and F17 objective functions.
Two important indicators for evaluating the performance of the optimization algorithms in solving optimization problems are the exploitation index and the exploration index.
The exploitation index indicates the ability of an optimization algorithm to provide a suitable quasi-optimal solution close to the global optimum of an optimization problem. An optimization algorithm must be able to provide a suitable solution at the end of its iterations; the closer this solution is to the global optimum, the higher the exploitation power of the algorithm. Unimodal objective functions (F1 to F7) have only one optimal solution and are therefore suitable for evaluating the exploitation power of optimization algorithms. The optimization results for these objective functions show that AMBO has the best performance on all of the F1 to F7 functions and has higher exploitation power than the other eight optimization algorithms.
Exploration power means the ability of an algorithm to scan the search space properly and accurately. This indicator is especially important for optimization problems that have several locally optimal solutions. An algorithm that can provide a suitable quasi-optimal solution by accurately scanning the search space and escaping locally optimal solutions therefore has high exploration power. The multimodal objective functions of the second and third groups (F8 to F13 and F14 to F23) have several local optima and are hence suitable for evaluating exploration power. The optimization results for these objective functions, presented in Tabs. 2 and 3, indicate the high exploration ability of the proposed AMBO algorithm in solving this type of objective function.
In this section, the sensitivity of AMBO to two parameters, the maximum number of iterations and the number of population members, is evaluated.
In order to evaluate the sensitivity of the proposed algorithm to the maximum number of iterations, AMBO has been run independently on all objective functions with maximum iteration counts of 100, 500, 800, and 1000. The results of these runs are presented in Tab. 4 and show that AMBO converges towards the optimal solution as the number of iterations increases.
Also, in order to analyze the sensitivity of the proposed algorithm to the number of population members, AMBO has been run independently with population sizes of 20, 30, 50, and 80 members. The results of this simulation for the different population sizes are presented in Tab. 5, which shows that the value of the objective function decreases as the number of population members increases.
4 Statistical Analysis

In this section, a statistical analysis of the optimization results obtained by the different optimization algorithms is presented. Although reporting the results as averages and standard deviations provides useful information about the performance of optimization algorithms, a statistical analysis of the results is also important for a better evaluation. For this purpose, the Wilcoxon rank-sum test has been used as a non-parametric statistical test to assess the significance of the results. The Wilcoxon rank-sum test is applied to determine whether the results obtained by the proposed AMBO differ from those of the other eight optimization algorithms in a statistically significant way.
The p-value indicates whether the difference between a given pair of algorithms is statistically significant: if the p-value is less than 0.05, the difference is considered statistically significant. The results of the Wilcoxon rank-sum analysis for the objective functions are shown in Tab. 6. It is observed from Tab. 6 that the p-values obtained for AMBO are much smaller than 0.05 for all the objective functions. Therefore, the results of the proposed AMBO are statistically significantly different from those of the competitor algorithms.
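Such a pairwise comparison can be carried out with the Wilcoxon rank-sum test available in SciPy. The sketch below assumes two arrays holding the best values obtained by AMBO and by one competitor over the independent runs; the function name is an illustrative assumption.

```python
from scipy.stats import ranksums

def compare_algorithms(ambo_results, competitor_results, alpha=0.05):
    """Wilcoxon rank-sum test between two sets of run results.
    Returns the p-value and whether the difference is significant at alpha."""
    _, p_value = ranksums(ambo_results, competitor_results)
    return p_value, p_value < alpha
```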
5 Conclusions and Future Works
Optimization algorithms are among the most effective and widely used methods for solving optimization problems in various fields of science and engineering. In this paper, a new optimization algorithm called the All Members-Based Optimizer (AMBO) was presented for solving optimization problems. The proposed AMBO was designed to use more of the information carried by the different members of the population and to involve all members in updating the algorithm population. AMBO was mathematically modeled and implemented on a set of twenty-three standard objective functions. Also, in order to analyze the results, AMBO was compared with eight optimization algorithms: the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), Teaching-Learning-Based Optimization (TLBO), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), the Marine Predators Algorithm (MPA), and the Tunicate Swarm Algorithm (TSA).
The results of optimizing the unimodal objective functions showed that AMBO is more capable than the other algorithms in solving such problems and is therefore superior with respect to the exploitation index. Also, the optimization results for the multimodal objective functions showed that AMBO, with its high exploration power, is able to provide suitable quasi-optimal solutions for this type of function. Based on the simulation results, it can be concluded that the proposed algorithm has an acceptable ability to solve various optimization problems and is superior to and more competitive than the other optimization algorithms mentioned.
The authors suggest several ideas and perspectives for future studies. Designing a binary version as well as a multi-objective version of AMBO is an interesting direction for future investigations. Beyond this, applying AMBO to various benchmark and real-world optimization problems would also be a significant contribution.
Funding Statement: PT (corresponding author) was supported by the Excelence project PřF UHK No. 2202/2020-2022 and Long-term development plan of UHK for year 2021, University of Hradec Králové, Czech Republic, https://www.uhk.cz/en/faculty-of-science/about-faculty/official-board/internal-regulations-and-governing-acts/governing-acts/deans-decision/2020#grant-competiti-on-of-fos-uhk-excellence-for-2020.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
Appendix A. Information on the Objective Functions
Information on the twenty-three objective functions used in the simulations is presented in Tabs. A1 to A3.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.