A Multi-Objective Particle Swarm Optimization Algorithm Based on Decomposition and Multi-Selection Strategy
1 School of Computer Science, Shaanxi Normal University, Xi’an, 710119, China
2 Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, 350118, China
3 Information Construction and Management Center and Institute of Artificial Intelligence and Educational New Productivity, Ningxia Normal University, Guyuan, 756099, China
* Corresponding Author: Cai Dai. Email:
Computers, Materials & Continua 2025, 82(1), 997-1026. https://doi.org/10.32604/cmc.2024.057168
Received 09 August 2024; Accepted 25 October 2024; Issue published 03 January 2025
Abstract
The multi-objective particle swarm optimization algorithm (MOPSO) is widely used to solve multi-objective optimization problems. In this article, a multi-objective particle swarm optimization algorithm based on decomposition and multi-selection strategy is proposed to improve the search efficiency. First, two update strategies based on decomposition are used to update the evolving population and external archive, respectively. Second, a multi-selection strategy is designed. The first strategy is for the subspace without a non-dominated solution. Among the neighbor particles, the particle with the smallest penalty-based boundary intersection value is selected as the global optimal solution and the particle far away from the search particle and the global optimal solution is selected as the personal optimal solution to enhance global search. The second strategy is for the subspace with a non-dominated solution. Among the neighbor particles, two particles are randomly selected, one as the global optimal solution and the other as the personal optimal solution, to enhance local search. The third strategy is for Pareto optimal front (PF) discontinuity, which is identified by the cumulative number of iterations of the subspace without non-dominated solutions. In the subsequent iteration, a new probability distribution is used to select from the remaining subspaces to search. Third, an adaptive inertia weight update strategy based on the dominated degree is designed to further improve the search efficiency. Finally, the proposed algorithm is compared with five multi-objective particle swarm optimization algorithms and five multi-objective evolutionary algorithms on 22 test problems. The results show that the proposed algorithm has better performance.
Since the year 2000, a total of 46,976 scholarly articles on multi-objective optimization problems (MOPs) [1] have been indexed in the Science Citation Index. MOPs are widely used in practical applications such as path planning [2,3], task scheduling [4] and coal production [5,6], among others [7]. Unlike single-objective optimization problems, MOPs are inherently more complex due to conflicting objectives [8]. Therefore, it is impossible to optimize all objectives simultaneously; instead, one must prioritize them to find a set of solutions that achieve a relatively optimal balance.
Utilizing evolutionary algorithms to address MOPs remains a prominent area of research. Over recent decades, numerous multi-objective evolutionary algorithms (MOEAs) have been developed, including the multi-objective genetic algorithm (MOGA) [9,10], the multi-objective differential evolution algorithm (MODE) [11,12], the multi-objective particle swarm optimization algorithm (MOPSO) [13,14] and others. The particle swarm optimization algorithm (PSO) [15], introduced by Eberhart and Kennedy in 1995, is a population-based stochastic search evolutionary algorithm that represents potential solutions through particles. Owing to its simplicity and rapid convergence, MOPSO has been extensively employed for solving MOPs [16,17]. Furthermore, MOPSO finds applications in various research domains. For instance, integrating MOPSO with data mining aids in resolving intricate data mining challenges and uncovering valuable insights [18]. In the past two decades, 5755 articles on MOPSO have been published in the Science Citation Index. As MOPs grow increasingly complex, achieving a better balance between diversity and convergence and devising an effective selection strategy to enhance search efficiency becomes crucial.
In certain traditional MOPSOs, the population tends to cluster around the leader particle, which represents the global optimal solution (gbest), leading to poor diversity [19]. To enhance the diversity of the evolutionary process, researchers have developed the clustering multiple-swarm multi-objective particle swarm optimization algorithm (CMOPSO) [20] and the coevolutionary multiple-swarm multi-objective particle swarm optimization algorithm (CMMOPSO) [21]. In 2011, the Coello team introduced a decomposition-based multi-objective particle swarm optimization algorithm (dMOPSO) [22] for addressing continuous unconstrained MOPs. A key feature of this algorithm is its use of memory reinitialization, which helps maintain population diversity. In 2014, Moubayed et al. [23] integrated dominance and decomposition to propose a multi-objective particle swarm optimization algorithm based on decomposition and dominance (D2MOPSO). Decomposition simplifies the problem by converting the multi-objective problem into a set of aggregation problems, while dominance constructs the leader archive. In 2015, Dai et al. [24] presented a novel decomposition-based multi-objective particle swarm optimization algorithm (MPSO/D). This algorithm maintains diversity by uniformly dividing the objective space and assigning a solution to each subspace. Generally, these decomposition-based MOPSOs exhibit good diversity and convergence when solving MOPs. However, they do not consider the distinct evolution of each subproblem. Additionally, for complex problems, such as those involving discontinuous Pareto-optimal fronts, new strategies and mechanisms are required.
Selecting the appropriate leader particles to guide the population’s evolution is crucial for MOPSOs. Leong et al. [25] introduced a dynamic population multiple-swarm multi-objective adaptive particle swarm optimization algorithm (DMOPSO), which enhanced performance through dynamic population size adjustments and adaptive local archiving, selecting leaders based on different clusters. Zheng et al. [26] proposed a novel MOPSO that employs comprehensive learning strategies to effectively maintain population diversity. Hu et al. [27] developed a new parallel cell coordinate system, from which a group of leaders is chosen from the global archive and stored in the leader group. These leaders are selected based on environmental information and entropy to expedite convergence. Nonetheless, these algorithms only improve either diversity or convergence but fail to strike a balance between both.
Inspired by the above algorithms, using a decomposition method is an effective strategy for improving diversity. Meanwhile, under the decomposition framework, different selection strategies can be designed according to the different evolution of each subspace. Therefore, in order to improve the search efficiency of MOPSO, better balance diversity and convergence, and reduce the influence of Pareto optimal front (PF) discontinuity on the solution set, this article integrates the decomposition method with a multi-selection strategy. The main contributions are as follows:
(1) For PF with various shapes, an advanced update strategy has been developed to maintain diversity and improve the quality of solutions. Initially, the objective space is partitioned using a set of uniformly distributed direction vectors, which categorizes the population accordingly. The diversity of the algorithm is maintained by a decomposition-based update strategy such that each subspace has a solution. Historical optimal solutions are stored in an external archive, which is updated using a strategy based on Pareto dominance and decomposition to improve solution quality. Upon completion of the iterations, the set of solutions with the lowest IGD value from both the evolving population and the external archive is chosen as the optimal solution set.
(2) According to the evolution of each subspace, a multi-selection strategy is designed. In the first strategy, for a selected solution, the neighbor solution with the smallest penalty-based boundary intersection value is chosen as its global optimal solution, and a solution far away from both the selected solution and this global optimal solution is chosen as its personal optimal solution; this strategy enhances global search. In the second strategy, for non-dominated solutions that are directly classified by the direction vectors and aggregation functions, neighbor solutions are randomly selected as their global and personal optimal solutions; this strategy enhances local search. The third strategy targets problems with a discontinuous PF, which is identified by the cumulative number of iterations for which a subspace contains no non-dominated solution.
(3) An adaptive inertia weight update strategy dynamically adjusts the inertia weight based on the extent to which each particle is dominated by the reference point, balancing exploration and exploitation to enhance search efficiency.
The rest of this article is organized as follows: Section 2 provides an introduction to the main aspects of multi-objective optimization problems and multi-objective particle swarm optimization. Section 3 offers a detailed explanation of the proposed algorithm. Section 4 presents the experimental results along with relevant analysis. Lastly, Section 5 concludes the discussion.
2.1 Multi-Objective Optimization Problem
In general, a MOP can be defined as [28]:

$$\min_{x \in \Omega} \; F(x) = \big(f_1(x), f_2(x), \ldots, f_m(x)\big)^{T}$$

where $x = (x_1, x_2, \ldots, x_n)$ is the decision vector, $\Omega \subseteq \mathbb{R}^n$ is the decision space, $m$ is the number of objectives, and $F: \Omega \rightarrow \mathbb{R}^m$ maps decision vectors into the objective space. A solution $x^*$ is Pareto optimal if no other $x \in \Omega$ dominates it, and the image of all Pareto optimal solutions in the objective space forms the Pareto optimal front (PF).
2.2 Multi-Objective Particle Swarm Optimization
PSO has been widely used in engineering practice due to its straightforward concept and minimal parameter requirements. It simulates bird foraging and uses cooperation and information sharing among swarm members to find the optimal solution [29]. Each particle represents a potential solution with two properties: velocity and position. In MOPSO, the particles seek the optimal solution according to their own and other particles’ experience [30], so that the population continues to evolve and gradually approaches the PF. The position vector of the particle represents a candidate solution, and the velocity vector of the particle represents the velocity and direction of the particle moving in the next iteration. The information of particle $i$ at iteration $t$ consists of its position $x_i(t)$ and velocity $v_i(t)$, which are updated as follows:

$$v_i(t+1) = w\, v_i(t) + c_1 r_1 \big(pbest_i(t) - x_i(t)\big) + c_2 r_2 \big(gbest(t) - x_i(t)\big)$$

$$x_i(t+1) = x_i(t) + v_i(t+1)$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are the acceleration (learning) factors, $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$, $pbest_i(t)$ is the personal optimal solution of particle $i$, and $gbest(t)$ is the global optimal solution.
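The update rules above can be illustrated with the following minimal NumPy sketch for a single particle; the default coefficient values (`w`, `c1`, `c2`) are placeholders for illustration, not the settings used in this article.

```python
import numpy as np

def update_particle(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, rng=None):
    """One standard PSO update step for a single particle (minimization setting).

    x, v, pbest, gbest are 1-D arrays of equal length; w, c1, c2 are
    illustrative defaults, not the parameters used in the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.size), rng.random(x.size)
    # velocity: inertia term + cognitive (pbest) term + social (gbest) term
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```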
Firstly, a set of particles is randomly generated, initialized, and their corresponding objective values are calculated. Then, generate a set of direction vectors
where
where
To maintain population diversity and enhance solution quality, this article employs enhanced update strategies for updating the offspring population and the external archive, respectively.
A decomposition-based update strategy is employed to enhance population diversity. After the offspring population
Although the above update strategy can maintain diversity well, it proves less efficient for problems with degenerated PF and more complex PF shapes. To address this, this article employs an external archive to store elite solutions obtained during iterations, updating them through a strategy based on Pareto dominance and decomposition. Firstly, the solutions in the external archive
Both the evolving population and the external archive are candidate optimal solution sets. After the iteration is completed, the inverted generational distance (IGD) [32] values of the two sets are calculated, respectively. This performance indicator can comprehensively reflect the convergence and diversity of a solution set, and the set with the smaller IGD value is selected for output. The IGD value is calculated as follows:

$$IGD(P^{*}, P) = \frac{1}{|P^{*}|} \sum_{v \in P^{*}} \min_{u \in P} d(v, u) \tag{6}$$

where $P^{*}$ is a set of points uniformly sampled from the true PF, $P$ is the solution set obtained by the algorithm, and $d(v, u)$ is the Euclidean distance between $v$ and $u$ in the objective space.
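A minimal sketch of this IGD computation is given below, assuming `ref_front` holds points sampled from the true PF and `obtained` holds the objective vectors of the solution set; it implements the standard definition rather than the authors' exact code.

```python
import numpy as np

def igd(ref_front, obtained):
    """Inverted generational distance: mean distance from each reference
    point on the true PF to its nearest obtained objective vector."""
    ref_front = np.asarray(ref_front, dtype=float)   # shape (|P*|, m)
    obtained = np.asarray(obtained, dtype=float)     # shape (|P|, m)
    # pairwise Euclidean distances, then the minimum over the obtained points
    d = np.linalg.norm(ref_front[:, None, :] - obtained[None, :, :], axis=2)
    return d.min(axis=1).mean()
```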
To better balance diversity and convergence, and to mitigate the diversity loss that can result from relying on a single gbest as well as the influence of PF discontinuity on the solution set, this article introduces a multi-selection strategy. This strategy selects suitable gbest and pbest values for the representative particles according to the evolution of the different subspaces.
Firstly, find the
Next, the multi-selection strategy is as follows:
(1) When the subspace
where
(2) When the subspace
(3) When the subspace
where
where
Roulette wheel selection is employed to choose subspaces based on their probability distribution. Subspaces without non-dominated solutions have a higher likelihood of being chosen, although this does not preclude the selection of subspaces containing non-dominated solutions. The corresponding gbest and pbest values are then determined based on whether the selected subspace includes or excludes non-dominated solutions.
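As an illustration of this roulette wheel step, the following minimal sketch selects a subspace index from a given probability vector; the construction of the probabilities themselves (higher for subspaces without non-dominated solutions, as described above) is assumed to be supplied by the caller.

```python
import numpy as np

def roulette_select_subspace(probs, rng=None):
    """Pick one subspace index with probability proportional to probs."""
    if rng is None:
        rng = np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()              # normalize to a distribution
    cum = np.cumsum(probs)                   # cumulative "wheel" segments
    return int(np.searchsorted(cum, rng.random()))
```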
The pseudo-codes of the above selection strategies are described in Algorithm 4.
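For completeness, the penalty-based boundary intersection (PBI) value referred to by the first selection strategy can be computed with the standard formulation from MOEA/D [33]; the penalty parameter `theta = 5` is a common default and is an assumption here, since the paper's exact setting is not reproduced in this excerpt.

```python
import numpy as np

def pbi(f, weight, z_star, theta=5.0):
    """Standard PBI aggregation g = d1 + theta * d2 for objective vector f,
    direction (weight) vector `weight`, and ideal point z_star."""
    f, weight, z_star = (np.asarray(a, dtype=float) for a in (f, weight, z_star))
    diff = f - z_star
    norm_w = np.linalg.norm(weight)
    d1 = np.abs(diff @ weight) / norm_w                 # distance along the direction vector
    d2 = np.linalg.norm(diff - d1 * weight / norm_w)    # distance to the direction line
    return d1 + theta * d2
```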
3.4 Adaptive Inertia Weight Update Strategy
Inertia weight regulates the speed of particle movement and influences the balance between exploration and exploitation. Higher inertia weights enhance particles’ exploration capabilities, while lower weights boost their exploitation abilities. An adaptive inertia weight update strategy is implemented as follows:
where
The inertia weight update strategy takes into account different iteration stages and the degree to which each representative particle is dominated by the reference point, thereby balancing exploration and exploitation and further improving search efficiency.
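Because the exact update formula is not reproduced in this excerpt, the sketch below only illustrates the stated idea: the weight decays over the iterations and is raised for particles with a larger dominated degree, so that heavily dominated particles explore while others exploit. The bounds `w_min`/`w_max` and the linear blend are assumptions, not the paper's equation.

```python
def adaptive_inertia(dom_degree, t, t_max, w_min=0.4, w_max=0.9):
    """Illustrative inertia weight: decays with the iteration counter and is
    raised for particles with a larger dominated degree (dom_degree in [0, 1]).
    This reflects the idea described in the text, not the paper's exact formula."""
    base = w_max - (w_max - w_min) * t / t_max   # stage-dependent decay over the run
    return w_min + (base - w_min) * dom_degree   # more dominated -> larger weight -> more exploration
```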
The crossover operation of PSO updates the position and velocity of the particles, as described in Section 2.2.
The mutation operation uses polynomial mutation [34], and the calculation process is as follows:
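Since the formula itself is not reproduced above, the following sketch shows the basic polynomial mutation operator [34] in the form commonly used in the literature; the distribution index `eta_m = 20` and the per-variable mutation probability `1/n` are conventional defaults and are assumptions here.

```python
import numpy as np

def polynomial_mutation(x, lower, upper, eta_m=20.0, pm=None, rng=None):
    """Basic polynomial mutation: each variable is perturbed with probability pm
    by a delta drawn from a polynomial distribution with index eta_m."""
    if rng is None:
        rng = np.random.default_rng()
    x, lower, upper = (np.asarray(a, dtype=float) for a in (x, lower, upper))
    pm = pm if pm is not None else 1.0 / x.size
    y = x.copy()
    for i in range(x.size):
        if rng.random() < pm:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            # shift by a fraction of the variable range and clip to the bounds
            y[i] = np.clip(x[i] + delta * (upper[i] - lower[i]), lower[i], upper[i])
    return y
```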
3.6 Framework of Proposed Algorithm
Based on the above content, a multi-objective particle swarm optimization algorithm based on decomposition and multi-selection strategy (MOPSO/DMS) is proposed to improve the search efficiency and balance diversity and convergence. The overall framework of MOPSO/DMS is given in Algorithm 5. Firstly, a population
4.1 Comparison Algorithms and Test Problems
To evaluate the performance and effectiveness of MOPSO/DMS, this study first compares it with five prevalent MOPSO algorithms: the competitive mechanism-based multi-objective particle swarm optimizer (CMOPSO) [35], dMOPSO [22], multi-objective particle swarm optimization with multiple search strategies (MMOPSO) [36], MPSO/D [24] and the speed-constrained multi-objective particle swarm optimization algorithm (SMPSO) [37]; MPSO/D [24] and dMOPSO [22] were introduced in Section 1. CMOPSO [35] updates the particles based on pairwise competition in each generation. MMOPSO [36] designs two search strategies for particle update, which are conducive to speeding up convergence and maintaining diversity, respectively. SMPSO [37] uses a strategy of limiting the velocity of the particles and incorporates polynomial mutation as a turbulence factor. Then MOPSO/DMS is compared with five current mainstream MOEAs, including the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [33], MOEA/D based on differential evolution (MOEA/D-DE) [38], the multi-objective evolutionary algorithm based on decision variable analyses (MOEA/DVA) [39], the non-dominated neighbor immune algorithm (NNIA) [40] and the clustering-based adaptive multi-objective evolutionary algorithm (CA-MOEA) [41].
All algorithms are compared on the ZDT [35], DTLZ [42] and UF [43] test problems. The ZDT [35] test suite is a widely used set of multi-objective test problems comprising six problems, ZDT1-6. Each problem has two objective functions, and the problems have PFs of different shapes. Since the variables of ZDT5 are binary-encoded, ZDT1-4 and ZDT6 are selected for testing in this article. The DTLZ [42] test suite is a standard set of test problems in which the number of objective functions is user-defined (usually at least two), again with PFs of different shapes. UF1-10 [43] are multi-objective test problems proposed in CEC2009, where UF1-7 are bi-objective and UF8-10 are three-objective, and they have different PFs.
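As a concrete example of the bi-objective benchmarks listed above, ZDT1 (with a convex PF) can be evaluated as follows; this is the standard definition and is shown only to make the experimental setting tangible.

```python
import numpy as np

def zdt1(x):
    """ZDT1: x in [0, 1]^n (n >= 2), two objectives to be minimized.
    The true PF satisfies f2 = 1 - sqrt(f1), reached when g(x) = 1."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (x.size - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])
```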
The number of objectives
In the experiment, hypervolume (HV) [44] and IGD [32] are used to evaluate the performance of the algorithms. HV measures the volume of the objective space enclosed between the non-dominated solution set obtained by the algorithm and a reference point, and it is strictly monotone with respect to the Pareto dominance relation. HV can be applied to problems whose true PF or reference set is unknown, and it evaluates diversity and convergence simultaneously. The larger the HV value, the better the performance of the algorithm. The calculation is as follows:

$$HV(S) = \mathrm{VOL}\Big(\bigcup_{x \in S} \big[f_1(x), z_1^{r}\big] \times \cdots \times \big[f_m(x), z_m^{r}\big]\Big)$$

where $S$ is the obtained non-dominated solution set, $z^{r} = (z_1^{r}, \ldots, z_m^{r})$ is the reference point, and $\mathrm{VOL}(\cdot)$ denotes the Lebesgue measure.
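For the bi-objective case, HV can be computed directly by sorting the non-dominated points and accumulating rectangles with respect to the reference point; the sketch below assumes minimization and a reference point that is worse than every retained solution in both objectives.

```python
import numpy as np

def hv_2d(points, ref):
    """Hypervolume of a 2-D solution set (minimization) w.r.t. reference point ref.
    Points not strictly better than ref in both objectives are ignored."""
    pts = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    pts = pts[(pts < ref).all(axis=1)]           # keep points inside the reference box
    if pts.size == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]             # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                         # skip dominated/duplicate points
            hv += (ref[0] - f1) * (prev_f2 - f2) # rectangle contributed by this point
            prev_f2 = f2
    return hv
```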
IGD measures the average Euclidean distance between all solutions in the actual Pareto front and the non-dominated solutions generated by the algorithm. It not only reflects the convergence of the solution set but also indicates the uniformity and diversity of the distribution. A lower IGD value signifies superior overall algorithm performance. The detailed calculation is given in Eq. (6).
4.4 Experimental Results and Analysis
This section presents the average and standard deviation of the HV and IGD values obtained by running all algorithms independently 30 times on all test problems. The best result for each test problem is marked in bold. In addition, “+” means that the comparison algorithm is better than MOPSO/DMS, “−” means that the comparison algorithm is worse than MOPSO/DMS, and “=” means that the performance of the comparison algorithm is comparable to that of MOPSO/DMS. Running the algorithms 30 times independently allows the average and standard deviation to be calculated, minimizing chance effects and reflecting the central tendency of the experimental results.
4.4.1 Comparison with Other MOPSOs and MOEAs
Table 2 presents the HV averages and standard deviations for the five MOPSOs after 30 independent runs on all test problems. In contrast, Table 3 displays the HV averages and standard deviations for MOPSO/DMS and five MOEAs. Across 22 test problems, MOPSO/DMS attains the highest HV values on 20 problems compared to the other five MOPSOs and surpasses the five comparison MOEAs on 19 problems.
In Table 2, CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO are better than MOPSO/DMS on 1, 1, 0, 0 and 0 test problems, respectively. MOPSO/DMS is better than CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO on 19, 21, 19, 22 and 19 test problems, respectively. The HV values obtained by MOPSO/DMS are not significantly different from those of CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO on 2, 0, 3, 0 and 3 test problems, respectively.
In Table 3, MOEA/D, MOEA/D-DE, MOEA/DVA, NNIA and CA-MOEA perform better than MOPSO/DMS on 2, 0, 0, 1 and 0 test problems, respectively. MOPSO/DMS performs better than MOEA/D, MOEA/D-DE, MOEA/DVA, NNIA and CA-MOEA on 20, 22, 22, 18 and 22 test problems, respectively. The HV values of MOPSO/DMS are not significantly different from those of NNIA on ZDT4, ZDT6 and DTLZ6.
In Tables 2 and 3, for the 8 test problems with concave PF (ZDT2, ZDT6, DTLZ2-4, UF4, UF8 and UF10), the HV values of MOPSO/DMS are greater than those of CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO on 6, 7, 8, 8 and 6 test problems, respectively, and greater than those of MOEA/D, MOEA/D-DE, MOEA/DVA, NNIA and CA-MOEA on 7, 8, 8, 7 and 8 test problems, respectively. For the 6 test problems with convex PF (ZDT1, ZDT3, ZDT4 and UF1-3), MOPSO/DMS performs better than CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO on 5, 6, 5, 6 and 6 test problems, respectively, and better than MOEA/D, MOEA/D-DE, MOEA/DVA, NNIA and CA-MOEA on 5, 6, 6, 5 and 6 test problems, respectively. For DTLZ5 with degenerated PF, NNIA performs better than MOPSO/DMS, MMOPSO and SMPSO perform similarly to MOPSO/DMS, and the other seven comparison algorithms are worse than MOPSO/DMS. On DTLZ6 with degenerated PF, only MMOPSO and NNIA perform similarly to MOPSO/DMS, and the remaining comparison algorithms perform worse. MOPSO/DMS achieves larger HV values on ZDT3, UF5-6 and UF9 with disconnected PF. In addition, MOPSO/DMS has the largest HV value on DTLZ1 and UF7 with linear PF, and its HV value on DTLZ7 with complex PF is also larger than those of the other ten comparison algorithms.
Table 4 shows the IGD average and standard deviation of five MOPSOs obtained by running 30 times independently on all test problems. Table 5 shows the IGD average and standard deviation of MOPSO/DMS and five MOEAs.
In Table 4, for 22 test problems, only SMPSO achieves smaller IGD values than MOPSO/DMS on ZDT3 and DTLZ5, while MOPSO/DMS achieves smaller IGD values than CMOPSO, dMOPSO, MMOPSO, MOPSO/D and SMPSO on 20, 22, 21, 22 and 20 test problems, respectively. The IGD values that CMOPSO and MOPSO/DMS achieved on ZDT2 and ZDT6 are quite close. Meanwhile, MMOPSO and MOPSO/DMS have very close IGD values on DTLZ5.
Table 5 shows that the IGD values of MOEA/D and NNIA are smaller than those of MOPSO/DMS on ZDT4 and DTLZ5, respectively. Meanwhile, MOPSO/DMS obtains smaller IGD values than MOEA/D, MOEA/D-DE, MOEA/DVA, NNIA and CA-MOEA on 19, 22, 22, 21 and 22 of the 22 test problems, respectively. In addition, MOPSO/DMS achieves IGD values similar to those of MOEA/D on ZDT1 and ZDT2.
From Tables 4 and 5, it can be found that for ZDT1, ZDT3-4 and UF1-3 with convex PF, except that SMPSO performs better than MOPSO/DMS on ZDT3 and MOEA/D performs similarly to MOPSO/DMS on ZDT1 and better than MOPSO/DMS on ZDT4, the other comparison algorithms are worse than MOPSO/DMS. For ZDT2, ZDT6, DTLZ2-4, UF4, UF8 and UF10 with concave PF, the other comparison algorithms perform worse than MOPSO/DMS, except that CMOPSO is similar to MOPSO/DMS on ZDT2 and ZDT6 and MOEA/D is similar to MOPSO/DMS on ZDT2. For ZDT3, UF5-6 and UF9 with disconnected PF, except that SMPSO has a smaller IGD value than MOPSO/DMS on ZDT3, MOPSO/DMS has smaller IGD values than all other comparison algorithms. Moreover, for DTLZ5 with degenerated PF, SMPSO and NNIA perform better than MOPSO/DMS and MMOPSO performs comparably to MOPSO/DMS, while MOPSO/DMS performs better than all comparison algorithms on DTLZ6 with degenerated PF. In addition, the IGD values of MOPSO/DMS are the smallest on DTLZ1 and UF7 with linear PF and on DTLZ7 with complex PF.
Next, the performance of all algorithms is visually observed through Figs. 3–6. The final population of all algorithms on ZDT4 with m = 2 and convex PF is displayed in Fig. 3, which clearly illustrates the superior performance of MOPSO/DMS. Compared to the other ten algorithms, it exhibits superior diversity and convergence even if its distribution is not particularly uniform. Consequently, MOPSO/DMS performs better.
The final population distribution of all algorithms on the three-objective DTLZ3 with concave PF is presented in Fig. 4. Fig. 4 indicates that, with the exception of MOEA/D, MOPSO/DMS has distinct advantages over the other nine algorithms in terms of diversity and convergence. Its distribution is reasonably uniform and quite close to the true PF, while the other nine methods have poor diversity and convergence. Therefore, MOPSO/DMS performs well on DTLZ3 with concave PF.
Fig. 5 presents the final population distribution of all algorithms on the three-objective DTLZ6 with degenerated PF. The final population distributions of CMOPSO, MMOPSO, SMPSO, MOEA/D-DE, NNIA, CA-MOEA and MOPSO/DMS are close to the true PF, among which CMOPSO, CA-MOEA and MOPSO/DMS have good diversity and convergence, while the solution sets of MMOPSO, SMPSO, MOEA/D-DE and NNIA are not evenly distributed. The diversity and convergence of the remaining four algorithms are poor. Therefore, MOPSO/DMS is effective on DTLZ6 test problems with degenerated PF.
Fig. 6 shows the final population distribution of each algorithm on the three-objective UF9 with linear and disconnected PF. From Fig. 6, it is not difficult to see that compared with the other ten algorithms, the final population distribution of MOPSO/DMS is closer to the true PF, with better diversity and convergence, and wider population coverage. Therefore, MOPSO/DMS performs better on UF9 test problem with linear and disconnected PF.
The trajectories of the IGD values of all algorithms on ZDT4, DTLZ1 and UF9 are displayed in Fig. 7, with the horizontal and vertical coordinates representing the number of evaluations and the IGD value, respectively. In Fig. 7, MPSO/D and MOEA/DVA perform the worst on ZDT4, CMOPSO performs the worst on DTLZ1, and on UF9 all algorithms except MOPSO/DMS perform worse. It is not difficult to find that MOPSO/DMS has obvious advantages on all three test problems, especially on UF9, where it achieves the smallest IGD value; this also verifies the fast convergence speed of the PSO framework.
The primary innovation of MOPSO/DMS lies in its decomposition of the objective space, where it employs a multi-selection strategy to choose the appropriate gbest and pbest for each search particle based on the evolution of each subspace. This approach also minimizes the impact of PF discontinuity on the solution set. Additionally, both the evolving population and the external archive are updated, with the superior solution set being selected for output post-iteration. To enhance search efficiency further, an adaptive inertia weight update strategy is introduced. Ablation experiments are conducted to validate MOPSO/DMS’s effectiveness. To assess the multi-selection strategy’s efficacy, MOPSO/DMS-I uses the selection strategy from MOPSO/D, while MOPSO/DMS-II disregards PF discontinuity to test the strategy’s ability to avoid searching discontinuous subspaces. Lastly, MOPSO/DMS-III employs a linear inertia weight adjustment strategy to evaluate the adaptive inertia weight update strategy’s impact on improving search efficiency. Tables 6 and 7 show the comparison results of HV values and IGD values of the four algorithms on 22 test problems, respectively.
In Table 6, for 22 test problems, MOPSO/DMS achieves the highest HV value on 18 test problems, and MOPSO/DMS performs better than MOPSO/DMS-I, MOPSO/DMS-II and MOPSO/DMS-III on 21, 9 and 8 test problems, respectively. Meanwhile, in Table 7, among 22 test problems, MOPSO/DMS performs best on 19 test problems, and is superior to MOPSO/DMS-I, MOPSO/DMS-II and MOPSO/DMS-III on 18, 12 and 7 test problems, respectively. In general, MOPSO/DMS performs best.
Multi-objective optimization problems frequently arise in various aspects of production and daily life. To enhance solution quality and optimize search efficiency, this article introduces a novel multi-objective particle swarm optimization algorithm that integrates decomposition and a multi-selection strategy. Initially, the decomposition technique maintains diversity effectively, while an external archive stores historical optimal solutions; the evolving population and the external archive are updated through distinct strategies. Concurrently, depending on whether each subspace contains non-dominated solutions, the multi-selection strategy selects suitable global and personal optimal solutions for each particle's update. This approach also minimizes the impact of PF discontinuity on the final solution set. An adaptive inertia weight update strategy is also implemented to boost algorithm performance. The proposed algorithm was compared with several mainstream MOPSOs and MOEAs across 22 test problems, demonstrating superior overall performance. Current research on this algorithm is confined to two- and three-objective benchmark problems. Future work should focus on applying the algorithm to real-world problems and extending it to many-objective optimization scenarios.
Acknowledgement: The authors would like to thank all reviewers and editors for their constructive comments and feedback on our manuscript, which helped us improve the quality of our manuscript.
Funding Statement: This work was supported by the National Natural Science Foundation of China (nos. 12271326, 62102304, 61806120, 61502290, 61672334, 61673251), China Postdoctoral Science Foundation (no. 2015M582606), Industrial Research Project of Science and Technology in Shaanxi Province (nos. 2015GY016, 2017JQ6063), Fundamental Research Fund for the Central Universities (no. GK202003071), and Natural Science Basic Research Plan in Shaanxi Province of China (no. 2022JM-354).
Author Contributions: Study conception and design: Li Ma, Cai Dai; data collection: Li Ma, Xingsi Xue; analysis and interpretation of results: Li Ma, Cai Dai, Cheng Peng; draft manuscript preparation: Li Ma. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: Not applicable.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.
References
1. A. Zhou, B. Y. Qu, H. Li, S. Z. Zhao, P. N. Suganthan and Q. Zhang, “Multiobjective evolutionary algorithms: A survey of the state of the art,” Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, 2011. doi: 10.1016/j.swevo.2011.03.001.
2. Y. Zhang, D. Gong, and J. Zhang, “Robot path planning in uncertain environment using multi-objective particle swarm optimization,” Neurocomputing, vol. 103, no. 3, pp. 172–185, 2013. doi: 10.1016/j.neucom.2012.09.019.
3. Z. Chen, H. Wu, Y. Chen, L. Cheng, and B. Zhang, “Patrol robot path planning in nuclear power plant using an interval multi-objective particle swarm optimization algorithm,” Appl. Soft Comput., vol. 116, no. 19, 2022, Art. no. 108192. doi: 10.1016/j.asoc.2021.108192.
4. A. A. Alabbadi and M. F. Abulkhair, “Multi-objective task scheduling optimization in spatial crowdsourcing,” Algorithms, vol. 14, no. 3, 2021, Art. no. 77. doi: 10.3390/a14030077.
5. Z. Cui et al., “Hybrid many-objective particle swarm optimization algorithm for green coal production problem,” Inf. Sci., vol. 518, no. 7, pp. 256–271, 2020. doi: 10.1016/j.ins.2020.01.018.
6. X. Cai, J. Zhang, Z. Ning, Z. Cui, and J. Chen, “A many-objective multistage optimization-based fuzzy decision-making model for coal production prediction,” IEEE Trans. Fuzzy Syst., vol. 29, no. 12, pp. 3665–3675, 2021. doi: 10.1109/TFUZZ.2021.3089230.
7. B. Zhang, Q. Pan, L. Gao, L. L. Meng, X. Y. Li and K. K. Peng, “A three-stage multiobjective approach based on decomposition for an energy-efficient hybrid flow shop scheduling problem,” IEEE Trans. Syst. Man Cybern. Syst., vol. 50, no. 12, pp. 4984–4999, 2019. doi: 10.1109/TSMC.2019.2916088.
8. K. Li and R. Chen, “Batched data-driven evolutionary multiobjective optimization based on manifold interpolation,” IEEE Trans. Evol. Comput., vol. 27, no. 1, pp. 126–140, 2022. doi: 10.1109/TEVC.2022.3162993.
9. C. Pizzuti, “A multi-objective genetic algorithm for community detection in networks,” in 2009 21st IEEE Int. Conf. Tools Artif. Intell., Nov. 2009, pp. 379–386. doi: 10.1109/ICTAI.2009.58.
10. S. Wikaisuksakul, “A multi-objective genetic algorithm with fuzzy c-means for automatic data clustering,” Appl. Soft Comput., vol. 24, pp. 679–691, Nov. 2014. doi: 10.1016/j.asoc.2014.08.036.
11. Y. Han et al., “Multi-strategy multi-objective differential evolutionary algorithm with reinforcement learning,” Knowl.-Based Syst., vol. 277, no. 8, Oct. 2023, Art. no. 110801. doi: 10.1016/j.knosys.2023.110801.
12. X. Yu, Z. Hu, W. Luo, and Y. Xue, “Reinforcement learning-based multi-objective differential evolution algorithm for feature selection,” Inf. Sci., vol. 661, Mar. 2024, Art. no. 120185. doi: 10.1016/j.ins.2024.120185.
13. H. Han, Y. Liu, Y. Hou, and J. Qiao, “Multi-modal multi-objective particle swarm optimization with self-adjusting strategy,” Inf. Sci., vol. 629, no. 1, pp. 580–598, Jun. 2023. doi: 10.1016/j.ins.2023.02.019.
14. Y. Li, Y. Zhang, and W. Hu, “Adaptive multi-objective particle swarm optimization based on virtual Pareto front,” Inf. Sci., vol. 625, pp. 206–236, May 2023. doi: 10.1016/j.ins.2022.12.079.
15. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proc. ICNN’95-Int. Conf. Neural Netw., Nov. 1995, vol. 4, pp. 1942–1948. doi: 10.1109/ICNN.1995.488968.
16. X. Xu, J. Li, M. Zhou, J. Xu, and J. Cao, “Accelerated two-stage particle swarm optimization for clustering not-well-separated data,” IEEE Trans. Syst. Man Cybern. Syst., vol. 50, no. 11, pp. 4212–4223, 2018. doi: 10.1109/TSMC.2018.2839618.
17. J. Zheng, Z. Zhang, J. Zou, S. Yang, J. Ou and Y. Hu, “A dynamic multi-objective particle swarm optimization algorithm based on adversarial decomposition and neighborhood evolution,” Swarm Evol. Comput., vol. 69, no. 9, 2022, Art. no. 100987. doi: 10.1016/j.swevo.2021.100987.
18. S. Carstensen and J. C. W. Lin, “TKU-PSO: An efficient particle swarm optimization model for top-K high-utility itemset mining,” Int. J. Interact. Multimed. Artif. Intell., Jan. 2024. doi: 10.9781/ijimai.2024.01.002.
19. H. Han, L. Zhang, A. Yinga, and J. Qiao, “Adaptive multiple selection strategy for multi-objective particle swarm optimization,” Inf. Sci., vol. 624, no. 2, pp. 235–251, May 2023. doi: 10.1016/j.ins.2022.12.077.
20. G. T. Pulido and C. A. Coello Coello, “Using clustering techniques to improve the performance of a multi-objective particle swarm optimizer,” in Genetic and Evolutionary Computation—GECCO 2004, Berlin, Heidelberg: Springer, 2004, vol. 3102, pp. 225–237.
21. Z. H. Zhan, J. Li, J. Cao, J. Zhang, H. S. H. Chung and Y. H. Shi, “Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems,” IEEE Trans. Cybern., vol. 43, no. 2, pp. 445–463, 2013. doi: 10.1109/TSMCB.2012.2209115.
22. S. Z. Martínez and C. A. Coello Coello, “A multi-objective particle swarm optimizer based on decomposition,” in Proc. 13th Annu. Conf. Genetic Evol. Comput., New York, NY, USA, Association for Computing Machinery, Jul. 2011, pp. 69–76. doi: 10.1145/2001576.2001587.
23. N. A. Moubayed, A. Petrovski, and J. McCall, “D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces,” Evol. Comput., vol. 22, no. 1, pp. 47–77, 2014. doi: 10.1162/EVCO_a_00104.
24. C. Dai, Y. Wang, and M. Ye, “A new multi-objective particle swarm optimization algorithm based on decomposition,” Inf. Sci., vol. 325, no. 9, pp. 541–557, 2015. doi: 10.1016/j.ins.2015.07.018.
25. W. F. Leong and G. G. Yen, “PSO-based multiobjective optimization with dynamic population size and adaptive local archives,” IEEE Trans. Syst. Man Cybern. Part B Cybern., vol. 38, no. 5, pp. 1270–1293, 2008. doi: 10.1109/TSMCB.2008.925757.
26. Y. J. Zheng, H. F. Ling, J. Y. Xue, and S. Y. Chen, “Population classification in fire evacuation: A multiobjective particle swarm optimization approach,” IEEE Trans. Evol. Comput., vol. 18, no. 1, pp. 70–81, 2013. doi: 10.1109/TEVC.2013.2281396.
27. W. Hu and G. G. Yen, “Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system,” IEEE Trans. Evol. Comput., vol. 19, no. 1, pp. 1–18, 2013. doi: 10.1109/TEVC.2013.2296151.
28. N. Ye, C. Dai, and X. Xue, “A two-archive many-objective optimization algorithm based on D-Domination and decomposition,” Algorithms, vol. 15, no. 11, 2022, Art. no. 392. doi: 10.3390/a15110392.
29. X. Shu, Y. Liu, J. Liu, M. Yang, and Q. Zhang, “Multi-objective particle swarm optimization with dynamic population size,” J. Comput. Des. Eng., vol. 10, no. 1, pp. 446–467, 2023. doi: 10.1093/jcde/qwac139.
30. X. Chen, H. Tianfield, and W. Du, “Bee-foraging learning particle swarm optimization,” Appl. Soft Comput., vol. 102, no. 11, 2021, Art. no. 107134. doi: 10.1016/j.asoc.2021.107134.
31. T. Guan, F. Han, and H. Han, “A modified multi-objective particle swarm optimization based on levy flight and double-archive mechanism,” IEEE Access, vol. 7, pp. 183444–183467, 2019. doi: 10.1109/ACCESS.2019.2960472.
32. H. Cheng, L. Li, and L. You, “A weight vector adjustment method for decomposition-based multi-objective evolutionary algorithms,” IEEE Access, vol. 11, pp. 42324–42330, 2023. doi: 10.1109/ACCESS.2023.3270806.
33. Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, 2007. doi: 10.1109/TEVC.2007.892759.
34. K. Liagkouras and K. Metaxiotis, “An elitist polynomial mutation operator for improved performance of MOEAs in computer networks,” in 2013 22nd Int. Conf. Comput. Commun. Netw. (ICCCN), Jul. 2013, pp. 1–5. doi: 10.1109/ICCCN.2013.6614105.
35. X. Zhang, X. Zheng, R. Cheng, J. Qiu, and Y. Jin, “A competitive mechanism based multi-objective particle swarm optimizer with fast convergence,” Inf. Sci., vol. 427, no. 2, pp. 63–76, 2018. doi: 10.1016/j.ins.2017.10.037.
36. Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, “A novel multi-objective particle swarm optimization with multiple search strategies,” Eur. J. Oper. Res., vol. 247, no. 3, pp. 732–744, 2015. doi: 10.1016/j.ejor.2015.06.071.
37. A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna and E. Alba, “SMPSO: A new PSO-based metaheuristic for multi-objective optimization,” in 2009 IEEE Symp. Comput. Intell. Multi-Criteria Decis.-Mak. (MCDM), IEEE, Mar. 2009, pp. 66–73. doi: 10.1109/MCDM.2009.4938830.
38. H. Li and Q. Zhang, “Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II,” IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 284–302, 2008. doi: 10.1109/TEVC.2008.925798.
39. X. Ma et al., “A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables,” IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 275–298, 2015. doi: 10.1109/TEVC.2015.2455812.
40. M. Gong, L. Jiao, H. Du, and L. Bo, “Multiobjective immune algorithm with nondominated neighbor-based selection,” Evol. Comput., vol. 16, no. 2, pp. 225–255, 2008. doi: 10.1162/evco.2008.16.2.225.
41. Y. Hua, Y. Jin, and K. Hao, “A clustering-based adaptive evolutionary algorithm for multiobjective optimization with irregular Pareto fronts,” IEEE Trans. Cybern., vol. 49, no. 7, pp. 2758–2770, 2018. doi: 10.1109/TCYB.2018.2834466.
42. Y. Tian et al., “Evolutionary large-scale multi-objective optimization: A survey,” ACM Comput. Surv., vol. 54, no. 8, pp. 174:1–174:34, Oct. 2021. doi: 10.1145/3470971.
43. M. Abdel-Basset, R. Mohamed, S. Mirjalili, R. K. Chakrabortty, and M. Ryan, “An efficient marine predators algorithm for solving multi-objective optimization problems: Analysis and validations,” IEEE Access, vol. 9, pp. 42817–42844, 2021. doi: 10.1109/ACCESS.2021.3066323.
44. X. Li, X. L. Li, K. Wang, and Y. Li, “A multi-objective particle swarm optimization algorithm based on enhanced selection,” IEEE Access, vol. 7, pp. 168091–168103, 2019. doi: 10.1109/ACCESS.2019.2954542.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.