A benchmark test problem toolkit for multi-objective path optimization

Bibliographic Details
Published in: Swarm and Evolutionary Computation, Vol. 44, pp. 18-30
Main Authors: Hu, Xiao-Bing, Zhang, Hai-Lin, Zhang, Chi, Zhang, Ming-Kong, Li, Hang, Leeson, Mark S.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2019
Summary: Due to the complexity of multi-objective optimization problems (MOOPs) in general, it is crucial to test MOOP methods on benchmark test problems. Many benchmark test problem toolkits have been developed for continuous parameter/numerical optimization, but fewer have been reported for discrete combinatorial optimization. This paper reports a benchmark test problem toolkit specifically for the multi-objective path optimization problem (MOPOP), a typical category of discrete combinatorial optimization. With the reported toolkit, the complete Pareto front of a generated MOPOP test problem can be deduced manually, and the problem's scale and complexity are controllable and adjustable. Methods for discrete combinatorial MOOPs often output only a partial or approximated Pareto front. With the reported benchmark test problem toolkit for MOPOP, one can now tell precisely how many true Pareto points a partial Pareto front misses, or how large the gap is between an approximated Pareto front and the complete one.
ISSN: 2210-6502
DOI: 10.1016/j.swevo.2018.11.009
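The comparison the abstract describes, measuring an approximated Pareto front against a known complete one, can be sketched as follows. This is a minimal illustration, not code from the toolkit: the function names are hypothetical, and generational distance is assumed here as one common gap measure.

```python
# Hedged sketch: comparing an approximated Pareto front with the complete one.
# Function names and the generational-distance gap measure are illustrative
# assumptions, not part of the toolkit described in the paper.
import math


def missed_points(complete_front, approx_front):
    """Count true Pareto points absent from the approximated front."""
    approx = {tuple(p) for p in approx_front}
    return sum(1 for p in complete_front if tuple(p) not in approx)


def generational_distance(approx_front, complete_front):
    """Average Euclidean distance from each approximated point to its
    nearest true Pareto point (0 means the approximation is a subset)."""
    dists = [min(math.dist(a, c) for c in complete_front) for a in approx_front]
    return sum(dists) / len(dists)


# Toy two-objective fronts (minimization); values are made up for illustration.
complete = [(1.0, 9.0), (3.0, 5.0), (6.0, 2.0), (9.0, 1.0)]
approx = [(1.0, 9.0), (6.0, 2.0)]

print(missed_points(complete, approx))          # 2 true points are missed
print(generational_distance(approx, complete))  # 0.0: approx is a subset
```

Because the toolkit makes the complete Pareto front known by construction, absolute counts like `missed_points` become meaningful; without a known complete front, only relative comparisons between methods are possible.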