Abstract
Benchmarking aims to investigate the performance of one or several algorithms on a set of reference problems by empirical means. An important motivation for benchmarking is the generation of insight that can be leveraged for designing more efficient solvers, for selecting the best-suited algorithm, and/or for choosing a suitable instantiation of a parametrized algorithm. An important component of benchmarking is its design of experiments (DoE), which comprises the selection of the problems, the algorithms, the computational budget, etc., but also the performance indicators by which the data are evaluated. The DoE depends strongly on the question that the user aims to answer. Flexible benchmarking environments that can easily adapt to users' needs are therefore in high demand.
With the objective of providing such a flexible benchmarking environment, the recently released IOHprofiler not only allows the user to choose the sets of benchmark problems and reference algorithms, but also provides a highly interactive, versatile performance evaluation. However, it still lacks a few important performance indicators that are relevant to practitioners and/or theoreticians. In this discussion paper we focus on one particular aspect: the probability that a considered algorithm reaches a certain target value within a given time budget. We therefore suggest extending the classically regarded fixed-target and fixed-budget analyses by a fixed-probability measure. Fixed-probability curves are estimated using Pareto layers of (target, budget) pairs that can be realized by the algorithm with the required certainty. We also provide a first implementation towards a fixed-probability module within the IOHprofiler environment.
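The fixed-probability idea described in the abstract can be sketched as follows: for a required success probability p, one asks, for each target value, for the smallest budget at which the algorithm reaches that target in at least a fraction p of independent runs; the non-dominated (budget, target) pairs then trace the estimated fixed-probability curve. The following Python snippet is a minimal, hypothetical sketch of such an estimation from raw first-hitting times; it is not part of IOHprofiler's API, it assumes independent runs and a maximization problem, and the names `hitting_times` and `fixed_probability_curve` are illustrative only.

```python
import numpy as np

def fixed_probability_curve(hitting_times, p):
    """Sketch of a fixed-probability curve estimate from independent runs.

    hitting_times: dict mapping a target value to an array of first-hitting
        times over all runs (np.inf for runs that never reach the target).
    p: required success probability, e.g. 0.5.

    Returns the Pareto-optimal (budget, target) pairs: for each target, the
    smallest budget at which at least a fraction p of the runs have reached
    it (the empirical p-quantile of the hitting times), keeping only pairs
    not dominated by a cheaper and at least as good pair.
    """
    pairs = []
    for target, times in hitting_times.items():
        times = np.sort(np.asarray(times, dtype=float))
        k = int(np.ceil(p * len(times)))   # need at least k successful runs
        budget = times[k - 1]              # k-th smallest hitting time
        if np.isfinite(budget):            # target reachable with prob. >= p
            pairs.append((budget, target))
    # Pareto filter (minimize budget, maximize target): sort by budget,
    # break ties with the larger target first, keep strictly improving targets.
    pairs.sort(key=lambda bt: (bt[0], -bt[1]))
    front, best_target = [], -np.inf
    for budget, target in pairs:
        if target > best_target:
            front.append((budget, target))
            best_target = target
    return front
```

For instance, with p = 0.5 the returned pairs mark the budgets at which each target is reached by at least half of the runs; using the ceiling of p times the number of runs makes the quantile estimate conservative, so every returned pair is realized with at least the required certainty on the observed data.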
Original language | English |
---|---|
Title of host publication | GECCO '19 |
Subtitle of host publication | Proceedings of the Genetic and Evolutionary Computation Conference Companion |
Editors | Manuel López-Ibáñez |
Publisher | Association for Computing Machinery |
Pages | 1807-1812 |
Number of pages | 6 |
ISBN (Print) | 978-1-4503-6748-6 |
DOIs | |
Publication status | Published - 13 Jul 2019 |
Externally published | Yes |
Event | GECCO 2019: The Genetic and Evolutionary Computation Conference, Prague, Czech Republic<br />Duration: 13 Jul 2019 → 17 Jul 2019<br />https://gecco-2019.sigevo.org
Conference
Conference | GECCO 2019: The Genetic and Evolutionary Computation Conference |
---|---|
Country/Territory | Czech Republic |
City | Prague |
Period | 13 Jul 2019 → 17 Jul 2019 |
Internet address | https://gecco-2019.sigevo.org
Keywords
- benchmarking
- black-box optimization
- iterative optimization heuristics
- performance evaluation