Abstract
Consider optimization problems in which a target objective should be optimized. Some auxiliary objectives can be used to reach the optimum of the target objective in fewer objective evaluations; we call such an auxiliary objective a supporting one. Usually there is no prior knowledge about the properties of auxiliary objectives, and some of them can be obstructive as well. What is more, an auxiliary objective can be supporting at some stages of the target objective optimization and obstructive at others. Thus, an adaptive online method of objective selection is needed. Earlier, we proposed such a method based on reinforcement learning. In this paper, a new algorithm for adaptive online selection of optimization objectives is proposed. The algorithm meets the interface of a reinforcement learning agent, so it can be fit into the previously proposed framework. The new algorithm is applied to solving benchmark problems with single-objective evolutionary algorithms. Specifically, Leading Ones with a OneMax auxiliary objective is considered, as well as the MH-IFF problem. Experimental results are presented: the proposed algorithm outperforms Q-learning and random objective selection on the considered problems.
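The abstract does not spell out the proposed algorithm itself, so the sketch below only illustrates the general setup it builds on: a single-objective evolutionary algorithm whose selection objective is chosen online by a reinforcement learning agent, with the change of the target objective (Leading Ones) used as the reward. The agent here is plain Q-learning, i.e. the baseline the paper compares against, not the new algorithm; all names and parameter values (`QLearningAgent`, `ea_rl`, `epsilon`, `alpha`, `gamma`) are illustrative assumptions, not the authors' implementation.

```python
import random


class QLearningAgent:
    """Epsilon-greedy Q-learning over a single dummy state.
    An action is the index of the objective used for selection."""

    def __init__(self, n_actions, epsilon=0.1, alpha=0.5, gamma=0.5):
        self.q = [0.0] * n_actions
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Standard Q-learning update; with one state, max(self.q) plays the role of the next-state value.
        self.q[action] += self.alpha * (reward + self.gamma * max(self.q) - self.q[action])


def one_max(bits):
    """Auxiliary objective: number of one-bits."""
    return sum(bits)


def leading_ones(bits):
    """Target objective: length of the leading run of one-bits."""
    count = 0
    for b in bits:
        if b == 0:
            break
        count += 1
    return count


def ea_rl(n=30, generations=2000, seed=0):
    """(1+1) EA whose selection objective is picked each iteration by the agent;
    the reward is the change of the target objective (Leading Ones)."""
    random.seed(seed)
    objectives = [leading_ones, one_max]  # index 0 is the target objective
    agent = QLearningAgent(len(objectives))
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        action = agent.choose()
        chosen = objectives[action]
        # Standard bit mutation with probability 1/n per bit.
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        old_target = leading_ones(parent)
        if chosen(child) >= chosen(parent):  # selection guided by the chosen objective
            parent = child
        agent.update(action, leading_ones(parent) - old_target)
        if leading_ones(parent) == n:
            break
    return parent
```

In this setup OneMax is supporting early on (it rewards adding one-bits anywhere) but can become obstructive once the leading prefix matters most, which is exactly the situation that motivates adaptive online objective selection.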
Original language | English |
---|---|
Title of host publication | ICMLA '14 |
Subtitle of host publication | Proceedings of the 2014 13th International Conference on Machine Learning and Applications |
Editors | C. Ferri, G. Qu, X. Chen, M. A. Wani, P. Angelov, J.-H. Lai |
Publisher | IEEE Press |
Pages | 584-587 |
Number of pages | 4 |
ISBN (Electronic) | 978-1-4799-7415-3 |
DOIs | |
Publication status | Published - 03 Dec 2014 |
Externally published | Yes |
Event | 2014 13th International Conference on Machine Learning and Applications (ICMLA), Detroit, United States of America. Duration: 03 Dec 2014 → 06 Dec 2014 |
Conference
Conference | 2014 13th International Conference on Machine Learning and Applications (ICMLA) |
---|---|
Country/Territory | United States of America |
City | Detroit |
Period | 03 Dec 2014 → 06 Dec 2014 |
Keywords
- reinforcement learning
- multi-objectivization
- evolutionary algorithms
- parameter control