Abstract
A first step is made towards analyzing the runtime complexity of an evolutionary algorithm adaptively adjusted using reinforcement learning. We analyze the previously proposed EA + RL method, which enhances single-objective optimization by selecting efficient auxiliary fitness functions. Specifically, we consider a Random Mutation Hill Climber adjusted with Q-learning using the greedy exploration strategy. We obtain both lower and upper bounds on the number of fitness function evaluations needed for this EA + RL implementation to solve a modified OneMax problem. It turns out that EA + RL with an inefficient auxiliary fitness function performs on par with a conventional evolutionary algorithm, namely in Θ(N log N) fitness function evaluations, where N is the size of the OneMax problem. In other words, we show that reinforcement learning successfully learns to ignore the inefficient fitness function. A lower bound for the ε-greedy exploration strategy with ε > 0 is derived as well.
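For illustration, the following is a minimal Python sketch of the general EA + RL scheme described above: a Random Mutation Hill Climber whose selection criterion is chosen each iteration by a single-state greedy Q-learning agent, with an intentionally inefficient auxiliary objective. The auxiliary function `zeromax`, the reward definition (change in target fitness), and the parameters `alpha` and `gamma` are illustrative assumptions, not the exact formulation analyzed in the paper; the paper's modified OneMax problem is likewise replaced here by plain OneMax.

```python
import random

N = 20  # problem size (illustrative)

def onemax(x):
    """Target fitness: number of ones in the bit string."""
    return sum(x)

def zeromax(x):
    """Assumed inefficient auxiliary fitness: number of zeros (conflicts with OneMax)."""
    return len(x) - sum(x)

FITNESS = [onemax, zeromax]  # actions available to the RL agent

def ea_rl_rmhc(n=N, alpha=0.5, gamma=0.9, seed=0):
    """Random Mutation Hill Climber whose acceptance criterion is picked
    each iteration by a single-state, two-action greedy Q-learning agent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    q = [0.0, 0.0]            # Q-values for the two fitness functions
    evaluations = 0
    while onemax(x) < n:
        # greedy action selection (ties broken at random)
        best = max(q)
        a = rng.choice([i for i, v in enumerate(q) if v == best])
        f = FITNESS[a]
        # RMHC step: flip one uniformly random bit
        y = x[:]
        i = rng.randrange(n)
        y[i] ^= 1
        # accept the offspring if the chosen fitness does not decrease
        child = y if f(y) >= f(x) else x
        evaluations += 1
        # reward is the change in the *target* fitness (OneMax)
        r = onemax(child) - onemax(x)
        q[a] += alpha * (r + gamma * max(q) - q[a])
        x = child
    return evaluations

print(ea_rl_rmhc())
```

In this sketch, an accepted step under the conflicting auxiliary objective yields a negative reward, so the agent quickly assigns it a low Q-value and the climber proceeds as a plain hill climber on OneMax; this informally mirrors the abstract's claim that reinforcement learning learns to ignore the inefficient fitness function.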
Original language | English |
---|---|
Title of host publication | ICMLA '13 |
Subtitle of host publication | Proceedings of the 2013 12th International Conference on Machine Learning and Applications |
Publisher | IEEE Press |
Pages | 203-208 |
Number of pages | 6 |
Volume | 1 |
ISBN (Electronic) | 978-0-7695-5144-9 |
Publication status | Published - 04 Dec 2013 |
Externally published | Yes |
Event | 2013 12th International Conference on Machine Learning and Applications, Miami, United States of America, 04 Dec 2013 → 07 Dec 2013 |
Conference
Conference | 2013 12th International Conference on Machine Learning and Applications |
---|---|
Country/Territory | United States of America |
City | Miami |
Period | 04 Dec 2013 → 07 Dec 2013 |
Keywords
- reinforcement learning
- onemax
- helper-objectives
- theory
- complexity