Abstract
A first step towards analyzing the runtime complexity of an evolutionary algorithm adaptively adjusted using reinforcement learning is made. We analyze the previously proposed EA + RL method, which enhances single-objective optimization by selecting efficient auxiliary fitness functions. Specifically, we consider a Random Mutation Hill Climber adjusted with Q-learning using the greedy exploration strategy. We obtain both lower and upper bounds on the number of fitness function evaluations needed for this EA + RL implementation to solve a modified OneMax problem. It turns out that EA + RL with an inefficient auxiliary fitness function performs on par with a conventional evolutionary algorithm, namely in Θ(N log N) fitness function evaluations, where N is the size of the OneMax problem. In other words, we show that reinforcement learning successfully ignores the inefficient fitness function. We also analyze a lower bound for the ε-greedy exploration strategy with ε > 0.
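The abstract describes the scheme informally; the following is a minimal Python sketch of how such an EA + RL loop can be organised. It assumes a single-state Q-learning agent whose reward is the change in target fitness, a "number of zeros" auxiliary function standing in for the inefficient helper, and random tie-breaking within the greedy strategy. All of these specifics are illustrative assumptions, not the paper's exact construction.

```python
import random

N = 20  # problem size (number of bits)

def onemax(x):      # target fitness: number of ones
    return sum(x)

def auxiliary(x):   # assumed inefficient helper: number of zeros
    return len(x) - sum(x)

fitness_functions = [onemax, auxiliary]
# Single-state Q-learning: one Q-value per fitness function
# (an assumption; the paper's state space may be richer).
Q = [0.0, 0.0]
alpha = 0.5         # learning rate (illustrative value)

x = [random.randint(0, 1) for _ in range(N)]
iterations = 0
while onemax(x) < N:
    # Greedy exploration: pick a fitness function with the highest
    # Q-value, breaking ties uniformly at random.
    best = max(Q)
    a = random.choice([i for i in range(len(Q)) if Q[i] == best])
    f = fitness_functions[a]
    # Random Mutation Hill Climber step: flip one uniformly chosen bit.
    y = list(x)
    i = random.randrange(N)
    y[i] = 1 - y[i]
    # Accept the offspring if the chosen fitness function does not decrease.
    x_new = y if f(y) >= f(x) else x
    # Reward: improvement in the target fitness (a common EA + RL choice).
    r = onemax(x_new) - onemax(x)
    Q[a] += alpha * (r - Q[a])  # single-state Q-update (discount omitted)
    x = x_new
    iterations += 1

print(f"Solved OneMax of size {N} in {iterations} iterations")
```

In this sketch, an acceptance driven by the auxiliary function can only lower the target fitness, so that action receives a negative reward and the greedy strategy subsequently ignores it; this mirrors the abstract's conclusion that the learner discards the inefficient fitness function and retains the Θ(N log N) behaviour of the plain hill climber.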
| Original language | English |
| --- | --- |
| Title | ICMLA '13 |
| Subtitle | Proceedings of the 2013 12th International Conference on Machine Learning and Applications |
| Publisher | IEEE Press |
| Pages | 203-208 |
| Number of pages | 6 |
| Volume | 1 |
| ISBN (Electronic) | 978-0-7695-5144-9 |
| Digital Object Identifiers (DOIs) | |
| Status | Published - 4 Dec 2013 |
| Published externally | Yes |
| Event | 2013 12th International Conference on Machine Learning and Applications - Miami, United States. Duration: 4 Dec 2013 → 7 Dec 2013 |
Conference
| Conference | 2013 12th International Conference on Machine Learning and Applications |
| --- | --- |
| Country/Territory | United States |
| City | Miami |
| Period | 4 Dec 2013 → 7 Dec 2013 |