Runtime analysis of (1 + 1) evolutionary algorithm controlled with Q-learning using greedy exploration strategy on OneMax+ZeroMax problem

Denis Antipov, Maxim Buzdalov, Benjamin Doerr

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding (Non-Journal item)

1 Citation (Scopus)

Abstract

There exist optimization problems with a target objective, which is to be optimized, and several extra objectives. The extra objectives may or may not be helpful for the optimization process, measured in the number of objective evaluations needed to reach an optimum of the target objective. OneMax+ZeroMax is a previously proposed benchmark problem where the target objective is OneMax and the single extra objective is ZeroMax, which is equal to the number of zero bits in the bit vector. It is an example of a problem where the extra objective is harmful, so an objective selection method should learn to ignore it. EA+RL is a method that selects the objectives to be optimized by an evolutionary algorithm (EA) using reinforcement learning (RL). It was previously shown that EA+RL runs in Θ(N log N) on OneMax+ZeroMax when configured to use the randomized local search algorithm and the Q-learning algorithm with the greedy exploration strategy. We present the runtime analysis for the case when the (1 + 1)-EA algorithm is used instead. It is shown that the expected running time is at most 3.12eN log N.
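
For illustration only, the following is a minimal Python sketch of the analyzed setup, not code from the paper: a (1 + 1)-EA with standard bit mutation whose acceptance objective is picked in each iteration by greedy Q-learning over a single learning state. The reward being the change in the target objective, the learning rate of 1, and the discount of 0 are modelling assumptions of this sketch, as are all function and parameter names.

import random

def onemax(x):
    """Target objective: number of one-bits."""
    return sum(x)

def zeromax(x):
    """Extra (harmful) objective: number of zero-bits."""
    return len(x) - sum(x)

OBJECTIVES = [onemax, zeromax]

def ea_rl_one_plus_one(n, seed=None):
    """Sketch of EA+RL with a (1+1)-EA: greedy Q-learning picks which
    objective drives elitist selection; reward is the change in OneMax."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    q = [0.0, 0.0]  # one Q-value per objective (single learning state)
    iterations = 0
    while onemax(x) < n:
        iterations += 1
        # Greedy exploration: choose an objective with the highest Q-value,
        # breaking ties uniformly at random.
        best = max(q)
        a = rng.choice([i for i, v in enumerate(q) if v == best])
        # Standard bit mutation of the (1+1)-EA: flip each bit w.p. 1/n.
        before = onemax(x)
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        # Elitist selection with respect to the chosen objective.
        if OBJECTIVES[a](y) >= OBJECTIVES[a](x):
            x = y
        # Q-update with learning rate 1 and discount 0 (sketch assumptions):
        # the reward is the resulting change of the target objective, so a
        # step accepted under ZeroMax yields a negative reward.
        q[a] += onemax(x) - before
    return iterations

print(ea_rl_one_plus_one(50, seed=1))

In this sketch, an accepted ZeroMax step decreases OneMax and thus drives that objective's Q-value negative, after which greedy selection sticks to the target objective, mirroring the behavior the runtime analysis exploits.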
Original language: English
Title of host publication: Evolutionary Computation in Combinatorial Optimization
Subtitle of host publication: 15th European Conference, EvoCOP 2015, Copenhagen, Denmark, April 8-10, 2015, Proceedings
Publisher: Springer Nature
Pages: 160-172
Number of pages: 13
ISBN (Electronic): 978-3-319-16468-7
ISBN (Print): 978-3-319-16467-0
DOIs
Publication status: Published - 15 Mar 2015
Externally published: Yes

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Nature
Volume: 9026
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
