Analysis of Q-learning with random exploration for selection of auxiliary objectives in random local search

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding (Non-Journal item)


Abstract

We perform a theoretical analysis of a previously proposed method for enhancing the performance of an evolutionary algorithm with reinforcement learning. The method adaptively chooses between auxiliary objectives in a single-objective evolutionary algorithm using reinforcement learning. We consider the Q-learning algorithm with the ϵ-greedy strategy (ϵ > 0) on a benchmark problem based on ONEMAX. For the evolutionary algorithm, we consider Random Local Search. In our setting, the ONEMAX problem must be solved in the presence of the obstructive ZEROMAX objective. This benchmark tests the ability of the reinforcement learning algorithm to ignore such an inefficient objective. It was previously shown that with the greedy strategy (ϵ = 0), the considered algorithm solves the described benchmark problem in the best possible time for a conventional evolutionary algorithm. With the ϵ-greedy strategy, however, the algorithm turns out to require exponential time. Furthermore, every selection algorithm which selects an inefficient auxiliary objective with probability at least δ is shown to be asymptotically inefficient when δ > 0 is a constant.
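To make the analyzed setup concrete, below is a minimal sketch of ϵ-greedy Q-learning choosing between the target ONEMAX objective and the obstructive ZEROMAX objective inside Random Local Search. The state encoding (current ONEMAX value), the reward (change in target fitness), and all parameter and function names are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import random

def onemax(x):
    """Target objective: number of one-bits."""
    return sum(x)

def zeromax(x):
    """Obstructive objective: number of zero-bits."""
    return len(x) - sum(x)

def rls_with_q_learning(n=20, eps=0.1, alpha=0.5, gamma=0.9,
                        max_iters=10**6, seed=None):
    """Random Local Search where eps-greedy Q-learning picks the objective.

    State = current OneMax value (an illustrative choice); action = index of
    the objective used to accept or reject the mutated offspring.
    """
    rng = random.Random(seed)
    objectives = [onemax, zeromax]
    x = [rng.randint(0, 1) for _ in range(n)]
    q = [[0.0, 0.0] for _ in range(n + 1)]  # Q-values per state and action

    for t in range(max_iters):
        s = onemax(x)
        if s == n:
            return x, t  # optimum of the target objective reached
        # eps-greedy action selection: explore with probability eps
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if q[s][0] >= q[s][1] else 1
        # RLS mutation: flip exactly one uniformly chosen bit
        y = x[:]
        i = rng.randrange(n)
        y[i] ^= 1
        # accept the offspring if it is not worse under the chosen objective
        if objectives[a](y) >= objectives[a](x):
            x = y
        # reward = change in the target fitness; standard Q-learning update
        s_next = onemax(x)
        r = s_next - s
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
    return x, max_iters
```

Running this sketch with eps = 0 corresponds to the greedy setting shown to be efficient, while any constant eps > 0 places it in the regime proven to require exponential time, since the obstructive objective keeps being selected with constant probability.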

Original language: English
Title of host publication: 2015 IEEE Congress on Evolutionary Computation, CEC 2015
Subtitle of host publication: Proceedings
Publisher: IEEE Press
Pages: 1776-1783
Number of pages: 8
ISBN (Electronic): 9781479974924
DOIs
Publication status: Published - 14 Sept 2015
Externally published: Yes
Event: IEEE Congress on Evolutionary Computation, CEC 2015 - Sendai, Japan
Duration: 25 May 2015 - 28 May 2015

