Selecting evolutionary operators using reinforcement learning: initial explorations

Arina Buzdalova, Vladislav Kononov, Maxim Buzdalov

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding (Non-Journal item)

9 Citations (Scopus)

Abstract

In evolutionary optimization, it is important to use efficient evolutionary operators, such as mutation and crossover. However, it is often difficult to decide which operator should be used when solving a specific optimization problem, so an automatic approach is needed. We propose an adaptive method for selecting evolutionary operators, which takes a set of possible operators as input and learns which operators are efficient for the problem at hand. A single evolutionary algorithm run should be enough both for learning and for obtaining suitable performance. The proposed EA+RL(O) method is based on reinforcement learning. We test it on the H-IFF and Travelling Salesman optimization problems. The obtained results show that the proposed method significantly outperforms random selection, since it manages to select efficient evolutionary operators and to ignore inefficient ones.
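
The abstract gives no implementation details, but the core idea, a reinforcement-learning agent choosing among candidate operators based on the fitness improvement they yield, can be illustrated with a small sketch. The Python example below is a toy built on assumptions: it uses an epsilon-greedy bandit-style agent, a OneMax objective instead of the paper's H-IFF and TSP benchmarks, and invented operators and parameter values. It is not the authors' EA+RL(O) implementation.

```python
# Illustrative sketch only: adaptive operator selection with a simple
# epsilon-greedy reinforcement-learning agent, in the spirit of the
# idea described in the abstract. Problem, operators and parameters
# are assumptions made for this example.
import random

def one_max(bits):
    return sum(bits)

def flip_one(bits):
    # operator 1: flip a single random bit
    child = bits[:]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def flip_three(bits):
    # operator 2: flip three random bits
    child = bits[:]
    for i in random.sample(range(len(child)), 3):
        child[i] ^= 1
    return child

def shuffle_segment(bits):
    # operator 3: shuffle a random segment (cannot change the OneMax value)
    child = bits[:]
    i, j = sorted(random.sample(range(len(child)), 2))
    seg = child[i:j]
    random.shuffle(seg)
    child[i:j] = seg
    return child

def run(n=100, iterations=5000, epsilon=0.1, seed=1):
    random.seed(seed)
    operators = [flip_one, flip_three, shuffle_segment]
    totals = [0.0] * len(operators)   # cumulative reward per operator
    counts = [1e-9] * len(operators)  # usage counts (avoid division by zero)

    parent = [random.randint(0, 1) for _ in range(n)]
    fitness = one_max(parent)

    for _ in range(iterations):
        # epsilon-greedy choice of the next evolutionary operator
        if random.random() < epsilon:
            k = random.randrange(len(operators))
        else:
            k = max(range(len(operators)), key=lambda j: totals[j] / counts[j])

        child = operators[k](parent)
        child_fitness = one_max(child)

        # reward = non-negative fitness improvement obtained by the operator
        totals[k] += max(0, child_fitness - fitness)
        counts[k] += 1

        if child_fitness >= fitness:  # (1+1)-style elitist acceptance
            parent, fitness = child, child_fitness

    usage = [int(c) for c in counts]
    return fitness, usage

if __name__ == "__main__":
    best, usage = run()
    print("best fitness:", best)
    print("operator usage [flip_one, flip_three, shuffle_segment]:", usage)
```

The non-negative improvement reward is one common choice in adaptive operator selection; the paper itself may define states and rewards differently. With these assumptions, the agent typically concentrates its choices on the operators that keep improving fitness and largely ignores the shuffle operator, mirroring the behaviour the abstract describes.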
Original language: English
Title of host publication: GECCO Comp '14
Subtitle of host publication: Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation
Publisher: Association for Computing Machinery
Pages: 1033-1036
Number of pages: 4
ISBN (Print): 978-1-4503-2881-4
DOIs
Publication status: Published - 12 Jul 2014
Externally published: Yes
Event: GECCO 2014: The Genetic and Evolutionary Computation Conference - Vancouver, Canada
Duration: 12 Jul 2014 - 16 Jul 2014

Conference

Conference: GECCO 2014: The Genetic and Evolutionary Computation Conference
Country/Territory: Canada
City: Vancouver
Period: 12 Jul 2014 - 16 Jul 2014

Keywords

  • evolutionary algorithms
  • parameter control
