TY - GEN

T1 - Selection of extra objectives using reinforcement learning in non-stationary environment

T2 - 20th International Conference on Soft Computing: Evolutionary Computation, Genetic Programming, Swarm Intelligence, Fuzzy Logic, Neural Networks, Fractals, Bayesian Methods, MENDEL 2014

AU - Petrova, Irina

AU - Buzdalova, Arina

AU - Buzdalov, Maxim

PY - 2014

Y1 - 2014

N2 - Using extra objectives in evolutionary algorithms helps to avoid getting stuck in local optima and increases genetic diversity. We consider a method based on a reinforcement learning (RL) algorithm that selects objectives in an evolutionary algorithm (EA) during optimization. The method is called EA+RL. In some previous studies, reinforcement learning algorithms for stationary environments were used to adjust evolutionary algorithms. However, when the properties of extra objectives change during the optimization process, we argue that it is better to use reinforcement learning algorithms specially developed for non-stationary environments. We present initial research towards EA+RL for a non-stationary environment. A new reinforcement learning algorithm is proposed to be used in the EA+RL method. We also formulate a benchmark problem with extra objectives that behave differently at different stages of optimization, so non-stationarity arises. The new algorithm is applied to this problem and compared with the methods used in previous studies. It is shown that the proposed method more often chooses the extra objectives that are efficient at the current optimization stage and obtains higher values of the target objective being optimized.

AB - Using extra objectives in evolutionary algorithms helps to avoid getting stuck in local optima and increases genetic diversity. We consider a method based on a reinforcement learning (RL) algorithm that selects objectives in an evolutionary algorithm (EA) during optimization. The method is called EA+RL. In some previous studies, reinforcement learning algorithms for stationary environments were used to adjust evolutionary algorithms. However, when the properties of extra objectives change during the optimization process, we argue that it is better to use reinforcement learning algorithms specially developed for non-stationary environments. We present initial research towards EA+RL for a non-stationary environment. A new reinforcement learning algorithm is proposed to be used in the EA+RL method. We also formulate a benchmark problem with extra objectives that behave differently at different stages of optimization, so non-stationarity arises. The new algorithm is applied to this problem and compared with the methods used in previous studies. It is shown that the proposed method more often chooses the extra objectives that are efficient at the current optimization stage and obtains higher values of the target objective being optimized.

KW - Evolutionary algorithms

KW - Fitness function

KW - Multiobjectivization

KW - Non-stationary

KW - Reinforcement learning

UR - http://www.scopus.com/inward/record.url?scp=84938065920&partnerID=8YFLogxK

M3 - Conference Proceeding

AN - SCOPUS:84938065920

T3 - Mendel

SP - 105

EP - 110

BT - 20th International Conference on Soft Computing

PB - Brno University of Technology

Y2 - 25 June 2014 through 27 June 2014

ER -