Departmental Bulletin Paper Policy Learning Using Modified Learning Vector Quantization for Reinforcement Learning Problems

Afif Mohd Faudzi, Ahmad; Murata, Junichi

20(2), pp. 39-44, 2015-07, Faculty of Information Science and Electrical Engineering, Kyushu University
ISSN:1342-3819
NCID:AN10569565
Description
Reinforcement learning (RL) enables an agent to find an optimal solution to a problem by interacting with the environment. In previous research, Q-learning, one of the popular learning methods in RL, was used to generate a policy, from which an abstract policy was then extracted by the LVQ algorithm. In this paper, the aim is to train the agent to learn an optimal policy from scratch and to generate the abstract policy in a single operation by the LVQ algorithm. When applying the LVQ algorithm in an RL framework, an erroneous teaching signal can cause the learning to end in failure or in a non-optimal solution. Here, a new LVQ algorithm is proposed to overcome this problem. The new LVQ algorithm introduces, first, a regular reward that is obtained by the agent autonomously based on its behavior and, second, a function that converts the regular reward into a new reward so that the learning system does not suffer from the undesirable effect of a small reward. Through these modifications, the agent is expected to find the optimal solution more efficiently.
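The abstract does not give the exact update rule, so the following is only a minimal sketch of the general idea it describes: an LVQ prototype update driven by a reward signal instead of a class label, combined with a reward-shaping function so that small raw rewards still yield a usable learning signal. All names (RewardModulatedLVQ, shape_reward, the tanh squashing, learning-rate values) are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np


class RewardModulatedLVQ:
    """Sketch of LVQ-style policy learning where the teaching signal
    comes from a (shaped) reward rather than a ground-truth label."""

    def __init__(self, n_prototypes, state_dim, n_actions, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        # Each prototype is a point in state space with an attached action.
        self.prototypes = self.rng.normal(size=(n_prototypes, state_dim))
        self.actions = self.rng.integers(n_actions, size=n_prototypes)
        self.lr = lr

    def act(self, state):
        # Winner-takes-all: the nearest prototype decides the action.
        dists = np.linalg.norm(self.prototypes - state, axis=1)
        winner = int(np.argmin(dists))
        return winner, int(self.actions[winner])

    @staticmethod
    def shape_reward(reward, scale=5.0):
        # Hypothetical shaping function: squash the "regular" reward so
        # that very small rewards still produce a noticeable update.
        return np.tanh(scale * reward)

    def update(self, state, winner, reward):
        # Move the winning prototype toward the state when the shaped
        # reward is positive, and away from it when it is negative.
        g = self.shape_reward(reward)
        self.prototypes[winner] += self.lr * g * (state - self.prototypes[winner])
```

A typical usage loop would call act() on the current state, execute the returned action in the environment, observe the reward, and pass state, winner, and reward to update(); the reward-shaping step is where the paper's second modification would plug in.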
Full-Text

http://catalog.lib.kyushu-u.ac.jp/handle/2324/1560523/p039.pdf

