Cooperative Q-learning with heterogeneity in actions

Bibliographic Details
Published in: IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, 5 pp.
Main Authors: Reza MirFattah, S.M.; Ahmadabadi, M.N.
Format: Conference Proceeding
Language: English
Published: IEEE, 2002

Summary: Cooperation in learning improves both the speed of convergence and the quality of learning, but special care is needed when heterogeneous agents cooperate: it is discussed that cooperation in learning may cause the learning process to diverge if the heterogeneity is not handled properly. In this paper, two heterogeneous Q-learning agents are assumed to cooperate in learning. The heterogeneity is assumed to be in the order of their actions (not in their action sets). A Q-learning-based method is introduced with which the agents learn the mapping among their actions, and it is shown that the agents are able to learn this mapping while cooperating in learning. Simulation results are reported to demonstrate the effectiveness of the proposed method.
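
The abstract describes the method only at a high level. As a rough sketch of the underlying idea, the following Python snippet pairs two single-state Q-learners whose action indices are permutations of each other and learns an action-to-action mapping with a Q-learning-style update; the toy bandit environment, the reward-matching rule used to score correspondences, and all variable names are illustrative assumptions, not the construction used in the paper.

import numpy as np

# Illustrative toy problem (an assumption, not the paper's setup):
# K underlying effects with distinct deterministic rewards.
K = 5
rng = np.random.default_rng(0)
effect_reward = np.linspace(0.0, 1.0, K)   # reward of each underlying effect
perm = rng.permutation(K)                  # agent B's action i triggers effect perm[i]

def step(agent, action):
    # Agent A's action index equals the effect index; B's indices are permuted.
    effect = action if agent == "A" else perm[action]
    return effect_reward[effect]

alpha, eps, episodes = 0.1, 0.3, 5000
Q = {"A": np.zeros(K), "B": np.zeros(K)}   # single-state problem, so Q is a vector
M = np.zeros((K, K))                       # M[a, b]: belief that A's action a matches B's action b

for _ in range(episodes):
    acts, rews = {}, {}
    for agent in ("A", "B"):
        a = int(rng.integers(K)) if rng.random() < eps else int(np.argmax(Q[agent]))
        r = step(agent, a)
        Q[agent][a] += alpha * (r - Q[agent][a])     # ordinary Q-learning update
        acts[agent], rews[agent] = a, r

    # Q-learning-style update of the mapping: actions observed to yield the
    # same outcome are reinforced as corresponding actions.
    same = 1.0 if abs(rews["A"] - rews["B"]) < 1e-9 else 0.0
    M[acts["A"], acts["B"]] += alpha * (same - M[acts["A"], acts["B"]])

# Cooperative step: align B's Q-values with A's action order before sharing.
b_to_a = np.argmax(M, axis=0)              # for each B-action, the best-matching A-action
aligned_B = np.zeros(K)
aligned_B[b_to_a] = Q["B"]                 # B's Q-vector expressed in A's ordering
Q_shared = 0.5 * (Q["A"] + aligned_B)      # averaging raw Q-vectors would mix unrelated actions

print("recovered mapping:", b_to_a)
print("true mapping:     ", perm)

Here the mapping is learned only from outcome agreement, which is the kind of signal that lets cooperation proceed safely once the agents' action orders have been reconciled.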
ISBN: 0780374371; 9780780374379
ISSN: 1062-922X; 2577-1655
DOI: 10.1109/ICSMC.2002.1173250