
Efficient Distributional Reinforcement Learning with Kullback-Leibler Divergence Regularization
  • Renxing Li,
  • Zhiwei Shang,
  • Chunhua Zheng,
  • Huiyun Li,
  • Qing Liang,
  • Yunduan Cui
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences

Corresponding Author: [email protected]


Abstract

In this article, we address the issues of stability and data efficiency in reinforcement learning (RL). A novel RL approach, Kullback–Leibler divergence-regularized distributional RL (KLC51), is proposed to integrate the advantages of both the stability of distributional RL and the data efficiency of Kullback–Leibler (KL) divergence-regularized RL in one framework. KLC51 derives the Bellman equation and the TD errors regularized by KL divergence from a distributional perspective, and explores approximate strategies for properly mapping the corresponding Boltzmann softmax term into distributions. Evaluated on several benchmark tasks of different complexity, the proposed method clearly illustrates the positive effects of KL divergence regularization on distributional RL, including exclusive exploration behaviors and smooth value function updates, and demonstrates significant superiority in both learning stability and data efficiency compared with the related baseline approaches.
Nov 2023. Published in Applied Intelligence, volume 53, issue 21, pages 24847–24863. DOI: 10.1007/s10489-023-04867-z
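
To make the abstract's idea concrete, below is a minimal, illustrative sketch (not the paper's implementation) of how a KL-regularized, Boltzmann-weighted target distribution could be formed in a C51-style categorical setting: next-state action distributions are mixed with softmax weights computed from their means, and the mixture is projected back onto the fixed support before a cross-entropy/KL update. The support bounds, the inverse temperature beta, and the mixing rule are assumptions made purely for illustration.

# Hedged sketch: one plausible way to form a KL-regularized (Boltzmann-weighted)
# categorical target in a C51-style setting. The constants (V_MIN, V_MAX, N_ATOMS,
# beta) and the mixing strategy are illustrative assumptions, not the paper's method.
import numpy as np

N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0          # fixed support, as in C51
ATOMS = np.linspace(V_MIN, V_MAX, N_ATOMS)       # atoms z_1 ... z_N
DELTA_Z = (V_MAX - V_MIN) / (N_ATOMS - 1)

def boltzmann_mixture(next_probs, beta=1.0):
    """Mix per-action categorical distributions with Boltzmann (softmax) weights.

    next_probs: array (n_actions, N_ATOMS) of next-state action distributions.
    One way to map the Boltzmann softmax term into a distribution is to weight
    each action's distribution by softmax(beta * Q(s', a)).
    """
    q_values = next_probs @ ATOMS                       # mean of each distribution
    logits = beta * (q_values - q_values.max())         # numerically stable softmax
    weights = np.exp(logits) / np.exp(logits).sum()
    return weights @ next_probs                         # mixture, shape (N_ATOMS,)

def categorical_projection(mixed_probs, reward, gamma=0.99):
    """Standard C51 projection of the shifted support r + gamma * z onto ATOMS."""
    target = np.zeros(N_ATOMS)
    tz = np.clip(reward + gamma * ATOMS, V_MIN, V_MAX)  # Bellman-shifted atoms
    b = (tz - V_MIN) / DELTA_Z                          # fractional atom index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(N_ATOMS):                            # split mass between neighbors
        target[lower[j]] += mixed_probs[j] * (upper[j] - b[j])
        target[upper[j]] += mixed_probs[j] * (b[j] - lower[j])
        if lower[j] == upper[j]:                        # b[j] landed exactly on an atom
            target[lower[j]] += mixed_probs[j]
    return target

# Usage with dummy data: cross-entropy between the projected target and the current
# prediction (equivalent to minimizing KL divergence up to a constant in the target).
rng = np.random.default_rng(0)
next_probs = rng.dirichlet(np.ones(N_ATOMS), size=4)    # 4 next-state actions
pred = rng.dirichlet(np.ones(N_ATOMS))                  # current predicted distribution
target = categorical_projection(boltzmann_mixture(next_probs, beta=5.0), reward=1.0)
loss = -np.sum(target * np.log(pred + 1e-8))            # minimized w.r.t. pred in practice

As beta grows, the softmax weights concentrate on the greedy action and the update approaches the standard C51 target; smaller beta smooths the backup across actions, which is one intuition behind the smoother value function updates described in the abstract.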