
Distributionally Robust Q-Learning

Basic Information

DOI: --
Publication year: 2022
Journal: --
Impact factor: --
Corresponding authors: Zijian Liu; Qinxun Bai; J. Blanchet; Perry Dong; Wei Xu; Zhengqing Zhou; Zhengyuan Zhou
CAS journal division: Other
Document type: --
Authors: Zijian Liu; Qinxun Bai; J. Blanchet; Perry Dong; Wei Xu; Zhengqing Zhou; Zhengyuan Zhou
Research field: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Reinforcement learning (RL) has demonstrated remarkable achievements in simulated environments. However, carrying this success over to real environments requires the important attribute of robustness, which existing RL algorithms often lack, as they assume that the future deployment environment is the same as the training environment (i.e., the simulator) in which the policy is learned. This assumption often does not hold due to the discrepancy between the simulator and the real environment, and as a result it renders the learned policy fragile when deployed. In this paper, we propose a novel distributionally robust Q-learning algorithm that learns the best policy under the worst distributional perturbation of the environment. Our algorithm first transforms the infinite-dimensional learning problem (since the environment MDP perturbation lies in an infinite-dimensional space) into a finite-dimensional dual problem, and subsequently uses a multilevel Monte Carlo scheme to approximate the dual value using samples from the simulator. Despite this complexity, we show that the resulting distributionally robust Q-learning algorithm asymptotically converges to the optimal worst-case policy, thus making it robust to future environment changes. Simulation results further demonstrate its strong empirical robustness.
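To make the dual transformation in the abstract concrete: under a KL-divergence uncertainty set of radius δ around the simulator's transition kernel P₀ (the uncertainty-set choice and the notation below are illustrative assumptions, not reproduced from the paper), the robust Bellman backup and its dual read

\[
(\mathcal{T}Q)(s,a) = r(s,a) + \gamma \inf_{P:\, D_{\mathrm{KL}}(P \,\|\, P_0(\cdot \mid s,a)) \le \delta} \mathbb{E}_{s' \sim P}\Big[\max_{b} Q(s', b)\Big],
\]
\[
\inf_{P:\, D_{\mathrm{KL}}(P \,\|\, P_0) \le \delta} \mathbb{E}_{P}[V] \;=\; \sup_{\alpha \ge 0} \Big\{ -\alpha \log \mathbb{E}_{P_0}\big[e^{-V/\alpha}\big] - \alpha \delta \Big\},
\]

which replaces the infinite-dimensional search over perturbed kernels P with a scalar optimization over the dual variable α.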
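Below is a minimal tabular sketch of how such an update could be implemented, assuming the KL dual above and a Blanchet-Glynn-style randomized multilevel Monte Carlo (MLMC) estimator; all names (kl_dual_value, next_state_sampler, ...), the geometric level distribution, and the step sizes are hypothetical illustrations, not the paper's actual implementation. The point of the MLMC step is that the plug-in dual estimate is biased (the log and the sup are nonlinear in the empirical mean), and the randomized fine/coarse difference removes that bias in expectation, which is what the convergence argument needs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_dual_value(next_values, delta, alpha_max=100.0):
    """Plug-in estimate of the KL dual of the worst-case expectation:
    sup_{alpha >= 0} { -alpha * log E[exp(-V/alpha)] - alpha * delta },
    with the expectation replaced by an empirical mean over samples."""
    def neg_dual(alpha):
        scaled = -next_values / alpha
        m = scaled.max()                                # stabilize log-mean-exp
        log_mean_exp = m + np.log(np.mean(np.exp(scaled - m)))
        return alpha * (log_mean_exp + delta)           # negated dual objective
    res = minimize_scalar(neg_dual, bounds=(1e-6, alpha_max), method="bounded")
    return -res.fun

def mlmc_dual_estimate(sample_values, delta, p=0.5, n0=2):
    """Randomized MLMC estimate of the dual value. `sample_values(n)` draws
    n greedy next-state values from the simulator. A geometric level N and
    an antithetic fine/coarse split debias the plug-in estimator above."""
    N = np.random.geometric(p) - 1                      # P(N = k) = p * (1-p)^k
    v = sample_values(n0 * 2 ** (N + 1))
    fine = kl_dual_value(v, delta)
    coarse = 0.5 * (kl_dual_value(v[::2], delta) + kl_dual_value(v[1::2], delta))
    base = kl_dual_value(sample_values(n0), delta)      # independent level-0 term
    return base + (fine - coarse) / (p * (1 - p) ** N)

def robust_q_update(Q, s, a, r, next_state_sampler, delta, gamma=0.95, lr=0.1):
    """One distributionally robust Q-learning step for state-action (s, a)."""
    def sample_values(n):
        s_next = next_state_sampler(s, a, n)            # n draws from the simulator
        return Q[s_next].max(axis=1)                    # greedy values max_b Q(s', b)
    target = r + gamma * mlmc_dual_estimate(sample_values, delta)
    Q[s, a] += lr * (target - Q[s, a])
    return Q
```

A driver loop would sample (s, a, r) transitions from the simulator and apply robust_q_update with a decaying learning rate until Q stabilizes.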
References (79)
Cited by (29)


Related Funding

DMS-EPSRC: Fast Martingales, Large Deviations, and Randomized Gradients for Heavy-tailed Distributions
Award number: 2118199
Award year: 2021
Funding amount: 40
Award type: Continuing Grant
Zijian Liu; Qinxun Bai; J. Blanchet; Perry Dong; Wei Xu; Zhengqing Zhou; Zhengyuan Zhou
Correspondence address: --
Affiliation: --
Email address: --