Reinforcement learning (RL) has demonstrated remarkable achievements in simulated environments. However, carrying this success over to real environments requires robustness, an attribute that existing RL algorithms often lack because they assume the future deployment environment is identical to the training environment (i.e., the simulator) in which the policy is learned. This assumption frequently fails due to the discrepancy between the simulator and the real environment, and the learned policy is consequently fragile when deployed. In this paper, we propose a novel distributionally robust Q-learning algorithm that learns the best policy under the worst-case distributional perturbation of the environment. Our algorithm first transforms the infinite-dimensional learning problem (the environment MDP perturbation lies in an infinite-dimensional space) into a finite-dimensional dual problem and then uses a multi-level Monte Carlo scheme to approximate the dual value from simulator samples. Despite this complexity, we show that the resulting distributionally robust Q-learning algorithm converges asymptotically to the optimal worst-case policy, thus making it robust to future environment changes. Simulation results further demonstrate its strong empirical robustness.
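The abstract does not specify the ambiguity set or the estimator details, so the following is only a minimal sketch of the two steps it describes (dual reformulation, then multi-level Monte Carlo approximation of the dual value), assuming for concreteness a KL-divergence ambiguity set of radius rho and a randomized-level debiasing scheme; the names `dual_value`, `mlmc_dual_estimate`, and `sample_next_values` are hypothetical, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dual_value(next_values, rho):
    """Finite-dimensional dual of the worst-case expectation over a KL ball of radius rho:
        sup_{beta > 0}  -beta * log E[exp(-v(S') / beta)] - beta * rho,
    estimated from i.i.d. samples of v(S') drawn via the simulator (assumption: KL ambiguity set)."""
    v = np.asarray(next_values, dtype=float)
    def neg_dual(beta):
        x = -v / beta
        log_mean_exp = x.max() + np.log(np.mean(np.exp(x - x.max())))  # numerically stable log-mean-exp
        return -(-beta * log_mean_exp - beta * rho)
    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun

def mlmc_dual_estimate(sample_next_values, rho, p=0.5, rng=None):
    """Multi-level Monte Carlo estimate of the dual value.
    sample_next_values(n) must return n i.i.d. draws of v(S') from the simulator."""
    rng = rng or np.random.default_rng()
    m = rng.geometric(p) - 1                     # random level M with P(M = m) = p * (1 - p)^m
    v = sample_next_values(2 ** (m + 1))
    # multilevel correction: full-sample estimate minus the average of the two half-sample estimates
    correction = dual_value(v, rho) - 0.5 * (dual_value(v[::2], rho) + dual_value(v[1::2], rho))
    return dual_value(v[:1], rho) + correction / (p * (1 - p) ** m)
```

In a tabular robust Q-learning loop, an estimate of this kind would replace the usual sampled target max_a' Q(s', a') in the temporal-difference update, e.g. Q[s, a] += alpha * (r + gamma * mlmc_dual_estimate(...) - Q[s, a]); the randomized level keeps the per-update sample cost finite in expectation while debiasing the nonlinear dual functional.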