
Settling the Sample Complexity of Model-Based Offline Reinforcement Learning


Basic Information

DOI:
10.48550/arxiv.2204.05275
Publication date:
2022
Journal:
ArXiv
Impact factor:
--
Corresponding author:
Yuting Wei
CAS division:
--
Document type:
--
Authors: Gen Li; Laixi Shi; Yuxin Chen; Yuejie Chi; Yuting Wei
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

This paper is concerned with offline reinforcement learning (RL), which learns using pre-collected data without further exploration. Effective offline RL would be able to accommodate distribution shift and limited data coverage. However, prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality, thus posing an impediment to efficient offline RL in sample-starved applications. We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs). Concretely, consider a finite-horizon (resp. $\gamma$-discounted infinite-horizon) MDP with $S$ states and horizon $H$ (resp. effective horizon $\frac{1}{1-\gamma}$), and suppose the distribution shift of data is reflected by some single-policy clipped concentrability coefficient $C^{\star}_{\text{clipped}}$. We prove that model-based offline RL yields $\varepsilon$-accuracy with a sample complexity of \[ \begin{cases} \frac{H^{4}SC_{\text{clipped}}^{\star}}{\varepsilon^{2}} & (\text{finite-horizon MDPs}) \\ \frac{SC_{\text{clipped}}^{\star}}{(1-\gamma)^{3}\varepsilon^{2}} & (\text{infinite-horizon MDPs}) \end{cases} \] up to log factor, which is minimax optimal for the entire $\varepsilon$-range. The proposed algorithms are "pessimistic" variants of value iteration with Bernstein-style penalties, and do not require sophisticated variance reduction. Our analysis framework is established upon delicate leave-one-out decoupling arguments in conjunction with careful self-bounding techniques tailored to MDPs.
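The abstract describes the algorithm only at a high level: build an empirical MDP from the offline transition counts, then run value iteration in which each Q-value is lowered by a Bernstein-style penalty (a variance-dependent term plus a range term). The sketch below illustrates that recipe for the finite-horizon tabular case. It is a minimal illustration, not the paper's exact procedure: the function name, the specific penalty constants, and the `delta` confidence parameter are assumptions for demonstration.

```python
import numpy as np

def pessimistic_value_iteration(counts, rewards, H, delta=0.01):
    """Illustrative sketch: value iteration with a Bernstein-style
    pessimism penalty on an empirical finite-horizon tabular MDP.

    counts[h, s, a, s'] : transition counts from the offline dataset
    rewards[h, s, a]    : rewards in [0, 1] (assumed known here)
    Returns a greedy policy and pessimistic value estimates.
    """
    _, S, A, _ = counts.shape
    V = np.zeros((H + 1, S))          # V[H] = 0 terminal values
    pi = np.zeros((H, S), dtype=int)
    log_term = np.log(H * S * A / delta)  # crude union-bound log factor

    for h in range(H - 1, -1, -1):    # backward induction over steps
        Q = np.zeros((S, A))
        for s in range(S):
            for a in range(A):
                n = counts[h, s, a].sum()
                if n == 0:
                    Q[s, a] = 0.0     # no data: fully pessimistic
                    continue
                P_hat = counts[h, s, a] / n            # empirical kernel
                mean_next = P_hat @ V[h + 1]
                var_next = P_hat @ (V[h + 1] ** 2) - mean_next ** 2
                # Bernstein-style penalty: variance term + range term
                b = np.sqrt(2 * var_next * log_term / n) + H * log_term / n
                Q[s, a] = max(0.0, rewards[h, s, a] + mean_next - b)
        pi[h] = Q.argmax(axis=1)      # greedy policy w.r.t. penalized Q
        V[h] = Q.max(axis=1)
    return pi, V
```

Note how the penalty shrinks at rate $1/\sqrt{n}$ in the variance term, so actions that are well covered by the offline data are barely penalized, while poorly covered actions are pushed down toward zero; this is the mechanism by which pessimism accommodates distribution shift without any explicit variance-reduction machinery.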
References (2)
Cited by (59)
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
DOI:
10.1109/tit.2022.3185139
Publication date:
2021-03
Journal:
IEEE Transactions on Information Theory
Impact factor:
2.5
Authors:
Paria Rashidinejad; Banghua Zhu; Cong Ma; Jiantao Jiao; Stuart J. Russell
Corresponding author:
Paria Rashidinejad; Banghua Zhu; Cong Ma; Jiantao Jiao; Stuart J. Russell

