This paper is concerned with offline reinforcement learning (RL), which learns from pre-collected data without further exploration. Effective offline RL must accommodate distribution shift and limited data coverage. However, prior algorithms and analyses either suffer from suboptimal sample complexity or incur a high burn-in cost before reaching sample optimality, posing an impediment to efficient offline RL in sample-starved applications. We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost for tabular Markov decision processes (MDPs). Concretely, consider a finite-horizon (resp. $\gamma$-discounted infinite-horizon) MDP with $S$ states and horizon $H$ (resp. effective horizon $\frac{1}{1-\gamma}$), and suppose that the distribution shift of the data is captured by some single-policy clipped concentrability coefficient $C^{\star}_{\text{clipped}}$. We prove that model-based offline RL yields $\varepsilon$-accuracy with a sample complexity of \[ \begin{cases} \frac{H^{4}SC^{\star}_{\text{clipped}}}{\varepsilon^{2}} & (\text{finite-horizon MDPs}) \\ \frac{SC^{\star}_{\text{clipped}}}{(1-\gamma)^{3}\varepsilon^{2}} & (\text{infinite-horizon MDPs}) \end{cases} \] up to log factors, which is minimax optimal for the entire $\varepsilon$-range. The proposed algorithms are "pessimistic" variants of value iteration with Bernstein-style penalties, and do not require sophisticated variance reduction. Our analysis framework is established upon delicate leave-one-out decoupling arguments in conjunction with careful self-bounding techniques tailored to MDPs.
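To make the algorithmic idea concrete, the following is a minimal sketch of pessimistic value iteration with a Bernstein-style penalty for a tabular $\gamma$-discounted MDP. It assumes the offline dataset is summarized by transition counts, and the constant `c_b` and the exact penalty form are illustrative placeholders rather than the paper's precise choices.

```python
import numpy as np

def pessimistic_value_iteration(counts, rewards, gamma=0.9,
                                n_iters=500, c_b=1.0, delta=0.01):
    """Sketch of value iteration with a Bernstein-style pessimism penalty.

    counts:  (S, A, S) array; counts[s, a, s'] = # observed transitions s --a--> s'
    rewards: (S, A) array of rewards in [0, 1]
    Returns a greedy policy and a pessimistic value estimate (lower bound).
    """
    S, A, _ = counts.shape
    n_sa = counts.sum(axis=2)                       # visit counts N(s, a)
    # Empirical transition model; unvisited (s, a) pairs get a uniform placeholder.
    p_hat = np.where(n_sa[:, :, None] > 0,
                     counts / np.maximum(n_sa, 1)[:, :, None],
                     1.0 / S)
    log_term = np.log(S * A / delta)
    V = np.zeros(S)
    for _ in range(n_iters):
        ev = p_hat @ V                              # E_{s'~p_hat}[V(s')], shape (S, A)
        var = np.maximum(p_hat @ (V ** 2) - ev ** 2, 0.0)  # Var_{s'~p_hat}[V(s')]
        # Bernstein-style penalty: variance-aware main term + lower-order term.
        b = (c_b * np.sqrt(var * log_term / np.maximum(n_sa, 1))
             + c_b * log_term / ((1 - gamma) * np.maximum(n_sa, 1)))
        q = rewards + gamma * ev - b                # pessimistic Q-estimate
        q = np.clip(q, 0.0, 1.0 / (1 - gamma))      # keep values in the valid range
        V = q.max(axis=1)
    return q.argmax(axis=1), V
```

Subtracting the variance-dependent penalty `b` before the max is what makes the iterate a valid lower bound on the optimal value under data scarcity: poorly covered state-action pairs are penalized heavily, steering the learned policy toward well-covered regions.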