We present a bound for value-prediction error with respect to model misspecification that is tight, including constant factors. This is a direct improvement of the "simulation lemma," a foundational result in reinforcement learning. We demonstrate that existing bounds are quite loose, becoming vacuous for large discount factors, due to the suboptimal treatment of compounding probability errors. By carefully considering this quantity on its own, instead of as a subcomponent of value error, we derive a bound that is sub-linear with respect to transition function misspecification. We then demonstrate broader applicability of this technique, improving a similar bound in the related subfield of hierarchical abstraction.
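For context, one common statement of the classical simulation lemma (shown here as a standard reference form, not the bound derived in this work; symbols $\varepsilon_R$, $\varepsilon_P$, and $R_{\max}$ are conventions assumed for illustration) bounds the value-prediction error between an MDP $M$ and an approximate model $\widehat{M}$ with reward error $\varepsilon_R$ and transition error $\varepsilon_P$ (in $\ell_1$ distance) as
\[
\bigl\lVert V^{\pi}_{M} - V^{\pi}_{\widehat{M}} \bigr\rVert_{\infty}
\;\le\; \frac{\varepsilon_R}{1-\gamma} \;+\; \frac{\gamma\, \varepsilon_P\, R_{\max}}{(1-\gamma)^2},
\]
where the second term, which arises from compounding probability errors over the horizon, scales as $(1-\gamma)^{-2}$ and therefore becomes vacuous as $\gamma \to 1$; this is the looseness the present bound tightens.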