
Thursday, March 16, 2017

Minimax Regret Bounds for Reinforcement Learning. (arXiv:1703.05449v1 [stat.ML])

We consider the problem of efficient exploration in finite-horizon MDPs. We show that an optimistic modification to model-based value iteration can achieve a regret bound of $\tilde{O}(\sqrt{HSAT} + H^2S^2A + H\sqrt{T})$, where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions, and $T$ the time elapsed. This result improves over the best previously known bound of $\tilde{O}(HS\sqrt{AT})$, achieved by the UCRL2 algorithm. The key significance of our new result is that when $T \geq H^3S^3A$ and $SA \geq H$, it leads to a regret of $\tilde{O}(\sqrt{HSAT})$, which matches the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. First, we apply concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve the scaling in $S$). Second, we use "exploration bonuses" based on Bernstein's inequality, together with a recursive, Bellman-type law of total variance (to improve the scaling in $H$).
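To make the "optimistic modification to model-based value iteration" concrete, here is a minimal Python sketch of one planning pass with Bernstein-style exploration bonuses, in the spirit of the abstract. The function name, bonus constants, and count-array interface are assumptions for illustration, not the authors' exact algorithm.

```python
# Hypothetical sketch: optimistic value iteration on an empirical finite-horizon MDP
# with Bernstein-style bonuses. Constants and interface are illustrative assumptions.
import numpy as np

def optimistic_value_iteration(N_sa, N_sas, R_hat, H, delta=0.05):
    """One planning pass over an empirical model.

    N_sa  : (S, A)    visit counts
    N_sas : (S, A, S) transition counts
    R_hat : (S, A)    empirical mean rewards in [0, 1]
    Returns an optimistic Q of shape (H, S, A).
    """
    S, A = N_sa.shape
    P_hat = N_sas / np.maximum(N_sa, 1)[:, :, None]      # empirical transition kernel
    L = np.log(5 * S * A * max(N_sa.max(), 1) / delta)   # log term for the bonus
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        PV = P_hat @ V[h + 1]                             # (S, A) expected next-step value
        # Bernstein-style bonus: variance of the next-step value plus a lower-order term.
        var_V = (P_hat * (V[h + 1][None, None, :] - PV[:, :, None]) ** 2).sum(-1)
        n = np.maximum(N_sa, 1)
        bonus = np.sqrt(2 * L * var_V / n) + 7 * L * H / (3 * n)
        Q[h] = np.minimum(R_hat + PV + bonus, H)          # optimism, clipped at the horizon
        V[h] = Q[h].max(axis=1)
    return Q[:H]
```

In an episodic loop, one would act greedily with respect to the returned $Q$, update the counts from the observed transitions, and replan; the variance-based bonus (rather than a Hoeffding-style $\sqrt{1/n}$ term applied to the transition probabilities) is what the abstract credits for the improved scaling in $S$ and $H$.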



from cs.AI updates on arXiv.org http://ift.tt/2nxwaEf
via IFTTT
