Tuesday, April 5, 2016

Bounded Optimal Exploration in MDP. (arXiv:1604.01350v1 [cs.AI])

Within the framework of probably approximately correct Markov decision processes (PAC-MDP), much theoretical work has focused on methods that attain near-optimality after a relatively long period of learning and exploration. In practice, however, satisfactory behavior is needed within a short period of time. In this paper, we relax the PAC-MDP conditions to reconcile theoretically driven exploration methods with practical needs. We propose simple algorithms for discrete and continuous state spaces, and illustrate the benefits of the proposed relaxation through theoretical analyses and numerical examples. Our algorithms also maintain anytime error bounds and average loss bounds, and our approach accommodates both Bayesian and non-Bayesian methods.
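The abstract does not spell out the paper's algorithms, but it helps to see the standard PAC-MDP exploration pattern it builds on. Below is a minimal R-MAX-style sketch in Python: state-action pairs visited fewer than m times are treated optimistically, which drives systematic exploration. Everything here (the threshold m, R_MAX, GAMMA, the toy chain MDP) is an illustrative assumption, not the paper's method.

import numpy as np

# Illustrative R-MAX-style explorer for a small discrete MDP.
# This is a sketch of the generic PAC-MDP pattern, not the
# algorithm from the paper; all parameters are assumptions.

N_STATES, N_ACTIONS = 5, 2
GAMMA = 0.95          # discount factor
R_MAX = 1.0           # assumed upper bound on per-step reward
m = 5                 # visits before a state-action counts as "known"

counts = np.zeros((N_STATES, N_ACTIONS))            # visit counts n(s, a)
trans = np.zeros((N_STATES, N_ACTIONS, N_STATES))   # transition counts
rew_sum = np.zeros((N_STATES, N_ACTIONS))           # summed rewards

def plan(n_iters=200):
    """Value iteration on the empirical model; unknown (s, a)
    pairs are replaced by an optimistic value R_MAX / (1 - GAMMA)."""
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(n_iters):
        v = q.max(axis=1)
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                if counts[s, a] < m:                # unknown: be optimistic
                    q[s, a] = R_MAX / (1.0 - GAMMA)
                else:                               # known: use empirical model
                    p = trans[s, a] / counts[s, a]
                    r = rew_sum[s, a] / counts[s, a]
                    q[s, a] = r + GAMMA * p.dot(v)
    return q

def step_env(s, a):
    """Toy chain MDP (an assumption): action 1 moves right,
    action 0 resets to state 0; reward 1 only at the last state."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else 0
    return s2, float(s2 == N_STATES - 1)

s = 0
for t in range(500):
    a = int(plan().argmax(axis=1)[s])   # act greedily w.r.t. optimistic Q
    s2, r = step_env(s, a)
    counts[s, a] += 1
    trans[s, a, s2] += 1
    rew_sum[s, a] += r
    s = s2

The PAC-MDP guarantee this pattern targets (near-optimality on all but a bounded number of steps) is exactly the condition the paper proposes to relax, trading some of that asymptotic guarantee for satisfactory behavior earlier in learning.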

from cs.AI updates on arXiv.org http://ift.tt/1Mc85Nx
via IFTTT