
Monday, April 4, 2016

Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning. (arXiv:1604.00923v1 [cs.LG])

In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang and Li, 2015), and a new way to mix between model-based estimates and importance-sampling-based estimates.
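For context, the sketch below illustrates the baseline that the paper builds on: the per-decision doubly robust (DR) off-policy estimator of Jiang and Li (2015), which corrects a model-based value estimate with importance-weighted terms from logged trajectories. The function names, trajectory layout, and the approximate model `q_hat`/`v_hat` are illustrative assumptions, not the paper's code, and the paper's actual contribution goes further by also mixing such model-based estimates with importance-sampling estimates.

```python
# Minimal sketch of per-decision doubly robust (DR) off-policy evaluation,
# assuming a simple trajectory format and user-supplied policies and model.
# This is the Jiang and Li (2015) baseline the paper extends, not the
# paper's new estimator itself.

def dr_estimate(trajectories, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Average DR return estimate over logged trajectories.

    trajectories: list of trajectories, each a list of
                  (state, action, reward, next_state) tuples
    pi_e(a, s), pi_b(a, s): evaluation / behavior policy action probabilities
    q_hat(s, a), v_hat(s): approximate action-value / state-value model
                           (v_hat should return 0 for terminal states)
    """
    estimates = []
    for traj in trajectories:
        rho = 1.0       # cumulative importance weight rho_{0:t}
        discount = 1.0  # gamma^t
        # Start from the model's estimate of the initial state's value ...
        est = v_hat(traj[0][0])
        for (s, a, r, s_next) in traj:
            rho *= pi_e(a, s) / pi_b(a, s)
            # ... then add importance-weighted corrections that replace the
            # model's prediction with the observed reward at each step.
            est += discount * rho * (r + gamma * v_hat(s_next) - q_hat(s, a))
            discount *= gamma
        estimates.append(est)
    return sum(estimates) / len(estimates)
```

When the model (`q_hat`, `v_hat`) is accurate the correction terms are small and the estimate has low variance; when the model is poor, the importance-weighted terms keep the estimate unbiased (given correct behavior-policy probabilities), which is the trade-off the paper's mixing scheme is designed to exploit.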

from cs.AI updates on arXiv.org http://ift.tt/1URxlMe
via IFTTT
