Monday, July 6, 2015

Emphatic Temporal-Difference Learning. (arXiv:1507.01569v1 [cs.LG])

Emphatic algorithms are temporal-difference learning algorithms that change their effective state distribution by selectively emphasizing and de-emphasizing their updates on different time steps. Recent work by Sutton, Mahmood and White (2015) and by Yu (2015) shows that by varying the emphasis in a particular way, these algorithms become stable and convergent under off-policy training with linear function approximation. This paper serves as a unified summary of the available results from both works. In addition, we demonstrate the empirical benefits of the flexibility of emphatic algorithms, including state-dependent discounting, state-dependent bootstrapping, and user-specified allocation of function approximation resources.
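
For concreteness, here is a minimal sketch of one episode of emphatic TD(λ) with linear function approximation. The followon trace F and emphasis M follow the scheme described by Sutton, Mahmood and White (2015); the episode-array interface and the constant interest, discount, and bootstrapping parameters are simplifying assumptions on my part (the paper allows all three to be state-dependent).

```python
import numpy as np

def etd_lambda(features, rewards, rhos, alpha=0.01, gamma=0.9,
               lam=0.0, interest=1.0):
    """Sketch of one episode of emphatic TD(lambda) with linear
    function approximation.

    features : array of shape (T+1, d), feature vectors phi(S_0..S_T)
    rewards  : array of shape (T,), rewards R_1..R_T
    rhos     : array of shape (T,), importance-sampling ratios
               rho_t = pi(A_t|S_t) / mu(A_t|S_t) for off-policy training
    Constant gamma, lam, and interest are assumptions here; the paper's
    general form makes them functions of state.
    """
    d = features.shape[1]
    theta = np.zeros(d)   # weights for the value estimate v(s) = theta . phi(s)
    e = np.zeros(d)       # eligibility trace
    F = 0.0               # followon trace (discounted accumulated interest)
    for t in range(len(rewards)):
        phi, phi_next = features[t], features[t + 1]
        # TD error under the current value estimate
        delta = rewards[t] + gamma * theta @ phi_next - theta @ phi
        # followon trace: F_t = gamma * rho_{t-1} * F_{t-1} + interest
        # (rho_{t-1} was folded into F at the end of the previous step)
        F = gamma * F + interest
        # emphasis: how strongly this step's update is weighted
        M = lam * interest + (1.0 - lam) * F
        # emphatically weighted, importance-corrected eligibility trace
        e = rhos[t] * (gamma * lam * e + M * phi)
        theta = theta + alpha * delta * e
        # fold rho_t into the followon trace for the next step
        F = rhos[t] * F
    return theta
```

With lam = 0 this reduces to emphatic TD(0), and the emphasis M is just the followon trace F; with all rhos equal to 1 the importance corrections vanish and only the emphasis weighting distinguishes the update from ordinary on-policy TD(λ).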



from cs.AI updates on arXiv.org http://ift.tt/1dIA2eZ
via IFTTT