We tackle the problem of finding a good policy when the number of policy updates is limited. We do so by approximating the expected policy reward with a sequence of concave lower bounds that can be maximized efficiently, drastically reducing the number of policy updates required to reach good performance. We also extend existing methods to handle negative rewards, which enables the use of control variates.
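To make the idea concrete, here is a minimal sketch of the minorize-maximize pattern the abstract describes, applied to a one-dimensional continuous bandit with a Gaussian policy. Everything here is an illustrative assumption rather than the paper's method: the `reward` landscape, the function name `mm_policy_search`, and the particular baseline shift used to keep weights nonnegative are all hypothetical, and the paper's actual bound construction and control-variate treatment may differ.

```python
import numpy as np

# Hypothetical reward landscape: a 1-D continuous bandit whose rewards
# can be negative (maximum at a = 2.0).
def reward(a):
    return -(a - 2.0) ** 2 + np.random.normal(scale=0.1, size=np.shape(a))

def mm_policy_search(theta0=0.0, sigma=1.0, iters=10, n=200, seed=0):
    """Minorize-maximize policy search with a Gaussian policy.

    Each iteration builds a Jensen-type concave lower bound on the
    expected reward around the current policy and maximizes it in
    closed form (a reward-weighted mean), so only `iters` policy
    updates are needed rather than many small gradient steps.
    """
    rng = np.random.default_rng(seed)
    theta = theta0
    for _ in range(iters):
        actions = rng.normal(theta, sigma, size=n)  # sample from current policy
        r = reward(actions)
        # Baseline subtraction in the spirit of a control variate:
        # centering the returns reduces variance; shifting by the
        # minimum then keeps the weights nonnegative, which this
        # style of lower bound requires even when rewards are negative.
        w = r - r.mean()
        w = w - w.min()
        if w.sum() <= 0:  # degenerate batch: keep the current policy
            continue
        # Closed-form maximizer of the bound: the reward-weighted mean.
        theta = np.sum(w * actions) / np.sum(w)
    return theta
```

Under these assumptions, calling `mm_policy_search()` moves the policy mean toward the reward maximum near a = 2.0 within a handful of updates, illustrating why closed-form maximization of a concave lower bound can be far cheaper in policy updates than incremental gradient ascent.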