Deep reinforcement learning learns multiple levels of hierarchical representation for a reinforcement learning task. Hierarchical reinforcement learning, by contrast, focuses on temporal abstraction in planning and learning, allowing temporally extended actions to be transferred between tasks. In this paper we combine one method for hierarchical reinforcement learning, the options framework, with deep Q-networks (DQNs), attaching a separate "option head" for each option to the policy network and adding a supervisory network that chooses between the options. We show that in a domain where we have prior knowledge of the mapping between states and options, our augmented DQN achieves a policy competitive with that of a standard DQN, but with much lower sample complexity. This is achieved through a straightforward architectural adjustment to the DQN, plus one additional supervised neural network.
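The architecture the abstract describes can be sketched minimally: a shared feature torso, one Q-value "option head" per option, and a supervisory network that selects which head to use for a given state. This is a toy NumPy sketch, not the paper's implementation; all layer sizes, weight names, and the single-hidden-layer structure are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for this sketch.
STATE_DIM, HIDDEN, N_ACTIONS, N_OPTIONS = 4, 16, 3, 2

# Shared "torso" producing features used by every option head.
W_shared = rng.standard_normal((STATE_DIM, HIDDEN)) * 0.1
# One Q-value head per option, all reading the shared features.
W_heads = rng.standard_normal((N_OPTIONS, HIDDEN, N_ACTIONS)) * 0.1
# Supervisory network: maps a state to option logits (here a linear map).
W_super = rng.standard_normal((STATE_DIM, N_OPTIONS)) * 0.1

def act(state):
    """Pick an option via the supervisor, then act greedily on that head's Q-values."""
    features = np.tanh(state @ W_shared)        # shared representation
    option = int(np.argmax(state @ W_super))    # supervisor chooses the option head
    q = features @ W_heads[option]              # Q-values from the chosen head only
    return option, int(np.argmax(q)), q

state = rng.standard_normal(STATE_DIM)
option, action, q = act(state)
```

When the state-to-option mapping is known in advance, as in the paper's experimental setting, the supervisory network can be trained with ordinary supervised targets, while each head is trained only on transitions gathered under its own option.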
from cs.AI updates on arXiv.org http://ift.tt/1TeXHBL