
Sunday, November 29, 2015

Convergence in Navigational Reinforcement Learning. (arXiv:1511.08724v1 [cs.LG])

Reinforcement learning is a formal framework for modeling agents that learn to solve tasks. For example, one important task for animals is to navigate an environment to find food or to return to their nest. In such tasks, the agent has to learn a path through the environment from start states to goal states, visiting a sequence of intermediate states along the way; the agent receives a reward at a goal state. Concretely, we need to learn a policy that maps each encountered state to an immediate action, which leads to a next state and eventually to a goal state. We say that a learning process has converged if the policy eventually stops changing, i.e., the policy stabilizes.

The intuition of paths and navigation policies applies quite generally. In this article, we study navigation tasks formalized as a graph structure that, for each application of an action to a state, describes the possible successor states of that application. In contrast to standard reinforcement learning, we simplify numeric reward signals to boolean flags on the transitions of the graph. The resulting framework enables a clear theoretical study of how properties of the graph structure can cause the learning process to converge.

In particular, we formally study a learning process that detects revisits to states in the graph, i.e., cycles, and that keeps adjusting the policy until no more cycles occur, so that eventually the agent goes straight to a reward from each start state. We identify reducibility of the task graph as a sufficient condition for this learning process to converge. We also syntactically characterize the form of the final policy, which can be used to detect convergence in a simulation.
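To make the cycle-detecting learning process concrete, here is a minimal Python sketch. The tiny task graph, the deterministic transitions, and the "switch to another action somewhere on the detected cycle" adjustment rule are all illustrative assumptions for this sketch; the paper's formalization allows sets of possible successor states and places boolean reward flags on the transitions.

    import random

    # A minimal sketch of the cycle-detecting learning process described above.
    # The task graph below is a hypothetical example; transitions are kept
    # deterministic here, whereas the paper allows sets of possible successors.
    graph = {
        "s0": {"a": "s1", "b": "s0"},
        "s1": {"a": "s0", "b": "goal"},
        "goal": {},
    }
    start_states = ["s0"]
    goal_states = {"goal"}

    def run_episode(policy, start, max_steps=100):
        """Follow the policy from `start`. Return None on reaching a goal
        state, otherwise the list of states forming the detected cycle."""
        path, index, state = [start], {start: 0}, start
        for _ in range(max_steps):
            if state in goal_states:
                return None
            state = graph[state][policy[state]]
            if state in index:
                return path[index[state]:]  # revisit: these states form the cycle
            index[state] = len(path)
            path.append(state)
        return path  # walk too long; in a finite graph this implies a revisit

    def learn(max_rounds=1000):
        # Start from an arbitrary policy on the non-goal states.
        policy = {s: next(iter(acts)) for s, acts in graph.items() if acts}
        for _ in range(max_rounds):
            cycled = False
            for start in start_states:
                cycle = run_episode(policy, start)
                if cycle is not None:
                    cycled = True
                    # Break the cycle: pick a state on it and switch to a
                    # different available action (a simple adjustment rule,
                    # assumed here for illustration).
                    s = random.choice(cycle)
                    others = [a for a in graph[s] if a != policy[s]]
                    if others:
                        policy[s] = random.choice(others)
            if not cycled:
                return policy  # no start state cycles any more: policy stabilized
        return policy

    print(learn())  # e.g. {'s0': 'a', 's1': 'b'}

On this toy graph the process stabilizes at a policy such as {'s0': 'a', 's1': 'b'}, which walks s0 to s1 to goal without revisiting any state; the precise syntactic form of the final policy on reducible task graphs is what the paper characterizes.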



from cs.AI updates on arXiv.org http://ift.tt/1XBY6Re
via IFTTT
