In this paper we consider the problem of robot navigation in simple maze-like environments, where the robot must rely on its onboard sensors to perform the navigation task. In particular, we are interested in solutions to this task that do not require mapping and localization. Additionally, we require that our solution can quickly adapt to new situations (e.g., changing navigation goals and new environments). To meet these criteria, we frame this task as a sequence of reinforcement learning problems over related tasks. We propose a successor-feature-based deep reinforcement learning algorithm that can learn to transfer knowledge from previously mastered navigation tasks to new problem instances. Our algorithm substantially decreases the required learning time after the first task instance has been solved, which makes it readily adaptable to changing environments. We validate our method in both simulated and real-robot experiments with a Robotino and compare it to a set of baseline methods, including classical planning-based navigation.
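The key idea behind successor features is the decomposition Q(s, a) = ψ(s, a) · w, where ψ(s, a) predicts the expected discounted sum of future state features under the current policy and w encodes the task's reward. Because ψ captures how the agent moves through the environment while w captures the goal, a new task in a familiar setting can be solved by relearning only w. The NumPy sketch below illustrates just this decomposition; it is not the paper's implementation, and all dimensions, the linear stand-in for ψ, and the variable names are illustrative assumptions.

```python
import numpy as np

# Toy dimensions, chosen only for illustration (not from the paper).
N_FEATURES = 16   # dimensionality of the feature embedding phi(s)
N_ACTIONS = 4     # e.g., forward / back / turn-left / turn-right
OBS_DIM = 8       # placeholder size of the sensor observation

rng = np.random.default_rng(0)

# Stand-in for the learned successor-feature function psi(s, a):
# here a fixed random linear map per action; in the full method this
# would be a deep network trained by temporal-difference learning.
PSI = rng.standard_normal((N_ACTIONS, N_FEATURES, OBS_DIM))

def successor_features(obs):
    """Return psi(s, a) for every action: shape (N_ACTIONS, N_FEATURES)."""
    return PSI @ obs

def q_values(obs, w):
    """Successor-feature decomposition Q(s, a) = psi(s, a) . w.

    psi summarizes the (policy-dependent) dynamics; w encodes the
    task's reward, so changing the goal only requires a new w.
    """
    return successor_features(obs) @ w

# w_A was learned on the first navigation goal; for a new goal only
# w_B must be learned, while psi is transferred unchanged.
w_A = rng.standard_normal(N_FEATURES)
w_B = rng.standard_normal(N_FEATURES)

obs = rng.standard_normal(OBS_DIM)  # placeholder sensor reading
print("greedy action, task A:", int(np.argmax(q_values(obs, w_A))))
print("greedy action, task B:", int(np.argmax(q_values(obs, w_B))))
```

This reuse of ψ is what makes adaptation fast: after the first task is mastered, only the low-dimensional weight vector w needs to be fit for each new goal.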
from cs.AI updates on arXiv.org http://ift.tt/2hP7t3h