This paper describes a deep network architecture that maps visual input to control actions for a robotic planar reaching task, with an average accuracy of 2.6 pixels over 20 real-world trials. The network is trained in simulation and fine-tuned with a limited number of real-world images. To enable successful and fast transfer of deep visuomotor policies to real-world settings, we introduce a bottleneck between perception and control, allowing the two networks to be trained independently.
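The perception/control split can be sketched in miniature as two maps joined by a low-dimensional bottleneck. This is an illustrative assumption-laden toy, not the paper's implementation: the dimensions, names, and linear maps are invented for clarity, and the random weights stand in for the separately trained sub-networks.

```python
import random

# Minimal sketch (assumed structure, not the paper's code) of separating
# perception from control through a low-dimensional bottleneck.
random.seed(0)

IMG_DIM, BOTTLENECK_DIM, ACTION_DIM = 16, 3, 2  # illustrative sizes

# Random weights standing in for the two independently trained sub-networks.
W_perception = [[random.uniform(-0.1, 0.1) for _ in range(BOTTLENECK_DIM)]
                for _ in range(IMG_DIM)]
W_control = [[random.uniform(-0.1, 0.1) for _ in range(ACTION_DIM)]
             for _ in range(BOTTLENECK_DIM)]

def matvec(mat, vec):
    """Compute vec @ mat for list-of-lists weights."""
    return [sum(v * row[j] for v, row in zip(vec, mat))
            for j in range(len(mat[0]))]

def perception(image):
    """Map raw pixels to a low-dimensional scene state (the bottleneck)."""
    return matvec(W_perception, image)

def control(bottleneck):
    """Map the bottleneck state to a planar reaching action."""
    return matvec(W_control, bottleneck)

image = [random.random() for _ in range(IMG_DIM)]
state = perception(image)   # trainable on (simulated + few real) images alone
action = control(state)     # trainable on low-dimensional states alone
print(len(state), len(action))  # 3 2
```

Because only the bottleneck ties the two halves together, the perception network can be retrained on real images without touching the controller, which is what makes the sim-to-real fine-tuning step cheap.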
from cs.AI updates on arXiv.org http://ift.tt/2em31Mk
via IFTTT