
Tuesday, April 5, 2016

The Curious Robot: Learning Visual Representations via Physical Interactions. (arXiv:1604.01360v1 [cs.CV])

What is the right supervisory signal for training visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. In the case of biological agents, however, visual representation learning does not require semantic labels. In fact, we argue that biological agents use active exploration and physical interaction with the world to learn visual representations, unlike current vision systems, which rely only on passive observations (images and videos downloaded from the web). For example, babies push objects, poke them, put them in their mouths, and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps, and actively observes objects in a tabletop environment. It uses these four types of physical interaction to collect more than 130K datapoints, each providing a backpropagation signal to a shared ConvNet architecture, allowing us to learn visual representations. We show the quality of the learned representations by inspecting neuron activations and performing nearest-neighbor retrieval in the learned feature space. Finally, we evaluate the learned ConvNet on different image classification tasks and show improvements over learning without external data.
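
To make the "shared ConvNet with four interaction signals" idea concrete, here is a minimal PyTorch-style sketch of one convolutional trunk feeding four task heads (grasp, push, poke, and active observation / object identity). This is not the paper's exact architecture; the layer sizes, head outputs, and loss choices are illustrative assumptions.

```python
# Sketch of a shared-trunk, multi-head ConvNet supervised by physical
# interactions. Layer sizes and head definitions are assumptions, not the
# architecture reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedInteractionConvNet(nn.Module):
    def __init__(self, num_identity_classes: int = 100):
        super().__init__()
        # Shared convolutional trunk: every interaction type backpropagates
        # into it, so its features must serve all four tasks at once.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        feat_dim = 256
        # One head per interaction type (output sizes are hypothetical).
        self.grasp_head = nn.Linear(feat_dim, 1)       # grasp success (binary)
        self.push_head = nn.Linear(feat_dim, 4)        # push action parameters
        self.poke_head = nn.Linear(feat_dim, 1)        # poke/tactile response
        self.identity_head = nn.Linear(feat_dim, num_identity_classes)  # active observation

    def forward(self, image: torch.Tensor) -> dict:
        feat = self.trunk(image)
        return {
            "grasp": self.grasp_head(feat),
            "push": self.push_head(feat),
            "poke": self.poke_head(feat),
            "identity": self.identity_head(feat),
        }


if __name__ == "__main__":
    net = SharedInteractionConvNet()
    batch = torch.randn(2, 3, 224, 224)  # two fake RGB crops
    out = net(batch)
    # Each interaction datapoint supervises only its own head; gradients from
    # all four losses flow back into the shared trunk.
    loss = (
        F.binary_cross_entropy_with_logits(out["grasp"], torch.rand(2, 1)) +
        F.mse_loss(out["push"], torch.randn(2, 4)) +
        F.mse_loss(out["poke"], torch.randn(2, 1)) +
        F.cross_entropy(out["identity"], torch.randint(0, 100, (2,)))
    )
    loss.backward()
    print({k: v.shape for k, v in out.items()})
```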

from cs.AI updates on arXiv.org http://ift.tt/1WaTpk2
via IFTTT
