Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data. In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the "active recognition" setting. Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world. To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent's motions on its internal representation of its cumulative knowledge obtained from all past views. Results across two challenging datasets confirm both that our end-to-end system successfully learns meaningful policies for active recognition, and that "learning to look ahead" further boosts recognition performance.
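The abstract describes an agent that aggregates features from successive views into an internal state, uses that state to pick its next motion, and is additionally trained to forecast how each motion will change the aggregate state ("learning to look ahead"). The paper's actual architecture is not given here, so the following is only a minimal numpy sketch of that loop under assumed dimensions; all names (`W_agg`, `W_act`, `W_look`, etc.) and the greedy motion choice are illustrative stand-ins for learned components, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): view-feature dim, motions, classes.
D, M, C = 8, 4, 3

# Random matrices standing in for learned parameters.
W_agg = rng.normal(size=(2 * D, D)) * 0.1   # recurrent aggregation of views
W_act = rng.normal(size=(D, M)) * 0.1       # motion-policy head
W_cls = rng.normal(size=(D, C)) * 0.1       # classification head
W_look = rng.normal(size=(D + M, D)) * 0.1  # look-ahead: forecast next state

def glimpse(state, view_feat, motion_onehot):
    """Fuse the new view into the aggregate state, and forecast the next
    state from (old state, motion); the gap is the look-ahead loss."""
    new_state = np.tanh(np.concatenate([state, view_feat]) @ W_agg)
    predicted = np.tanh(np.concatenate([state, motion_onehot]) @ W_look)
    lookahead_loss = float(np.mean((predicted - new_state) ** 2))
    return new_state, lookahead_loss

state = np.zeros(D)
total_la = 0.0
for t in range(3):                                # three views of one object
    view = rng.normal(size=D)                     # stand-in view feature
    motion = np.eye(M)[int(np.argmax(state @ W_act))]  # greedy policy choice
    state, la = glimpse(state, view, motion)
    total_la += la                                # auxiliary look-ahead term

# Final recognition from the aggregate state (softmax over classes).
class_probs = np.exp(state @ W_cls)
class_probs /= class_probs.sum()
```

In training, a recognition loss on `class_probs` and the auxiliary `total_la` term would be minimized jointly, so the look-ahead objective shapes the same internal representation the policy and classifier consume.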
from cs.AI updates on arXiv.org http://ift.tt/24kwdpk