Tuesday, November 17, 2015

Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering. (arXiv:1511.05234v1 [cs.CV])

The problem of Visual Question Answering (VQA) requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image-captioning methods based on recurrent LSTM networks to this problem, but they fail to model spatial inference. In this paper, we propose a memory network with spatial attention for the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. We store neuron activations from different spatial receptive fields in the memory and use the question to choose relevant regions for computing the answer. We experiment with spatial attention architectures that use different question representations to choose regions, and show that two attention steps (hops) yield better results than a single step. To understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR and VQA, and obtain promising results.
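To make the idea concrete, here is a minimal sketch of question-guided spatial attention with two hops, written in PyTorch. This is not the authors' exact architecture; the module names, dimensions, and the way hop-1 evidence is concatenated into the hop-2 query are illustrative assumptions. The core mechanism matches the abstract: spatial CNN activations act as memory slots, the question vector scores each region, and a second hop refines the attention.

```python
# Hypothetical sketch of two-hop question-guided spatial attention (PyTorch).
# Dimensions and module structure are assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionHop(nn.Module):
    """One attention hop: score each spatial region against a query vector."""
    def __init__(self, feat_dim, q_dim, hidden_dim):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, hidden_dim)  # embed region features
        self.proj_q = nn.Linear(q_dim, hidden_dim)        # embed the query

    def forward(self, regions, query_vec):
        # regions: (B, R, feat_dim) -- CNN activations, one per receptive field
        # query_vec: (B, q_dim)     -- e.g. an LSTM encoding of the question
        keys = self.proj_feat(regions)                       # (B, R, H)
        query = self.proj_q(query_vec).unsqueeze(1)          # (B, 1, H)
        scores = (keys * query).sum(dim=-1)                  # dot-product scores (B, R)
        weights = F.softmax(scores, dim=-1)                  # attention over regions
        attended = (weights.unsqueeze(-1) * regions).sum(1)  # weighted sum (B, feat_dim)
        return attended, weights

class TwoHopVQA(nn.Module):
    """Two hops: evidence gathered in hop 1 refines the query for hop 2."""
    def __init__(self, feat_dim=512, q_dim=256, hidden_dim=256, num_answers=1000):
        super().__init__()
        self.hop1 = SpatialAttentionHop(feat_dim, q_dim, hidden_dim)
        self.hop2 = SpatialAttentionHop(feat_dim, feat_dim + q_dim, hidden_dim)
        self.classifier = nn.Linear(feat_dim + q_dim, num_answers)

    def forward(self, regions, question):
        evidence1, _ = self.hop1(regions, question)
        query2 = torch.cat([evidence1, question], dim=-1)    # refine the query
        evidence2, weights = self.hop2(regions, query2)
        logits = self.classifier(torch.cat([evidence2, question], dim=-1))
        return logits, weights  # weights can be visualized as an attention map
```

Returning the second-hop weights is what makes the inference process inspectable: overlaying them on the image grid shows which regions the question directed the network to attend to, which is how the paper's attention visualizations for synthetic spatial questions would be produced.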



from cs.AI updates on arXiv.org http://ift.tt/1MlcR7C
via IFTTT