Wednesday, August 31, 2016

Interpreting Visual Question Answering Models. (arXiv:1608.08974v1 [cs.CV])

Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in recent years. However, it is often unsatisfying that we cannot tell why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what parts of the input (pixels in the image or words in the question) the VQA model focuses on while answering a question. To tackle this problem, we use two visualization techniques -- guided backpropagation and occlusion -- to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps.
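
The occlusion technique described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical Python implementation: it assumes a vqa_model(image, question) callable that returns a probability distribution over candidate answers, and the patch size, stride, and gray fill value are illustrative assumptions rather than the authors' actual setup.

    import numpy as np

    def occlusion_importance(vqa_model, image, question, answer_idx,
                             patch=16, stride=8, fill=0.5):
        """Slide a gray patch over the image and record how much the
        model's probability for `answer_idx` drops at each location.
        `vqa_model(image, question)` is a hypothetical interface that
        returns a probability distribution over candidate answers.
        `fill=0.5` assumes pixel values normalized to [0, 1]."""
        h, w = image.shape[:2]
        baseline = vqa_model(image, question)[answer_idx]
        heatmap = np.zeros(((h - patch) // stride + 1,
                            (w - patch) // stride + 1))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = fill  # gray out region
                prob = vqa_model(occluded, question)[answer_idx]
                heatmap[i, j] = baseline - prob  # large drop => important
        return heatmap

    def word_importance(vqa_model, image, question_tokens, answer_idx,
                        unk="<unk>"):
        """Word-level analogue: replace each question word with an
        unknown token and measure the drop in the answer probability."""
        baseline = vqa_model(image, question_tokens)[answer_idx]
        drops = []
        for t in range(len(question_tokens)):
            masked = list(question_tokens)
            masked[t] = unk  # occlude one word at a time
            drops.append(baseline - vqa_model(image, masked)[answer_idx])
        return drops

Guided backpropagation, the paper's other technique, instead backpropagates the answer score to the input while zeroing negative gradients at each ReLU; that requires framework-specific gradient hooks, so only the occlusion variants are sketched here.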

from cs.AI updates on arXiv.org http://ift.tt/2bSU6Qk
via IFTTT
