Thursday, November 12, 2015

Deep Multimodal Semantic Embeddings for Speech and Images. (arXiv:1511.03690v1 [cs.CV])

In this paper, we present a model which takes as input a corpus of images with relevant spoken captions and finds a correspondence between the two modalities. We employ a pair of convolutional neural networks to model visual objects and speech signals at the word level, and tie the networks together with an embedding and alignment model which learns a joint semantic space over both modalities. We evaluate our model using image search and annotation tasks on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000 spoken captions using Amazon Mechanical Turk.
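The abstract describes the architecture (two modality-specific CNNs tied together by a joint embedding) but not the training objective in detail. Below is a minimal sketch in PyTorch of this general approach, assuming a standard contrastive margin-ranking objective over a shared embedding space; the class names, layer sizes, and margin value are illustrative assumptions, not the paper's exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical image encoder: a small CNN mapping an RGB image to a
# d-dimensional unit-norm embedding. Only illustrative; the paper's
# visual network is deeper.
class ImageEncoder(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):  # x: (batch, 3, H, W)
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=-1)

# Hypothetical speech encoder: a 1-D CNN over spectrogram frames
# (frequency x time), pooled over time into the same embedding space.
class SpeechEncoder(nn.Module):
    def __init__(self, n_mels=40, embed_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):  # x: (batch, n_mels, time)
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=-1)

def contrastive_ranking_loss(img_emb, spc_emb, margin=0.2):
    # Matched image/caption pairs sit on the diagonal of the in-batch
    # similarity matrix; each should outscore every mismatched pair in
    # the batch by at least `margin`.
    sims = img_emb @ spc_emb.t()                       # (B, B) cosine sims
    pos = sims.diag().unsqueeze(1)                     # matched-pair scores
    cost_spc = (margin + sims - pos).clamp(min=0)      # image -> wrong caption
    cost_img = (margin + sims - pos.t()).clamp(min=0)  # caption -> wrong image
    mask = 1 - torch.eye(sims.size(0))                 # zero out the diagonal
    return ((cost_spc + cost_img) * mask).sum() / sims.size(0)

# Toy usage with random tensors standing in for Flickr8k images and
# log-mel spectrograms of their spoken captions.
img_net, spc_net = ImageEncoder(), SpeechEncoder()
images = torch.randn(8, 3, 128, 128)
spectrograms = torch.randn(8, 40, 200)
loss = contrastive_ranking_loss(img_net(images), spc_net(spectrograms))
loss.backward()

With encoders trained this way, the image search and annotation evaluations the abstract mentions reduce to ranking one modality's embeddings by cosine similarity against a query from the other.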



from cs.AI updates on arXiv.org http://ift.tt/1SMw8Ba
via IFTTT
