We propose Multi-Query Networks to answer questions like ``Find a shoe precisely like this, but with a higher heel''. Responding to such a question requires an image representation that captures the different notions of similarity along which shoes can be compared. However, when learning similarity-based embeddings with Siamese or triplet networks, the simplifying assumption is commonly made that images are compared under only a single measure of similarity. A main reason for this is that contradictory notions of similarity cannot be captured in a single embedding space. To address this shortcoming, we propose Multi-Query Networks (MQNs), which learn embeddings differentiated into semantically distinct subspaces, each capturing a different notion of similarity. In addition, MQNs make the representation interpretable by encoding the different similarities in separate dimensions. We show that our approach learns visually relevant semantic subspaces. Further, when evaluated on triplet questions drawn from multiple similarity notions, our model even outperforms individual specialized networks trained separately for each notion.
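The abstract does not include code, but the core idea of subspace embeddings conditioned on a similarity notion can be illustrated with a short sketch. The following is a minimal PyTorch example, not the authors' implementation: all names, shapes, the soft-masking mechanism, and the toy backbone are assumptions on our part. A shared embedding is multiplied by a learned per-notion mask, so each notion of similarity is scored in its own dimensions, and a standard triplet margin loss is applied within the selected subspace.

```python
# A minimal sketch of notion-conditioned subspace embeddings (assumptions,
# not the paper's specification): per-notion masks select dimensions of a
# shared embedding, and a triplet margin loss is computed in that subspace.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiQuerySketch(nn.Module):
    def __init__(self, backbone, embed_dim=64, n_notions=4):
        super().__init__()
        self.backbone = backbone  # any module mapping images -> (B, embed_dim)
        # One learnable (soft) mask per similarity notion; each mask
        # emphasizes the dimensions that encode that notion.
        self.masks = nn.Parameter(torch.randn(n_notions, embed_dim) * 0.1)

    def forward(self, x, notion):
        z = self.backbone(x)                # (B, embed_dim) shared embedding
        m = F.relu(self.masks[notion])      # (B, embed_dim) nonnegative mask
        return F.normalize(z * m, dim=1)    # masked, unit-norm embedding

def conditional_triplet_loss(model, anchor, pos, neg, notion, margin=0.2):
    # Standard triplet margin loss, but computed inside the subspace
    # selected by `notion`, so contradictory notions do not conflict.
    za, zp, zn = model(anchor, notion), model(pos, notion), model(neg, notion)
    d_ap = (za - zp).pow(2).sum(1)
    d_an = (za - zn).pow(2).sum(1)
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage with a hypothetical flatten-and-project backbone:
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))
model = MultiQuerySketch(backbone)
anchor = torch.randn(8, 3, 64, 64)
pos = torch.randn(8, 3, 64, 64)
neg = torch.randn(8, 3, 64, 64)
notion = torch.randint(0, 4, (8,))  # which similarity notion each triplet uses
loss = conditional_triplet_loss(model, anchor, pos, neg, notion)
loss.backward()
```

Because all notions share one backbone and differ only in the mask, a single network can answer triplet questions under multiple, even contradictory, notions of similarity, which is the property the abstract highlights.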
from cs.AI updates on arXiv.org http://ift.tt/1RyIbCG
via IFTTT