We introduce a technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of "siamese cat" and "tiger cat", our system generates language that describes the "siamese cat" in a way that distinguishes it from "tiger cat". We start with a generic, context-agnostic language model and add a listener that discriminates between closely related concepts. Our approach offers two key advantages over previous work: 1) our listener does not need separate training, and 2) it allows joint inference to decode sentences that satisfy both the speaker and the listener -- yielding an introspective speaker. We first apply our introspective speaker to a justification task, i.e., describing why an image contains a particular fine-grained category as opposed to another closely related category in the CUB-200-2011 dataset. We then study discriminative image captioning, generating language that uniquely refers to one of two semantically similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning show that our approach outperforms baseline generative and speaker-listener approaches at discrimination.
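To make the decoding idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of how a single context-agnostic speaker p(w | image, prefix) could be reused as its own listener at inference time: the listener score is taken as the log-ratio of the speaker's probabilities under the target versus the distractor image, so no separately trained listener is needed, and a mixing weight blends the two terms during joint decoding. The toy speaker, the vocabulary, the log-ratio listener form, and the weight `lam` are all illustrative assumptions.

```python
# Hypothetical sketch of an "introspective speaker" at decoding time.
# A single speaker model scores tokens; the listener term is derived
# from the same speaker as a log-ratio against the distractor image.
import numpy as np

VOCAB = ["<eos>", "a", "cat", "with", "stripes", "pointed", "ears"]

def speaker_logprobs(image_id, prefix):
    """Stand-in for a trained captioning model: log p(w | image, prefix)."""
    seed = abs(hash((image_id, tuple(prefix)))) % (2**32)
    logits = np.random.default_rng(seed).standard_normal(len(VOCAB))
    return logits - np.logaddexp.reduce(logits)  # normalize to log-probs

def introspective_scores(target, distractor, prefix, lam=0.7):
    """Blend the speaker score with a speaker-derived listener score."""
    s_t = speaker_logprobs(target, prefix)      # speaker on target image
    s_d = speaker_logprobs(distractor, prefix)  # speaker on distractor image
    # lam * speaker + (1 - lam) * listener, where the listener is the
    # log-ratio log p(w | target) - log p(w | distractor)
    return lam * s_t + (1.0 - lam) * (s_t - s_d)

def greedy_decode(target, distractor, max_len=8):
    """Greedily pick tokens that satisfy both speaker and listener."""
    prefix = []
    for _ in range(max_len):
        w = VOCAB[int(np.argmax(introspective_scores(target, distractor, prefix)))]
        if w == "<eos>":
            break
        prefix.append(w)
    return " ".join(prefix)

print(greedy_decode(target="siamese_cat", distractor="tiger_cat"))
```

In this sketch the same scoring rule could be plugged into beam search instead of greedy decoding; the key point it illustrates is that discrimination comes from how the generic speaker is decoded, not from any additional training data or a separately trained listener.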
from cs.AI updates on arXiv.org http://ift.tt/2jlaNDF