Abstract

Though image-to-sequence generation models have become overwhelmingly popular in human-computer communication, they strongly favor safe, generic questions ("What is in this picture?"). Generating questions that are relevant but uninformative is neither sufficient nor useful. We argue that a good question has a tightly focused purpose: it is aimed at eliciting a specific type of response. We build a model that maximizes mutual information between the image, the expected answer, and the generated question. To overcome the non-differentiability of discrete natural language tokens, we introduce a variational continuous latent space onto which the expected answers project. We regularize this latent space with a second latent space that ensures clustering of similar answers. Even when the expected answer is unknown, this second latent space can generate goal-driven questions specifically aimed at extracting objects ("What is the person throwing?"), attributes ("What kind of shirt is the person wearing?"), colors ("What color is the frisbee?"), materials ("What material is the frisbee?"), and so on. We quantitatively show that our model retains information about the expected answer category, resulting in more diverse, goal-driven questions. Finally, we deploy our model on a set of real-world images and extract previously unseen visual concepts.

Published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019
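
For readers skimming before opening the paper, the sketch below illustrates one way the setup described in the abstract could be wired up: a variational encoder that projects the image and the expected answer onto a continuous latent z, a category head that regularizes z so that similar answers cluster, and a question decoder conditioned on z. All class names, dimensions, and loss weights here are illustrative assumptions, and the simplified loss (reconstruction + answer-category + KL) is only a stand-in for the paper's mutual-information objective; see the Code section for the actual released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AnswerLatentVQG(nn.Module):
    """Illustrative sketch (not the released code): encode (image, answer)
    into a continuous latent z, regularize z with an answer-category
    classifier, and decode a question from z."""

    def __init__(self, img_dim=2048, ans_dim=300, z_dim=128,
                 n_categories=16, vocab_size=10000, hid_dim=512):
        super().__init__()
        # Variational encoder: projects image + expected answer onto latent z.
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + ans_dim, hid_dim), nn.ReLU())
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        # Second (category) head: predicts the answer category from z so that
        # similar answers cluster together in the latent space.
        self.category_head = nn.Linear(z_dim, n_categories)
        # Question decoder: token-level LSTM conditioned on z.
        self.embed = nn.Embedding(vocab_size, hid_dim)
        self.decoder = nn.LSTM(hid_dim + z_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, img_feat, ans_feat, question_tokens):
        h = self.encoder(torch.cat([img_feat, ans_feat], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick keeps sampling from z differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Teacher-forced decoding of the question conditioned on z.
        emb = self.embed(question_tokens)
        z_rep = z.unsqueeze(1).expand(-1, emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([emb, z_rep], dim=-1))
        return self.out(dec_out), self.category_head(z), mu, logvar

def loss_fn(logits, cat_logits, mu, logvar, target_tokens, target_category,
            kl_weight=0.01, cat_weight=1.0):
    # Reconstruction keeps the question informative about (image, answer).
    recon = F.cross_entropy(logits.transpose(1, 2), target_tokens)
    # Category loss regularizes z to cluster answers of the same type.
    cat = F.cross_entropy(cat_logits, target_category)
    # KL term keeps the latent space continuous and well-behaved.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + cat_weight * cat + kl_weight * kl

At inference time, sampling z from a category-conditioned prior (rather than encoding a known answer) would yield the goal-driven behavior described above, e.g. questions aimed at objects versus colors; the details of that conditioning are in the paper.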

Paper

Download the paper here.

Dataset

Download the VQA dataset here.

Download our answer categories annotations here.

Code

Download the code here.

Bibtex

@inproceedings{krishna2019information,
  title={Information Maximizing Visual Question Generation},
  author={Krishna, Ranjay and Bernstein, Michael and Fei-Fei, Li},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

Blog post coming soon.