Abstract

We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and image captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external region proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and a Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state-of-the-art approaches in both generation and retrieval settings.

* equal contribution
To appear in CVPR 2016 (Oral)
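
To make the pipeline in the abstract concrete, here is a minimal structural sketch in PyTorch. This is not the released Torch/Lua implementation: the module names, feature shapes, and in particular the degenerate region pooling are illustrative assumptions only (the real dense localization layer proposes region boxes and interpolates a fixed-size feature per region).

```python
import torch
import torch.nn as nn

class DenseCapSketch(nn.Module):
    """Toy sketch of the ConvNet -> localization layer -> RNN language model pipeline."""

    def __init__(self, vocab_size=10000, feat_dim=512, hidden_dim=512):
        super().__init__()
        # Stand-in for the pretrained ConvNet that computes image features.
        self.convnet = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((32, 32)),
        )
        # Stand-in for the dense localization layer: the real model proposes
        # region boxes and pools one fixed-size feature per region; here every
        # region simply reuses a pooled global feature (toy simplification).
        self.region_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.box_head = nn.Linear(feat_dim, 4)
        # RNN language model that decodes a word sequence for each region.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim + feat_dim, hidden_dim, batch_first=True)
        self.word_logits = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, captions):
        # image: (1, 3, H, W); captions: (B, T) word indices, one row per region.
        feats = self.convnet(image)                          # (1, C, 32, 32)
        pooled = self.region_pool(feats).flatten(1)          # (1, C)
        region_feats = pooled.expand(captions.size(0), -1)   # (B, C) toy region features
        boxes = self.box_head(region_feats)                  # (B, 4) box coordinates
        words = self.embed(captions)                         # (B, T, hidden)
        rnn_in = torch.cat(
            [words, region_feats.unsqueeze(1).expand(-1, words.size(1), -1)], dim=2
        )
        hidden_states, _ = self.rnn(rnn_in)                  # (B, T, hidden)
        return boxes, self.word_logits(hidden_states)        # boxes and per-word scores
```

As a quick smoke test, `DenseCapSketch()(torch.randn(1, 3, 256, 256), torch.randint(0, 10000, (4, 6)))` returns box coordinates and word scores for four toy regions.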

Code and Extras

Find additional resources on Github, including:
  • Training/test code (uses Torch/Lua)
  • Pretrained model
  • Live webcam demo
  • Dense Captioning metric evaluation code

Bibtex

@inproceedings{densecap,
  title={DenseCap: Fully Convolutional Localization Networks for Dense Captioning},
  author={Johnson, Justin and Karpathy, Andrej and Fei-Fei, Li},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and 
             Pattern Recognition},
  year={2016}
}

Example Results: Dense Captioning

Example predictions from the model. We slightly cherry-picked images in favor of high-resolution, rich scenes and no toilets.
Browse the full results on our interactive predictions visualizer page (30MB; visualizer code is also included on Github).

Example Results: Region Search

The DenseCap model can also be easily run "backwards" to search for text queries. For example, we can take an arbitrary description such as "head of a giraffe" and look through a collection of images to find regions that are likely to generate that description. Note that in this process we do not merely caption images and then look for string overlaps; instead, we run the model forward and check the probability of generating the query conditioned on every detected region of interest. Below are some examples of searching for a few queries in our test set.
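
For concreteness, the ranking step can be summarized with the following minimal sketch. It assumes a hypothetical caption-model interface that exposes per-word conditional log-probabilities; the released Torch/Lua implementation differs in detail. Each detected region is scored by the total log-probability of generating the query, and regions across the whole collection are sorted by that score.

```python
from typing import Callable, List, Tuple

# Hypothetical interface (an assumption, not the released API): given a region
# feature and the words generated so far, return log P(next_word | region, prefix).
NextWordLogProb = Callable[[List[float], List[str], str], float]

def query_logprob(next_word_logprob: NextWordLogProb,
                  region_feature: List[float],
                  query: List[str]) -> float:
    """Sum the per-word conditional log-probs of generating `query` for one region."""
    total, prefix = 0.0, []
    for word in query + ["<eos>"]:  # include the end token so complete captions are preferred
        total += next_word_logprob(region_feature, prefix, word)
        prefix.append(word)
    return total

def search(regions: List[Tuple[str, Tuple[int, int, int, int], List[float]]],
           next_word_logprob: NextWordLogProb,
           query: List[str],
           top_k: int = 10):
    """regions: (image_id, box, region_feature) for every detection in the test set.
    Returns the top_k (score, image_id, box) matches for the query."""
    scored = [(query_logprob(next_word_logprob, feat, query), img, box)
              for img, box, feat in regions]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

Scoring the query under each region's language model, rather than string-matching against the single most likely generated caption, lets a region rank highly even when its top caption would have been worded differently.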