RA-CM3: Retrieval-Augmented Multimodal Modeling
RA-CM3 is a retrieval-augmented multimodal model that can generate both text and images. It achieves improved text and image generation quality while reducing training cost and model size.
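The core retrieval step can be sketched as follows. This is a minimal illustration, not RA-CM3's actual retriever: the function names, the toy corpus, and the hand-written embeddings are all hypothetical. The real model uses a trained dense retriever over multimodal (text + image) documents and feeds the retrieved documents to the generator as extra context.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, corpus, k=2):
    """Return the k corpus documents most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, doc["vec"]), reverse=True)
    return ranked[:k]

# Toy "embeddings"; in practice these come from a trained encoder.
corpus = [
    {"text": "a photo of a red panda", "vec": [0.9, 0.1, 0.0]},
    {"text": "a diagram of a transformer", "vec": [0.0, 0.9, 0.2]},
    {"text": "a red panda eating bamboo", "vec": [0.8, 0.2, 0.1]},
]
query = [1.0, 0.0, 0.0]  # hypothetical embedding of the prompt "red panda"
context = retrieve(query, corpus)

# The generator would then condition on the retrieved documents plus the prompt.
augmented_prompt = " ".join(doc["text"] for doc in context)
```

Conditioning generation on retrieved documents lets a smaller model reach higher quality, since knowledge can live in the retrieval index instead of the model's parameters.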
DRAGON: Training a Foundation Model from Text and Knowledge Graphs
DRAGON is a new foundation model pre-trained jointly from text and knowledge graphs. It helps knowledge- and reasoning-intensive applications such as question answering.
LinkBERT: Improving Language Model Training with Document Links
LinkBERT is a new language model pretrained to capture document link knowledge, such as hyperlinks on the web. It helps knowledge-intensive applications such as question answering.
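How link knowledge enters pretraining can be sketched as building training pairs from hyperlinked documents. The code below is an illustrative sketch, assuming a toy corpus; the function name and data layout are hypothetical. LinkBERT pairs an anchor segment with a contiguous, a hyperlinked, or a random segment, and trains the model to predict which relation holds, alongside masked language modeling.

```python
import random

def make_example(docs, links, doc_id, rng):
    """Pair the anchor doc with a contiguous / linked / random segment."""
    choice = rng.choice(["contiguous", "linked", "random"])
    if choice == "contiguous":
        partner = docs[doc_id]["next_segment"]
    elif choice == "linked" and links.get(doc_id):
        # Take a segment from a document the anchor hyperlinks to.
        partner = docs[rng.choice(links[doc_id])]["text"]
    else:
        choice = "random"
        partner = docs[rng.choice(list(docs))]["text"]
    return {"segment_a": docs[doc_id]["text"],
            "segment_b": partner,
            "relation": choice}

# Toy corpus: two documents, where "a" hyperlinks to "b".
docs = {
    "a": {"text": "Anchor segment.", "next_segment": "Next segment of a."},
    "b": {"text": "Segment from the linked document."},
}
links = {"a": ["b"]}
example = make_example(docs, links, "a", random.Random(0))
```

Placing linked segments in the same context window lets the model see knowledge that spans document boundaries, which contiguous-only pretraining never exposes.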
Reasoning with Language Models and Knowledge Graphs for Question Answering
We introduce an end-to-end question answering model, QA-GNN, that can jointly reason with pre-trained language models and knowledge graphs.
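The joint-reasoning idea can be sketched with a tiny message-passing step. This is a simplified stand-in, not QA-GNN's actual architecture: the features and the mean-aggregation layer are illustrative, whereas the real model uses an attention-based GNN. The key move is adding a node that represents the QA context (encoded by the language model) to the retrieved knowledge-graph subgraph, so both can update each other.

```python
def gnn_layer(features, edges):
    """One round of message passing: each node averages its neighbors' features."""
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for n, vec in features.items():
        msgs = [features[m] for m in neighbors[n]] or [vec]
        mean = [sum(col) / len(msgs) for col in zip(*msgs)]
        # Combine self features with the aggregated message (simple sum here).
        updated[n] = [a + b for a, b in zip(vec, mean)]
    return updated

# "qa" is the context node produced by the language model; the other nodes
# are knowledge-graph entities retrieved for the question.
features = {"qa": [1.0, 0.0], "panda": [0.0, 1.0], "bamboo": [0.5, 0.5]}
edges = [("qa", "panda"), ("panda", "bamboo")]
out = gnn_layer(features, edges)
```

After a few such rounds, the QA node's representation reflects the structure of the knowledge graph, and the entity nodes reflect the question context, which is what enables joint reasoning for answer scoring.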
Break-It-Fix-It: Unsupervised Learning for Fixing Source Code Errors
How can machine learning fix source code errors (e.g., in C or Python) for us? We introduce Break-It-Fix-It, a new unsupervised method for training code repair models.
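The unsupervised loop can be sketched as follows. This is a minimal illustration under strong simplifications: the "fixer" here is a hand-written placeholder, and the critic is just whether a snippet parses as Python. In the paper, the fixer and breaker are neural models; the central idea is that a critic (e.g., a compiler or linter) verifies the fixer's outputs, so the verified (broken, fixed) pairs can train a breaker, whose realistic broken code in turn retrains the fixer.

```python
def critic(code):
    """Return True if the code parses; BIFI's critic is e.g. a compiler/linter."""
    try:
        compile(code, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False

def fixer(code):
    """Placeholder fixer: append a missing closing parenthesis."""
    return code if critic(code) else code + ")"

def bifi_round(broken_snippets):
    """One round: fix each snippet, verify with the critic, keep verified pairs."""
    paired_data = []
    for bad in broken_snippets:
        good = fixer(bad)
        if critic(good):  # only critic-verified fixes become training pairs
            paired_data.append((bad, good))
    return paired_data

pairs = bifi_round(["print('hi'", "x = (1 + 2"])
```

Because the critic filters out bad fixes, the loop needs no human-labeled (broken, fixed) pairs, which is what makes the method unsupervised.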
Learning to Fix Programs from Error Messages
We study how to use machine learning to repair programs from error messages, and introduce a promising approach that leverages program-feedback graphs and self-supervised learning.