To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.
Despite this parallelization, current algorithms struggle when the energy contains high-order terms and the graph is densely connected.
In this paper we propose a partitioning strategy, followed by a novel message-passing algorithm that exploits pre-computations so that only the high-order factors need to be updated when passing messages across machines.
We demonstrate the effectiveness of our approach on the task of jointly estimating layout and semantic segmentation from a single image, and show that it is orders of magnitude faster than existing methods.