Translating a Visual LEGO Manual to a Machine-Executable Plan

ECCV 2022


Ruocheng Wang 1    Yunzhi Zhang 1    Jiayuan Mao 2    Chin-Yi Cheng 3    Jiajun Wu 1
1Stanford University          2MIT CSAIL          3Autodesk AI Lab

[Paper] [Code] [BibTeX]



Abstract


We study the problem of translating an image-based, step-by-step assembly manual created by human designers into machine-interpretable instructions. We formulate this problem as a sequential prediction task: at each step, our model reads the manual, locates the components to be added to the current shape, and infers their 3D poses. This task poses two challenges: establishing a 2D-3D correspondence between the manual image and the real 3D object, and estimating the 3D poses of unseen 3D objects, since a component added in one step may itself be an object built in previous steps. To address these challenges, we present a novel learning-based framework, the Manual-to-Executable-Plan Network (MEPNet), which reconstructs the assembly steps from a sequence of manual images. The key idea is to integrate neural 2D keypoint detection modules with 2D-3D projection algorithms for high-precision prediction and strong generalization to unseen components. MEPNet outperforms existing methods on three newly collected LEGO manual datasets and a Minecraft house dataset.
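To make the sequential formulation concrete, the sketch below illustrates the per-step loop described in the abstract: read a manual image, locate the newly added components in 2D, lift them to 3D poses, and update the current shape. This is a minimal illustration, not the authors' code; all names here (ComponentPose, AssemblyState, detect_keypoints, lift_to_pose, execute_manual) are hypothetical placeholders for the learned keypoint detector and the 2D-3D projection procedure.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ComponentPose:
    """3D pose of one newly added component: a translation and a discrete rotation."""
    translation: Tuple[int, int, int]
    rotation: int  # e.g., index into the axis-aligned rotations allowed for a brick

@dataclass
class AssemblyState:
    """The shape built so far, as the list of components placed in previous steps."""
    placed: List[ComponentPose] = field(default_factory=list)

def detect_keypoints(manual_image) -> List[Tuple[float, float]]:
    """Placeholder for a learned 2D keypoint detector that locates the components
    added in the current manual step; returns one (u, v) pixel location per component."""
    return []

def lift_to_pose(keypoints_2d: List[Tuple[float, float]],
                 state: AssemblyState) -> List[ComponentPose]:
    """Placeholder for a 2D-to-3D projection step that turns detected keypoints into
    3D poses consistent with the shape assembled so far."""
    return []

def execute_manual(manual_images) -> AssemblyState:
    """Translate a sequence of manual images into a machine-executable assembly plan."""
    state = AssemblyState()
    for image in manual_images:
        keypoints = detect_keypoints(image)         # locate the new components in 2D
        new_poses = lift_to_pose(keypoints, state)  # infer their 3D poses
        state.placed.extend(new_poses)              # add them to the current shape
    return state

A full system would replace the two placeholders with the learned keypoint network and the pose-recovery procedure; the outer loop simply mirrors the step-by-step structure of the manual itself.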



Video





Results


Stepwise visualization on the LEGO datasets. Each column shows the ground truth and model predictions for a single step. Components added in the step are highlighted in the manual images.



Visualization of our model building LEGOs from scratch.



Stepwise visualization of results on the 3D-Craft dataset.