StructureNet: Hierarchical Graph Networks

for 3D Shape Generation

Kaichun Mo^{1*}
Paul Guerrero^{2*}
Li Yi^{1}
Hao Su^{3}

Peter Wonka^{4}
Niloy Mitra^{2,5}
Leonidas J. Guibas^{1,6}

(*: indicates joint first authors)

^{1} Stanford University
^{2} University College London
^{3} University of California San Diego

^{4} King Abdullah University of Science and Technology (KAUST)

^{5} Adobe Research
^{6} Facebook AI Research

Conditionally Accepted to *SIGGRAPH Asia 2019*


**NEW** [July 27, 2019] The paper has been accepted. Code and data are released.

Abstract

The ability to generate novel, diverse, and realistic 3D shapes along with associated part semantics and structure is central to many applications requiring high-quality 3D assets or large volumes of realistic training data. A key challenge towards this goal is how to accommodate diverse shape variations, including both continuous deformations of parts as well as structural or discrete alterations which add to, remove from, or modify the shape constituents and compositional structure. Such object structure can typically be organized into a hierarchy of constituent object parts and relationships, represented as a hierarchy of n-ary graphs. We introduce StructureNet, a hierarchical graph network which (i) can directly encode shapes represented as such n-ary graphs; (ii) can be robustly trained on large and complex shape families; and (iii) can be used to generate a great diversity of realistic structured shape geometries. Technically, we accomplish this by drawing inspiration from recent advances in graph neural networks to propose an order-invariant encoding of n-ary graphs, considering jointly both part geometry and inter-part relations during network training. We extensively evaluate the quality of the learned latent spaces for various shape families and show significant advantages over baseline and competing methods. The learned latent spaces enable several structure-aware geometry processing applications, including shape generation and interpolation, shape editing, and shape structure discovery directly from un-annotated images, point clouds, or partial scans.
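To illustrate the order-invariance property mentioned above, the following is a minimal sketch (not the authors' implementation; all function names, weight shapes, and the choice of max-pooling are illustrative assumptions): one level of an n-ary graph encoder where per-edge messages and per-child codes are pooled with a symmetric max, so permuting the children leaves the parent code unchanged.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode_graph(child_codes, edges, W_msg, W_node):
    """Encode one n-ary graph level into a single parent code.
    child_codes: (n, d) part codes; edges: list of (i, j) directed relations.
    Hypothetical weights W_msg, W_node stand in for learned parameters."""
    n, d = child_codes.shape
    msgs = np.zeros((n, d))
    for i, j in edges:
        # message to node i computed from both endpoint codes
        m = relu(np.concatenate([child_codes[i], child_codes[j]]) @ W_msg)
        msgs[i] = np.maximum(msgs[i], m)   # symmetric pooling over edges
    updated = relu(np.concatenate([child_codes, msgs], axis=1) @ W_node)
    return updated.max(axis=0)             # symmetric pooling over children

# Usage: the same graph under two child orderings yields the same parent code.
rng = np.random.default_rng(0)
d = 8
W_msg = rng.standard_normal((2 * d, d))
W_node = rng.standard_normal((2 * d, d))
codes = rng.standard_normal((3, d))
edges = [(0, 1), (1, 2), (2, 0)]
order = np.array([2, 0, 1])                # permute the children
inv = np.argsort(order)                    # relabel edges accordingly
code_a = encode_graph(codes, edges, W_msg, W_node)
code_b = encode_graph(codes[order],
                      [(inv[i], inv[j]) for i, j in edges],
                      W_msg, W_node)
```

Because both pooling steps are symmetric, `code_a` and `code_b` coincide; a sum or mean aggregator would give the same invariance.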

Figure 1. StructureNet is a hierarchical graph network that produces a unified latent space to encode structured models with both continuous geometric and discrete structural variations. In this example, we projected an un-annotated point cloud (left) and un-annotated image (right) into the learned latent space, yielding semantically segmented point clouds structured as a hierarchy of graphs. Shape interpolation in the latent space also produces structured point clouds (top) including their corresponding graphs (bottom). Edges correspond to specific part relationships that are modeled by our approach. For simplicity, here we only show the graphs without the hierarchy. Note how the base of the chair morphs via functionally plausible intermediate configurations, or how the chair back transitions from a plain back to a back with arm-rests.

Hierarchical Graph Network Architecture

Figure 2.

Free Shape Generation

Figure 3.

Part Interpolation

Figure 4.

Shape Abstraction

Figure 5.

Shape Editing

Figure 6.

Acknowledgements

This project was supported by a Vannevar Bush Faculty Fellowship, NSF grant RI-1764078, NSF grant CCF-1514305, a Google Research award, an ERC Starting Grant (SmartGeometry StG-2013-335373), an ERC PoC Grant (SemanticCity), Google Faculty Awards, Google PhD Fellowships, a Royal Society Advanced Newton Fellowship, KAUST OSR award CRG2017-3426, and gifts from Adobe, Autodesk, and Qualcomm. We especially thank Kun Liu, Peilang Zhu, Yan Zhang, and Kai Xu for their help in preparing binary symmetry hierarchies for the GRASS baselines on PartNet. We also thank the anonymous reviewers for their fruitful suggestions.