• Platinum Pass
  • Full Conference Pass
  • Full Conference One-Day Pass

Date: Wednesday, November 20th
Time: 2:15pm - 4:00pm
Venue: Plaza Meeting Room P2
Session Chair(s): Wenping Wang, University of Hong Kong, China

RPM-Net: Recurrent Prediction of Motion and Parts from Point Cloud

Abstract: We introduce RPM-Net, a deep learning-based approach which simultaneously infers movable parts and hallucinates their motions from a single, un-segmented, and possibly partial, 3D point cloud shape. RPM-Net is a novel Recurrent Neural Network (RNN), composed of an encoder-decoder pair with interleaved Long Short-Term Memory (LSTM) components, which together predict a temporal sequence of point-wise displacements for the input shape. At the same time, the displacements allow the network to learn movable parts, resulting in a motion-based shape segmentation. Recursive applications of RPM-Net on the obtained parts can predict finer-level part motions, resulting in a hierarchical object segmentation. Furthermore, we develop a separate network to estimate part mobilities, e.g., per-part motion parameters, from the segmented motion sequence. Both networks learn the deep predictive models from a training set that exemplifies a variety of mobilities for diverse objects. We show results of simultaneous motion and part predictions from synthetic and real scans of 3D objects exhibiting a variety of part mobilities, possibly involving multiple movable parts.
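
A minimal sketch (not the authors' code) of the core idea that a predicted sequence of point-wise displacements yields a motion-based segmentation: points whose accumulated motion is significant form a movable part. The function name and threshold below are hypothetical, for illustration only.

```python
import numpy as np

def segment_by_motion(displacements, threshold=0.1):
    """displacements: (T, N, 3) array of per-step, per-point displacements.
    Returns a boolean mask marking 'movable' points, by thresholding the
    magnitude of each point's accumulated displacement."""
    total = np.linalg.norm(displacements.sum(axis=0), axis=-1)  # (N,)
    return total > threshold

# Toy "shape": four static points and two points translated at every step.
T, N = 5, 6
disp = np.zeros((T, N, 3))
disp[:, 4:, 0] = 0.05  # the last two points move along x each step
mask = segment_by_motion(disp)
print(mask)  # the two moving points are flagged as a movable part
```

In RPM-Net the displacements are predicted by the recurrent encoder-decoder rather than given; this sketch only illustrates how displacements induce a part segmentation.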

Authors/Presenter(s): Zihao Yan, Shenzhen University, China
Ruizhen Hu, Shenzhen University, China
Xingguang Yan, Shenzhen University, China
Luanmin Chen, Shenzhen University, China
Oliver van Kaick, Carleton University, Canada
Hao (Richard) Zhang, Simon Fraser University, Canada
Hui Huang, Shenzhen University, China

Learning Adaptive Hierarchical Cuboid Abstractions of 3D Shape Collections

Abstract: Abstracting man-made 3D objects as assemblies of primitives, i.e., shape abstraction, is an important task in 3D shape understanding and analysis. In this paper, we propose an unsupervised learning method for automatically constructing compact and expressive shape abstractions of 3D objects in a class. The key idea of our approach is an adaptive hierarchical cuboid representation that abstracts a 3D shape with a set of parametric cuboids adaptively selected from a hierarchical and multi-level cuboid representation shared by all objects in the class. The adaptive hierarchical cuboid abstraction offers a compact representation for modeling the variant shape structures and their coherence at different abstraction levels. Based on this representation, we design a convolutional neural network (CNN) for predicting the parameters of each cuboid in the hierarchical cuboid representation and the adaptive selection mask of cuboids for each input 3D shape. For training the CNN from an unlabeled 3D shape collection, we propose a set of novel loss functions to maximize the approximation quality and compactness of the adaptive hierarchical cuboid abstraction and present a progressive training scheme to refine the cuboid parameters and the cuboid selection mask effectively. We evaluate the effectiveness of our approach on various 3D shape collections and demonstrate its advantages over the existing cuboid abstraction approach. We also illustrate applications of the resulting adaptive cuboid representations in various shape analysis and manipulation tasks.
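
A hedged sketch of the "adaptive selection" idea: for each node in a cuboid hierarchy, use its children only when their combined fitting error (with a per-cuboid compactness penalty) beats the single parent cuboid. The tree layout, error values, and penalty below are assumptions for illustration, not the paper's learned selection mask or loss.

```python
def select_cuboids(node, penalty=0.05):
    """node: {"name": str, "error": float, "children": [node, ...]}.
    Returns (cost, selected cuboid names), where cost = fitting error
    plus `penalty` per selected cuboid (favoring compact abstractions)."""
    keep_cost = node["error"] + penalty  # cost of keeping this one cuboid
    if not node["children"]:
        return keep_cost, [node["name"]]
    split_cost, chosen = 0.0, []
    for child in node["children"]:
        c, names = select_cuboids(child, penalty)
        split_cost += c
        chosen.extend(names)
    # descend into the children only if it is actually cheaper
    if split_cost < keep_cost:
        return split_cost, chosen
    return keep_cost, [node["name"]]

leaf = lambda name, err: {"name": name, "error": err, "children": []}
chair = {"name": "chair", "error": 0.5, "children": [
    leaf("seat", 0.05),
    leaf("back", 0.08),
    {"name": "legs", "error": 0.3,
     "children": [leaf("leg%d" % i, 0.02) for i in range(4)]},
]}
cost, parts = select_cuboids(chair)
print(parts)  # ['seat', 'back', 'leg0', 'leg1', 'leg2', 'leg3']
```

The actual method predicts the selection mask with a CNN trained by the proposed losses; this greedy recursion only makes the fit-vs-compactness trade-off concrete.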

Authors/Presenter(s): Chun-Yu Sun, Tsinghua University, Microsoft Research Asia, China
Qian-Fang Zou, University of Science and Technology of China, Microsoft Research Asia, China
Xin Tong, Microsoft Research Asia, China
Yang Liu, Microsoft Research Asia, China

StructureNet: Hierarchical Graph Networks for 3D Shape Generation

Abstract: The ability to generate novel, diverse, and realistic 3D shapes along with associated part semantics and structure is central to many applications requiring high-quality 3D assets or large volumes of realistic training data. A key challenge towards this goal is how to accommodate diverse shape variations, including both continuous deformations of parts as well as structural or discrete alterations which add to, remove from, or modify the shape constituents and compositional structure. Such object structure can typically be organized into a hierarchy of constituent object parts and relationships, represented as a hierarchy of n-ary graphs. We introduce StructureNet, a hierarchical graph network which (i) can directly encode shapes represented as such n-ary graphs, (ii) can be robustly trained on large and complex shape families, and (iii) can be used to generate a great diversity of realistic structured shape geometries. Technically, we accomplish this by drawing inspiration from recent advances in graph neural networks to propose an order-invariant encoding of n-ary graphs, considering jointly both part geometry and inter-part relations during network training. We extensively evaluate the quality of the learned latent spaces for various shape families and show significant advantages over baseline and competing methods. The learned latent spaces enable several structure-aware geometry processing applications, including shape generation and interpolation, shape editing, or shape structure discovery directly from un-annotated images, point clouds, or partial scans.
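
A minimal sketch of the order-invariance idea, assuming a toy feature vector per part: aggregating child encodings with a symmetric operation (here max-pooling) makes the code independent of the order in which children are listed. This is an illustration, not StructureNet's learned encoder.

```python
import numpy as np

def encode(node):
    """node: {"feat": (d,) array, "children": [node, ...]}.
    Recursively encodes a part hierarchy bottom-up."""
    if not node["children"]:
        return node["feat"]
    child_codes = np.stack([encode(c) for c in node["children"]])
    # symmetric (order-invariant) aggregation: any permutation of the
    # children produces the same max-pooled summary, hence the same code
    return node["feat"] + child_codes.max(axis=0)

a = {"feat": np.array([1.0, 0.0]), "children": []}
b = {"feat": np.array([0.0, 2.0]), "children": []}
root1 = {"feat": np.array([0.5, 0.5]), "children": [a, b]}  # children a, b
root2 = {"feat": np.array([0.5, 0.5]), "children": [b, a]}  # children b, a
print(np.allclose(encode(root1), encode(root2)))  # True
```

StructureNet additionally encodes inter-part relation edges and uses learned message passing; only the symmetric-aggregation principle carries over here.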

Authors/Presenter(s): Kaichun Mo, Stanford University, United States of America
Paul Guerrero, University College London, United Kingdom
Li Yi, Stanford University, United States of America
Hao Su, University of California, San Diego, United States of America
Peter Wonka, King Abdullah University of Science and Technology (KAUST), Saudi Arabia
Niloy Mitra, University College London, Adobe, United Kingdom
Leonidas Guibas, Stanford University, Facebook, United States of America

SDM-NET: Deep Generative Network for Structured Deformable Mesh

Abstract: We introduce SDM-NET, a deep generative neural network which produces structured deformable meshes. Specifically, the network is trained to generate a spatial arrangement of closed, deformable mesh parts, which respect the global part structure of a shape collection, e.g., chairs, airplanes, etc. Our key observation is that while the overall structure of a 3D shape can be complex, the shape can usually be decomposed into a set of parts, each homeomorphic to a box, and the finer-scale geometry of the part can be recovered by deforming the box. The architecture of SDM-NET is that of a two-level variational autoencoder (VAE). At the part level, a PartVAE learns a deformable model of part geometries. At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring a coherence between global shape structure and surface details. Through extensive experiments and comparisons with the state-of-the-art deep generative models of shapes, we demonstrate the superiority of SDM-NET in generating meshes with visual quality, flexible topology, and meaningful structures, which benefit shape interpolation and other subsequent modeling tasks.
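
A hedged sketch of the box-homeomorphic part idea: a part's coarse geometry is an axis-aligned box (scale plus translation), and finer geometry is recovered by deforming it, here reduced to per-corner offsets. Function names and the corner-only deformation are simplifying assumptions; SDM-NET deforms a densely subdivided box mesh via the learned PartVAE.

```python
import numpy as np

def unit_box_corners():
    """The 8 corners of the unit cube, shape (8, 3)."""
    return np.array([[x, y, z] for x in (0.0, 1.0)
                               for y in (0.0, 1.0)
                               for z in (0.0, 1.0)])

def part_geometry(scale, translation, offsets):
    """Place a box (per-axis scale + translation), then apply per-corner
    offsets to recover finer part geometry from the box template."""
    return unit_box_corners() * scale + translation + offsets

# A seat-like part: a flat 2 x 1 x 0.2 box with no deformation applied.
corners = part_geometry(scale=np.array([2.0, 1.0, 0.2]),
                        translation=np.array([0.0, 0.0, 0.5]),
                        offsets=np.zeros((8, 3)))
print(corners.shape)  # (8, 3)
```

With zero offsets the part is exactly the placed box; a generative model fills in the offsets (and, at the structural level, the arrangement of parts).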

Authors/Presenter(s): Lin Gao, Institute of Computing Technology, Chinese Academy of Sciences, China
Jie Yang, Institute of Computing Technology, Chinese Academy of Sciences, China
Tong Wu, Institute of Computing Technology, Chinese Academy of Sciences, China
Yu-Jie Yuan, Institute of Computing Technology, Chinese Academy of Sciences, China
Hongbo Fu, School of Creative Media, City University of Hong Kong, China
Yu-Kun Lai, Cardiff University, United Kingdom
Hao Zhang, Simon Fraser University, Canada