3D Scene Generation via Unsupervised Object Synthesis

Date

2019

Authors

Beksi, William
Arshad, Mohammad Samiul

Abstract

Understanding the geometric and semantic structure of a scene (scene understanding) is a crucial problem in robotics. Researchers have employed deep learning to address scene understanding tasks such as instance segmentation, semantic segmentation, and object recognition. A major impediment to applying deep learning models is their requirement for enormous quantities of labeled data: performance increases in proportion to the amount of training data available. Manually accumulating these annotated datasets is an immense undertaking and not a viable long-term option. Synthetic scene generation is an active area of research at the intersection of computer graphics, computer vision, and robotics. Recent state-of-the-art systems automatically generate configurations of objects from synthetic 3D scene models using heuristic techniques. In contrast, we introduce a framework for unsupervised synthetic scene generation from raw 3D point cloud data. Our architecture is built upon autoencoders and generative adversarial networks.
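The abstract does not specify a reconstruction objective, but point-cloud autoencoders of this kind are commonly trained with the Chamfer distance, which compares two unordered point sets. As an illustrative sketch (an assumption, not a detail taken from this work), a NumPy implementation might look like:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3).

    This is a common reconstruction loss for point-cloud autoencoders;
    it is shown here as a generic example, not as this paper's exact loss.
    """
    # Pairwise squared Euclidean distances between all points, shape (N, M).
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in each direction, then sum.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Because the loss averages over nearest neighbors in both directions, it is invariant to the ordering of points within each cloud, which is essential when the network's output points have no canonical order.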
