
Auto-conversion from 2D drawing to 3D model with deep learning

Yetiş, Gizem
Modeling has always been important, as it transfers knowledge to end users. Representation techniques, from the very first line on a computer screen to AR/VR applications, have broadened the perception, communication and implementation of design-related industries, and each technique has become another's base, information source and data supplier. Yet transforming the information contained in one representation into another still poses major problems: it requires precise data, qualified personnel and human intervention. This research presents an automated reconstruction from low-level data sources to higher-level digital models in order to eliminate these problems. The auto-conversion process examines only architectural usage and illustrates its applicability to different fields. 2D floor plans and elevation drawings in raster format, collected and/or produced from scratch, are used as datasets. These drawings are semantically segmented with three different Convolutional Neural Networks to obtain the relevant architectural information, since Deep Learning has shown promising success on a wide range of problems through its widespread use. The semantically segmented drawings are then transformed into 3D using Digital Geometry Processing methods. Lastly, a web application is introduced that allows any user to obtain a 3D model with ease. The semantic segmentation results in 2D and two case studies in 3D are evaluated and compared separately with different metrics to assess the accuracy of the process. To conclude, this research proposes an automated process for the reconstruction of 3D models with state-of-the-art methods and makes it ready for use even by a person without technical knowledge.
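The 2D-to-3D step described above can be sketched in miniature: a semantically segmented floor plan (here a hand-made label grid standing in for CNN output, not the thesis's actual networks or geometry-processing pipeline) is extruded into simple axis-aligned 3D wall boxes. The label values, grid, and wall height are illustrative assumptions.

```python
# Illustrative sketch only: a toy segmented floor plan is extruded to 3D.
# WALL label, grid contents, and WALL_HEIGHT are assumptions, not values
# from the thesis.

WALL = 1
WALL_HEIGHT = 3.0  # assumed wall height in metres

# 4x4 label grid: 1 = wall pixel, 0 = interior space
segmented = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

def extrude_walls(grid, height):
    """Turn each wall-labelled cell into an axis-aligned 3D box,
    given as (min corner, max corner); a crude stand-in for the
    Digital Geometry Processing step."""
    boxes = []
    for y, row in enumerate(grid):
        for x, label in enumerate(row):
            if label == WALL:
                boxes.append(((float(x), float(y), 0.0),
                              (float(x + 1), float(y + 1), height)))
    return boxes

boxes = extrude_walls(segmented, WALL_HEIGHT)
print(f"{len(boxes)} wall boxes extruded to height {WALL_HEIGHT}")
```

A real pipeline would replace the hand-made grid with per-pixel CNN predictions and merge adjacent wall cells into clean meshes, but the extrusion idea is the same.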