Representations in design computing through 3-D deep generative models

2024-12-10
Çakmak, Başak
Öngün, Cihan
This paper explores alternative representations of physical architecture derived from its real-world sensory data through artificial neural networks (ANNs). In the project developed for this research, a detailed 3-D point cloud model is produced by scanning a physical structure with LiDAR. The point cloud data and mesh models are then divided into parts according to architectural references and part-whole relationships, using various techniques to create datasets. A deep learning model is trained on these datasets, and the new 3-D models produced by deep generative models are examined. These new 3-D models, embodied in different representations such as point clouds, mesh models, and bounding boxes, are used as a design vocabulary from which combinatorial formations are generated.
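The abstract's preprocessing step, dividing a scanned point cloud into parts and deriving simpler representations such as bounding boxes, can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes a plain NumPy array of points and uses a simple axis-aligned slab split as a stand-in for the architectural part-whole segmentation the authors describe.

```python
import numpy as np

def split_point_cloud(points, axis=2, n_parts=4):
    """Split a point cloud into n_parts slabs along one axis.

    A toy stand-in for the part-whole segmentation described in the
    abstract; real segmentation would follow architectural references.
    """
    lo, hi = points[:, axis].min(), points[:, axis].max()
    edges = np.linspace(lo, hi, n_parts + 1)
    parts = []
    for i in range(n_parts):
        if i == n_parts - 1:
            # Close the last interval so boundary points are kept.
            mask = (points[:, axis] >= edges[i]) & (points[:, axis] <= edges[i + 1])
        else:
            mask = (points[:, axis] >= edges[i]) & (points[:, axis] < edges[i + 1])
        parts.append(points[mask])
    return parts

def bounding_box(points):
    """Axis-aligned bounding box of a part: (min corner, max corner)."""
    return points.min(axis=0), points.max(axis=0)

# Synthetic stand-in for a LiDAR scan (1000 points in a 10 m cube).
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 10.0, size=(1000, 3))

parts = split_point_cloud(cloud, axis=2, n_parts=4)
boxes = [bounding_box(p) for p in parts]
```

Each part keeps its raw points (a point-cloud representation) while its bounding box gives the coarser representation the abstract mentions as part of the design vocabulary.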
Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM
Citation Formats
B. Çakmak and C. Öngün, “Representations in design computing through 3-D deep generative models,” Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM, vol. 38, pp. 0–0, 2024, Accessed: 00, 2024. [Online]. Available: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85212036673&origin=inward.