TY - GEN
T1 - Deciphering Design Sketches as 3D Models: A Sequence-2-Sequence Approach to Generative Modeling Using Sketches
AU - Pinochet, Diego
N1 - Publisher Copyright:
© ACADIA 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - In this paper, I present a human-machine collaborative 3D modeling system that combines human gestures with generative 3D modeling. The project seeks to explore the unfolding of design ideas while reframing the concepts of design workflows, knowledge encapsulation, disembodiment, and representation in 3D CAD processes. Using machine learning and interactive computation, this project links user sketches to generative 3D modeling through a sequence-to-sequence model. By encapsulating expert knowledge related to 3D modeling, this project seeks to eliminate intermediate representations, such as sections, elevations, and floorplans, from the design process to enable immediate, real-time 3D model generation from hand sketches (Figure 1). Whereas most projects using generative machine learning to produce 3D models focus on one-to-one fidelity between sketches and 3D models, this research focuses on the generative power of gesture sequences to produce novel 3D models. This experiment aims to answer, among others, the following questions: Can the use of machine learning reframe the generation of 3D models in a more embodied way? Is it possible to capture design intentions from sketches to generate 3D shapes using machine learning? Can we design and explore ideas inside a computer without representing them, focusing instead on the unique sequences that originate novel designs?
UR - http://www.scopus.com/inward/record.url?scp=85192824522&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85192824522
T3 - Habits of the Anthropocene: Scarcity and Abundance in a Post-Material Economy - Proceedings of the 43rd Annual Conference of the Association for Computer Aided Design in Architecture, ACADIA 2023
SP - 606
EP - 615
BT - Proceedings Book One
A2 - Crawford, Assia
A2 - Diniz, Nancy Morgado
A2 - Beckett, Richard
A2 - Vanucchi, Jamie
A2 - Swackhamer, Marc
PB - Association for Computer Aided Design in Architecture
T2 - 43rd Annual Conference of the Association for Computer Aided Design in Architecture: Habits of the Anthropocene: Scarcity and Abundance in a Post-Material Economy, ACADIA 2023
Y2 - 21 October 2023 through 28 October 2023
ER -