TY - GEN
T1 - A Computational Gestural Making Framework: A Multi-modal Approach to Digital Fabrication Mapping Human Gestures to Machine Actions
AU - Pinochet, Diego
N1 - Publisher Copyright:
© ACADIA 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - This research project implements a multimodal, body-centric approach to interactive fabrication aimed at testing the conversational aspects of a design framework (Figure 1). It focuses on the development of a gesture language as the primary mode of communication, as well as the means to generate effective communication with a machine for design endeavors. To do so, we first developed a gesture recognition system that aims to establish fluid communication with a machine based on three types of gestures: symbolic, exploratory, and sequential. Second, we developed a machine vision system to detect, recognize, and calculate the positions of physical objects in space. Third, we developed a system for robotic motion using path-planning algorithms and reinforcement learning for collision-free machine movement. Finally, these three modules were integrated into a real-time, gesture-based system for human-robot interaction. The ultimate goal of this implementation is to establish a multimodal framework for interactive design based on human-robot interaction, using gestures as a communication mechanism for exploring computational design potential toward unique and original creations.
UR - http://www.scopus.com/inward/record.url?scp=85192838924&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85192838924
T3 - Habits of the Anthropocene: Scarcity and Abundance in a Post-Material Economy - Proceedings of the 43rd Annual Conference of the Association for Computer Aided Design in Architecture, ACADIA 2023
SP - 92
EP - 103
BT - Proceedings Book One
A2 - Crawford, Assia
A2 - Diniz, Nancy Morgado
A2 - Beckett, Richard
A2 - Vanucchi, Jamie
A2 - Swackhamer, Marc
PB - Association for Computer Aided Design in Architecture
T2 - 43rd Annual Conference of the Association for Computer Aided Design in Architecture: Habits of the Anthropocene: Scarcity and Abundance in a Post-Material Economy, ACADIA 2023
Y2 - 21 October 2023 through 28 October 2023
ER -