Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis
This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative synthesis, where small sound units are organized by derived timbral features. We use interactive machine learning in a mapping-by-demonstration paradigm to create regression models that map high-dimensional gestural data to timbral data, without dimensionality reduction, in three distinct workflows. First, static regression directly associates individual sound units with static poses (anchor points). Second, whole regression uses a sound-tracing method that leverages our intuitive associations between time-varying sound and embodied movement. Third, assisted interactive machine learning extends interactive machine learning with artificial agents and reinforcement learning. We discuss the benefits of organizing the sound corpus with self-organizing maps to address corpus sparseness, and the potential of regression-based mapping at different points in a musical workflow: gesture design, sound design, and mapping design. These systems support expressive performance by creating gesture-timbre spaces that maximize sonic diversity while maintaining coherence, enabling both reliable reproduction of target sounds and improvisatory exploration of a sonic corpus. The systems have been made available to the research community and have been used by the authors in concert performance.
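To make the static-regression workflow concrete, the sketch below trains a neural-network regressor on a handful of pose/sound-unit anchor pairs and then selects the nearest corpus unit for each incoming gesture frame. This is a minimal, hypothetical Python reconstruction, not the authors' implementation (which builds on interactive machine learning tools and corpus-based concatenative synthesis environments); the feature dimensions, the anchor arrays, and the `select_unit` helper are illustrative assumptions.

```python
# Minimal sketch of the "static regression" workflow: map high-dimensional
# gesture features (e.g., EMG channels plus orientation data) to timbral
# features, then pick the closest unit in a concatenative-synthesis corpus.
# All arrays here are synthetic stand-ins for real sensor/descriptor data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

N_GESTURE = 12  # assumed: 8 EMG channels + 4 orientation values
N_TIMBRE = 5    # assumed: loudness, pitch, spectral centroid/flatness/flux

# Anchor points: static poses paired with the timbral features of
# hand-picked sound units (random placeholders for illustration).
gesture_anchors = rng.random((6, N_GESTURE))
timbre_anchors = rng.random((6, N_TIMBRE))

# Regression model mapping gesture space to timbre space directly,
# without dimensionality reduction.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(gesture_anchors, timbre_anchors)

# Corpus of sound units described by the same timbral features.
corpus_features = rng.random((500, N_TIMBRE))
index = NearestNeighbors(n_neighbors=1).fit(corpus_features)

def select_unit(gesture_frame: np.ndarray) -> int:
    """Predict a timbral target for one gesture frame and return the
    index of the closest sound unit in the corpus."""
    target = model.predict(gesture_frame.reshape(1, -1))
    _, unit = index.kneighbors(target)
    return int(unit[0, 0])

# Interpolating between two anchor poses traces a path through the
# gesture-timbre space, triggering intermediate corpus units.
for t in np.linspace(0.0, 1.0, 5):
    frame = (1 - t) * gesture_anchors[0] + t * gesture_anchors[1]
    print(round(t, 2), select_unit(frame))
```

In the whole-regression (sound tracing) workflow the training data would instead be time series of gesture frames aligned with the timbral trajectory of a target sound; the model and the nearest-unit lookup remain the same.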

  • Date: 10 March 2021
  • Publication Status: Published
  • DOI: 10.1007/978-3-030-70210-6_39
  • Cross Ref: 10.1007/978-3-030-70210-6_39
  • Funders: Historic Funder (pre-Worktribe)

Citation

Zbyszyński, M., Di Donato, B., Visi, F., & Tanaka, A. (2021). Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis. In R. Kronland-Martinet, S. Ystad, & M. Aramaki (Eds.), Perception, Representations, Image, Sound, Music - 14th International Symposium, CMMR 2019, Marseille, France, October 14–18, 2019, Revised Selected Papers (pp. 600–622). Springer. https://doi.org/10.1007/978-3-030-70210-6_39

Keywords

Gestural interaction, Interactive machine learning, Reinforcement learning, Sonic interaction design, Concatenative synthesis, Human-computer interaction