Research Output
An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound
  This article presents a custom system combining hardware and software that senses physiological signals of the performer's body resulting from muscle contraction and translates them to computer-synthesized sound. Our goal was to build upon the history of research in the field to develop a complete, integrated system that could be used by nonspecialist musicians. We describe the Embodied AudioVisual Interaction Electromyogram, an end-to-end system spanning wearable sensing on the musician's body, custom microcontroller-based biosignal acquisition hardware, machine learning–based gesture-to-sound mapping middleware, and software-based granular synthesis sound output. A novel hardware design digitizes the electromyogram signals from the muscle with minimal analog preprocessing and treats them in an audio signal-processing chain as a class-compliant audio and wireless MIDI interface. The mapping layer implements an interactive machine learning workflow in a reinforcement learning configuration and can map gesture features to auditory metadata in a multidimensional information space. The system adapts existing machine learning and synthesis modules to work with the hardware, resulting in an integrated, end-to-end system. We explore its potential as a digital musical instrument through a series of public presentations and concert performances by a range of musical practitioners.
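To make the gesture-to-sound mapping idea concrete, the following is a minimal sketch of one common approach in EMG-driven instruments: extracting a sliding-window RMS amplitude envelope from the raw muscle signal and mapping it to a granular synthesis control parameter (here, grain density). The window length, parameter ranges, and function names are illustrative assumptions, not details from the paper, which uses a machine learning mapping layer rather than a fixed linear mapping.

```python
import numpy as np

def rms_envelope(signal, window=64):
    """Sliding-window RMS amplitude envelope of a raw EMG signal.

    Window length is an illustrative choice, not taken from the paper.
    """
    padded = np.pad(signal, (window - 1, 0), mode="edge")
    squared = padded ** 2
    # Moving average of the squared signal, then square root.
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="valid"))

def envelope_to_grain_density(env, min_density=5.0, max_density=80.0):
    """Linearly map a normalized envelope (0..1) to grains per second.

    The density range is hypothetical; a real instrument would expose
    this as a tunable synthesis parameter.
    """
    env = np.clip(env, 0.0, 1.0)
    return min_density + env * (max_density - min_density)

# Synthetic stand-in for EMG: noise whose amplitude grows as the
# simulated muscle contraction intensifies.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
emg = rng.standard_normal(1000) * t

env = rms_envelope(emg)
env_norm = env / env.max()
density = envelope_to_grain_density(env_norm)  # one value per sample
```

In a real-time system this computation would run per audio block on the incoming sensor stream, with the resulting control value smoothed before driving the synthesizer; the paper's system instead learns the feature-to-parameter mapping interactively rather than hard-coding it.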

  • Type:


  • Date:

    10 April 2024

  • Publication Status:

    In Press

  • Publisher:

    MIT Press

  • DOI:


  • ISSN:


  • Funders:

    French National Research Agency; European Commission; European Research Council


Tanaka, A., Visi, F., Di Donato, B., Klang, M., & Zbyszyński, M. (in press). An End-to-End Musical Instrument System That Translates Electromyogram Biosignals to Synthesized Sound. Computer Music Journal.
