From Movement To Sound: The Advanced Sonification Toolkit

Performance Dance

Turning movement into sound can open new possibilities for training, feedback, and user experience. Yet for most coaches, athletes, and designers, sonification remains inaccessible: it typically requires audio-engineering expertise and specialized tools and environments.

The Advanced Sonification Toolkit (AST) addresses this gap. Developed to support real-time feedback and intuitive data understanding, the toolkit aims to make sonification usable without sound-design expertise. Its goal is to enable anyone to explore how sound can guide movement, improve self-regulation, or enrich the sports experience, without interfering with the activity itself.

The first version of the toolkit provides a modular Max/MSP environment for transforming recorded sensor data into sound. Its sound modules range from simple tones to additive synthesis, melodic fragments, and ambient textures. All modules share a unified parameter-mapping interface, allowing for fast experimentation.
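The core idea behind a unified parameter-mapping interface can be illustrated with a minimal sketch. The snippet below is not the toolkit's actual Max/MSP implementation; it is a hypothetical Python illustration of the general pattern: one scaling function maps any sensor range onto any synthesis-parameter range, so every module can be driven the same way. The sensor and parameter ranges shown are invented for the example.

```python
def map_range(value, in_min, in_max, out_min, out_max, clamp=True):
    """Linearly map a sensor value into a synthesis-parameter range.

    With clamp=True, out-of-range sensor values are pinned to the
    parameter's limits, which keeps the resulting sound stable.
    """
    if clamp:
        value = max(in_min, min(in_max, value))
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Hypothetical example: map vertical acceleration (0-20 m/s^2)
# onto the pitch input of a tone module (220-880 Hz).
freq = map_range(10.0, 0.0, 20.0, 220.0, 880.0)   # -> 550.0
capped = map_range(35.0, 0.0, 20.0, 220.0, 880.0)  # clamped -> 880.0
```

Because every module exposes its inputs through the same mapping, swapping a simple tone for an ambient texture only means re-pointing the output range, not rebuilding the patch.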

This design supports the broader research agenda: creating interactive and non-intrusive systems that try to help athletes interpret their own movement. By making data audible in accessible ways, the toolkit supports new forms of guidance, feedback and experimental exploration.

The past year brought three opportunities to test the toolkit’s direction and identify how it should evolve.

At the Audio Mostly / ICAD 2025 workshop, discussions with participants highlighted expectations around transparency and hands-on accessibility. Participants’ exploratory prototyping with Max/MSP showed how diverse mental models of movement-to-sound relationships can be.

The Adidas Sonification Expert Workshop added domain-specific insights from movement specialists and product developers. The exercises revealed common tendencies, such as interpreting many sounds as warnings, and highlighted the importance of designing unobtrusive audio feedback.

A more experimental test came from Schmiede 2025, a media art and science hackathon, where a real-time wireless IMU pipeline was developed and tested in collaboration with a dancer and a visual artist.
This resulted in a performance that demonstrated how expressive movement can interact with sound and visuals.
The prototype work at Schmiede then laid the groundwork for AST v2, which includes real-time data input.
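The shape of such a real-time input stage can be sketched in a few lines. The following is a hedged illustration only, using Python's standard library: it assumes a wireless IMU that streams accelerometer samples as three packed floats over UDP, and reduces each sample to a single movement-intensity value that could drive a sonification parameter. The transport, packet format, and port are assumptions for the sketch, not the Schmiede pipeline's actual protocol.

```python
import math
import socket
import struct

def send_sample(ax, ay, az, port):
    """Simulate an IMU sender: pack one accelerometer sample
    as three little-endian floats and send it over UDP."""
    payload = struct.pack("<3f", ax, ay, az)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, ("127.0.0.1", port))
    tx.close()

def receive_intensity(sock):
    """Read one datagram and return the acceleration magnitude,
    a simple proxy for movement intensity."""
    data, _ = sock.recvfrom(64)
    ax, ay, az = struct.unpack("<3f", data[:12])
    return math.sqrt(ax * ax + ay * ay + az * az)

# Loopback demo: bind an ephemeral port, send one sample, read it back.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)
send_sample(3.0, 4.0, 0.0, rx.getsockname()[1])
intensity = receive_intensity(rx)  # -> 5.0
rx.close()
```

In a live setting, the intensity value would feed something like the toolkit's parameter mapping each time a datagram arrives, closing the loop from wireless sensor to sound in real time.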

Additionally, three demonstration systems were created to explore real-time sonification in running contexts. Two were shown at the SportsHCI 2025 conference, presenting the toolkit and a mobile interaction system for on-the-run feedback. A third demo was presented at MUM 2025, enabling runners to control different sonifications through hand gestures.
Across all settings, an overall theme emerged: sonification has real potential in sports and movement, but only if tools are intuitive and flexible.

With AST v1 established and the release of AST v2 underway, the project is moving toward a widely usable sports and movement sonification environment.