QuBits (2019)
- the system: an interactive, evolving sound-space created for a virtual reality (VR) environment
- presentation: a user wears VR hardware and headphones and explores the sound-space of their own accord
- papers and festivals: ICMC 2020, Ars Electronica Festival 2020, dissertation at UC Berkeley
[Video: screen-capture of using the VR system]
Papers and Festivals
[Photo: recording pins dropped on bricks]
About the Work
The QuBits project is a virtual reality (VR) sound-space environment. The environment was designed to explore a musical aesthetic valuing sound mass, spatial sound, evolution, and algorithmically generated sonic structures. The user of the VR system also plays a key role in shaping these musical elements: they first discover what behaviors are possible through exploration and chance encounters, and can then shape each discovered behavior with nuance if they choose. In the VR environment, each sound has a corresponding visual component. To achieve this, a system was built with two software platforms, one for digital sound processing (Max/MSP) and another for 3D graphics and algorithmic event generation (Unity and C#). These platforms communicate via Open Sound Control (OSC). The sounds are a mix of real-world sampled sound, granular synthesis, and real-time generated synthetic sound.
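To give a flavor of the OSC link between the two platforms, here is a minimal sketch of how an OSC message is encoded on the wire. The address `/qubit/position` and the port number are hypothetical illustrations, not the project's actual message scheme; in practice the Unity side would use an OSC library rather than hand-rolled encoding.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad a byte string to a multiple of 4 bytes (OSC alignment)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode("ascii"))                      # address pattern
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))     # type tag string
    for a in args:
        msg += struct.pack(">f", a)                             # big-endian float32
    return msg

# Hypothetical message reporting an object's 3D position to the sound engine
packet = osc_message("/qubit/position", 0.5, 1.0, -2.0)
# The packet would then be sent over UDP, e.g.:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 7400))
```

Max/MSP parses such packets with its built-in `udpreceive` object, so each visual event in Unity can drive a corresponding sonic event with low latency.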
This system is seeded with a large sound-sample library I recorded and mixed myself. Hundreds of sounds were recorded from a multitude of pins being dropped, tapped, and struck on various materials (wood, metal, tiles, bricks). Each sample was captured in a quiet recording environment with four microphones, later mixed to create a detailed vantage on these delicate, quiet sounds.