Have decided to develop an achievable version of the human Reactable for this commission. Put a call-out for someone who could build an Arduino-based set of ping sensors as my knowledge of the field is inadequate.
Have built an Ableton set that gives me multiple options to feed off the audience's use of the work, and numerous different paths that can be explored depending on what the audience seems to respond to best.
The Arduino interfaces with Ableton via a simple Max/MSP patch, and each of the six sensors is MIDI mapped to a function within Ableton (with the ability to change the mapping at any point).
I have also built a small VDMX set which will run audio-reactive visuals to a projector in the room. The visuals will react to various components of the sound (velocity, frequency, etc.), enabling another layer of user interactivity. This requires adding Soundflower for the internal routing of audio from Ableton to VDMX for analysis. In the past I have just used the built-in mic on the laptop for this, but this project needs something a little less dependent on environmental conditions.