As I reported on The Apple Blog last Friday, there's a new iPhone app called "Kids on DSP", a full album expansion to the popular and supremely cool RjDj app, which converts sounds picked up by the phone's microphone into a synthesized audio track. It's produced by UK musician Robert Thomas (above left), better known to Second Life Residents as Dizzy Banjo (above right), an avatar at the forefront of composing music for the metaverse, like this ambient soundtrack for virtual Mexico, and the groundbreaking Parsec, which uses SL's voice software in an audio/visual experience. As it happens, it was the Parsec project which helped get Robert noticed by the RjDj team.
So Dizzy/Robert is working in two cutting-edge forms of reactive music -- as it's deployed in a virtual world, and as it's integrated into an augmented reality interface (in this case, via the iPhone). I asked him to compare and contrast the two mediums, and he sent along this thoughtful email, expressing excitement for the new Linden media API -- and hinting that major recording artists are interested in building sites in Second Life:
"I think making reactive music on RjDj is quite similar to reactive metaverse music really. However there are some important differences.
"The first difference is location. Obviously in virtual worlds its the virtual space which holds/creates the soundtrack. In the case of RjDj this is your immediate physical reality (at least it is right now - we have some additional ideas in the future.)
"The second difference is the control/interaction model. In virtual worlds this is based on very precise and definable parameters, most of which can be very accurately controlled with scripting - as everything ultimately is code. In SL if an avatar collides with something - you definitely know it happened. In the physical world we have to analyse the stream of audio or data coming in and try to understand it, in order to create a reaction. So if we want to create an event on an object collision - we need to try to analyze reality to see if an object collision happened.
"The third difference is that virtual worlds are inherently social spaces and are obviously global - 3D musical installations in them often explore this interaction with great effect. RjDj is currently a quite intimate personal headphone experience, however it is far more inclusive of your surroundings than conventional mp3 music. In the future though, we plan that RjDj experiences will become social too.
"I recently chatted with Mark Kingdon about how Second Life is well positioned, in terms of social capabilities and global reach, to deliver 3D reactive music places. Indeed the representation of a number of major artists have approached me to scope out the possibilities of creating large scale interactive musical SL sims - delivering reactive music to groups of people. However the architecture of SL, with the currently doesn't deliver the level of performance in audio handling that those projects would require. The opening up of the LL Media API is a golden opportunity for these capabilities to be extended. I hope that some third party realises the potential of this development work, and undertakes it.
Video of Chouchou's "Babel" project
"Meanwhile, from the Resident end, great work is being done pushing the possibilities of delivering reactive but more song-based music through virtual worlds by people such as Chouchou, with their Babel installation. This work also helps demonstrate the possibilities to labels.
"I hope that as reactive music, and the concept of music as software becomes more mainstream, it will spread across many platforms and into many spaces - some of which will be physical and some virtual."
Photos courtesy Banjo/Thomas.
Dizzy is the MAN. Man.
Posted by: ColeMarie Soleil | Monday, October 05, 2009 at 04:12 PM