Video shot during last night's WDCH preview by Brooke Erdmann
If you're anywhere near downtown Los Angeles between tonight and October 6, you absolutely have to experience WDCH DREAMS. I was lucky enough to attend a preview with my wife last night (almost bumping into the Mayor, who was also there) and we were utterly dazzled by this flood of images and music echoing off Frank Gehry's famed concert hall:
The Los Angeles Philharmonic has commissioned award-winning media artist Refik Anadol to create unprecedented, breathtaking, three-dimensional projections onto the steel exterior of Walt Disney Concert Hall to signal the commencement of the LA Phil’s 100-year anniversary celebrations. Free and open to the public, nightly performances are scheduled to occur every half hour, with the first performance at 7:30 p.m., and the last at 11:30 p.m., September 28 to October 6... WDCH Dreams’ accompanying soundtrack was created from hand-picked audio from the LA Phil’s archival recordings. Sound designers Robert Thomas and Kerim Karaoglu augmented these selections by using machine-learning algorithms to find similar performances recorded throughout the LA Phil’s history, creating a unique exploration of historic audio recordings.
Adaptive music/sound artist Robert Thomas, as longtime New World Notes readers know, has been creating ambitious audio experiences across many mediums and technologies over the years. (Full disclosure: Also a pal.) I first met him when he was creating voice-activated installations and location-reactive soundtracks in the virtual world of Second Life. After SL he went on to co-create projects with luminaries like Massive Attack, Imogen Heap, Ben Burtt (sound designer for the Star Wars movies), and Hans Zimmer. (Robert and Zimmer worked with director Chris Nolan to create an interactive audio app for his movie Inception.)
Based in London, Robert tells me how he and his collaborators used machine learning to take disparate performances and composers from the LA Phil's massive archive of recordings and sculpt them into a completely new soundtrack:
"Parag Mital made a browser which let [us] search through hundreds of terabytes of material, to be able to hear different bits of different performances," he says. (Google Arts and Culture provided the technical backing.) Robert and his collaborators then took those clips and created a wealth of new sound files, then ran those through an audio analysis.
"[We] trained machine learning processes on that audio, and then tried to get it to generate new music." So for instance, in the soundtrack, you'll hear a segment from Stravinsky's Rite of Spring. But then, "it turns into a machine learning hallucination of the melody which goes somewhere else that Stravinsky didn't write."
Another process for creating the soundtrack converted the selected music files into waveforms and put those through a further machine learning algorithm to generate new sound files. Yet another algorithm broke recordings down into many small fragments, then recomposed them into new forms. "And then we would have bits of Mahler try to resynthesize bits of Stravinsky... so you would make a Stravinsky passage out of an actual recording of Mahler."
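One way to picture that last step is concatenative resynthesis, sometimes called audio mosaicing: rebuild a target passage, fragment by fragment, out of the closest-matching fragments of a completely different recording. The sketch below assumes that approach, with placeholder file names; the team's actual algorithm isn't described in this much detail.

```python
# Minimal sketch of fragment-based resynthesis ("make Stravinsky out of
# Mahler"): rebuild a target passage using the closest-matching frames from a
# different source recording. File names are placeholders.
import numpy as np
import librosa
import soundfile as sf

FRAME = 4096  # fragment length in samples (~0.19 s at 22.05 kHz)

def frames(y, size=FRAME):
    """Cut audio into fixed-size, non-overlapping fragments."""
    n = len(y) // size
    return y[: n * size].reshape(n, size)

def features(frs, sr):
    """One MFCC vector per fragment, for similarity matching."""
    return np.array([librosa.feature.mfcc(y=f, sr=sr, n_mfcc=13).mean(axis=1)
                     for f in frs])

target, sr = librosa.load("stravinsky_excerpt.wav", sr=22050, mono=True)
source, _ = librosa.load("mahler_excerpt.wav", sr=22050, mono=True)

t_frames, s_frames = frames(target), frames(source)
t_feats, s_feats = features(t_frames, sr), features(s_frames, sr)

# For each target fragment, pick the most similar source fragment.
mosaic = [s_frames[np.argmin(np.linalg.norm(s_feats - tf, axis=1))]
          for tf in t_feats]

sf.write("mosaic.wav", np.concatenate(mosaic), sr)
```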
The results of all these processes were then arranged to synchronize with the 3D projection imagery created by Anadol.
As with his previous work, Robert says the challenge is creating a new musical aesthetic within a new technology:
"If you're doing it the virtual worlds of AR and VR, or you're using biometrics, if you do it before anyone else has done it, you can't rely on other people's processes or tools, you have to invent those yourself... it's very demanding and scary, but very rewarding when you're actually doing it."