Much as Nettrice Gaskins was creating AI-based visual art years before Midjourney and the like, adaptive music/sound artist Robert Thomas was leveraging machine learning throughout the last decade, including this amazing 2019 project for the LA Philharmonic:
"Parag Mital made a browser which let [us] search through hundreds of terabytes of material, to be able to hear different bits of different performances," he says. (Google Arts and Culture provided the technical backing.) Robert and his collaborators then took those clips and created a wealth of new sound files, then ran those through an audio analysis.
"[We] trained machine learning processes on that audio, and then tried to get it to generate new music." So for instance, in the soundtrack, you'll hear a segment from Stravinsky's Rite of Spring. But then, "it turns into a machine learning hallucination of the melody which goes somewhere else that Stravinsky didn't write."
Read about it here and watch the official video for the project below!
Again, I seriously think the latest AI hype wave will feel less like hype when the new generative AI platforms start producing ambitious works of art like this one.