This is an awesome hack that links Kinect to Second Life in an incredibly innovative way: Kinect reads the head movement and facial expressions of someone in real life, and those movements are translated into data that dynamically alters a face sculpture in SL composed of thousands of cubes. Its creator, an artist known in SL as Glyph Graves, narrates the video above, featuring a friend who volunteered to demonstrate. This mixed reality sculpture has already been avidly blogged by a number of SLers, starting with Chestnut Rau, but I begged Glyph to shoot this video so we could see what was happening on the other side.
“It seemed like an obvious thing to do,” Glyph tells me, when I ask what inspired this project. “[Kinect] does face depth. I thought I could do it in SL. So I did.” The reaction in-world when he’s shown this to avatars has been pretty stellar: “Shock, amazement, some disbelief. Part of it is the potential it suggests for SL.”
To make it possible, he had to create a fairly complex interaction between Kinect and SL. Read on:
“There are four programs: one on my computer, one on my hosted site, and two different ones in SL. I learned C# for this project (and sockets, about ports, etc.)” He also used Microsoft’s software development kit for Kinect, but heavily modified it so he could extract the 3D facial data and add server code.
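To give a rough sense of how such a pipeline might hang together, here’s a minimal sketch of the PC-side step only. It is not Glyph’s code (which hasn’t been released yet): the host, port, grid size, and update rate are all invented for illustration, and the Kinect capture is stubbed out as GetDepthFrame. The idea is simply to grab a depth frame, downsample it to one value per cube, and push each frame to the hosted relay over a TCP socket.

```csharp
// Minimal sketch only, not Glyph's code. Host, port, grid size,
// and frame rate are invented; GetDepthFrame() stands in for
// whatever the modified Kinect SDK delivers.
using System.Net.Sockets;

class FaceRelay
{
    const string Host = "example-relay.invalid"; // hypothetical hosted site
    const int Port = 9000;                       // hypothetical port
    const int GridW = 50, GridH = 50;            // ~2500 cells, one per cube

    static void Main()
    {
        using (var client = new TcpClient(Host, Port))
        using (var stream = client.GetStream())
        {
            while (true)
            {
                ushort[] depth = GetDepthFrame(320, 240); // stub for SDK output
                byte[] grid = Downsample(depth, 320, 240);
                stream.Write(grid, 0, grid.Length);       // one frame per write
                System.Threading.Thread.Sleep(100);       // ~10 frames/sec
            }
        }
    }

    // Average the depth over blocks, then squash each average to one
    // byte so a whole frame stays small enough to relay onward to SL.
    static byte[] Downsample(ushort[] depth, int w, int h)
    {
        var grid = new byte[GridW * GridH];
        int bw = w / GridW, bh = h / GridH; // block size (edges are cropped)
        for (int gy = 0; gy < GridH; gy++)
            for (int gx = 0; gx < GridW; gx++)
            {
                long sum = 0;
                for (int y = 0; y < bh; y++)
                    for (int x = 0; x < bw; x++)
                        sum += depth[(gy * bh + y) * w + (gx * bw + x)];
                // Crude scale: map the ~12-bit average down to 0..255.
                grid[gy * GridW + gx] = (byte)((sum / (bw * bh)) >> 4);
            }
        return grid;
    }

    static ushort[] GetDepthFrame(int w, int h)
    {
        // Placeholder: in the real project the depth data came from a
        // heavily modified build of Microsoft's Kinect SDK.
        return new ushort[w * h];
    }
}
```

On the SL side, LSL scripts can’t open raw sockets, which is presumably the point of the hosted middle layer: it can repackage these frames into the small HTTP responses that in-world scripts are able to request.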
“Microsoft released the SDK on June 17... took me until a little before the opening on July 7th, but then I had to learn C# and about server and client ports etc, and build a sim with a whole lot of other stuff as well.” (Coder Miki Gymnast helped him quite a bit in this effort.) Not bad for an evolutionary genetics scientist, which is Mr. Graves’ real life background.
The final result from an SL perspective looks like this, an eerie perturbation of reality into Second Life, as if someone were pressing his face into the veil that separates both worlds:
(Video by ColeMarie Soleil for her SL arts blog)
Here’s even cooler news: he’ll open source the code that makes this Kinect hack possible, so others can do projects like this, and perhaps even more ambitious ones. “Soon,” Glyph Graves tells me. “I want to finish a few more projects with it first... it was a lot of sleep lost making this, a large number of man hours. So I’ll do the projects I intend to do first.”
To get announcements of future demos, join the in-world group ArtNation. This weekend on Saturday and Sunday at 5pm and 2am Pacific (SLT), you can see Glyph demonstrate this in-world: Click here for a direct teleport.
Much thanks to Bettina Tizzy for the tip!
Glyph is really cutting edge on the virtual art scene. He has been linking up real world data with SL sculptural projects for a couple of years now.
The other part of that exhibition, where the sculptural trees are linked to the fluctuations of the world's rivers, causing ever-changing sounds, is well worth spending time listening to.
I second Hamlet's raving about this. Check it out!
Posted by: Scarp Godenot | Thursday, August 18, 2011 at 02:30 PM
I wonder if something like this could be adapted to manipulate an avatar mesh live or semi-live? I guess that's the obvious "Max Headroom" question, huh?
Posted by: Stroker | Thursday, August 18, 2011 at 02:55 PM
Great
Reminds me of the effect in the Nine Inch Nails vid for "Only"
http://www.youtube.com/watch?v=mDsqpeiTqg8&list=PLE3098CE97A17706C&index=142
I was just looking at a vid of a SIGGRAPH presentation showing Realtime Performance-Based Facial Animation the other day.
http://www.youtube.com/watch?v=8kbPhG3y8ts
Another person had already hacked the Kinect to do this.
http://www.youtube.com/watch?v=nYsqNnDA1l4&feature=related
It would be great to see this brought into SL facial animation. A boon for machinima makers.
Posted by: Connie Sec | Friday, August 19, 2011 at 01:44 AM
Minecraft version of the same thing, only not restricted by prims :) http://www.youtube.com/watch?v=x2mCDkqXki0
Posted by: Robustus Hax | Friday, August 19, 2011 at 08:08 AM
http://www.youtube.com/watch?v=O7itn47uys4
Just went there and made a video too, of the in-your-head experience. Kinda cool too, if it worked.
Posted by: Jjccc | Friday, August 19, 2011 at 10:26 AM
Nice! Snow Crash, here we come!
Posted by: rikomatic | Friday, August 19, 2011 at 12:46 PM
Thanks Hamlet, nice post, and thanks for the comment, Scarp!
I'd also like to thank Amase Levasseur, Chestnut Rau and Zachh Cale for providing the Art Screamer sim for the installation. Also Desdamoa
Just a few points:
Stroker: I'm not familiar with the viewer code, but it has occurred to me that using the Kinect to mesh with the appearance code to produce a static look-alike in the avatar may not be too hard. Updating to real-time changes is, I would think, a different story, but something that should be looked into.
Connie: Those are all nice pieces of work, and I would also add Nicolas Burrus's work with the Libfreenect drivers, but they are all made to work on a local computer. I'm not sure if you're familiar with the Second Life environment, but there are significant limitations on passing data into this environment, and, well, the computing power you have access to is the equivalent of a Commodore 64, not to mention display restrictions. The examples you gave, while great, are not really comparable.
Robustus: Yeah, the Minecraft thing is cool; Hamlet showed that to me a week ago, but as you say, they don't have to deal with 2500 prims. Personally I think, from a technical point of view, the hardest part was getting that many prims to work smoothly together while using the media texture only for those prims that are used in the face (YouTube is streamed through them; there's a bug in the V2 and Firestorm viewers, so it only works properly with Phoenix and, I think, Imprudence). As an aside, had fun the other night turning the neighbour's cat into ceiling cat ;)
Jjccc: It's a performance piece... I have to be online (sitting behind my desk) and running it. I'll be doing a few this weekend if you want to see it in action. Just IM me in-world or join the Art Nation group.
Glyph
Posted by: Glyph Graves | Friday, August 19, 2011 at 09:30 PM