Using Unity3D, a studio called Esimple recently figured out how to create a Kinect hack that captures human movement and translates it to an avatar in real time. Watch, and be sure to stay around for the :45 mark, when we also see some (rudimentary) human-avatar interaction with dynamic 3D objects:
Unity3D And Microsoft Kinect? Hell Yeah! from Esimple on Vimeo
Pretty impressive. There have been a number of hacks connecting Second Life to Kinect, but none work like this -- instead, as with this USC version, a human gesture merely triggers a pre-existing avatar animation. The developer of the Linden Lab version that I blogged about yesterday argues that's the better way to go: with dynamic one-to-one motion capture, Philippe Bossut argued, "[o]ne can fall into the Uncanny Valley in no time". More than that, however, there's the technical difficulty of even making this possible in Second Life; for starters, it would require reviving the avatar puppeteering project the company abandoned a couple of years ago. In the meantime, this Unity3D version is already (relatively) operational.
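To make that distinction concrete, here's a minimal sketch in plain Python (hypothetical names, not the Esimple or USC code) of the two approaches: one-to-one retargeting, where every tracked Kinect joint drives the matching avatar bone every frame, versus gesture triggering, where the skeleton data is only used to decide which pre-made animation to play.

    # Minimal sketch of the two approaches (hypothetical names, plain Python).

    # A Kinect-style skeleton frame: joint name -> (x, y, z) position in meters.
    FRAME = {
        "head": (0.0, 1.7, 0.0),
        "hand_left": (-0.3, 1.9, 0.2),   # raised above the head
        "hand_right": (0.3, 1.9, 0.2),
        "hip_center": (0.0, 1.0, 0.0),
    }

    class Avatar:
        def __init__(self):
            self.bones = {}          # bone name -> target position

        def set_bone_target(self, bone, position):
            self.bones[bone] = position

        def play_animation(self, name):
            print(f"playing canned animation: {name}")

    def apply_one_to_one(avatar, frame):
        """Dynamic capture: every tracked joint drives the matching bone, every frame.
        (Real systems convert positions into bone rotations; positions keep the sketch short.)"""
        for joint, position in frame.items():
            avatar.set_bone_target(joint, position)

    def apply_gesture_trigger(avatar, frame):
        """Gesture approach: classify the pose, then fire a pre-existing animation."""
        head_y = frame["head"][1]
        if frame["hand_left"][1] > head_y and frame["hand_right"][1] > head_y:
            avatar.play_animation("hands_up")   # one canned clip, not the raw motion

    avatar = Avatar()
    apply_one_to_one(avatar, FRAME)       # avatar mirrors the body directly
    apply_gesture_trigger(avatar, FRAME)  # avatar just plays "hands_up"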
But is dynamic motion capture that desirable?
Architect Keystone Bouchard thinks so, and sees it working in real-world applications. And I'm not convinced the Uncanny Valley is as much of a problem as Philippe thinks. Rather, I'd say the bigger concern is along the lines of, "Hey, your avatar is picking its nose, so now I know you are too."
In any event, even if Linden Lab loses the chance to be at the forefront of Kinect dynamism, I'm still pretty certain we'll see dynamic human-to-avatar movement in Second Life, if only because it's so desirable for the avatar-based adult entertainment industry. Or to paraphrase William Gibson, "Porn finds its own uses for things."
Who needs practicality? I want this in SL NOW.
Being able to do this is important simply because it's the first thing people will assume motion-captured avatars are about. If you have to stop them and say, "No, we don't actually do THAT, but...", then it's all just disappointment from there.
Posted by: Adeon Writer | Thursday, February 24, 2011 at 09:14 AM
It's fun, and I am sure it has useful applications -- I am just not sure which ones, though.
I don't want to pantomime in front of my PC all day, though -- and I guess I am not alone. This *might* make sense if the hardware were able to detect facial expressions. Avatar communications could be simplified that way.
A one-to-one translation of my body's movements to avatar movements is a rather clumsy and inefficient way to move my avatar around, IMHO -- simply because I can't imagine a lot of situations where I WANT to go through all the movements my avatar performs. I am much too lazy for that ;-)
Posted by: Markus Breuer | Thursday, February 24, 2011 at 09:54 AM
We need it, at least as a means to record animations!
I hope it will quickly be implemented in a viewer!
^_^
Posted by: DD Ra | Thursday, February 24, 2011 at 10:17 AM
This would go a long way toward improving the quality of SL machinima.
Posted by: Robustus Hax | Thursday, February 24, 2011 at 10:34 AM
I don't think it's a killer app for virtual sex without some exotic and rather expensive peripherals.
It is a killer app for killing. Bring on the Horde! Let them marvel upon my Florentine cross-slash as the cobbles grow slick with their foul blood!
Then a quick wash and time for some mesh-based dress design!
Posted by: Arcadia Codesmith | Thursday, February 24, 2011 at 11:45 AM
for some things it could be cool, but for a lot of stuff, i'd rather have a great dancing animation than dance in my studio and look like a dork :)
Posted by: callie cline | Thursday, February 24, 2011 at 12:14 PM
I think my ass would get smaller if I actually had to get out of my chair to move my avatar. Could be a good thing for many of us. But... what about those who can't walk at all in rl, but love SL for that reason alone... that their avatar can walk, run, jump, and of course fly... Would flying around the sim be removed because we can't do this in our rl? I'd hate to lose the "magic" of a virtual world due to real-life limitations of movement.
Posted by: Stephen Venkman | Thursday, February 24, 2011 at 12:16 PM
The two examples aren't analogous. SL is networked; this Unity example is fully local.
I would bet the reason the Linden and USC people went with canned animations is that sending the real-time Kinect data over the wire is stuttery and slow and cannot be cached. Look at the video -- even the local version is stuttery and slow... :) Networked would be way worse.
Canned animations play smoothly on the client once cached and are triggered by a single small network event.
Posted by: Raph | Thursday, February 24, 2011 at 12:21 PM
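For a rough sense of why that is, here's a back-of-the-envelope sketch in plain Python (assumed joint counts and rates, not measurements from any of these projects) comparing the two kinds of traffic Raph describes:

    # Back-of-the-envelope comparison (assumed figures) of the two approaches above.

    JOINTS = 20             # Kinect (v1) tracks roughly 20 skeleton joints
    FLOATS_PER_JOINT = 4    # one rotation quaternion per joint, ignoring positions
    BYTES_PER_FLOAT = 4
    FRAMES_PER_SECOND = 30  # Kinect's skeleton stream rate

    # Streaming raw motion: every joint, every frame, for as long as the avatar moves,
    # and none of it can be cached because it never repeats.
    streaming_bytes_per_second = JOINTS * FLOATS_PER_JOINT * BYTES_PER_FLOAT * FRAMES_PER_SECOND
    print(f"streaming: ~{streaming_bytes_per_second} bytes/sec per avatar, before protocol overhead")

    # Canned animation: one small trigger message; the clip itself is cached on the
    # client and plays back locally at full frame rate.
    trigger_message = b"PLAY_ANIM wave_01"
    print(f"canned trigger: {len(trigger_message)} bytes, sent once")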
Well, one other person here mentioned using this for motion capture to create animation files, and that would likely be one thing this would be used a lot for. One gripe I have with QAvimator sometimes is when I'm trying to create certain complex poses... where I run headlong into the complete *lack* of inverse kinematics (where you drag the hand one way, and the rest of the arm and then the shoulders follow), making things reaaallllly tedious to get that pose right. I stand, or squat, or something in the position I want to get the av into, and then try to figure out how to get all the body parts into the same place in that window... and there are times when, in frustration, I just say, "Gee, I just want the av to be in the position my body is in! Is that too much to ask?!? I *really* wish there was some dirt-cheap motion-capture doohickey out there I could use..."
Well, now there is. Or, at least, now there's one a'comin'.
Posted by: Nathan Adored | Thursday, February 24, 2011 at 01:59 PM
As long as someone is also working on the head-mounted display.
Posted by: Nisaa Genira | Thursday, February 24, 2011 at 03:23 PM
Increasingly I'm of the opinion that real-time streaming of your exact body and face is a red herring. After all, even in real life a lot of the time you trigger 'animations': when you smile, you don't think about all the many muscles involved and exactly how to move each of them -- you have a range of smiles, and when you walk, same thing. However, if Kinect is to trigger an animation in my AV, it would be nice if it were 'my' animation, tuned to my AV and reflecting 'me' (if I'm not role-playing someone else...). So using the Kinect for MoCap seems a brilliant approach -- and lo and behold, today I see this: http://rock-vacirca.blogspot.com/2011/02/creating-sl-animations-using-kinect.html
Posted by: NeilC | Friday, February 25, 2011 at 12:11 AM
Personally, Hamlet, I think it doesn't matter. This sort of setup is far more useful for capturing and creating animations... than for day-to-day use: socializing, shopping, etc.
Posted by: CronoCloud Creeggan | Friday, February 25, 2011 at 05:31 AM
There's no reason both systems can't exist side-by-side. Options are a good thing, not a bad thing.
Posted by: Arcadia Codesmith | Friday, February 25, 2011 at 06:08 AM
I want human-to-avatar movement very much. For me, this represents the interface crossover that will bring VWs out of the niche market and into the mainstream. This, and holograms transmitted over the web. :-)
It is also a terrific advance for machinima.
It would be really nice if the camera could also render in-world the physical space one is standing in. Being able to "build" on top of that real-space render would make things even more flexible and exciting!
Posted by: LifeFactory Writer | Sunday, February 27, 2011 at 01:43 PM
So, when could/will this ability on Kinect be available?
Posted by: Semperg | Tuesday, July 10, 2012 at 09:17 AM