SLKinect 2 is a pretty cool open source project being developed by folks at the Network Systems Laboratory of Tokyo University of Information Sciences to connect real-world motion captured by a Kinect to avatar movement in Second Life and OpenSim. Watch:
Fumi Iseki, a developer with NSL, recently told me more about the project: "These softwares are open source, so anyone can download, read [the] code and use [the] software freely." The code has almost no licensing restrictions, and they've also created code for third-party viewers to use. "We hope that third party viewers include our patch."
This is part of a larger SL/OpenSim-oriented project by Network Systems Laboratory:
"We are developing various tools for Second Life/OpenSim. Proxy Server for SL/OS, Money Server for OS and Web Interface for OS, etc. SLKinect is one of them." Click here for an English language site that explains much more.
Here's another demo, showing Kinect-based control of several avatars:
There's more tweaking to do, but ultimately, Fumi tells me, they want this software to enhance communication between avatars, serve as a machinima creation tool, and finally power a dance game like this one:
Which seems so goddamn cool. So SL developers, click here to find out more.
Wow I can't believe nobody else has commented. I for one am like holy cow ZOMG!
Posted by: bd | Wednesday, January 25, 2012 at 05:45 PM
I agree with the comment above, that's really amazing! Looking forward to more of such initiatives.
Posted by: Patty | Wednesday, January 25, 2012 at 06:25 PM
I love how pathetic he was at the dance game, and exhausted between rounds. It was funny and human. It showed how this technology would humanise avatars, and add a level of intimacy.
Posted by: Alan | Wednesday, January 25, 2012 at 07:40 PM
Thank you for your interest!!
Please use SLKinect.
We are developing the next version of SLKinect now. :-)
Thanks.
Posted by: Fumi.Iseki | Wednesday, January 25, 2012 at 08:11 PM
As I understand it, SLKinect works only locally or with a connection to a special animation relay server. If someone uses it in SL, the animations are only visible inside that user's viewer (or to a group which connects to the same animation relay server).
But altogether it's an exciting technology that is still in its early stages. It shows a possible expansion of SL, if Linden Lab is willing to have a look at this. Kinect for Windows goes on sale on February 1st, and LL should not fail to jump on this train.
Posted by: Maddy Gynoid | Wednesday, January 25, 2012 at 10:27 PM
Of course I ask myself: why isn't Linden working on such a thing? That's what they should be working on, instead of the crap they thought was important for the development of Second Life over the last 3 years (like viewers nobody wanted to have).
Posted by: Moni Duettmann | Thursday, January 26, 2012 at 01:35 AM
This is exciting stuff. I saw the first video a few months ago, and it's certainly something people have always dreamed of for virtual worlds. But there are big hurdles in turning a proof of concept into something really useful and easily usable by the majority of users. If all your body motions become motions of your AV, how will you walk around in the world, change camera angles, or drive the UI of the viewer? If you move to your keyboard, will your AV move (and with your legs and arms out of shot of the Kinect, what happens to them)? There are myriad fascinating UI challenges, but it is great work - I look forward to trying it.
Posted by: NeilC | Thursday, January 26, 2012 at 03:07 AM
I may very well buy a Kinect just for this.
I believe Niran recently added this to his test viewer, for anyone who wants to try it but can't compile their own viewer.
Posted by: Adeon Writer | Thursday, January 26, 2012 at 06:54 AM
This is awesome.
The biggest innovations in virtual worlds will come from universities banging away at things like this.
Which is why companies developing virtual world technology need to carefully cultivate relationships with academics and educators.
Posted by: Pathfinder | Thursday, January 26, 2012 at 07:01 AM
Excellent direction for both SL and OpenSim.
I sure hope LL is paying attention, will partner with these and other clever developers, and use their staff to fix existing issues such as sim crossings and lag. I just hope they don't try to roll out some clunky Kinect-enabled UI on their own.
But they will. Especially if breedable Gorean vampire bunnies can be linked in the marketing.
Posted by: Ignatius Onomatopoeia | Thursday, January 26, 2012 at 07:17 AM
I might be mis-remembering, but I thought there was a networked version of this now. Of course, Merov Linden, as he is now known, was working on this for Mitch Kapor before Microsoft bought the tech. I think the powers-that-be decided to forsake this angle for something webcam-oriented, presumably on the basis of the space requirements for the Kinect. Anyway, I shall be sorely tempted when the Windows Kinect emerges in a few days' time.
Posted by: Graham Mills | Thursday, January 26, 2012 at 01:45 PM
IF LL can take the local animation data and relay it to everyone else who can see it, then this will absolutely rock, and rock hard. Being able to do my own dance while the music plays would be so much fun. Sitting and talking to someone with my gestures matched in VR would be incredible. You could even get that holy grail so many people want: the ability to create BVH files right within the SL viewer itself.
Posted by: shockwave yareach | Thursday, January 26, 2012 at 02:37 PM
this is way way cool
Posted by: elizabeth (16) | Thursday, January 26, 2012 at 04:21 PM
This is awesome stuff.
One minor factual thingamy: Fumi's university is Tokyo University of Information Sciences, which is a university in Tokyo, but a different university to Tokyo University.
Posted by: Edmund Edgar | Saturday, January 28, 2012 at 01:52 AM
Fantastic. I remember "avatar Puppeteering" a while back and always thought there must be some way to use Kinect to drive it. At last.
Posted by: Connie Arida | Monday, January 30, 2012 at 05:36 AM
I notice no one has yet mentioned Snow Crash. Neal Stephenson cleverly sidestepped, for the most part, the question of how the interface for his Metaverse worked (unless I've forgotten). I always figured it would've required something like what the Kinect has turned out to offer.
Posted by: John Branch | Monday, January 30, 2012 at 02:32 PM
I'm not sure what hubby will think about me leaping around the living room instead of clicking buttons, but bring it on. :)
Posted by: Lynne Hand | Thursday, March 15, 2012 at 12:12 AM
Tokyo University of Information Sciences is really doing amazing things in software development. I am very impressed by the video above, where the man only moves his hand up and the software converts that into animation.
Posted by: Online university | Tuesday, September 18, 2012 at 07:03 AM