Want to interact in Second Life much the way actors in Avatar shot their scenes? Watch this video by Anton Bogdanovych (known as shaqq Korobase in SL), a researcher with the University of Western Sydney, demonstrating remarkably kickass motion capture software he's developing to work with Second Life:
"The hardware (suit) is a commercial product from XSens," Korobase tells me. "It's similar to those full body suits that were used in filming Avatar. The software (something we develop) is what will be unique about it." More after the break:
The software is being created as part of a larger educational project in Second Life, simulating the ancient city of Uruk. But as you can see, it's a compelling solution to Second Life's complex user interface, especially in the era of the Xbox 360's Kinect and other motion capture UIs. (Handsfree 3D, a somewhat similar project backed in 2008 by Linden Lab founding investor Mitch Kapor, has since been shelved.)
"At the moment it's just able to control the avatar walking direction," Bogdanovych allows, "but in the future we will enable full control of the avatar with the suit (e.g. every move of every part of the body of the person wearing the suit will be translated onto the avatar.) So one would be able to dance just by dancing in the suit, play sports, build things with hands, etc. You can see it as an advanced Wii console, where there is no hand held device and the entire body is the console."
Question is, when will this project see the light of day? "[I]t's hard to precisely predict for such a research project when will we have a final product," says Korobase. He estimates a good working prototype will be ready in roughly six months.
While I applaud any interface extensions of the SL UI, I can tell you from firsthand experience and considerable research and application that a mocap suit is limited to ONLY avatar movement controls and/or AO/Gesture manipulation. BVH animations or any real-time streaming format are NOT progressively uploaded to SL (yet).
Basically, you could get more "bang for your buck" by using an Xbox controller and not a 30-60K mocap suit.
Until such time as VWs incorporate real-time streaming of animation files (a la webcam, mocap, puppeteering), these appliances are simply expensive toys funded by education grants. I'm not complaining, mind you, since we sell and support all sectors =D
There are inexpensive systems coming to the marketplace that will significantly lower prosumer entry price points. The Gypsy 7, for instance:
http://www.animazoo.com/index.php/gypsy-7
However, representing them as an "interface" is specious at best. I am unaware of any SL/OpenSim client capability that will allow for direct joint manipulation of an avatar in real time. One could only speculate about the resource/lag hit the server would take with such an open port.
A "tool" would be a more accurate description. Quite an expensive tool at that!
Posted by: Stroker Serpentine | Monday, July 26, 2010 at 03:59 PM
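To make Stroker's point concrete: since SL won't accept a live joint stream, about all an external mocap rig can do today is recognize a pose and fire a pre-uploaded animation by name. Here's a rough, purely illustrative Python sketch of that workaround; every name and threshold in it is a hypothetical placeholder, not any real SDK or the Uruk project's code.

```python
# Pre-uploaded SL animations the rig is allowed to trigger (hypothetical names).
CANNED_ANIMATIONS = {
    "arms_raised": "cheer_anim",
    "leaning_forward": "bow_anim",
    "neutral": "stand_anim",
}

def classify_pose(joint_positions):
    """Crudely bucket a captured frame into one of the canned poses."""
    head_y = joint_positions["head"][1]
    left_hand_y = joint_positions["left_hand"][1]
    right_hand_y = joint_positions["right_hand"][1]
    if left_hand_y > head_y and right_hand_y > head_y:
        return "arms_raised"
    if joint_positions["chest"][2] > 0.3:  # torso pitched forward on the z axis
        return "leaning_forward"
    return "neutral"

def frame_to_trigger(joint_positions):
    """Return the name of the SL animation to start for this mocap frame.
    Actually starting it would go through an in-world script or viewer hook;
    the suit's raw joint data itself never reaches SL."""
    return CANNED_ANIMATIONS[classify_pose(joint_positions)]

# Example frame: joint name -> (x, y, z) in metres, made-up numbers.
frame = {
    "head": (0.0, 1.7, 0.0),
    "chest": (0.0, 1.4, 0.1),
    "left_hand": (0.2, 1.9, 0.0),
    "right_hand": (-0.2, 1.9, 0.0),
}
print(frame_to_trigger(frame))  # -> "cheer_anim"
```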
The company that makes the logic controller for Microsoft Kinect is making a PC-compatible version soon, and it uses an open API. Therefore it will be a matter of someone wanting to develop the client/plugin for this controller when it comes out. No mocap suits needed. Affordable to consumers.
I look forward to being able to construct assemblies with my hands. The education and training use cases for this are staggering.
Posted by: Ann Otoole InSL | Monday, July 26, 2010 at 04:34 PM
I concede I am lacking bumping ugly animation and compurgator knowledge, but I think it's pretty damn cool.
Posted by: Adric Antfarm | Monday, July 26, 2010 at 05:52 PM
Well... just wait, I guess, and you'll see real-time motion streaming happening pretty soon. It is certainly possible and we know how to do it.
I agree that our suit is quite an expensive alternative right now. Its price will certainly drop in the future, though. We're just lucky to have one of those in our lab, and why would we use something else when this one is top-of-the-line equipment and very precise as well :) I was looking into the Gyro before we ordered our MVN suit, and the animation designer who consulted for us said that the Gyro is not a good option, as a lot of postprocessing is required and the data is a little messy. With the MVN suit, in contrast (if properly calibrated), we never had to do any postprocessing at all.
Moreover, the technology we're developing is not limited to this suit only. Any motion capture equipment can be connected to it. There are cheap alternatives already available on the market with 3D cameras and even with a number of simple cameras (like http://www.youtube.com/watch?v=dTisU4dibSc).
Posted by: shaqq | Monday, July 26, 2010 at 06:10 PM
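shaqq's point about the translation layer being device-agnostic suggests an architecture roughly like the following Python sketch: any source that can emit per-joint rotations plugs into the same avatar mapping. The class and method names are hypothetical illustrations, not the actual Uruk project code.

```python
from abc import ABC, abstractmethod

class MocapSource(ABC):
    """Anything that can emit a frame of joint rotations."""
    @abstractmethod
    def read_frame(self) -> dict[str, tuple[float, float, float]]:
        """Return joint name -> (pitch, yaw, roll) in degrees."""

class XsensSuitSource(MocapSource):
    def read_frame(self):
        # Placeholder: a real implementation would poll the suit's SDK.
        return {"hip": (0.0, 90.0, 0.0), "l_knee": (15.0, 0.0, 0.0)}

class CameraSource(MocapSource):
    def read_frame(self):
        # Placeholder: a 3D-camera rig would estimate the same joints.
        return {"hip": (0.0, 88.0, 0.0), "l_knee": (12.0, 0.0, 0.0)}

def map_to_avatar(source: MocapSource, send_to_viewer):
    """Translate whatever device is plugged in onto the avatar's bones."""
    frame = source.read_frame()
    for joint, rotation in frame.items():
        send_to_viewer(joint, rotation)

# Swapping the suit for a camera rig changes nothing downstream.
map_to_avatar(XsensSuitSource(), lambda joint, rot: print(joint, rot))
```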
@shaqq
"At the moment it's just able to control the avatar walking direction," Bogdanovych allows, "but in the future we will enable full control of the avatar with the suit (e.g. every move of every part of the body of the person wearing the suit will be translated onto the avatar.)"
I believe this statement to be somewhat misleading. While one can certainly interface with movement controls and UI manipulation, currently there is no real-time functionality within the SL client-server architecture that will allow for direct skeletal manipulation of the SL avatar.
Correct me if I am wrong.
Posted by: Stroker Serpentine | Monday, July 26, 2010 at 07:02 PM
@shaqq
Please do not misconstrue my comments as contrarian. I was one of the most ardent supporters of "puppeteering" and the physical avatar when the initial development was spearheaded by Cube and Ventrella Linden. The source code was posted to SVN and subsequently shelved.
To my knowledge there are no current plans by LL to incorporate the physical avatar source code into any public release client now or in the future. There may, however, be an OpenSim application or interest.
See:
http://wiki.secondlife.com/w/index.php?title=Puppeteering&oldid=639823
http://avatarpuppeteering.com/
If your research and/or progress indeed motivates LL to re-visit the technology I would be most impressed and supportive.
However, given recent budget/staff cuts, I am not very optimistic.
Posted by: Stroker Serpentine | Monday, July 26, 2010 at 07:25 PM
1:1 motion, live puppetry of an avatar with a full-body suit would be one step closer to Snow Crash. :)
Posted by: Adeon Writer | Monday, July 26, 2010 at 08:10 PM
Second Life is completely open source now, so how can anything development related be a problem?
Posted by: shaqq | Monday, July 26, 2010 at 08:15 PM
I don't know if it's just me, but I find the idea of actually moving your own body to control your avatar a bit "primitive" compared to a method by which something akin to "neuroimpulses" would be used to control an avatar.
Back in the '80s, there was this B-movie called Robojox--it's basically like the mecha you see in Battletech or Mechwarrior, but the main difference was that the writers of the movie approached the idea of controlling the gigantic robot by "puppeteering" it inside the cockpit (i.e. the guy stands on this shiny, slippery metal plate and actually uses his shoe movements to move his robot/mech forward, kinda like Michael Jackson moonwalking, and his own hand/arm movements also control the robot's upper limb movements). The result was a somewhat awkward, silly, and possibly even fatiguing set of actions for the mech pilot.
In contrast, the Battletech universe simply employed a "neurohelmet" system to control the robot/mech--i.e. impulses from the brain would control the lower and upper limb movements of the robot--a more elegant way, IMHO, to control a robot or an avatar (a method which I believe has already been documented in this very blog several months/years before).
-RODION
Posted by: Rodion Resistance | Monday, July 26, 2010 at 10:55 PM
2 RODION:
I know a person who is working on an interface like the one you describe. At this stage what you suggest is impossible, and there are also potential health risks involved. Measuring brain activity and trying to decipher the signal to understand whether the user wants to move the avatar left or right is possible, with some minor errors and quite some training required on the user's side. Something simple like playing Pac-Man in this way can certainly be done... but something as complex as dancing and all the other types of body movement... I don't think we are there yet.
I don't think there is any rock solid evidence in regards to the health risks of wearing a neurohelmet, but I would certainly refuse to wear one on a regular basis. As far as I know those helmets use a technique similar to MRI - a process that involves passing a strong magnetic field through the head. This can't be good for you.
Posted by: shaqq | Monday, July 26, 2010 at 11:13 PM
Actually, I'm wrong... the technology is EEG rather than MRI, so the health risk might not be there.
Posted by: shaqq | Monday, July 26, 2010 at 11:23 PM
The Lawnmower Man is in your head now, Jake. There's no escape... Ever.
http://www.virtualworldlets.net/Resources/Hosted/Resource.php?Name=LawnmowerMan
Posted by: Little Lost Linden | Monday, July 26, 2010 at 11:52 PM
OK, so what will win to replace a mouse and keyboard: "rigging up" in an expensive mocap system, or a set-top box (a la Kinect) that costs a couple hundred bucks or less?
I'll go with the Kinect-style system because, in a couple of years, it will be ubiquitous and the de facto standard for advanced UI development.
People who want to develop the viewer for these things should be discussing them with PrimeSense so they can be ready when the units are getting ready to ship.
I.e., we don't need full body control (not possible in SL anytime soon anyway). We do need hand motion control for the UI, and that alone would be groundbreaking enough for the edu and training simulation market to make a vast difference.
Posted by: Ann Otoole InSL | Tuesday, July 27, 2010 at 04:10 AM
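Ann's hands-only scenario is easier to picture with a small sketch: map a few recognized hand gestures onto existing viewer actions instead of trying to drive the whole skeleton. The gesture names and viewer hooks below are assumptions for illustration only, written in Python.

```python
# Hypothetical bindings from detected hand gestures to existing viewer actions.
UI_BINDINGS = {
    "pinch": "select_object",
    "open_palm_push": "camera_zoom_in",
    "open_palm_pull": "camera_zoom_out",
    "swipe_left": "camera_orbit_left",
}

def handle_gesture(gesture_name, dispatch):
    """Forward a recognized hand gesture to the matching viewer action."""
    action = UI_BINDINGS.get(gesture_name)
    if action is not None:
        dispatch(action)

# Usage: whatever the camera SDK reports as a gesture gets routed to the UI.
handle_gesture("pinch", dispatch=print)  # -> select_object
```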
The quality of motion is based on the actual number of joints interpreted -- at this point in time SL avis have no finger or toe joints, and the remaining joints have limited motion. MOCAPS CAN be uploaded to SL, and I have successfully loaded a number of them -- however, the extra joints have to be removed from the BVH to work properly in SL. The end result is better than most SL animations, but a lot of the fluidity is lost. I'm surprised at Stroker's statements -- AOs use BVH files just like any other animation in SL -- didn't he know that?
Bottom line is all the additional joints required for high-quality MOCAP movement in SL would probably double the lag and the strain on the servers, so it just ain't gonna happen.
Posted by: Ajax Manatiso | Tuesday, July 27, 2010 at 06:21 AM
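For readers wondering what "removing the extra joints" from a mocap BVH involves, here's a minimal Python sketch that just reports which joints in a capture fall outside SL's skeleton. The bone list is a partial, illustrative subset rather than the full official skeleton, and a real clean-up would also have to drop those joints' channel columns from the file's MOTION section, which this sketch does not attempt.

```python
import re

# Partial, illustrative subset of the SL avatar's BVH bone names.
SL_BONES = {
    "hip", "abdomen", "chest", "neck", "head",
    "lCollar", "lShldr", "lForeArm", "lHand",
    "rCollar", "rShldr", "rForeArm", "rHand",
    "lThigh", "lShin", "lFoot",
    "rThigh", "rShin", "rFoot",
}

def extra_joints(bvh_path):
    """Return joint names present in the BVH but unknown to the SL skeleton."""
    with open(bvh_path) as f:
        hierarchy = f.read().split("MOTION")[0]  # keep only the HIERARCHY part
    joints = re.findall(r"(?:ROOT|JOINT)\s+(\S+)", hierarchy)
    return [j for j in joints if j not in SL_BONES]

# Usage: list what a mocap export would need stripped before SL upload.
# print(extra_joints("capture.bvh"))
```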
Who said anything about high quality mocap? Controlling the limited number of joints available in SL would be just peachy.
But the best we could do with the state of the art would be to trigger animations with the mocap interface. Primitive.
On the other hand, if a PC version of Kinect can drive mocap software, the barriers to entry in the SL animation market would fall through the floor.
And to be honest, full-body control is a LOT like voice, in terms of breaking immersion. If you're an average graceless human being, your avatar's posture and gestures will reflect that.
I expect it would be a fantastic advance for specific applications (SL's ballet companies rejoice), but not much employed in day-to-day use. Too bad; we could all use the exercise.
Posted by: Arcadia Codesmith | Tuesday, July 27, 2010 at 06:59 AM
If there is any piece of software (I know the hazard of calling Second Life a "game") with more user "experts" in just about every area than Second Life, I've sure as hell never seen it.
Posted by: Adric Antfarm | Tuesday, July 27, 2010 at 09:00 AM
Very cool research. Ultimately, I think the ability to instantly map your physical-world body language and facial expressions onto your avatar will be a game-changer.
I spoke about this idea at length at a recent keynote I gave in Denmark: http://worlds.ruc.dk/archives/2758
"To condense fact from the vapor of nuance" is the key.
Posted by: John "Pathfinder" Lester | Tuesday, July 27, 2010 at 11:42 AM
I'm sure people that use wheelchairs or other equipment will be thrilled to see the metaverse discard them.
Posted by: Ann Otoole InSL | Tuesday, July 27, 2010 at 03:43 PM
Ann, you shouldn't treat science and technology as something that aims to leave someone behind. Nobody is going to prohibit using a keyboard and mouse. Moreover, the technology mentioned above by RODION is being developed with the goal of facilitating the use of computer systems by disabled people (even the completely paralysed). Nobody will be left behind! What I'm doing is just an attempt to force those who can walk and move to finally stand up from the couch and do some exercise.
Posted by: shaqq | Tuesday, July 27, 2010 at 04:50 PM
To my mind, the biggest challenge in full-body avatar control is the lack of kinetic feedback. We've seen systems to "touch" things, even "hug" people, but it's hard to envision with the current state of technology a consumer-grade system that would allow the user to be lifted or dipped in a ballroom dance, or permit a player with a virtual sword to have a blow blocked by the opponent's shield and have the momentum of the controller be checked in mid-swing.
You can do fun and amazing things with gyroscopes in this regard, but not quite enough.
I'm very sorry to say that by the time we have full-body control with feedback sufficient for an immersive experience, many of us are going to be moving very slowly and cautiously. It'd be embarrassing if grandma or grandpa broke a hip playing Street Fighter XXXVII.
But I dearly hope I'm wrong. I have plans for my holosuite when it finally hits the market.
Posted by: Arcadia Codesmith | Wednesday, July 28, 2010 at 07:13 AM
So let's say that this can't/won't be done in second life any time soon.
Have the naysayers forgotten the existence of OpenSim and open source viewers such as Imprudence?
This technology doesn't have to be implemented on Linden Lab's version of Second Life to become part of the experience; OpenSim will do just as well.
Posted by: bodzette Coignet | Friday, August 20, 2010 at 11:58 AM