Friday, April 15, 2011


Jeffrey Ventrella on Kinect & Avatar Puppeteering's Future


Jeffrey Ventrella has an interesting post on the future of avatar puppeteering in the Kinect era, and he's well-suited to write it: as Ventrella Linden, he created Linden Lab's puppeteering system, which, alas, was never incorporated into the official viewer. Jeffrey has a new book on the topic, Virtual Body Language, and he connects his thoughts there with the recent introduction of Kinect to Second Life avatar interaction, noting that "direct gestural interfaces are not for everyone, and... not all the time! Also, some people have physical disabilities, and so they cannot 'be themselves' gesturally." I'm also struck by this prediction, which hadn't quite occurred to me, but now that he says so, seems totally obvious:

[E]ventually we will have Kinect-like devices installed everywhere – in our homes, our business offices, even our cars. Public environments will be installed with the equivalent of the Vicon motion capture studio. Natural body language will be continually sucked into multiple ubiquitous computer input devices. They will watch our every move.

Read it all here, and check out more on his book here.





Adeon Writer

Is it possible to develop and sell depth-camera based gadgets without infringing on Microsoft's Kinect patent?


Adeon - you can do the same sorts of things without the special hardware if you're smart about it:


I can hardly wait for the day when I'll have to say "Do what I say, not what I do!" to a computer... :-/


qarl - Predator is very very impressive for sure - it isn't obvious to me yet though if you could get reliable depth info out of it, which is one of the big features of Kinect.

Arcadia Codesmith

I'm a fan of full-body puppeteering for the same kinds of apps that are compelling on the Wii, Kinect and PS3 Move: games, sports and expressive art.

For general social use, I'd like to see a webcam interface capable of "reading" a seated, typing/mousing figure and translating it into an active standing figure -- without conscious direction by the user.

Rather than waggling a finger to trigger a head nod, the system should trigger a nod when you naturally nod in response to something. It should move your avatar in response to slight leans forward or to the sides. It might even change the camera angle in response to eye movement.

The less the end user has to think about it, the more natural, intuitive, subtle and powerful the interface becomes.
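The kind of unconscious mapping Arcadia describes could, in principle, be quite simple: watch a tracked signal (say, head pitch from a webcam pose tracker) and fire an avatar gesture when a natural motion completes. Here's a minimal, hypothetical sketch in Python — the threshold values, the `detect_nods` function, and the idea of feeding it raw pitch angles are all illustrative assumptions, not any actual Second Life or Kinect API:

```python
# Hypothetical sketch: detect a natural head nod from tracked head-pitch
# samples (in degrees, positive = looking down) so an avatar gesture can
# fire without the user consciously triggering it.

NOD_THRESHOLD = 12.0  # assumed: downward pitch that counts as a nod dip
RETURN_BAND = 4.0     # assumed: pitch must return near neutral to finish

def detect_nods(pitch_samples):
    """Return the sample indices at which a completed nod is detected."""
    nods = []
    nodding = False
    for i, pitch in enumerate(pitch_samples):
        if not nodding and pitch >= NOD_THRESHOLD:
            nodding = True   # head dipped past the threshold
        elif nodding and abs(pitch) <= RETURN_BAND:
            nodding = False  # head came back to neutral: one full nod
            nods.append(i)   # here you'd play the avatar's nod animation
    return nods

# A short stream of pitch readings: neutral, a dip, back to neutral.
stream = [0.0, 2.0, 8.0, 15.0, 14.0, 6.0, 1.0, 0.0]
print(detect_nods(stream))  # → [6]: the nod completes at sample 6
```

A real system would smooth the signal and distinguish nods from, say, glancing down at the keyboard, but the principle — passive observation mapped to expressive avatar output — is the same.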
