Jeffrey Ventrella has an interesting post on the future of avatar puppeteering in the Kinect era, and he's well-suited to write it: As Ventrella Linden, he was the creator of Linden Lab's puppeteering system, which alas has never been incorporated into the official viewer. Jeffrey has a new book on the topic, Virtual Body Language, and he connects his thoughts there with the recent introduction of Kinect to Second Life avatar interaction, noting "direct gestural interfaces are not for everyone, and... not all the time! Also, some people have physical disabilities, and so they cannot 'be themselves' gesturally." I'm also struck by this prediction, which hadn't quite occurred to me, but now that he says it, it seems totally obvious:
[E]ventually we will have Kinect-like devices installed everywhere – in our homes, our business offices, even our cars. Public environments will be installed with the equivalent of the Vicon motion capture studio. Natural body language will be continually sucked into multiple ubiquitous computer input devices. They will watch our every move.
Read it all here, and check out more on his book here.
Is it possible to develop and sell depth-camera based gadgets without infringing on Microsoft's Kinect patent?
Posted by: Adeon Writer | Friday, April 15, 2011 at 11:01 AM
Adeon - you can do the same sorts of things without the special hardware if you're smart about it:
http://www.i-programmer.info/news/105-artificial-intelligence/2310-predator-better-than-kinect.html
Posted by: qarl | Friday, April 15, 2011 at 12:01 PM
I can hardly wait for the day when I'll have to say "Do what I say, not what I do!" to a computer... :-/
Posted by: Riisu | Friday, April 15, 2011 at 12:13 PM
qarl - Predator is very very impressive for sure - it isn't obvious to me yet though if you could get reliable depth info out of it, which is one of the big features of Kinect.
Posted by: NeilC | Saturday, April 16, 2011 at 04:24 AM
I'm a fan of full-body puppeteering for the same apps that are compelling with the Wii, Kinect and PS3 Move: games, sports and expressive art.
For general social use, I'd like to see a webcam interface capable of "reading" a seated, typing/mousing figure and translating it into an active standing figure -- without conscious direction by the user.
Rather than waggling a finger to trigger a head nod, the system should trigger a nod when you naturally nod in response to something. It should move your avatar in response to slight leans forward or to the sides. It might even change the camera angle in response to eye movement.
The less the end user has to think about it, the more natural, intuitive, subtle and powerful the interface becomes.
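[Editor's aside: the nod-trigger idea above could be sketched in code. This is a minimal, hypothetical example assuming a webcam tracker supplies a stream of head-pitch angles in degrees; the function name, thresholds, and sample values are all illustrative, not part of any real tracking API.]

```python
# Hypothetical sketch: detect a natural head nod from a stream of
# head-pitch angles (degrees, 0 = level, negative = looking down),
# as a seated-webcam tracker might supply them.
# A "nod" here is a quick downward dip past a threshold followed by
# a return toward level -- the avatar would then play a nod animation.

def detect_nod(pitches, dip_threshold=-10.0, return_threshold=-3.0):
    """Return True if the pitch samples contain a down-then-up nod."""
    dipped = False
    for pitch in pitches:
        if pitch <= dip_threshold:
            dipped = True          # head dipped down far enough
        elif dipped and pitch >= return_threshold:
            return True            # head came back up: count it as a nod
    return False

# Neutral, dip down past -10, back up toward level: a nod.
print(detect_nod([0.0, -4.0, -12.0, -11.0, -2.0, 0.5]))  # True
# A shallow bob that never crosses the dip threshold: no nod.
print(detect_nod([0.0, -4.0, -6.0, -5.0, 0.0]))          # False
```

The point of the sketch is Arcadia's: the user never issues a command, the system just watches the signal and reacts.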
Posted by: Arcadia Codesmith | Monday, April 18, 2011 at 08:32 AM