
Friday, February 19, 2016




Nah, it isn't there yet for me.

There is no corresponding stretch in the face to indicate HOW the expression was made. Expressions on real humans use the FULL BODY. We read much more than just the triangle made by the eyes, nose and mouth.

Computer animation just animates that triangle and nothing else. This is why it repels people, and why they avoid computerized imagery.

And also note: really successful computerized characters only work when they have real actors underneath the digital suits. Hence the wonder of all of Andy Serkis' animated mime work.

Real Burger

I'm surprised you didn't mention Project Bento from Linden Lab, currently in the works (and in beta testing on Aditi), which will add 30 bones for facial expressions: https://community.secondlife.com/t5/Featured-News/Introducing-Project-Bento-New-Bones-Added-to-Second-Life-Avatar/ba-p/2987206

At the moment some good creators are working on mesh head animations and expressions, like Catwa, which you are showing, but also Lelutka, TMP, etc.

The question now is: will these mesh heads be compatible with the new Bento skeleton? Project Bento will allow animation makers to create very smooth facial expression animation overriders at 30 or 60 fps, with mesh head deformations rendered in real time. I suppose Bento will also give us the ability to sculpt mesh heads with sliders, the way we usually do with classic avatars.

I also suppose that Project Bento will bring an open-source default mesh avatar, available for everyone to download and modify as the new standard avatar. So I am not sure the current mesh heads will last, unless their creators update them.


I'd call this a step towards making completely lifeless avatar expressions a little less lifeless, but it's nowhere near the uncanny valley, let alone the believable realism on the other side.
Consciously pay attention just once to all your small, quick movements of blinking, squinting, eyebrows, and gaze direction while doing any mundane task around your house, and you'll see just how far from the uncanny valley this still is. No 'click HUD' with timed animations or scripted sequences is going to get there. Facial recognition with real-time point translation might, but the hardware and software needed for that are still in their infancy.
Yes, there are some cute, better-looking 'gestures' in what's shown here, but it remains no more useful in conveying a message or emotion to another person than the default LL gesture system.


Wagner James Au