Do you want an avatar with Uncanny Valley-crossing levels of realism and expression? Of course you do, of course you do. Let SL avatar master Strawberry Singh show you how it's done with the latest user-made avatar enhancements. In fact, you can watch her do just that in the video above.
Nah, it isn't there yet for me.
There is no corresponding stretch in the face to indicate HOW the expression was made. Expressions on real humans use the FULL BODY. We read much more than just the triangle made by the eyes, nose and mouth.
Computer animation just animates the triangle and nothing else. This is why it is repellent to people and why they avoid computerized imagery.
Also note that computerized characters really succeed only when they have real actors underneath the digital suits. Hence the wonder of all of Andy Serkis's animated mime work.
Posted by: melponeme_k | Friday, February 19, 2016 at 01:10 PM
I'm surprised you didn't mention Project Bento from Linden Lab, currently in the works (and in beta testing on Aditi), which will add 30 bones for facial expressions: https://community.secondlife.com/t5/Featured-News/Introducing-Project-Bento-New-Bones-Added-to-Second-Life-Avatar/ba-p/2987206
At the moment some good creators are working on mesh head animations and expressions, like Catwa, which you are showing, but also Lelutka, TMP, etc.
The question now is: will these mesh heads be compatible with the new Bento skeleton? Project Bento will allow animation makers to create very smooth facial expression animation overriders at 30 or 60 fps, with mesh head deformations rendered in real time. I suppose Bento will also give the ability to sculpt mesh heads with sliders, as we usually do with classic avatars.
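The idea of a bone-driven facial animation overrider sampled at a fixed frame rate can be sketched roughly as below. This is a minimal illustration, not Second Life's actual animation format or API: the jaw bone, keyframe times, and angles are all invented for the example.

```python
# Sketch of a bone-driven facial "animation overrider": keyframes give a
# rotation angle (degrees) for a hypothetical jaw bone, and we sample them
# at a fixed frame rate, interpolating linearly between keyframes.

def sample_keyframes(keyframes, t):
    """Linearly interpolate a sorted list of (time, angle) keyframes at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return a0 + u * (a1 - a0)

# "Open mouth" expression: jaw rotates from 0 to 20 degrees and back in 0.5 s.
jaw_keys = [(0.0, 0.0), (0.25, 20.0), (0.5, 0.0)]
fps = 30  # sampling rate; Bento-era overriders are discussed at 30 or 60 fps
frames = [sample_keyframes(jaw_keys, i / fps) for i in range(int(0.5 * fps) + 1)]
```

Each entry in `frames` is the jaw angle an engine would apply to the bone on that frame; the higher the fps, the smoother the deformation appears.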
I suppose that Project Bento will also bring an open-source default mesh avatar, available for everyone to download and modify as a new standard avatar. So I am not sure the current mesh heads will last, unless their creators update them.
Posted by: Real Burger | Friday, February 19, 2016 at 03:06 PM
I'd call this a step toward making completely lifeless avatar expressions a little less lifeless, but it's nowhere near the uncanny valley, let alone the believably realistic on the other side of it.
Consciously pay attention, just once, to all the small, quick movements of blinking, squinting, eyebrow raises, and gaze shifts you make while doing any mundane task around your house, and you'll see just how far from the uncanny valley this still is. No 'click HUD' with timed animations or scripted sequences is going to get there. Facial recognition with real-time point translation might, but the hardware and software needed for that are still in their infancy.
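The "real-time point translation" idea above can be sketched as follows: tracked 2D facial landmarks (from any face tracker; the point names and coordinates here are made up for illustration) are reduced each frame to an expression value, which could then drive an avatar bone or blendshape instead of playing canned animations.

```python
# Sketch: map hypothetical tracked facial landmarks to a mouth-openness
# value in [0, 1] that an avatar rig could consume every frame.

def mouth_openness(landmarks):
    """Ratio of lip gap to face height, scaled and clamped to [0, 1].
    landmarks: dict of named (x, y) points from a hypothetical tracker."""
    lip_gap = landmarks["lower_lip"][1] - landmarks["upper_lip"][1]
    face_h = landmarks["chin"][1] - landmarks["brow"][1]
    # The 4.0 scale factor is arbitrary; a real pipeline would calibrate it.
    return max(0.0, min(1.0, (lip_gap / face_h) * 4.0))

# One frame of (invented) tracker output, y increasing downward:
frame = {"brow": (0, 100), "chin": (0, 220),
         "upper_lip": (0, 180), "lower_lip": (0, 192)}
openness = mouth_openness(frame)
```

The point of the sketch is the data flow, not the formula: the avatar's mouth follows the wearer's actual face in real time, which is what timed HUD animations cannot do.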
Yes, there are some cute, better-looking 'gestures' in what's shown here, but they remain no more useful for conveying a message or emotion to another person than the default LL gesture system.
Posted by: Dana | Sunday, February 28, 2016 at 08:07 AM