
Tim Hannan, an indie creator who's been doing some interesting experiments blending Second Life video/images with gen AI (as here), recently sent me these two fairly alarming -- but actually not that surprising -- images.
The image on the left is a photo of a high-end basketball shoe. The image on the right is the same shoe in Second Life.
In this case, I do mean "same", in the sense that the virtual shoe shares the same underlying digital substrate as the RL shoe, so to speak. While it's possible to recreate the shoe in Blender by hand, the modeling here was done by an algorithm that ingested the 2D image and then extrapolated it into a 3D model.
"There is now a free open source [AI] model for image to 3D mesh that can be run locally," Tim tells me. "Just a quick sample, I don't know crap about 3D modeling etc." To do this, he uses an all-in-one AI program with a plug-in for the 3D conversion. "Give it an image, it returns a .gilb (?) [.glb] file, then I just load it in Blender, [then] convert to .DAE for Second Life."
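The Blender step of that pipeline, by the way, doesn't even require opening the Blender UI. Here's a rough sketch of how the .glb-to-.DAE conversion could be scripted from the command line using Blender's built-in glTF importer and Collada exporter (the file names are just hypothetical examples, and this assumes a Blender build with both add-ons enabled, as standard releases ship):

```shell
# Convert a GLB mesh to Collada (.dae) for Second Life upload,
# running Blender headless with an inline Python expression.
# "shoe.glb" / "shoe.dae" are placeholder file names.
blender --background --python-expr "
import bpy
# Start from an empty scene so only the imported mesh gets exported
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.import_scene.gltf(filepath='shoe.glb')
bpy.ops.wm.collada_export(filepath='shoe.dae')
"
```

Doing it through the Blender UI, as Tim describes, amounts to the same two operations: File > Import > glTF 2.0, then File > Export > Collada (.dae).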
This does not mean, I should quickly add, that an avatar right now can just come along and wear this virtual version of the shoe. It would first need to be rigged to the SL skeleton and optimized for the various mesh bodies. For SL fashion merchants, this technology might still be useful for prototype/showcase purposes. (I.e., "Hey fam, do you want a shoe like this in SL?")
For creators like Tim, who might use the shoe (or anything else that can be so modeled) as a non-interactive prop in his machinima, it's good enough to use right now: