Aspiring filmmaker Tim Hannan has lately been experimenting with generative AI programs for an upcoming project he's developing. Hannan also has a background in shooting Second Life machinima back in the day, when he was creating videos for Metaverse TV. (Avatar name: Robustus Hax.) So he's been using raw SL footage as a visual guide for ComfyUI, a free, open source AI image generator.
"The original videos were taken in SL," he tells me, referring to this rather crude dinosaur (above). "The AI just needs a little guidance to do its thing, for example, to create depth."
Here's the footage after being processed through ComfyUI:
Much of the background remains, just with far more detail -- the Second Life footage becomes a rough sketch for the AI to fill out:
"You could drop a bunch of blank cubes around like skyscrapers and pan around it and tell the AI it’s a city and it will fill in the detail," as Hannan puts it. "You just need to give the AI enough of a nudge to get what you want. I can pan around in SL like I can’t in real life. I can film an avatar walking and make it anyone of similar stature size etc. Second Life can be great for long panning shots etc."
Here's another before/after example enhancing SL footage with ComfyUI: