Last time I talked with indie filmmaker Tim Hannan, he was feeding raw footage of Second Life dinosaurs into a gen AI platform to create pretty cool low-budget clips. That early attempt was rudimentary, but this new segment he just showed me (watch above) is several steps beyond it.
"Midjourney has a new feature where it will re-texture an image and it's pretty good at going from lower graphics to realistic," he tells me.
He then feeds the resultant footage into Hailuo, a gen AI video platform.
"Reoccurring locations / backgrounds are tough with AI," he allows.
Hence his use of multiple platforms:
"Any kind of tricks I can get to give me more angles/lighting. You can get Hailuo to do a time lapse and it will turn that alleyway into a midday scene. Go in, grab a good frame of the new lighting, and boom you have your location during the day and at night."
This is for Sam Atom, a superhero film he's creating, starring himself, but as rendered by AI. (Previous video above.)
"It’s probably going to be a big project I’m hoping to maybe pull off one short or a trailer and see if there is interest in funding for it," he explained to me last Summer. So basically the gen AI-enhanced footage here is a prototype / demo reel for potential investors.
Definitely looks like a handy approach for filmmakers working on a low budget. To get started, Tim recommends finding a gen AI community on Discord like this one to "get some advice based on what you want to do".
As a side note, this makes me wonder if Linden Lab and other metaverse platforms have updated their IP rights policies around machinima footage of user-created content in the era of generative AI. Historically, Linden's policy has roughly been that it's OK, as long as the filmmaker captures video of content in a publicly viewable area of Second Life that doesn't have any stated policy against screen/video capture. (Much the same way a filmmaker can use footage of a public New York street without having to get permission from every single building owner in the shot.) But what happens when that footage is then fed into a gen AI platform and put into a commercial project?
That's a whole other topic I'll get to at some point, along with an official comment from Linden Lab and others. But I'm told they're somewhat busy on another project at the moment.
Please support posts like these by buying Making a Metaverse That Matters and joining my Patreon!