Courtesy founder/lead engineer Samuel Matson (who last worked in Google's VR division), here's some video showing how Saga intends to create that virtual world. What you're looking at above was first captured by a drone in the real world, then converted into this virtual scene by an algorithmic process:
AI is allowed on Primfeed, but it needs to follow simple rules: if it's a vendor ad, you must use the AI tag; if it's not, the AI tag isn't needed. Your content can't be entirely AI-generated, even if it's trained on SL content -- as per the Terms of Service, it needs to be 80-90% SL content.
"[I]t's just a requirement to put the label render or AI on vendors ads," Primfeed owner/lead developer Lucas Rowley tells me. "My goal was to address customers' worries about the abuse of use of generative AI for vendors ads, and the tag AI was an answer to these worries." Rowley says such abuse has been a frequent complaint of new Primfeed users.
Scaniverse, a new-ish app from Niantic (they of Pokémon GO fame), is one of the cooler products I got to play with at AWE last week. It's a consumer-grade version of Gaussian splatting, an annoyingly obscure term for a new process that quickly converts real-world objects into usable 3D file data.
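To give a rough sense of what "usable 3D file data" means here: in the Gaussian splatting representation, a captured scene is stored as millions of tiny translucent blobs rather than a triangle mesh. Here's a hypothetical sketch of what one such splat records -- the field names are mine for illustration, not Scaniverse's actual file format:

```python
from dataclasses import dataclass

@dataclass
class Splat:
    """One Gaussian 'blob' in a splatted scene (illustrative fields only)."""
    position: tuple[float, float, float]         # center of the Gaussian in world space
    scale: tuple[float, float, float]            # per-axis extent (how stretched the blob is)
    rotation: tuple[float, float, float, float]  # quaternion orienting the blob
    color: tuple[float, float, float]            # base RGB (full formats use spherical harmonics)
    opacity: float                               # alpha used when blending splats while rendering

# A scene is just a huge list of these; the renderer sorts them by depth
# and alpha-blends them to the screen -- no triangle mesh required.
scene = [Splat((0.0, 0.0, 0.0), (0.1, 0.1, 0.1), (1.0, 0.0, 0.0, 0.0), (0.8, 0.2, 0.2), 0.9)]
print(len(scene), scene[0].opacity)
```

The appeal for apps like Scaniverse is that these blobs can be optimized directly from ordinary photos or video of an object, which is why capture feels so quick compared to traditional photogrammetry.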
What I like about Scaniverse's approach (watch above) is that it's actually fun, as if you're waving a magic wand that gradually renders a real-world object into the digital world.
Speaking of which, Niantic's demo team told me Scaniverse will be part of the company's upcoming "real world metaverse," announced a couple of years ago (though we haven't heard much about it lately). That's because it already comes with virtual world conversion features:
Aspiring filmmaker Tim Hannan has lately been experimenting with generative AI programs for an upcoming project he's developing. Hannan also has a background shooting Second Life machinima back in the day, when he was creating videos for Metaverse TV. (Avatar name: Robustus Hax.) So he's been using raw SL footage as a visual guide for ComfyUI, a free, open source AI image generator.
"The original videos were taken in SL," he tells me, referring to this rather crude dinosaur (above). "The AI just needs a little guidance to do its thing, for example, to create depth."
Here's the footage after being processed through ComfyUI:
Much of the background remains, just with much more detail -- the Second Life footage becomes a rough sketch for AI to fill out:
"You could drop a bunch of blank cubes around like skyscrapers and pan around it and tell the AI it’s a city and it will fill in the detail," as Hannan puts it. "You just need to give the AI enough of a nudge to get what you want. I can pan around in SL like I can’t in real life. I can film an avatar walking and make it anyone of similar stature size etc. Second Life can be great for long panning shots etc."
Here's another before/after example enhancing SL footage with ComfyUI:
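Hannan's workflow -- SL footage supplying the structure, AI filling in the detail -- can be approximated in a few lines. Below is a minimal numpy sketch (my own illustration, not his actual ComfyUI graph) of the "guidance" step he describes: a real pipeline would run a proper monocular depth estimator inside ComfyUI, but even a crude luminance map shows the idea of reducing a frame to structure for a ControlNet-style model to repaint:

```python
import numpy as np

def depth_proxy(frame: np.ndarray) -> np.ndarray:
    """Crude stand-in for a real monocular depth estimator.

    Takes an RGB frame (H, W, 3) with values 0..255 and returns a
    normalized single-channel map in 0..1 that a depth-conditioned
    image model could consume. A real ComfyUI graph would use an
    actual depth-estimation node; luminance is only a rough proxy.
    """
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    lo, hi = gray.min(), gray.max()
    if hi - lo < 1e-6:          # flat frame: no structure to extract
        return np.zeros(gray.shape, dtype=np.float32)
    return (gray - lo) / (hi - lo)

# Dummy 4x4 "SL frame": bright sky in the top half, dark ground below.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = 200
depth = depth_proxy(frame)
print(depth.shape, depth.min(), depth.max())
```

This is why "blank cubes as skyscrapers" works: the AI only needs the scene's rough geometry, and the conditioning map carries exactly that.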
Here's the result of last week's survey, asking how much (or how little) generative AI programs like Midjourney and Leonardo AI have penetrated the Second Life economy, specifically in advertisements or product displays for SL content that clearly leverages that technology. (Example below.)
Among 64 respondents, a majority (59%) say they "often" see AI-enhanced promotions, with nearly 30% saying they "sometimes" do. As someone who falls in the "sometimes" category, I'm somewhat surprised by these results.
More on these results later, but if the survey is accurate, it's safe to say the generative AI era of Second Life is here.
Watch above! There's quite a lot of procedural/automated graphics happening in the background, but unlike prompts, Tiny Glade puts the emphasis on user choice and creativity. The goal with the AI here is not to replace human creativity, but to make it easier and more delightful. It also seems robust enough that very dedicated people could, with enough practice, create especially amazing worlds.
Here's a survey asking how much (or how little) generative AI programs like Midjourney and Leonardo AI have penetrated the Second Life economy. (Take with a mouse/trackpad for best results.) In the last couple years I've started seeing SL images heavily upgraded with gen AI, but I'm specifically referring to advertisements or product displays for SL content that are clearly leveraging that technology.
As an example of what I mean, check out the image below (widely shared across SL-themed social media), showing a fashion item as it's advertised versus how it actually looks when worn in-world:
Here's a first look at Readyverse, the upcoming metaverse platform based on Ready Player One by Ernest Cline, with Cline himself helping guide development. What the teaser video actually depicts doesn't much resemble RP1 (neither the book nor the movie), beyond an avatar that resembles the lead character. What it does focus heavily on is generative AI-based creation, with very little user input in the world-creation process beyond text prompting.
Maybe there's more to the building than mere user prompts (little is explained on the website), but the trailer at least misses the intrinsic pleasure of user-generated building in an immersive 3D space itself -- something which on most platforms is easy to start (if difficult to master), doable at a basic level by anyone with a mouse or game controller. It also overlooks how the labor of the creation process confers a genuine feeling of ownership -- almost in the classic Lockean sense of mixing one's labor with the earth.
That isn't a showoff philosophical reference, but what I've seen time and again in reporting across many metaverse platforms: People truly feel they "own" the digital space they created, because they put in the time, tears, and personal creativity to bring it into digital being.
I keep seeing ridiculously bold predictions that artificial intelligence is going to wipe out massive numbers of jobs in the very near future -- within the next five years, even! -- but these forecasts seem to overlook a highly inconvenient fact: The US is currently enjoying historically low unemployment.
Actually, two inconvenient facts: Since the launch of leading generative AI programs like ChatGPT and Midjourney in mid-to-late 2022, the US unemployment rate has dropped even lower, largely remaining below 4%. (See above.) With these platforms on the market for nearly two years and quickly gaining mass adoption, shouldn't we already be seeing some kind of consistent increase in unemployment?
There is definitely turmoil and anxiety over AI replacement, and substantial job cuts may be happening in highly concentrated areas (more on that below), but to me that's a related but different topic. A new Challenger Report estimates the job losses due to AI to be scant:
When I recently wrote about all the impressive innovation happening on Wolf Grid, an OpenSim-based virtual world developed by a small team, I didn't even mention one of its coolest breakthroughs:
Wolf Grid has an option which uses ChatGPT to generate usable terrain with a prompt.
Watch above, with the money shot happening around 6:30 in. In the first demo, requesting a "cat" turns the land into a mountain range that's a picture of a cat (kinda sorta), while requesting a "maze" actually generates a working maze that springs up on the land.
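For context on what "generates a working maze" involves mechanically: OpenSim terrain is ultimately a heightfield, a grid of elevation values. Here's a minimal sketch (my own illustration, not Wolf Grid's actual code) of the final conversion step, mapping a grayscale image returned by the AI onto physics-enabled elevations:

```python
import numpy as np

def image_to_heightfield(gray: np.ndarray, max_height: float = 100.0) -> np.ndarray:
    """Map 0..255 pixel values onto terrain elevations in meters.

    Bright pixels become peaks (or maze walls), dark pixels become ground.
    A real pipeline would also resample the image to the region's terrain
    resolution and smooth it so avatars can physically walk the result.
    """
    return gray.astype(np.float32) / 255.0 * max_height

# Dummy 4x4 "AI image": a bright block in the middle becomes a plateau.
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 255
terrain = image_to_heightfield(img)
print(terrain[1, 1], terrain[0, 0])
```

Since the simulator treats the heightfield as collidable ground, anything drawn this way -- a cat-shaped mountain range, maze walls -- is automatically something avatars can climb and walk through.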
"Basically," lead developer Lone Wolf tells me, "we have some software between the AI and the grid that deals with any issues."
From the user's perspective, they communicate with Bobby, Wolf Grid's AI assistant, which is integrated with ChatGPT.
"[We] use our software to decide what to do: Do we need to generate an image? Do we need a terrain? Do we need to answer a query? Then it works out which 'bit' of ChatGPT to talk with, meanwhile recording information so it's able to know who said what to him, and then interpreting the information back from the AI and translating it into a usable thing."
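The middleware Wolf describes -- classify each message, route it to the right model endpoint, and keep a history of who said what -- might look something like this in miniature. Everything here (the function names, the keyword rules, the history structure) is my own invention for illustration; Wolf Grid's actual software is not public:

```python
from collections import defaultdict

HISTORY = defaultdict(list)  # per-user conversation log, so Bobby knows who said what

def classify(message: str) -> str:
    """Decide which 'bit' of the AI a message should go to."""
    text = message.lower()
    if any(w in text for w in ("terrain", "maze", "mountain", "island")):
        return "terrain"           # -> terrain-generation pipeline
    if any(w in text for w in ("draw", "picture", "image", "texture")):
        return "image"             # -> image-generation endpoint
    return "chat"                  # -> plain conversational query

def route_request(user: str, message: str) -> dict:
    intent = classify(message)
    HISTORY[user].append((intent, message))   # record the exchange
    # A real implementation would now call the matching ChatGPT/image/terrain
    # backend and translate the reply into something the grid can use.
    return {"user": user, "intent": intent, "turns": len(HISTORY[user])}

print(route_request("Hamlet", "make me a maze on this parcel"))
print(route_request("Hamlet", "what time is the event tonight?"))
```

The point of the in-between layer is exactly what Wolf says: the language model never touches the grid directly, so "any issues" in its output can be caught and translated before they reach the simulator.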
Wolf tells me Bobby will eventually be able to make all this work on the grid live. Unless I'm mistaken, this is the first instance of a virtual world actually using ChatGPT to generate physics-enabled terrain, as opposed to non-interactive, diorama-type backgrounds. (Correct me if I'm wrong, readers!)
Getting generative AI to create working virtual world terrain is more challenging than it might seem, but Mr. Wolf tells me they've done that: