Here's a survey asking how much (or how little) generative AI programs like Midjourney and Leonardo AI have penetrated the Second Life economy. (Take it with a mouse/trackpad for best results.) In the last couple of years I've started seeing SL images heavily upgraded with gen AI, but here I'm specifically referring to advertisements or product displays for SL content that are clearly leveraging that technology.
As an example of what I mean, check out the image below, widely shared across SL-themed social media, showing a fashion item as advertised and how it actually looks when worn in-world:
Here's a first look at Readyverse, the upcoming metaverse platform based on Ready Player One by Ernest Cline, with Cline himself helping guide development. What the teaser video actually depicts doesn't much resemble RP1 (neither the book nor the movie), beyond an avatar that resembles the lead character. What it does heavily focus on is generative AI-based creation, with very little input by the user in the creation process of the world beyond text prompting.
Maybe there's more to the building than mere user prompts (little is explained on the website), but the trailer at least misses the intrinsic pleasure of user-generated building in an immersive 3D space itself -- which on most platforms is easy to pick up (if difficult to master), something anyone with a mouse or videogame controller can do at a basic level. It also misses how the labor of the creation process confers a genuine feeling of ownership -- almost in the classic Lockean sense of mixing one's labor with the earth.
That isn't a showoff philosophical reference, but what I've seen time and again in reporting across many metaverse platforms: People truly feel they "own" the digital space they created, because they put in the time, tears, and personal creativity to bring it into digital being.
I keep seeing ridiculously bold predictions that artificial intelligence is going to wipe out massive numbers of jobs in the very near future -- within the next five years, even! -- but these forecasts seem to overlook a highly inconvenient fact: The US is currently enjoying historically low unemployment.
Actually, two inconvenient facts: Since the launch of leading generative AI programs like ChatGPT and Midjourney in mid-to-late 2022, the US unemployment rate has dropped even lower, largely remaining below 4%. (See above.) With these platforms on the market for nearly two years and quickly gaining mass adoption, shouldn’t we already see some kind of consistent increase in unemployment?
There is definitely turmoil and anxiety over AI replacement, and substantial job cuts may be happening in highly concentrated areas (more on that below), but to me that’s a related but different topic. A new Challenger Report estimates that job losses due to AI are on the scant side:
When I recently wrote about all the impressive innovation happening on Wolf Grid, an OpenSim-based virtual world developed by a small team, I didn't even mention one of its coolest breakthroughs:
Wolf Grid has an option which uses ChatGPT to generate usable terrain with a prompt.
Watch above, with the money shot happening around 6:30 in. In the first demo, requesting a "cat" turns the land into a mountain range that's a picture of a cat (kinda sorta), while requesting a "maze" actually generates a working maze that springs up on the land.
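How might a prompt become walkable land? One plausible approach -- my assumption for illustration, since Wolf Grid hasn't published its implementation -- is to have the AI generate a grayscale image and then treat pixel brightness as elevation, producing a heightmap the simulator can apply to the region:

```python
# Hypothetical sketch: convert an AI-generated grayscale image into
# terrain by mapping pixel brightness (0-255) to elevation in meters.
# The function name and elevation range are my own placeholders,
# not Wolf Grid's actual code.

def image_to_heightmap(pixels, min_elev=20.0, max_elev=120.0):
    """pixels: 2D list of brightness values, 0-255, one per terrain cell.
    Returns a 2D list of elevations, linearly scaled between the bounds."""
    span = max_elev - min_elev
    return [[min_elev + (p / 255.0) * span for p in row] for row in pixels]

# A tiny 2x2 "image": black, mid-gray, white, mid-gray
demo = [[0, 128], [255, 128]]
terrain = image_to_heightmap(demo)
print(terrain[0][0], terrain[1][0])  # darkest and brightest cells
```

Under this scheme, a picture of a cat naturally becomes a cat-shaped mountain range, and a maze image (walls bright, corridors dark) becomes a walkable maze -- consistent with what the demo shows.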
"Basically," lead developer Lone Wolf tells me, "we have some software between the AI and the grid that deals with any issues."
From the user's perspective, they communicate with Bobby, Wolf Grid's AI assistant, which is integrated with ChatGPT.
"[We] use our software to decide what to do: Do we need to generate an image? Do we need a terrain? Do we need to answer a query? Then it works out which 'bit' of ChatGPT to talk with, meanwhile recording information so it's able to know who said what to him, and then interpreting the information back from the AI and translating it into a usable thing."
Wolf tells me Bobby will eventually be able to make all this work on the grid live. Unless I'm mistaken, this is the first instance of a virtual world actually using ChatGPT to generate physics-enabled terrain, as opposed to non-interactive, diorama-type backgrounds. (Correct me if I'm wrong, readers!)
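Based on Wolf's description, the middleware sounds like an intent router with per-user conversation memory: classify each message, dispatch it to the right "bit" of the AI, and keep a transcript. A minimal sketch of that pattern (the class, keyword rules, and names below are my own invention, not Wolf Grid's code):

```python
# Hypothetical intent router in the spirit of Wolf's description:
# decide whether a message calls for an image, a terrain, or a plain
# answer, while remembering who said what. Keyword matching here is a
# placeholder for whatever real classification the grid performs.

from collections import defaultdict

class AssistantRouter:
    def __init__(self):
        # Per-user transcript, so the assistant "knows who said what"
        self.history = defaultdict(list)

    def classify(self, message):
        text = message.lower()
        if "terrain" in text or "landscape" in text:
            return "terrain"   # route to the image-to-heightmap pipeline
        if "picture" in text or "image" in text:
            return "image"     # route to an image-generation model
        return "query"         # fall back to a plain chat completion

    def route(self, user, message):
        self.history[user].append(message)
        return self.classify(message)

router = AssistantRouter()
print(router.route("Ayesha", "make me a terrain shaped like a cat"))
print(router.route("Ayesha", "what did I just ask for?"))
print(len(router.history["Ayesha"]))
```

The recorded history is what would let a follow-up question like "what did I just ask for?" be answered in context when the message is eventually passed to the language model.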
Getting generative AI to create working virtual world terrain is more challenging than it might seem, but Mr. Wolf tells me they've done that:
Johansson said Monday that OpenAI used an "eerily similar" voice to hers for their new GPT-4o chatbot despite having declined the company's request to provide her voice.
Earlier in the day, OpenAI announced it would no longer be using the voice, but did not indicate why.
"Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system," Johansson wrote in a statement, which a representative shared with NBC News. "He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people."
... Johansson voiced an artificial intelligence chatbot in the 2013 movie “Her,” which OpenAI CEO Sam Altman referenced in relation to the company's new voice offerings — something Johansson noted in her statement.
While Altman seems to think the movie Her was basically a product demo, his detractors on social media are largely depicting Her as a dystopian cautionary tale.
However, I checked back on what I wrote about Her when it came out in 2013/2014, and its depiction of sentient AI is much more interesting than either framing (spoilers ahead if you haven't seen it yet, and you totally should):
To go by social media froth, the rise of AI technology like Large Language Models is soon going to utterly Change Games Forever, leading to waves of roiling layoffs of game industry employees. Why pay game writers, one common argument goes, when we can just get ChatGPT to churn out reams of NPC game dialog?
Speaking with several actual writers of acclaimed hit games, however, I got a much more thoughtful perspective, bereft of doomsaying.
Leigh Alexander, an independent writer who contributed to the award-winning Reigns: Game of Thrones (2018) among many other titles, sees understanding tools like LLMs as crucial to her profession:
“Kinda feel like writers need a nuanced view of generative technology to keep up in the game industry right now,” as she puts it. “There's a big big big gulf between ‘working with generative systems to have an emergent relationship with my own writing’ and ‘having ChatGPT write the NPCs.’
“The latter is an exploitive fad that I expect to soon collapse. The former isn't going anywhere and probably if you work in games you'll need to be more literate with it than [saying] ‘all AI is lazy’-- which is itself a lazy view. You still gotta hand-author everything in your own models, and consider applications of emergence deliberately and for a reason, because it often makes things harder.”
She likens the “AI is lazy” charge to a photographer saying, “‘Photoshop is only for lazy photographers.’
“It misunderstands the purpose and breadth of the tool, and it actually makes you sound like you don't understand photography (which seems bad for your prospects if you are working as a photographer).”
Above: Reigns: Game of Thrones, co-written by Leigh Alexander
By contrast, Leigh adds, “professional longevity for storytellers in the game industry means not just writing, but literacy with tools -- tools are the backbone of narrative design to me.”
Charlene Putney did narrative design for the recent hit fantasy RPG Baldur’s Gate 3 and was a writer on Divinity: Original Sin 2. She also co-developed LAIKA, an LLM tool for game writers “to offer suggestions using their voice, characters and concepts.” So as you might guess, her view on AI is also nuanced:
“I think AI will bring big changes to every creative field over the next few years, but not in an apocalyptic way,” she tells me. That’s even more the case with the game industry, which unlike the other arts is only a few decades old. By her lights, AI offers games a chance to differentiate themselves from other media even more:
Above: Baldur's Gate 3, featuring narrative design by Charlene Putney
“To my mind, this is another tool. I don’t see it replacing writers in the long run, I see it as a brand new spark of potential that we can use. And that beautiful little spark of potential in it that delights me is the same little spark that delighted me in the independent game development scene a decade ago: dream it up, make it, share it -- now using the tenets of consensus reality as LEGO bricks. Not trying to recreate the media of the past: not novels, not movies, not songs. But some strange new creatures, new forms that are no more than shadows right now, coalescing into shape just out of the corners of our eyes.”
To that end, she’s creating other LLM-based tools for game developers, though notes that many of them are too busy -- or perhaps too apprehensive -- to experiment with them.
"With the strong productivity focus of the current games industry, it's hard for people to take time to play, to explore, and to bounce around with experimental things when there are hard deadlines and such a hostile general approach to working with AI,” as she puts it. “One needs a strong sense of self to weather the social media vitriol, and a strong sense of your own voice and what you want to say in order to experiment wildly."
As the lead writer of the classic Deus Ex franchise, set in a dystopian future amid the rise of sentient AI, Sheldon Pacotti actually contributed greatly to our popular apocalyptic conception of artificial intelligence, especially in Silicon Valley. (Elon Musk, among many other AI technophiles, is a fan of the game.)
“My usual experience writing science fiction is getting things wrong,” he acknowledges now. “Writing in 2000, I had a kid in the 2020s still buying CDs. Etc.”
“But one thing we might have gotten right in Deus Ex is people’s desire for AI in their lives. Morpheus, one of the AIs in the game, says that people crave being seen, and will take an omniscient AI in place of an omniscient god.” (Watch above.)
That’s illustrated even more in the game’s sequel, Invisible War, with an AI popstar character, NG Resonance. (Watch below.)
“As an amateur futurist I’m gratified but also a little freaked out that in 2024 ‘companionship apps’ are a thing and in fact one of the fastest-growing applications of AI,” Pacotti puts it to me. “Whether as companions or “agents,” we’re likely to invite AI (and mass surveillance) deeper and deeper into our lives, and that should be great fodder for many dystopian plotlines to come, if not the key to the final triumph of the Illuminati in our time.”
As for potential use of LLMs in game development, Sheldon wrote me a mini-essay worth quoting in full:
Everyone in tech is seemingly psyched for the latest update of ChatGPT, GPT-4o -- mainly for its genuinely impressive version of the AI from the movie Her (watch below) -- but for the record, it's still having trouble with at least one obvious question: What's my definition of the Metaverse?
Ask GPT4o that, and you get this farrago of flimflam:
Wagner James Au defines the Metaverse as a collective virtual shared space that combines virtually enhanced physical reality with persistent virtual reality. He sees it as an interconnected network of 3D spaces where users can interact with each other and the environment in real-time. The Metaverse is not just one platform but a convergence of various virtual worlds and augmented reality experiences, creating a seamless digital and physical blend.
Again, what I actually wrote on this blog years ago, and in several online articles in major news sites, and in the very opening page of my latest book, is this:
The Metaverse is a vast, immersive virtual world simultaneously accessible by millions of users through VR and other devices, highly customizable avatars and powerful experience creation tools. It is integrated with the real world economy and compatible with external technology.
Instead of outputting that, however, GPT4o insists on ascribing to me what others have written, then duct-tapes it onto what I've quoted others as saying. (Along with the usual hallucinations -- WTF even is "a seamless digital and physical blend"?)
"I can see where it's picking up some of this stuff," my friend Adam Frisby, who ran the GPT4o query for me, surmises. "You're talking about other people and what they're doing, and ChatGPT is basically transposing you commenting as your personal opinion."
The fundamental problem, he suggests, is happening on a deep epistemic level:
Following up on a reader's comment to my rant about ChatGPT totally botching my writing, I went with their suggestion to try Microsoft's Copilot instead. I asked the same question, "What is Wagner James Au's definition of the Metaverse?" and got this output:
Wagner James Au defines the Metaverse as a vast, immersive virtual world simultaneously accessible by millions of people through highly customizable avatars and powerful experience creation tools integrated with the offline world through its virtual economy and external technology. His book, “Making a Metaverse That Matters: From Snow Crash & Second Life to A Virtual World Worth Fighting For,” explores nascent metaverse platforms, interviews metaverse creators like Neal Stephenson, and aims to dispel myths and misconceptions around this interconnected universe of virtual worlds. In essence, the Metaverse represents a convergence of digital and physical experiences, where users can interact, create, and explore within a shared digital space.
Much better than ChatGPT! I also like how Copilot's output includes source web links for further reading, reinforcing its role as your assistant, and not the ultimate purveyor of truth.
Still, Copilot is decidedly not perfect, adding in bizarre inaccuracies:
I rarely write about the Metaverse being "a convergence of digital and physical experiences" (though others emphasize that angle much more), and I've devoted thousands of words explaining why the Metaverse should not be defined as an "interconnected universe of virtual worlds". Worst of all, putting Neal Stephenson in the "metaverse creators like" category is so profoundly, face-palmingly wrong that if I were teaching a class on the topic and a student wrote that in a paper, I'd deduct a whole grade or two.
So overall I still question the usefulness of LLMs beyond being a highly imperfect, unreliable assistant. Anyway, here's the comment from reader "N" who makes some good points and even shows how Copilot is pretty impressive with discussing Second Life-only content:
Now that ChatGPT is openly available online, I've been checking in on the system's knowledge of virtual worlds -- and yep, it's still confused. And worse, it's actively misrepresenting what I've written about virtual worlds. Ask it, "How does Wagner James Au define the Metaverse?", and you get something like:
[He] defines the Metaverse as a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual reality. This definition emphasizes the interconnectedness of virtual spaces and their integration with our physical world, suggesting a seamless blending of digital and physical experiences.
Like the saying goes, this is not right, it's not even wrong. I don't know how ChatGPT came up with this muddy extrusion, when I painstakingly researched and put my definition on the web (let alone my book, let alone several online articles about my book), starting years ago:
The Metaverse is a vast, immersive virtual world simultaneously accessible by millions of users through VR and other devices, highly customizable avatars and powerful experience creation tools. It is integrated with the real world economy and compatible with external technology.
A properly targeted Google search leads to that definition, but ChatGPT's authoritative, mansplain-y format is meant to ensure confidence in its answers. (Even the fine-print qualification, "ChatGPT can make mistakes", understates its rampant potential for off-base laziness.)
It's amusing to read AI evangelists assert that programs like ChatGPT will soon replace writers, when I mostly see ChatGPT causing more tedious work for writers -- making us spend extra time chasing down its errors, turning its mediocre, bland answers into something that's readable.
Longtime journalist/editor Mitch Wagner, who uses ChatGPT as a side assistant tool for spellchecking and a thesaurus reference while he's writing his own articles, made some similar points recently:
Representative Adam Schiff recently announced the Generative AI Copyright Disclosure Act, a bill that would require gen AI companies like Midjourney and OpenAI to disclose if any copyrighted works were used in its training data. As he put it in the announcement:
We must balance the immense potential of AI with the crucial need for ethical guidelines and protections. My Generative AI Copyright Disclosure Act is a pivotal step in this direction. It champions innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets. This is about respecting creativity in the age of AI and marrying technological progress with fairness.
The bill is backed by a raft of major film and music industry groups (Congressman Schiff is based in Los Angeles), and it seems intended to primarily protect those mediums from unauthorized AI training of their intellectual property.
But does it apply to video games, virtual worlds, and metaverse platforms? That's not made clear in the wording.
So I contacted Adam Schiff's office, asking exactly that:
"Would the Act apply to 3D models, i.e., content in video games, virtual worlds, and metaverse platforms? So for instance, 3D models of castles, heroes, user-generated 3D models, and so on."
Rep. Schiff's communications officer confirmed to me that indeed it does:
"Yes, it would apply to any copyrighted material."
That's an important clarification, as the game/metaverse industry clamors to add generative AI content into their experiences. (And the bill, if passed, would retroactively apply "to previously released generative AI systems".) Indeed, Midjourney's CEO recently told me the company plans to create something like the Holodeck of Star Trek.
But if Schiff's act becomes law, every generative AI platform would probably need to seek permission from, license, and compensate creators of 3D content in their systems, once they or the copyright holder (such as a game publisher) is notified. That will be another daunting cost for most platforms. (Though some gen AI platforms already train only on licensed content.)
But will it become a law? Schiff's office tells me there's no timeline for when it's expected to be put up for a vote.
My semi-informed forecast is that it has a very good chance of becoming law -- after the 2024 election, depending on its outcome:
Microsoft Copilot Seems Better Than ChatGPT at Summarizing My Writing -- But Still Adds Errors!
Posted on Monday, April 29, 2024 at 02:45 PM in AI, Comment of the Week