To go by social media froth, the rise of AI technology like Large Language Models is soon going to utterly Change Games Forever, leading to waves of roiling layoffs of game industry employees. Why pay game writers, one common argument goes, when we can just get ChatGPT to churn out reams of NPC game dialog?
Speaking with several actual writers of acclaimed hit games, however, I got a much more thoughtful perspective, bereft of doomsaying.
Leigh Alexander, an independent writer who contributed to the award-winning Reigns: Game of Thrones (2018) among many other titles, sees understanding tools like LLMs as crucial to her profession:
“Kinda feel like writers need a nuanced view of generative technology to keep up in the game industry right now,” as she puts it. “There's a big big big gulf between ‘working with generative systems to have an emergent relationship with my own writing’ and ‘having ChatGPT write the NPCs.’
“The latter is an exploitive fad that I expect to soon collapse. The former isn't going anywhere and probably if you work in games you'll need to be more literate with it than [saying] ‘all AI is lazy’-- which is itself a lazy view. You still gotta hand-author everything in your own models, and consider applications of emergence deliberately and for a reason, because it often makes things harder.”
She likens the “AI is lazy” charge to a photographer saying, “‘Photoshop is only for lazy photographers.’
“It misunderstands the purpose and breadth of the tool, and it actually makes you sound like you don't understand photography (which seems bad for your prospects if you are working as a photographer).”
Above: Reigns: Game of Thrones, co-written by Leigh Alexander
Rather, Leigh adds, “professional longevity for storytellers in the game industry means not just writing, but literacy with tools -- tools are the backbone of narrative design to me.”
Charlene Putney did narrative design for the recent hit fantasy RPG Baldur’s Gate 3 and was a writer on Divinity: Original Sin 2. She also co-developed LAIKA, an LLM tool for game writers “to offer suggestions using their voice, characters and concepts.” So as you might guess, her view on AI is also nuanced:
"I think AI will bring big changes to every creative field over the next few years, but not in an apocalyptic way,” she tells me. That’s even more the case with the game industry, which unlike the other arts, is only a few decades old. By her lights, AI offers games a chance to differentiate itself from other medium even more:
Above: Baldur's Gate 3, featuring narrative design by Charlene Putney
“To my mind, this is another tool. I don’t see it replacing writers in the long run, I see it as a brand new spark of potential that we can use. And that beautiful little spark of potential in it that delights me is the same little spark that delighted me in the independent game development scene a decade ago: dream it up, make it, share it -- now using the tenets of consensus reality as LEGO bricks. Not trying to recreate the media of the past: not novels, not movies, not songs. But some strange new creatures, new forms that are no more than shadows right now, coalescing into shape just out of the corners of our eyes.”
To that end, she’s creating other LLM-based tools for game developers, though notes that many of them are too busy -- or perhaps too apprehensive -- to experiment with them.
"With the strong productivity focus of the current games industry, it's hard for people to take time to play, to explore, and to bounce around with experimental things when there are hard deadlines and such a hostile general approach to working with AI,” as she puts it. “One needs a strong sense of self to weather the social media vitriol, and a strong sense of your own voice and what you want to say in order to experiment wildly."
As the lead writer of the classic Deus Ex franchise, set in a dystopian future amid the rise of sentient AI, Sheldon Pacotti actually contributed greatly to our popular apocalyptic conception of artificial intelligence, especially in Silicon Valley. (Elon Musk, among many other AI technophiles, is a fan of the game.)
“My usual experience writing science fiction is getting things wrong,” he acknowledges now. “Writing in 2000, I had a kid in the 2020s still buying CDs. Etc.”
“But one thing we might have gotten right in Deus Ex is people’s desire for AI in their lives. Morpheus, one of the AIs in the game, says that people crave being seen, and will take an omniscient AI in place of an omniscient god.” (Watch above.)
That’s illustrated even more in the game’s sequel, Invisible War, with an AI popstar character, NG Resonance. (Watch below.)
“As an amateur futurist I’m gratified but also a little freaked out that in 2024 ‘companionship apps’ are a thing and in fact one of the fastest-growing applications of AI,” Pacotti puts it to me. “Whether as companions or ‘agents,’ we’re likely to invite AI (and mass surveillance) deeper and deeper into our lives, and that should be great fodder for many dystopian plotlines to come, if not the key to the final triumph of the Illuminati in our time.”
As for potential use of LLMs in game development, Sheldon wrote me a mini-essay worth quoting in full:
LLMs are most interesting to me as reasoning engines, rather than dialogue-generators, as paradoxical as that sounds. LLMs can generate realistic Bugs Bunny dialogue because they can be trained on a huge corpus of existing Bugs Bunny dialogue. In conveying character, though—imagine the characters in Disco Elysium—I think LLMs fall short of creating vivid, idiosyncratic dialogue. Even genre-based characters like gangsters or cowboys have individual mental lives and speech patterns when written well. For a long time to come, I expect LLM dialogue to have an airbrushed feel to it, meaning that human writers will be needed.
Using LLMs to drive motivation, mood, and behavior, “under the hood,” so to speak, might be a more promising application. The widely discussed Stanford experiment (Generative Agents: Interactive Simulacra of Human Behavior) drove complex agent behavior by combining an LLM with other mechanisms built around it. In a commercial video game, these “other mechanisms” would be precisely the game mechanics of the simulated world. A fruitful approach might be to design story-generating mechanics within some bounded space of possibility (e.g. The Sims, RimWorld, the Nemesis system in the 2010s Middle-earth games) then use the LLM to suggest plot arcs or flesh out internal lives for the characters. This approach would leverage LLMs’ facility with programming languages to generate content that plugs into the game systems. (I had an analogous experience with GPT-4, prompting it with a JSON format for 3D primitives and then asking it to design a convenience store.)
In this way, LLMs could be used to create more coherent, long-form procedural content, leveraging their common sense and understanding of narrative structure. How those narratives come to life—i.e. the media assets with which they are rendered—will still require the human touch. We may be creating content more systemically (pools of dialogue lines, procedural character models), but we will be creating nonetheless.
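The prompt-then-plug-in loop Pacotti describes -- give the model a machine-readable format, then validate its output before it touches the game systems -- can be sketched roughly like this. The primitive schema and field names below are invented for illustration, not his actual format:

```python
import json

# Hypothetical vocabulary of primitives the game engine understands.
PRIMITIVE_TYPES = {"box", "cylinder", "sphere"}

def build_prompt(request: str) -> str:
    """Embed the JSON format in the prompt so the model emits machine-readable output."""
    return (
        "Respond ONLY with a JSON list of 3D primitives. Each primitive is an object "
        'like {"type": "box", "position": [x, y, z], "size": [w, h, d]}.\n'
        f"Design: {request}"
    )

def validate_layout(raw: str) -> list[dict]:
    """Reject anything that doesn't plug cleanly into the game's systems."""
    layout = json.loads(raw)
    assert isinstance(layout, list)
    for prim in layout:
        assert prim["type"] in PRIMITIVE_TYPES, f"unknown primitive: {prim['type']}"
        assert len(prim["position"]) == 3 and len(prim["size"]) == 3
    return layout

# A hand-written stand-in for an LLM response, run through validation.
reply = '[{"type": "box", "position": [0, 0, 0], "size": [4, 3, 6]}]'
shelves = validate_layout(reply)
```

The validation step is what makes the LLM a content generator rather than a loose cannon: anything that doesn't parse into the game's own vocabulary simply never enters the world.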
Addendum 1:
That isn’t to say LLMs have no role at all to play in direct content generation. I can imagine LLMs doing a great job writing in-game emails, newspaper articles, wanted posters, and so on based on player actions, which could make players feel seen to a degree not possible any other way. However, we’ll want to use LLM content as just one layer within more deliberately crafted narrative environments.
Addendum 2:
The limitations of the Stanford generative agent architecture—memory-retrieval accuracy, scaling to long memory-streams—may be due to the heavy reliance on prose itself as the data format. Memories are stored as prose, actions are described in prose; etc. This leads to seeming “hacks,” like embedding vectors to help the game make sense of game scenarios. Less open-ended virtual environments—like video games—could provide clearer building blocks for agent plans and actions. The language of game mechanics would provide a communication format much less ambiguous than natural language.
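Pacotti's point about game mechanics as a less ambiguous data format than prose can be made concrete with a small sketch. Here agent memories are typed records in the game's own mechanics vocabulary rather than prose, so retrieval is an exact filter instead of an embedding-similarity search; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Memory:
    tick: int      # game-time tick, replacing prose timestamps
    verb: str      # a verb drawn from the game's mechanics vocabulary
    subject: str   # who acted
    target: str    # who or what was acted upon

def recall(memories: list[Memory], *, verb: str, target: str) -> list[Memory]:
    """Unambiguous retrieval over mechanic records; no vector search needed."""
    return [m for m in memories if m.verb == verb and m.target == target]

log = [
    Memory(10, "traded", "player", "blacksmith"),
    Memory(42, "attacked", "bandit", "blacksmith"),
    Memory(77, "traded", "player", "innkeeper"),
]
trades_with_smith = recall(log, verb="traded", target="blacksmith")
```

An LLM layered on top would then only be asked to narrate or motivate these unambiguous records, not to store and re-parse them as prose.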
As for the ethical dimension of using LLMs in game development, Leigh Alexander notes we’re talking about an industry that, say, regularly develops big-budget military FPS games that inevitably wind up seeming like hawkish propaganda.
“I hate to tell you but there is no inherent writerly artistic sanctity existing in the mainstream commercial game industry,” says Leigh. “Whatever it is we want to protect through creative purism has been rotten for a while.”
An even more crucial part of this conversation is not how we humans use AI, but how we rally together to secure our own rights.
"I think one of the key things the workers in the games industry are grappling with right now is the need to unionize,” as Charlene Putney puts it. “If we look at the example of the Writers Guild of America - their stance on AI usage allows for experimentation and brainstorming, but holds high regard for the human writer. In this current climate of mass layoffs and huge change, strong unions would make a vital difference to workers."
Please support posts like these by buying Making a Metaverse That Matters!
Many moons ago, I registered "NG Resonance" as an avatar. She's still active. :) https://i.gyazo.com/572bb988596dfcc73c35a05d468eb7a4.jpg
Posted by: Pathfinder | Thursday, May 16, 2024 at 10:19 AM
Some of them are right, and it's similar for coders. If you simply prompt "code me this app," you shouldn't expect too much. If instead you use it as a sort of pattern-matching translator -- bringing your professional competence and understanding of what's needed, describing a function in natural language, specifying the arguments, outlining the algorithm, and defining what the function should return -- the model (even better, a fine-tuned one) is likely to do a decent or even good job, saving you time. Your skills and understanding are still needed.
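The spec-driven workflow described above could be sketched like this, with the developer writing the spec and the model only translating it into code; the function and field names here are hypothetical:

```python
def function_spec_prompt(name: str, args: dict, algorithm: str, returns: str) -> str:
    """Turn a developer-written spec into a prompt; the human supplies all the design."""
    arg_lines = "\n".join(f"- {a}: {desc}" for a, desc in args.items())
    return (
        f"Write a Python function `{name}`.\n"
        f"Arguments:\n{arg_lines}\n"
        f"Algorithm: {algorithm}\n"
        f"Returns: {returns}\n"
        "Output only the function, no commentary."
    )

prompt = function_spec_prompt(
    name="median_damage",
    args={"hits": "list of int damage values"},
    algorithm="sort the list; take the middle element, averaging the two middles for even length",
    returns="the median as a float",
)
```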
If you understand how LLMs actually work -- their capabilities as models of language and language processing, and their limitations -- they are useful, not a fad at all. If you're expecting a knowledge database, an oracle of truth, or Star Trek's Data to magically do your job for you, then you're approaching it the wrong way.
That's now. In the future? In years or decades? Unless you believe the human brain operates by magic, its capabilities will be replicated eventually (not necessarily by transformer-based architectures). "Never" was also said about Go, until AlphaGo came along.
Posted by: ¯\_(ツ)_/¯ | Wednesday, May 29, 2024 at 09:12 AM