Interesting Axios story illustrates just how much Silicon Valley has shifted its interest and investment into Generative AI startups like OpenAI and Midjourney, away from metaverse platforms and related technology:
According to PitchBook data compiled by Axios Media Deals' Tim Baysinger, through March 16, 2022, companies that played in the metaverse or web3 space had raised nearly $2 billion in funding.
So far this year, metaverse and web3 companies have raised $586.7 million, a bit more than a quarter of last year's total. The totals for generative AI companies are the inverse: Through March 16, 2022, the generative AI space saw $612.8 million in funding. This year, it's up to $2.3 billion.
Driving the news: In a note Tuesday announcing Meta would lay off 10,000 more employees, Zuckerberg spotlighted AI work and reduced the metaverse to an "also." “Our single largest investment is in advancing AI and building it into every one of our products,” Zuckerberg wrote. “Our leading work building the metaverse and shaping the next generation of computing platforms also remains central to defining the future of social connection.”
As I've explained before, conflating web3 with the Metaverse is a huge mistake, as is assuming Meta leads the metaverse industry. That aside, it's clearly the case that the Valley has shifted its buzz toward generative AI.
Is that smart? Obviously I'm biased when I say this, but there are already 520 million+ active users across many metaverse platforms, while the Metaverse's addressable market is at minimum everyone who regularly enjoys multi-user immersive experiences (i.e. 3D games online), roughly 1-2 billion people.
On the other hand, there are several reasons to believe Generative AI is not as transformative as its most bullish boosters assume. For instance:
It's often just an iterative version of technology we already have, and its shortcomings quickly become apparent in many contexts:
When you ask ChatGPT a question, you get a response that fits the patterns of the probability distribution of language that the model has seen before. It does not reflect knowledge, facts, or insights. And to make this even more fun, in that compression of language patterns, we also magnify the bias in the underlying language... [and] because of the way these models are designed, they are at best, a representation of the average language used on the internet. By design, ChatGPT aspires to be the most mediocre web content you can imagine.
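The "probability distribution of language" point can be made concrete with a toy sketch. This is nothing like a real LLM, just a frequency table over a tiny text, but it shows the core idea: the model reproduces the patterns of its training data, and nothing in it represents facts:

```python
import random

# Toy illustration (not a real language model): a "model" that only
# stores which words followed each two-word context in its training
# text, then samples the next word from that observed distribution.
training_text = (
    "the sky is blue the sky is blue the sky is green"
).split()

counts = {}
for a, b, nxt in zip(training_text, training_text[1:], training_text[2:]):
    counts.setdefault((a, b), []).append(nxt)

def next_word(context):
    # Sample proportionally to how often each continuation was seen.
    return random.choice(counts[context])

# "sky is" was usually followed by "blue" in training, so the model
# mostly says "blue" -- and occasionally "green" -- regardless of
# what color the sky actually is right now.
print(next_word(("sky", "is")))
```

Scale the table up by many orders of magnitude and replace exact-match contexts with learned representations, and you get something much more capable, but the output is still a sample from patterns in the training text, which is the criticism being quoted above.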
It's going to contribute to a security/hacking nightmare:
[Philip Rosedale's] interest isn’t exactly about Nostr becoming an alternative to Facebook or whatever, but being a solution to a completely different but equally concerning problem in technology: The growing power of AI programs to spoof or deep fake real people. “I think that stuff is going to become an ultraviolet catastrophe in the next year,” he tells me. “Maybe less than a year because of the AIs.”
You’ve probably seen audio recordings like this, where a deep fake is able to imitate Obama and other public figures. That’s all fun and games at the moment, but what happens when that same technology is used to imitate a good friend of yours -- and then that “friend” calls you up, telling you they’re stranded in a foreign country, and they need you to wire them $2500?
“I think one of the things that's going to happen with AI is that all our messages are gonna become [spoofable by AI deep fakes]-- we can't trust them anymore,” as Philip Rosedale puts it.
In terms of 3D graphics, it's probably not going to be a killer app in game/metaverse development:
"AI isn't going to affect any field that doesn't have a giant database of free (stolen) training data for it to absorb," she explained. "There aren't enough 3d models in existence for basically any model to eat and spit out anything usable. Even most bespoke, handmade 3d model generation algorithms spit out models that are completely unusable in games because the logic behind character creation and topology is extremely precise and needs to be carefully thought out. So: it's not going to change it."
It's on a collision course with intellectual property litigation:
The Lensa app has gone viral in recent weeks, with much excitement over its new AI-driven “magic avatars” feature.
Small problem: It's not exactly magic nor purely artificial intelligence. Instead, to create these avatars, the app is apparently scraping up artists' images without their consent. The image appropriation is so blatant in many cases, the Lensa-generated images even include the original artist's signature...
"I think they didn’t think artists would stand up for themselves because we don’t [have] industry labels the way the music industry does," Lauren tells me. She points to the fact that Stability AI, the company behind Stable Diffusion, the neural network which powers Lensa, is notably careful about how its platform samples and trains on recorded music.
"The fact that they do so with their music model shows they are well aware of copyright (I mean it’s a basic concept, anyone who isn’t a little kid is aware of copyright), and that it’s not something that was too complicated to implement."
And so on. For the most part, I strongly suspect generative AI is mainly going to improve on existing applications -- while also introducing costly new problems like those I just mentioned and more.
To be clear, I do think much of generative AI is very exciting and will lead to some extremely cool use cases, such as this one:
Imagine entire digital personae you can engage with that are directly based on yourself. Or for that matter, NPCs directly based on novelists, poets, and public speakers from history and fiction.
Michelle agrees on that front:
"This is the stuff I think that has the most interesting ramifications: more broadly, more immersive human / computer interface loops, from conversation with virtual therapists to in-game interactions for virtual worlds, given there is user input, AI could be used to train highly customizable responses or generate unique storylines per use."
In other words: Ironically, some of the best applications of generative AI will be as middleware inside metaverse platforms.
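As a sketch of what that middleware might look like: the snippet below drives NPC dialogue by assembling a persona-conditioned prompt, with the actual model call stubbed out by a placeholder function. Every name here (`call_llm`, the prompt format, the NPC details) is illustrative, not a real SDK; a production version would call whichever hosted chat-completion API the platform chose.

```python
# A minimal sketch of LLM-driven NPC dialogue as metaverse middleware.
# `call_llm` is a stand-in for a real chat-completion API call;
# all names and the prompt format here are assumptions for illustration.
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to a
    # hosted model and return its generated text.
    return f"[generated reply to: {prompt!r}]"

def npc_reply(npc_name: str, persona: str, player_line: str) -> str:
    # The persona is injected into the prompt so each NPC stays in
    # character, while responses vary with each player's input.
    prompt = (
        f"You are {npc_name}, {persona}. "
        f"Reply briefly and in character to the player.\n"
        f"Player: {player_line}"
    )
    return call_llm(prompt)

print(npc_reply("Mira", "a weary dockside merchant", "Any work for me?"))
```

The cost concern raised in the comments below is real for this design: one model call per NPC utterance adds up fast, so a practical system would likely cache common replies or reserve live generation for a handful of NPCs near the player.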
You can generate 3d models with NVIDIA's generative AI.
There's a ChatGPT plugin via GitHub for Unity as of today. There's already a startup working on using procedural generation combined with LLMs for game development.
Some of the article is off the mark but the ability to create more purposeful NPCs is definitely more exciting for the metaverse. The downside is cost, because it will be prohibitively expensive to run in real time with thousands of NPCs interacting with their own agency with players/users.
Posted by: Johhny5 | Monday, March 20, 2023 at 03:42 PM
"It does not reflect knowledge, facts, or insights"
This is basically untrue and doesn't follow logically from "you get a response that fits the patterns of the probability distribution of language that the model has seen before". In fact, the internet abounds with proof that ChatGPT does, in fact, provide insight and show understanding of what it is dealing with, even if with patterns and expectations that aren't human-like.
One such case, among the very, very many: https://www.reddit.com/r/ChatGPT/comments/zkce3z/chatgpt_can_hypothesize_with_published_data_and/
Posted by: Fabio | Tuesday, March 21, 2023 at 09:25 AM
Many interesting and discussion-worthy ideas here. I'd like to focus on one stub:
Generative AI is "on a collision course with intellectual property litigation".
SCRAPING CREATIVE WORKS & PERSONAL INFORMATION
It's worth considering that while lots of people are currently screaming about this, it is not at all new. Meta & Alphabet's globe dominating surveillance capitalism is built on scraping user/personal/private information, paying nothing for it, and then using it to reap large profits.
OF ARTBOTS & SURVEILLANCE CAPITALISTS
It's hard to imagine government regulations reining in the destructive business models of Meta & Alphabet. Still, it's at least interesting to ponder a world where Artists protesting theft by AI Artbots leads to legislation that shuts down or forces compensation for Meta & Alphabet's data piracy practices. I use the word "piracy" not in the sense that what Meta & Alphabet do is illegal, but that it is unethical, carries dire social consequences, and should be illegal. Sadly, even if such fantasy legislation were to arrive, it might only serve to lock in the gargantuan behavioral databases Meta & Alphabet already possess and lock out future competitors. They have such detailed models of human behavior already that they may not need more data at this point.
EVERYBODY SCRAPES
Only Meta & Alphabet are true and near-total surveillance capitalists. The other horsemen of the apocalypse, Microsoft, Apple & Amazon, dabble in surveillance, but it is not their primary business model. But scraping/stealing data is everywhere. And has been for years. As one small example, the college anti-plagiarism app TurnItIn scrapes millions of student essays to compare them against new submissions. Like so many others, that company makes money by scraping/stealing the work of others and paying them nothing for it. (Per TurnItIn's website, it receives 200,000 student papers per day.)
COPYRIGHT REFORM
On the other hand, copyright today is out of control. In the US case, Sonny Bono's "Mickey Mouse Protection Act" extended copyright to the life of the author plus 70 years. Let's say I'm today 20 years old and I create a character here in 2023, say Luke Skywalker or Harry Potter. I live for another 80 years and die at 100. The Luke Skywalker that enters your cultural consciousness in 2023 can't be freely played with (enter the public domain) until the year 2173. That's a long time to wait to be able to legally write Harry Potter fanfic.
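The term arithmetic in that example can be spelled out under current US law (life of the author plus 70 years):

```python
# Checking the copyright arithmetic from the example above.
birth_year = 2003           # author is 20 years old in 2023
lifespan = 100              # lives another 80 years after creating the work
death_year = birth_year + lifespan      # dies in 2103
public_domain_year = death_year + 70    # life + 70 years
print(public_domain_year)   # a 2023 character locked up for 150 years
```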
MEET IN THE MIDDLE?
We have two polar opposites. Oppressive copyright that locks down human culture for vast generations of time. And scraping/theft of personal and creative information and works on a massive scale with zero compensation. What these opposites have in common is that they both tend to advantage corporations and disadvantage individuals.
I dream of, but don't expect, a middle path that includes sensible, contemporarily relevant Copyright Reform, and then applying it to Meta, Alphabet, all the Chatbots, all the Artbots, and even the smaller fish like TurnItIn. I'd love a scenario where individuals are paid for their creativity and personal data. And where fees were not so high that AI couldn't afford to train on available data.
Posted by: Kate Nova | Tuesday, March 21, 2023 at 11:32 AM
The most important work with AI is vision ...
It took over 6 million years for humans to evolve from ape-like creatures. AI is only about 20 or so years old. It is evolving at an explosive rate, as is hardware fast enough to run the neural networks. AI is doing some amazing feats right now, much faster than the human brain can comprehend. I'm thinking in the next 15~20 years AI will be making advancements much faster than the human brain ever could. But Vision is where it's at. Once AI can see like humans, it will take a staggering number of jobs, from driving and the food industry to packing and picking produce.
Posted by: ericlp | Wednesday, March 22, 2023 at 06:03 PM
A "Generative AI" in the hand is worth twenty "metaverses" in the bush. In other words, the metaverse advocates are simply mad that for all the hype of the last decade or so they don't have anything that's usable and accessible to ordinary people right here and right now. But AUTOMATIC1111 can be run on a desktop TODAY.
So to my ears, this is all bullshit and sour grapes.
Posted by: Han Held | Thursday, March 23, 2023 at 01:47 PM