I'm blogging Matthew Ball's must-read, nine-part metaverse primer over the summer; my take on Part 1 is here, my coverage of Part 2 is here, and my Part 3 coverage is here.
Part 4 of Matthew Ball's Metaverse Primer includes a personal story highlighting just how much computing power a Metaverse worth the name will require: the launch of Rival Peak, a "Massively Interactive Live Event" produced by one of the venture capitalist's portfolio companies. (Trailer above: basically a reality-TV competition show, but starring NPCs in a virtual world, streamed to Facebook Watch.) The show required so much computing power to deploy that it briefly exhausted what Amazon Web Services could supply:
In fact, it barely operated on AWS. With eight environments (production, backup, staging, QA and development), each of which was supported by over a dozen GPUs and hundreds of other CPUs, Rival Peak once ran out of GPU servers on AWS, and, during testing, routinely exhausted available spot servers.
Because there were no specific players (let alone a ‘player one’), Rival Peak doesn’t fit the instinctive definition of the Metaverse. However, the operation of a persistent and unending virtual world that supports unlimited interactions, each with lasting consequences, is as close to the end-state Metaverse as any other. And even in its nascent form, and without requiring meaningful consumer-side processing, it was running out of compute.
Rival Peak is not even a single-shard virtual world with live user-controlled avatars dynamically creating content, yet the computing power required for even its limited interactivity, experienced simultaneously by tens of thousands of concurrent users, repeatedly outstripped what AWS had on hand.
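Spot instances are AWS's discounted, reclaimable spare capacity, which is why they're the first thing to run dry under sustained GPU demand. As a minimal sketch, assuming boto3 (the function name, AMI ID, and g4dn instance type are my illustrative assumptions, not details from the primer), here's the kind of spot-first, on-demand-fallback logic a GPU-hungry service like this ends up needing:

```python
# Sketch: request a spot GPU instance, fall back to on-demand when the
# spot pool is exhausted -- the failure mode Rival Peak reportedly hit.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_gpu_worker(ami_id: str, instance_type: str = "g4dn.xlarge"):
    """Try cheap spot capacity first; pay on-demand rates rather than go dark."""
    try:
        return ec2.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            MinCount=1, MaxCount=1,
            InstanceMarketOptions={"MarketType": "spot"},  # discounted, reclaimable
        )
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code in ("InsufficientInstanceCapacity",
                    "SpotMaxPriceTooLow",
                    "MaxSpotInstanceCountExceeded"):
            # No spot servers left -- retry the same launch at on-demand pricing.
            return ec2.run_instances(
                ImageId=ami_id,
                InstanceType=instance_type,
                MinCount=1, MaxCount=1,
            )
        raise
```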
This takeaway stands out most to me in Part 4 (just as it did in Part 3, with networking): the ideal Metaverse will require a significant leap beyond our existing technology.
Other highlights:
Mobile devices and networking enabled virtual worlds to become more popular than the traditional AAA game industry
"It was only by the mid-2010s that millions of consumer-grade devices could process a game with 100 real players in a single match, and that enough affordable, server-side hardware was available and capable of synchronizing this information in near real-time. Once this technical barrier was broken, the games industry was quickly overtaken by games focused on rich UGC and high numbers of concurrent users (Free Fire, PUBG, Fortnite, Call of Duty: Warzone, Roblox, Minecraft)." Matthew notes that the top battle royale games alone count 350 million or so daily active users -- far more than the user base of AAA consoles/PC games.
Practically speaking, 50 users is still largely the local concurrency cap in major virtual worlds/online games
As someone who reported on Second Life events with 100+ nearby users a decade ago (albeit with much lag), I was somewhat surprised by this:
[W]hen Fortnite does bring players together into a more confined space for a social event, such as a concert, it reduces the number of participants to 50, and limits what they can do versus the standard game modes. And for users with less-powerful processors, more compromises are made. Devices a few years old will choose not to load the custom outfits of other players (as they have no gameplay consequence) and instead just represent them as stock characters.
Matthew does note recent experiments like Improbable's concurrency test last June, which handled thousands of users in the same space.
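The outfit fallback in Ball's Fortnite example is a classic graceful-degradation pattern: cosmetic assets get skipped first precisely because they have no gameplay consequence. A minimal sketch of that client-side decision (the tier threshold and every name here are my illustrative assumptions, not Epic's code):

```python
# Sketch: weaker devices draw stock characters instead of loading
# other players' custom outfits, which are purely cosmetic.
from dataclasses import dataclass

@dataclass
class Device:
    gpu_tier: int  # 0 = weakest, 3 = current flagship (hypothetical scale)

def avatar_model(device: Device, nearby_players: int, outfit_id: str) -> str:
    """Choose which model to load for another player's avatar."""
    if device.gpu_tier < 2 or nearby_players > 50:
        return "stock_character"  # skipping outfits has no gameplay consequence
    return f"custom_outfit/{outfit_id}"

print(avatar_model(Device(gpu_tier=1), nearby_players=50, outfit_id="banana"))
```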
Making a Metaverse may first require a market for GPU sharing
After reviewing some of the challenges inherent in cloud rendering a Metaverse (à la Stadia), Matthew makes the tantalizing suggestion that we'll need a SETI@home-style network to share the GPU power required to instantiate it:
Imagine, as you navigate immersive spaces, your account continuously bidding out the necessary computing tasks to mobile devices held but unused by people near you, perhaps people walking down the street next to you, in order to render or animate the experiences you encounter. Of course, later, when you’re not using your own devices, you would be earning tokens as they return the favor. Proponents of this crypto-exchange concept see it as an inevitable feature of all future microchips. Every computer, no matter how small, would be designed to always be auctioning off any spare cycles. Billions of dynamically arrayed processors will power the deep compute cycles of even the largest industrial customers and provide the ultimate and infinite computing mesh that enables the Metaverse.
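Mechanically, what Ball describes is a continuous spot market for compute: idle devices advertise spare capacity at a token price, and your client awards each rendering task to the cheapest eligible bidder. A toy model of a single auction round (every name and number here is hypothetical):

```python
# Toy model: award a render task to the cheapest nearby device
# with enough spare GPU time, paid in tokens.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    device_id: str
    tokens_per_task: float  # asking price
    free_gpu_ms: int        # spare GPU milliseconds on offer

def award_task(task_gpu_ms: int, bids: list[Bid]) -> Optional[Bid]:
    """Pick the lowest-priced bidder that can actually fit the task."""
    eligible = [b for b in bids if b.free_gpu_ms >= task_gpu_ms]
    return min(eligible, key=lambda b: b.tokens_per_task, default=None)

bids = [Bid("phone-in-pocket", 0.8, 12), Bid("idle-laptop", 0.5, 40)]
winner = award_task(task_gpu_ms=30, bids=bids)
print(winner.device_id if winner else "no capacity -- render locally")
```

What this sketch omits is the hard part from Part 3: moving the rendered result back from a stranger's device with latency low enough to be worth the tokens.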
Much more here. Just as Ian "Epredator" Hughes suggested that the Metaverse of our dreams won't be possible until quantum computing becomes feasible (to eliminate latency), I'm struck by how many technical leaps may need to come first. Leaps so high and so transformative that they will probably produce new products which obviate our conception of what the Metaverse should be -- or even challenge our desire to build one in the first place.
In 2003 we had at least 50 people in a general area, communicating, playing and transacting in There.com and that was with a crappy DSL connection with 300ms latency. Sure it was annoying sometimes, but it sure was fun… for hours at a time.
Posted by: Osiris Indigo | Thursday, August 05, 2021 at 06:03 PM