"Edge computing is opening a door to the metaverse" is a just-published article I wrote for a new web magazine from Samsung NEXT, the venture funding arm of the tech giant, and as the name suggests, it attempts to explain how edge technology will transform our expectations for VR, cloud computing, and building a mestaverse worth the name:
Edge computing -- in which compute, storage, security, and networking occur physically closer to end users and their devices -- will enable us to think about the metaverse and related applications in a new light...
In the eyes of many industry insiders, edge computing will be an essential pillar in the metaverse’s creation, by enabling developers to reduce latency in virtual worlds. This technology could empower new and better devices and new virtual experiences that support many thousands -- and eventually, millions -- of concurrent users in the same shared 3D space.
As an elevator pitch to explain edge computing, I often say, "It's basically a real-life version of the Piper Net from Silicon Valley." And as you might imagine, in the article I talk with Philip Rosedale, Jim Purbrick, and a lot of other luminaries who've been working in virtual worlds for nearly two decades, waiting for this technology to come. Philip and Jim, of course, have some interesting perspectives on edge computing, such as how it will improve the audio experience of virtual worlds:
“Audio servers on the edge could be used to combine ambisonic streams from the server with individual streams from nearby users to provide higher quality spatialized audio, or participate in a Distributed Partial Mixing network to optimally trade off quality, network and processor bandwidth,” says Purbrick.
The practical upshot would be that virtual worlds could truly convey the vocal presence of hundreds of people all around you from across the globe, with enough spatialized audio quality to still pick out the voice of a personal friend.
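To make that idea a little more concrete, here's a minimal Python sketch of what "combine an ambient stream from the server with individual streams from nearby users" might look like on an edge node. This is a heavily simplified take on what Purbrick describes, not his or High Fidelity's actual code: the frame size, the mono ambient bed (real ambisonic streams are multichannel), the distance-attenuation gain model, and all the names are illustrative assumptions.

```python
# Simplified sketch of edge-side audio mixing: a low-rate ambient "bed"
# of the distant crowd arrives pre-mixed from a central server, and
# full-quality streams from users near this edge node are layered on top.
# Every constant and name here is an assumption for illustration.
import numpy as np

FRAME = 480  # samples per 10 ms frame at 48 kHz (assumed)

def mix_edge_frame(ambient_bed, nearby_streams, distances, rolloff=2.0):
    """Combine the distant-crowd bed with per-user streams,
    attenuating each nearby voice by its virtual-world distance."""
    out = ambient_bed.copy()
    for stream, d in zip(nearby_streams, distances):
        gain = 1.0 / (1.0 + rolloff * d)  # simple distance attenuation
        out += gain * stream
    return np.clip(out, -1.0, 1.0)  # guard against clipping

# Example: one distant-crowd bed plus two nearby voices
bed = 0.05 * np.random.randn(FRAME)
voices = [np.sin(np.linspace(0, 30, FRAME)),
          np.sin(np.linspace(0, 55, FRAME))]
frame = mix_edge_frame(bed, voices, distances=[1.0, 4.0])
```

The trade-off Purbrick names falls out of this split: only the handful of nearby streams need full bandwidth and per-user processing at the edge, while everyone else is carried in the single pre-mixed bed.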
Indeed, Philip Rosedale is doing just this with High Fidelity, a kind of mixed-reality virtual world where the voices of its many users are simulated within a shared 3D environment. To create the sense of live, immersive audio -- the sensation that you are in the same location as hundreds or even thousands of people around the world -- Rosedale turned to edge computing:
“[W]e deploy audio servers that are like cell towers into the cloud,” he explains. “Each one of them handles the audio for 100 or so end users that are near each other in the virtual world.” High Fidelity users already employ this technology to host virtual cocktail parties, conference sessions, and beyond.
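As a back-of-the-envelope illustration of that "cell tower" model, here's a toy Python sketch that buckets users into per-region audio servers, each capped at roughly 100 users per the quote. The grid cell size and the spillover scheme are my own assumptions for the sake of the example, not High Fidelity's actual architecture.

```python
# Toy version of the "cell tower" assignment Rosedale describes:
# partition the virtual world into a grid, and assign each user to an
# audio server for their cell, spilling into an overflow server once a
# cell exceeds ~100 users. Grid size, cap, and shapes are assumptions.
from collections import defaultdict

CELL = 50.0  # meters of virtual space per audio-server cell (assumed)
CAP = 100    # users per audio server, per the quote

def assign_audio_servers(user_positions):
    """Map each user id to a server key based on 2D virtual position."""
    servers = defaultdict(list)
    for uid, (x, y) in user_positions.items():
        cell = (int(x // CELL), int(y // CELL))
        shard = 0
        # If a cell fills up, spill into a numbered overflow server
        while len(servers[(cell, shard)]) >= CAP:
            shard += 1
        servers[(cell, shard)].append(uid)
    return servers

# Example: 250 users scattered across a 300 x 300 meter region
users = {f"user{i}": (i * 0.7 % 300, i * 1.3 % 300) for i in range(250)}
placement = assign_audio_servers(users)
for key, members in sorted(placement.items()):
    print(key, len(members))
```

The point of the sketch is the scaling property Rosedale is after: each server only ever mixes audio for the users who can actually hear each other, so adding more users means adding more small servers rather than one ever-larger mix.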
I always said cloud was a stupid idea for mass computing. So now computing gets closer to the users who demand it ... Can't wait to see what they come up with next ... users being able to compute stuff on their own devices?
Posted by: Fionalein | Thursday, October 08, 2020 at 07:28 PM