Above: One of these pics is reality; the other is Microsoft Flight Simulator -- but which?
I'm blogging Matthew Ball's must-read, nine-part metaverse primer this summer; my take on Part 1 is here, and my coverage of Part 2 is here.
Part 3 of Matthew's Metaverse Primer, Networking and the Metaverse, covers territory similar to that of the paper I wrote for Samsung Next last year, but with one big difference. (More on that below.) This section explores all the networking technology needed to make highly immersive, highly multiplayer applications possible. As an example, Matthew describes how Microsoft's Flight Simulator (current version) uses streaming to display a mirror world:
Microsoft Flight Simulator is the most realistic and expansive consumer simulation in history. It includes 2 trillion individually rendered trees, 1.5 billion buildings and nearly every road, mountain, city and airport globally… all of which look like the ‘real thing’, because they’re based on high-quality scans of the real thing. But to pull this off, Microsoft Flight Simulator requires over 2.5 petabytes of data — or 2,500,000 gigabytes... Microsoft Flight Simulator works by storing a core amount of data on your local device (which also runs the game, like any console game and unlike cloud-based game-streaming services like Stadia). But when users are online, Microsoft then streams immense volumes of data to the local player’s device on an as-needed basis.
This becomes even more complicated when other users, not to mention the new content they create, are part of your virtual world. And even as broadband penetration and bandwidth speeds continue to increase, any multi-user Metaverse worth the name will run up against a simple fact: no network can ever outrun the speed of light:
[W]hile the Metaverse isn’t a fast-twitch AAA game, its social nature and desired importance means it will require low latency. Slight facial movements are incredibly important to human conversation — and we’re incredibly sensitive to slight mistakes and synchronization issues (hence the uncanny valley problem in CGI)...
Unfortunately, latency is the hardest and slowest to fix of all network attributes. Part of the issue stems from, as mentioned above, how few services and applications need ultra-low latency delivery. This constrains the business case for any network operator or latency-focused content-delivery network (CDN) — and the business case here is already challenged and in contention with the fundamental laws of physics.
At 11,000–12,500km, it takes light 40–45ms to travel from NYC to Tokyo or Mumbai. This meets all low-latency thresholds. Yet while most of the internet backbone is fiber optics, fiber-optic cable falls ~30% short of the speed of light as it’s rarely in a vacuum (+ loss is typically 3.5 dB/km)... Furthermore, network congestion can result in traffic being routed even less directly in order to ensure reliable and continuous delivery, rather than minimizing latency. This is why average latency from NYC to Tokyo is over 3× the time it takes light to travel between the two cities, and 4–5× from NYC to Mumbai.
Emphasis mine. Or to put it another way: The metaverse of our dreams will always be hobbled by the speed of light.
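The physics here is easy to check with back-of-envelope arithmetic. The sketch below uses the figures from the quoted passage (an ~11,000 km NYC-to-Tokyo path, and fiber carrying light ~30% slower than a vacuum); the exact distance is my own round-number assumption, not Matthew's.

```python
# Back-of-envelope latency floor for a transpacific link.
# Figures are taken from the quoted passage; nyc_tokyo_km is an
# assumed round number within the 11,000-12,500 km range cited.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.70       # fiber is ~30% slower than light in vacuum

def min_latency_ms(distance_km: float, medium_factor: float = 1.0) -> float:
    """One-way latency floor in ms for a straight-line path."""
    return distance_km / (C_VACUUM_KM_S * medium_factor) * 1000

nyc_tokyo_km = 10_850  # approximate great-circle distance (assumption)

vacuum_ms = min_latency_ms(nyc_tokyo_km)               # ~36 ms in vacuum
fiber_ms = min_latency_ms(nyc_tokyo_km, FIBER_FACTOR)  # ~52 ms in fiber

print(f"vacuum floor: {vacuum_ms:.0f} ms, fiber floor: {fiber_ms:.0f} ms")
```

Even the ideal one-way fiber floor is already around 50 ms, before any routing detours, congestion, or protocol overhead -- which is why real-world NYC-to-Tokyo latency lands at 3x or more the vacuum light-travel time.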
In fact, the way Matthew lays out all the bandwidth and latency hurdles facing metaverse applications leaves me significantly less confident than I was while writing the Samsung Next article. Which brings me to my own thoughts on Part 3: